id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,445,588 | https://en.wikipedia.org/wiki/Prokineticin%20receptor%202 | Prokineticin receptor 2 (PKR2) is a dimeric G protein-coupled receptor encoded by the PROKR2 gene in humans.
Function
Prokineticins are secreted proteins that can promote angiogenesis and induce strong gastrointestinal smooth muscle contraction. The protein encoded by this gene is an integral membrane protein and G protein-coupled receptor for prokineticins. PKR2 is composed of 384 amino acids. Asparagine residues at positions 7 and 27 undergo N-linked glycosylation. Cysteine residues at positions 128 and 208 form a disulfide bond. The encoded protein is similar in sequence to GPR73, another G protein-coupled receptor for prokineticins. PKR2 is also linked to the mammalian circadian rhythm: levels of PKR2 mRNA fluctuate in the suprachiasmatic nucleus, increasing during the day and decreasing at night.
Mutations in the PROKR2 (also known as KAL3) gene have been implicated in hypogonadotropic hypogonadism and gynecomastia. Total loss of PKR2 in mice leads to spontaneous torpor usually beginning at dusk and lasting for 8 hours on average.
PKR2 functions as a G protein-coupled receptor and initiates a signaling cascade when its ligand binds. PKR2 is Gq-coupled, so ligand binding activates beta-type phospholipase C, which generates inositol trisphosphate (IP3); IP3 in turn triggers calcium release inside the cell.
See also
Prokineticin receptor
Kallmann syndrome
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Kallmann syndrome
G protein-coupled receptors | Prokineticin receptor 2 | [
"Chemistry"
] | 369 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,717 | https://en.wikipedia.org/wiki/GPR112 | G protein-coupled receptor 112 is a protein encoded by the ADGRG4 gene. GPR112 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
GPR112 is expressed in human enterochromaffin cells and in the mouse intestine. The N-terminal fragment (NTF) of GPR112 contains pentraxin (PTX)-like modules. GPR112 gene expression has been identified as a marker for neuroendocrine carcinoma cells.
References
External links
Adhesion GPCR consortium
G protein-coupled receptors | GPR112 | [
"Chemistry"
] | 168 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,752 | https://en.wikipedia.org/wiki/GPR119 | G protein-coupled receptor 119 also known as GPR119 is a G protein-coupled receptor that in humans is encoded by the GPR119 gene.
GPR119, along with GPR55 and GPR18, have been implicated as novel cannabinoid receptors.
Pharmacology
GPR119 is expressed predominantly in the pancreas and gastrointestinal tract in rodents and humans, as well as in the brain in rodents. Activation of the receptor has been shown to cause a reduction in food intake and body weight gain in rats. GPR119 has also been shown to regulate incretin and insulin hormone secretion. As a result, new drugs acting on the receptor have been suggested as novel treatments for obesity and diabetes.
Ligands
A number of endogenous, synthetic and plant derived ligands for this receptor have been identified:
2-Oleoylglycerol (2OG)
Anandamide
AR-231,453
MBX-2982
Oleoylethanolamide (OEA) (Endogenous Ligand)
PSN-375,963
PSN-632,408
Human microbiota and GPR119 activation
Commensal bacteria have important roles in human health, and bacterial metabolites are likely key components of the host interactions through which they affect mammalian physiology. N-acyl amide synthase genes are enriched in gastrointestinal bacteria, and the lipids they encode interact with GPCRs that regulate gastrointestinal tract physiology. Cell-based models have demonstrated that commensal GPR119 agonists regulate metabolic hormones and glucose homeostasis as efficiently as human ligands, and the clearest overlap in structure and function between bacterial and human GPCR-active ligands is found for the endocannabinoid receptor GPR119.
In these experiments, both the palmitoyl and oleoyl analogs of N-acyl serinol were isolated. The latter differs from 2-OG (C21H40O4) only by the presence of an amide instead of an ester, and from OEA (C20H39NO2) by the presence of an additional ethanol substituent. N-oleoyl serinol (C21H41NO3; 18:1, n-9) is a similarly potent GPR119 agonist compared to the endogenous ligand OEA (EC50 12 μM vs. 7 μM) but elicits almost a 2-fold greater maximum activation. This suggests that chemical mimicry of eukaryotic signalling molecules may be common among commensal bacteria, which communicate with the host through interactions between these two fundamental systems, forming the gut microbiota-endocannabinoidome axis.
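As a rough numerical illustration of these potency and efficacy figures, the sketch below evaluates an idealized Hill-equation dose-response for the two agonists. Only the EC50 values (7 and 12 μM) and the roughly 2-fold efficacy difference come from the text; the Hill slope of 1 and the normalized response units are assumptions.

```python
import numpy as np

def hill_response(conc_uM, ec50_uM, emax, n=1.0):
    """Fractional activation from the Hill equation."""
    return emax * conc_uM**n / (ec50_uM**n + conc_uM**n)

concs = np.logspace(-1, 3, 9)  # 0.1 to 1000 uM
oea = hill_response(concs, ec50_uM=7.0, emax=1.0)
serinol = hill_response(concs, ec50_uM=12.0, emax=2.0)  # ~2-fold greater max activation

for c, a, b in zip(concs, oea, serinol):
    print(f"{c:8.2f} uM   OEA: {a:.2f}   N-oleoyl serinol: {b:.2f}")
```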
Evolution
Paralogues
Source:
GPR6
MC5R
MC3R
MC4R
CNR1
GPR12
S1PR1
MC1R
S1PR3
S1PR5
GPR3
S1PR2
CNR2
LPAR3
LPAR1
LPAR2
MC2R
S1PR4
References
Further reading
G protein-coupled receptors | GPR119 | [
"Chemistry"
] | 664 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,815 | https://en.wikipedia.org/wiki/GPR155 | Integral membrane protein GPR155, also known as G protein-coupled receptor 155, is a protein that in humans is encoded by the GPR155 gene. Mutations in this gene may be associated with autism.
References
Further reading
G protein-coupled receptors | GPR155 | [
"Chemistry"
] | 54 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,837 | https://en.wikipedia.org/wiki/GPR113 | GPR113 is a gene that encodes the Probable G-protein coupled receptor 113 protein.
Gene
The human GPR113 gene is located on chromosome 2 (2p23.3). The gene spans a 38.65 kb region from base 26,531,041 to 26,569,685 on the negative strand. On the negative strand, GPR113 is flanked by OTOF (otoferlin) preceding it and HADHA (hydroxyacyl-CoA dehydrogenase) following it. Directly opposite GPR113 on the positive strand is the EPT1 gene.
The GPR113 gene is also known by the aliases PGR23 and HGPCR37.
Evolution
GPR113 has five human paralogs: GPR110, GPR115, GPR128, GPR111, and GPR116. GPR113 is well conserved in mammals from primates to semi-aquatic species, as well as in some amphibians; examples include the common chimpanzee, the African bush elephant, the platypus, and the western clawed frog. Homologous domains that are well conserved throughout orthologs center on the 7-transmembrane receptor (secretin family) region.
Protein
The protein product of the GPR113 gene is a G-protein coupled receptor with three transcript variants in humans. Of these, variant 1 has the longest amino acid sequence and the highest identity to orthologs, which suggests that variant 1 is the human descendant of the ancestral GPR113 gene. GPR113 variant 1 contains 1079 amino acids and is integral to the plasma membrane. The 7-pass receptor contains four annotated domains: a signal peptide, a hormone receptor domain, a latrophilin/CL-1-like GPS domain, and the 7-transmembrane receptor region. Between the hormone receptor domain and the GPS domain lies a domain of unknown function.
Function
GPR113 is a G protein-coupled receptor that is involved in a neuropeptide signaling pathway.
Clinical significance
GPR113 has been found to be expressed differentially under diseased conditions. Under the condition of Type 2 diabetes, its percentile rank relative to other transcripts decreases relative to normal cell function. Deletion of TP63, which mediates a wide variety of important body processes, also reduces GPR113 expression. In mouse brains, both the cerebellum and the olfactory bulb show transcription of the GPR113 gene. Additionally, a study from the National Institute on Deafness and Other Communication Disorders identified GPR113 expression as highly restricted to a subset of taste receptor cells. These conclusions, coupled with the olfactory bulb expression levels, could provide an avenue for future research, potentially illuminating more about GPR113's function. The clinical significance of this protein has not been established, but these expression profiles offer directions for future research, especially in fields studying taste and smell.
Interacting protein
GPR113 has been shown to associate with the orphan G protein-coupled receptor GPR123.
References
Further reading
G protein-coupled receptors | GPR113 | [
"Chemistry"
] | 719 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,863 | https://en.wikipedia.org/wiki/GPR156 | GPR156 (G protein-coupled receptor 156) is a human gene which encodes a G protein-coupled receptor belonging to the metabotropic glutamate receptor subfamily. By sequence homology, this gene was proposed as a possible GABAB receptor subunit; however, when expressed in cells alone or with other GABAB subunits, no response to GABAB ligands could be detected. In vitro studies of GPR156 constitutive activity revealed a high level of basal activation and coupling with members of the Gi/Go heterotrimeric G protein family. In 2021, it was reported that GPR156 modulates hair cell orientation in the cochlea, and it was proposed that GPR156 is related to congenital hearing loss; GPR156 in complex with any of the Gi/o heterotrimers regulates hair cell orientation. In 2024, molecular structures of G-protein-free and Go-bound GPR156 were characterized using cryogenic electron microscopy.
Structure
Among class C GPCR family members, GPR156 is unique because it lacks a large extracellular domain. Structural analyses revealed that asymmetric binding of the Go protein to GPR156 triggers a conformational change of its cytoplasmic face without altering the dimer interface. Although inactive class C GPCRs undergo rearrangement of their dimeric interface upon activation, agonist- and/or positive-allosteric-modulator-bound class C GPCRs retain their dimeric interface upon G-protein binding; thus, the G-protein-free GPR156 likely represents an active state. Structural and functional analyses suggest that abundant endogenous phospholipids, receptor dimerization, and the conformational change of the cytoplasmic face induced by G-protein binding are the primary reasons for the constitutive activation of GPR156. Phosphatidylglycerol further stimulates the activity of GPR156, suggesting that environmental changes in phospholipid composition may regulate GPR156 activity.
References
Further reading
G protein-coupled receptors | GPR156 | [
"Chemistry"
] | 448 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,445,873 | https://en.wikipedia.org/wiki/GPR125 | Adhesion G-protein coupled receptor A3 (ADGRA3), also known as GPR125, is an adhesion GPCR that in humans is encoded by the Adgra3 gene (previously Gpr125).
References
Further reading
G protein-coupled receptors | GPR125 | [
"Chemistry"
] | 58 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,096 | https://en.wikipedia.org/wiki/GPR114 | G protein-coupled receptor 114 is a protein encoded by the ADGRG5 gene. GPR114 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
Tissue distribution
GPR114 mRNA is specifically expressed in human eosinophils as well as in mouse lymphocytes, monocytes, macrophages, and dendritic cells.
Signaling
The cyclic adenosine monophosphate (cAMP) assay in overexpressing HEK293 cells has demonstrated coupling of GPR114 to Gαs protein.
References
External links
Adhesion GPCR consortium
G protein-coupled receptors | GPR114 | [
"Chemistry"
] | 175 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,102 | https://en.wikipedia.org/wiki/OPN5 | Opsin-5, also known as G-protein coupled receptor 136 or neuropsin is a protein that in humans is encoded by the OPN5 gene. Opsin-5 is a member of the opsin subfamily of the G protein-coupled receptors. It is a photoreceptor protein sensitive to ultraviolet (UV) light. The OPN5 gene was discovered in mouse and human genomes and its mRNA expression was also found in neural tissues. Neuropsin is bistable at 0 °C and activates a UV-sensitive, heterotrimeric G protein Gi-mediated pathway in mammalian and avian tissues.
Function
Human neuropsin is expressed in the eye, brain, testes, and spinal cord. Neuropsin belongs to the seven-exon subfamily of mammalian opsin genes that includes peropsin (RRH) and retinal G protein coupled receptor (RGR). Neuropsin has different isoforms created by alternative splicing.
Photochemistry
When reconstituted with 11-cis-retinal, mouse and human neuropsins absorb maximally at 380 nm. When illuminated these neuropsins are converted into blue-absorbing photoproducts (470 nm), which are stable in the dark. The photoproducts are converted back to the UV-absorbing form, when they are illuminated with orange light (> 520 nm).
Species distribution
Neuropsins are known from echinoderms, annelids, arthropods, brachiopods, tardigrades, and mollusks, with most known from craniates, the taxon that contains mammals and thus humans. However, neuropsin orthologs have been experimentally verified in only a small number of animals, among them human, mouse (Mus musculus), chicken (Gallus gallus domesticus), the Japanese quail (Coturnix japonica), the European brittle star Amphiura filiformis (related to starfish), the tardigrade Hypsibius dujardini (a water bear), and the tadpole of Xenopus laevis.
Searches of publicly available databases of genetic sequences have found putative neuropsin orthologs in both major branches of Bilateria: protostomes and deuterostomes. Among protostomes, putative neuropsins have been found in the molluscs owl limpet (Lottia gigantea) (a species of sea snail) and Pacific oyster (Crassostrea gigas), in the water flea (Daphnia pulex) (an arthropod), and in the annelid worm Capitella teleta.
Phylogeny
The neuropsins are one of three subgroups of the tetraopsins (also known as RGR/Go or Group 4 opsins); the other groups are the chromopsins and the Go-opsins. The tetraopsins are one of the five major groups of animal opsins (also known as type 2 opsins); the other groups are the ciliary opsins (c-opsins, cilopsins), the rhabdomeric opsins (r-opsins, rhabopsins), the xenopsins, and the nessopsins. Four of these subclades occur in Bilateria (all but the nessopsins). However, the bilaterian clades constitute a paraphyletic taxon without the opsins from the cnidarians.
In the phylogeny above, each clade contains sequences from opsins and other G protein-coupled receptors. The number of sequences and two pie charts are shown next to each clade. The first pie chart shows the percentage of a certain amino acid at the position in the sequences corresponding to position 296 in cattle rhodopsin. The amino acids are color-coded: red for lysine (K), purple for glutamic acid (E), dark and mid-gray for other amino acids, and light gray for sequences that have no data at that position. The second pie chart gives the taxon composition for each clade: green stands for craniates, dark green for cephalochordates, mid green for echinoderms, pale pink for annelids, dark blue for arthropods, light blue for mollusks, and purple for cnidarians. The branches to the clades have pie charts which give support values for the branches; the values are, from right to left, SH-aLRT/aBayes/UFBoot. The branches are considered supported when SH-aLRT ≥ 80%, aBayes ≥ 0.95, and UFBoot ≥ 95%. If a support value is above its threshold, the pie chart is black; otherwise, it is gray.
References
Further reading
G protein-coupled receptors | OPN5 | [
"Chemistry"
] | 1,032 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,113 | https://en.wikipedia.org/wiki/GPR115 | Probable G-protein coupled receptor 115 is a protein that in humans is encoded by the GPR115 gene.
References
Further reading
G protein-coupled receptors | GPR115 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,126 | https://en.wikipedia.org/wiki/GPR116 | Probable G-protein coupled receptor 116 is a protein that in humans is encoded by the GPR116 gene. GPR116 has now been shown to play an essential role in the regulation of lung surfactant homeostasis.
References
Further reading
G protein-coupled receptors | GPR116 | [
"Chemistry"
] | 58 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,137 | https://en.wikipedia.org/wiki/GPR97 | G-protein coupled receptor 97 also known as adhesion G protein-coupled receptor G3 (ADGRG3) is a protein that in humans is encoded by the ADGRG3 gene. GPR97 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
GPR97 is expressed in human granulocytes and endothelial cells of the vasculature as well as in mouse granulocytes, monocytes, macrophages, and dendritic cells.
Signaling
The inositol phosphate (IP3) accumulation, aequorin, and 35S isotope binding assays in overexpressing HEK293 cells have demonstrated coupling of GPR97 to the Gαo protein, modulating cyclic adenosine monophosphate (cAMP) levels. GPR97 activates cAMP response element-binding protein (CREB), NF-κB, and small GTPases to regulate cellular functions.
Function
Systemic steroid exposure is a therapy used to treat a variety of medical conditions and is associated with epigenetic processes, such as DNA methylation, that may reflect pharmacological responses and/or side effects. GPR97 was found to be differentially methylated at CpG sites in the genome of blood cells from patients under systemic steroid treatment. GPR97 is transcribed in immune cells. Gene-deficient mice revealed that Gpr97 is crucial for maintaining the B-cell population via constitutive CREB and NF-κB activities. Human lymphatic endothelial cells (LECs) abundantly express GPR97; silencing GPR97 in human LECs indicated that it modulates cytoskeletal rearrangement, cell adhesion, and migration by regulating the small GTPases RhoA and Cdc42. In vertebrates, GPR97 has an indispensable role in the bone morphogenetic protein (BMP) signaling pathway in bone formation. A microarray meta-analysis revealed that mouse Gpr97 is a direct transcriptional target of BMP signaling in long bone development.
References
External links
Adhesion GPCR consortium
G protein-coupled receptors | GPR97 | [
"Chemistry"
] | 501 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,159 | https://en.wikipedia.org/wiki/GPR111 | Probable G-protein coupled receptor 111 is a protein that in humans is encoded by the GPR111 gene.
References
Further reading
G protein-coupled receptors | GPR111 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,279 | https://en.wikipedia.org/wiki/GPR110 | Probable G-protein coupled receptor 110 is a protein that in humans is encoded by the GPR110 gene. This gene encodes a member of the adhesion-GPCR receptor family. Family members are characterized by an extended extracellular region with a variable number of N-terminal protein modules coupled to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
References
Further reading
G protein-coupled receptors | GPR110 | [
"Chemistry"
] | 96 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,391 | https://en.wikipedia.org/wiki/GPR133 | Probable G-protein coupled receptor 133 is a protein that in humans is encoded by the GPR133 gene.
This gene encodes a member of the adhesion-GPCR family of receptors. Family members are characterized by an extended extracellular region with a variable number of protein domains coupled to a TM7 domain via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
References
Further reading
G protein-coupled receptors | GPR133 | [
"Chemistry"
] | 94 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,411 | https://en.wikipedia.org/wiki/GPR150 | Probable G-protein coupled receptor 150 is a protein that in humans is encoded by the GPR150 gene.
References
Further reading
G protein-coupled receptors | GPR150 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,454 | https://en.wikipedia.org/wiki/P2RY8 | P2Y purinoceptor 8 is a protein that in humans is encoded by the P2RY8 gene.
Function
The protein encoded by this gene belongs to the family of G-protein coupled receptors that are preferentially activated by adenosine and uridine nucleotides. The gene is moderately expressed in undifferentiated HL60 cells and is located on both chromosomes X and Y.
Clinical relevance
Recurrent mutations in this gene have been associated with cases of diffuse large B-cell lymphoma.
See also
P2Y receptor
References
Further reading
G protein-coupled receptors | P2RY8 | [
"Chemistry"
] | 128 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,461 | https://en.wikipedia.org/wiki/VN1R2 | Vomeronasal type-1 receptor 2 is a protein that in humans is encoded by the VN1R2 gene.
References
Further reading
G protein-coupled receptors | VN1R2 | [
"Chemistry"
] | 36 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,478 | https://en.wikipedia.org/wiki/VN1R3 | Vomeronasal type-1 receptor 3 is a protein that is encoded by the VN1R3 gene in humans.
References
Further reading
G protein-coupled receptors | VN1R3 | [
"Chemistry"
] | 36 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,488 | https://en.wikipedia.org/wiki/VN1R4 | Vomeronasal type-1 receptor 4 is a protein that in humans is encoded by the VN1R4 gene.
References
Further reading
G protein-coupled receptors | VN1R4 | [
"Chemistry"
] | 36 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,494 | https://en.wikipedia.org/wiki/VN1R5 | Vomeronasal type-1 receptor 5 is a protein that in humans is encoded by the VN1R5 gene.
References
Further reading
G protein-coupled receptors | VN1R5 | [
"Chemistry"
] | 36 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,502 | https://en.wikipedia.org/wiki/TAAR6 | Trace amine associated receptor 6, also known as TAAR6, is a protein which in humans is encoded by the TAAR6 gene.
Function
TAAR6 belongs to the trace amine-associated receptor family. Trace amines are endogenous amine compounds that are chemically similar to classic biogenic amines like dopamine, norepinephrine, serotonin, and histamine. Trace amines were thought to be 'false transmitters' that displace classic biogenic amines from their storage and act on transporters in a fashion similar to the amphetamines, but the identification of brain receptors specific to trace amines indicates that they also have effects of their own. RNA expression analysis shows hTAAR6 is expressed in the hippocampus, where murine TAAR receptors have been shown to be involved with neurogenesis.
Computational modeling suggests TAAR6 can bind to the foul smelling compounds produced by rotting flesh, putrescine and cadaverine.
TAAR6 mutant mice show behavioral differences compared with wild-type mice. They also have elevated serotonin levels in several brain regions and an enhanced hypothermic response to the 5-HT1A receptor agonist 8-OH-DPAT.
References
Further reading
G protein-coupled receptors | TAAR6 | [
"Chemistry"
] | 271 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,548 | https://en.wikipedia.org/wiki/Relaxin/insulin-like%20family%20peptide%20receptor%204 | Relaxin/insulin-like family peptide receptor 4, also known as RXFP4, is a human G-protein coupled receptor.
Function
GPR100 is a member of the rhodopsin family of G protein-coupled receptors (GPRs) (Fredriksson et al., 2003).[supplied by OMIM]
See also
Relaxin receptor
References
External links
Further reading
G protein-coupled receptors | Relaxin/insulin-like family peptide receptor 4 | [
"Chemistry"
] | 86 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,614 | https://en.wikipedia.org/wiki/Frank%E2%80%93Read%20source | In materials science, a Frank–Read source is a mechanism explaining the generation of multiple dislocations in specific well-spaced slip planes in crystals when they are deformed. When a crystal is deformed, in order for slip to occur, dislocations must be generated in the material. This implies that, during deformation, dislocations must be primarily generated in these planes. Cold working of metal increases the number of dislocations by the Frank–Read mechanism. Higher dislocation density increases yield strength and causes work hardening of metals.
The mechanism of dislocation generation was proposed by and named after British physicist Charles Frank and Thornton Read.
In 2024, Cheng Long and coworkers demonstrated that the Frank-Read mechanism can generate disclination loops in nematic liquid crystals. This finding suggests that the Frank-Read mechanism may arise in a broader class of materials containing topological defect lines.
History
Charles Frank detailed the history of the discovery from his perspective in Proceedings of the Royal Society in 1980.
In 1950 Charles Frank, who was then a research fellow in the physics department at the University of Bristol, visited the United States to participate in a conference on crystal plasticity in Pittsburgh. Frank arrived in the United States well in advance of the conference to spend time at a naval laboratory and to give a lecture at Cornell University. When, during his travels in Pennsylvania, Frank visited Pittsburgh, he received a letter from fellow scientist Jock Eshelby suggesting that he read a recent paper by Gunther Leibfried. Frank was supposed to board a train to Cornell to give his lecture at Cornell, but before departing for Cornell he went to the library at Carnegie Institute of Technology to obtain a copy of the paper. The library did not yet have the journal with Leibfried's paper, but the staff at the library believed that the journal could be in the recently arrived package from Germany. Frank decided to wait for the library to open the package, which did indeed contain the journal. Upon reading the paper he took a train to Cornell, where he was told to pass the time until 5:00, as the faculty was in meeting. Frank decided to take a walk between 3:00 and 5:00. During those two hours, while considering the Leibfried paper, he formulated the theory for what was later named the Frank–Read source.
A couple of days later, he traveled to the conference on crystal plasticity in Pittsburgh where he ran into Thornton Read in the hotel lobby. Upon encountering each other, the two scientists immediately discovered that they had come up with the same idea for dislocation generation almost simultaneously (Frank during his walk at Cornell, and Thornton Read during tea the previous Wednesday) and decided to write a joint paper on the topic. The mechanism for dislocation generation described in that paper is now known as the Frank–Read source.
Mechanism
The Frank–Read source is a mechanism based on dislocation multiplication in a slip plane under shear stress.
Consider a straight dislocation in a crystal slip plane with its two ends, A and B, pinned. If a shear stress $\tau$ is exerted on the slip plane, then a force $F = \tau b x$, where $b$ is the Burgers vector of the dislocation and $x$ is the distance between the pinning sites A and B, is exerted on the dislocation line as a result of the shear stress. This force acts perpendicularly to the line, inducing the dislocation to lengthen and curve into an arc.
The bending force caused by the shear stress is opposed by the line tension of the dislocation, which acts on each end of the dislocation along the direction of the dislocation line away from A and B with a magnitude of $T = \frac{Gb^2}{2}$, where $G$ is the shear modulus. If the dislocation bends, the ends of the dislocation make an angle with the horizontal between A and B, which gives the line tensions acting along the ends a vertical component acting directly against the force induced by the shear stress. If sufficient shear stress is applied and the dislocation bends, this vertical component of the line tensions grows as the dislocation approaches a semicircular shape.
When the dislocation becomes a semicircle, all of the line tension is acting against the bending force induced by the shear stress, because the line tension is then perpendicular to the horizontal between A and B. For the dislocation to reach this point, the equation

$\tau b x = 2T = Gb^2$

must be satisfied, and from this we can solve for the critical shear stress:

$\tau = \frac{Gb}{x}$
This is the stress required to generate dislocation from a Frank–Read source. If the shear stress increases any further and the dislocation passes the semicircular equilibrium state, it will spontaneously continue to bend and grow, spiraling around the A and B pinning points, until the segments spiraling around the A and B pinning points collide and cancel. The process results in a dislocation loop around A and B in the slip plane which expands under continued shear stress, and also in a new dislocation line between A and B which, under renewed or continued shear, can continue to generate dislocation loops in the manner just described.
A Frank–Read source can thus generate many dislocations in a plane in a crystal under applied stress. The Frank–Read source mechanism explains why dislocations are primarily generated on certain slip planes: dislocations are primarily generated in just those planes containing Frank–Read sources. It is important to note that if the shear stress does not exceed

$\tau = \frac{Gb}{x}$

and the dislocation does not bend past the semicircular equilibrium state, it will not form a dislocation loop and will instead revert to its original state.
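As a minimal numerical sketch of this criterion, the snippet below evaluates the critical stress $\tau = Gb/x$ for a few pinning-point spacings; the material constants used (roughly those of copper) are illustrative assumptions, not values from the text.

```python
# Critical shear stress to activate a Frank-Read source: tau = G * b / x.
G = 48e9      # shear modulus, Pa (illustrative value for copper)
b = 0.256e-9  # Burgers vector magnitude, m (illustrative value for copper)

for x_um in (0.1, 1.0, 10.0):  # pinning-point spacing in micrometres
    x = x_um * 1e-6
    tau = G * b / x  # critical shear stress, Pa
    print(f"x = {x_um:5.1f} um  ->  tau = {tau / 1e6:8.2f} MPa")
```

The inverse dependence on $x$ shows why widely spaced pinning points act as sources at much lower applied stresses.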
References
Materials science
de:Frank-Read-Quelle | Frank–Read source | [
"Physics",
"Materials_science",
"Engineering"
] | 1,190 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
14,446,628 | https://en.wikipedia.org/wiki/GPR149 | Probable G-protein coupled receptor 149 is a protein that in humans is encoded by the GPR149 gene.
References
G protein-coupled receptors | GPR149 | [
"Chemistry"
] | 31 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,658 | https://en.wikipedia.org/wiki/GPR144 | Probable G-protein coupled receptor 144 is a protein that in humans is encoded by the GPR144 gene. This gene encodes a member of the adhesion-GPCR family of receptors. Family members are characterised by an extended extracellular region with a variable number of protein domains coupled to a TM7 domain via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
References
Further reading
G protein-coupled receptors | GPR144 | [
"Chemistry"
] | 94 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,701 | https://en.wikipedia.org/wiki/GPR141 | Probable G-protein coupled receptor 141 is a protein that in humans is encoded by the GPR141 gene.
GPR141 is a member of the rhodopsin family of G protein-coupled receptors (GPRs).
References
G protein-coupled receptors | GPR141 | [
"Chemistry"
] | 57 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,726 | https://en.wikipedia.org/wiki/GPR153 | Probable G-protein coupled receptor 153 is a protein that in humans is encoded by the GPR153 gene.
References
Further reading
G protein-coupled receptors | GPR153 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,446,862 | https://en.wikipedia.org/wiki/Heddle | A heddle or heald is an integral part of a loom. Each thread in the warp passes through a heddle, which is used to separate the warp threads for the passage of the weft. The typical heddle is made of cord or wire and is suspended on a shaft of a loom. Each heddle has an eye in the center where the warp is threaded through. As there is one heddle for each thread of the warp, there can be nearly a thousand heddles used for fine or wide warps. A handwoven tea-towel will generally have between 300 and 400 warp threads and thus use that many heddles.
In weaving, the warp threads are moved up or down by the shaft. This is achieved because each thread of the warp goes through a heddle on a shaft. When the shaft is raised the heddles are too, and thus the warp threads threaded through the heddles are raised. Heddles can be either equally or unequally distributed on the shafts, depending on the pattern to be woven. In a plain weave or twill, for example, the heddles are equally distributed.
The warp is threaded through heddles on different shafts in order to obtain different weave structures. For a plain weave on a loom with two shafts, for example, the first thread would go through the first heddle on the first shaft, and then the next thread through the first heddle on the second shaft. The third warp thread would be threaded through the second heddle on the first shaft, and so on. In this manner the heddles allow for the grouping of the warp threads into two groups, one group that is threaded through heddles on the first shaft, and the other on the second shaft.
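As a purely illustrative sketch of this alternating threading rule (the function name and shaft numbering below are invented for the example, not taken from any weaving software):

```python
def plain_weave_threading(num_warp_threads, num_shafts=2):
    """Assign each warp thread to a shaft in simple rotation,
    as described above for plain weave on a two-shaft loom."""
    return [i % num_shafts + 1 for i in range(num_warp_threads)]

# Threads 1..8 on a two-shaft loom alternate between shafts 1 and 2,
# so raising shaft 1 lifts every other warp thread.
print(plain_weave_threading(8))  # [1, 2, 1, 2, 1, 2, 1, 2]
```

Raising all heddles on one shaft then lifts exactly the threads assigned to it, producing the two groups of warp threads described above.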
While the majority of heddles are as described, this style of heddle has derived from older styles, several of which are still in use. Rigid heddle looms, for example, instead of having one heddle for each thread, have a shaft with the 'heddles' fixed, and all threads go through every shaft.
Metal heddles
Within wire heddles there is a large variety in quality. Heddles should have a smooth eye, with no sharp edges to either catch or fray (and thus weaken) the warp. The warp must be able to slide through the heddle without impairment. The heddle should also be light and not bulky.
There are three common types of metal heddles: wire, inserted eye, and flat steel. The inserted eye are considered to be the best, as they have a smooth eye with no rough ends to catch the warp. Wire heddles are second in quality, followed by the flat steel. Wire heddles look much like the inserted eye heddles, but where in the inserted eye there is a circle of metal for the eye, the wire ones are simply twisted at the top and bottom. The flat metal heddles are considered the poorest in quality as they are heavier and bulkier, as well as not being as smooth. They are a flat piece of steel, with the ends rotated slightly so that the flat side is at an angle of 45 degrees to the shaft. The eye is simply a hole cut in the middle of the piece of metal.
String heddles
Traditional heddles were made of cord. However, cord deteriorates with time and creates friction between the warp and the heddle that can damage the warp. Today, traditional cord heddles are mainly used by historical reenactors.
A very simple string heddle can be made with a series of five knots in a doubled length of cord, which creates five loops. Of these loops, the important ones are the two loops on the ends and the loop in the center. The loops on the ends are used to stretch the heddle between the top and bottom bars of a shaft and are typically just large enough for the heddle to slide along the shaft. The center loop is the eye through which a warp thread is passed and is placed in the center of the heddle. String heddles can also be crocheted, and come in many different forms.
Some modern hand weavers use machine-crocheted polyester heddles. These synthetic heddles minimize some of the problems with traditional knotted string heddles. They are used as an alternative to metal heddles to lessen the weight of the shafts.
Inkle looms
Inkle loom heddles are generally made of string and consist of a simple loop. Alternating warp threads pass through a heddle, as in a rigid heddle loom.
Tapestry loom
Tapestry loom heddles are generally made of string. They consist of a loop of string with an eye at one end for the warp thread and a loop at the other for attaching to a heddle bar. See Loom#Heddle-bar.
Repair heddles
A repair heddle can be used if a heddle breaks, which is rare, or when the loom has been warped incorrectly. If the weaver finds a mistake in the pattern, instead of rethreading all of the threads, a repair heddle can be slipped onto the shaft in the correct location. Thus repair heddles have a method to open the bottom and top loop that holds them onto the shaft. Repair heddles can save a lot of time in fixing a mistake, however they are bulky, in general, and catch on the other heddles.
Rigid heddles
In rigid heddle looms there is typically a single shaft, with the heddles fixed in place in the shaft. The warp threads pass alternately through a heddle and through a space between the heddles, so that raising the shaft will raise half the threads (those passing through the heddles), and lowering the shaft will lower the same threads—the threads passing through the spaces between the heddles remain in place.
Rigid heddles are thus very different from the heddle in common use, though the single heddle derived from the rigid heddle. The advantage of non-rigid heddles is that the weaver has more freedom, and can create a wider variety of fabrics. Rigid heddle looms resemble the standard floor loom in appearance.
Single and double heddle looms
Single and double heddle looms are types of rigid heddle loom, in that the heddles are all together. Heddles are normally suspended above the loom. The weaver operates them by pedals and works while seated.
Among hand woven African textiles, single-heddle looms are in wide use among weaving regions of Africa. Mounting position varies according to local custom. Double-heddle looms are used in West Africa, Ethiopia and in Madagascar for the production of lamba cloth.
See also
Loom#Shedding methods
References
External links
Weaving equipment | Heddle | [
"Engineering"
] | 1,415 | [
"Weaving equipment"
] |
14,446,959 | https://en.wikipedia.org/wiki/GPR152 | Probable G-protein coupled receptor 152 is a protein that in humans is encoded by the GPR152 gene.
References
Further reading
G protein-coupled receptors | GPR152 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,447,511 | https://en.wikipedia.org/wiki/Well%20car | A well car, also known as a double-stack car (or also intermodal car/container car), is a type of railroad car specially designed to carry intermodal containers (shipping containers) used in intermodal freight transport. The "well" is a depressed section that sits close to the rails between the wheel trucks of the car, allowing a container to be carried lower than on a traditional flatcar. This makes it possible to carry a stack of two containers per unit on railway lines (double-stack rail transport) wherever the structure gauge assures sufficient clearance.
The top container is secured to the bottom container either by a bulkhead built into the car — possible when bottom and top containers are the same dimensions, or through the use of inter-box connectors (IBC). Four IBCs are needed per well car. In the terminal there are four steps: unlock and lift off the top containers of an inbound train, remove the bottom containers, insert outbound bottom containers, lock assembly after top containers emplaced. Generally this is done car-by-car unless multiple crane apparatus are employed.
Advantages of using well cars include increased stability due to the lower center of gravity of the loads, lower tare weight, and in the case of articulated units, reduced slack action.
Well cars are most common in North America and Australia, where intermodal traffic is heavy and electrification is less widespread, so overhead clearances are typically more manageable. In India, double-stacking of containers is done on flatcars under specially raised catenary, because the wider gauge permits more height while keeping the centre of gravity low.
History
Southern Pacific Railroad (SP), along with SeaLand, devised the first double-stack intermodal car in 1977. SP then designed the first car with ACF Industries that same year. At first it was slow to become an industry standard, then in 1984 American President Lines started working with the Thrall Company to develop a refined well car and with the Union Pacific to operate a train service using the new well cars. That same year, the first all "double stack" train left Los Angeles for South Kearny, New Jersey, under the name of "Stacktrain" rail service. Along the way the train transferred from the UP to CNW and then to Conrail.
Multiple unit cars
Each unit of a double-stack car contains a single well; such cars are often constructed as three to five units connected by articulated connectors, with each intermediate connector supported by the centerplate of a single shared truck.
Also, in a number of cases multiple single-well cars (usually 3 or 5) are connected by drawbars and share a single reporting mark. Alternatively, the multiple single-well cars may share single trucks between adjacent units.
On both types of multiple-unit cars, the units are typically distinguished by letters, with the unit on one end being the "A" unit, and the unit on the other end being the "B" unit. Middle units are labeled starting with "C", and going up to "E" for five-unit cars starting from the "B" unit and increasing towards the "A" unit.
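To make this lettering convention concrete, here is a small sketch; the function is hypothetical, written only to mirror the convention described above.

```python
def well_car_unit_labels(num_units):
    """Return unit labels in physical order from the 'A' end to the 'B' end.
    Middle units are lettered from 'C' upward starting beside the 'B' unit
    and increasing towards the 'A' unit."""
    middle = [chr(ord("C") + i) for i in range(num_units - 2)]
    return ["A"] + middle[::-1] + ["B"]

print(well_car_unit_labels(3))  # ['A', 'C', 'B']
print(well_car_unit_labels(5))  # ['A', 'E', 'D', 'C', 'B']
```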
Autonomous trains and terminals have been proposed.
Carrying capacity
Double-stack well cars come in a number of sizes, related to the standard sizes of the containers they are designed to carry. Well lengths of 40 ft, 48 ft, and 53 ft are most common, and other well lengths also exist. (The sizes of wells are frequently marked in large letters on the sides of cars to assist yard workers in locating suitable equipment for freight loads.)
Larger containers (45 ft or up) are often placed on top of smaller containers fitting in the available wells to efficiently utilize all available space. All wells are also capable of carrying two 20 ft ISO containers in the bottom position.
Some double-stack well cars have also been equipped with hitches at each end that allow them to carry semi-trailers as well as containers. These are known as "all-purpose" well cars.
Articulated well cars typically have a fixed weight capacity per well. Highway weight limits in the US restrict the weight of most loaded containers, so this capacity is adequate for two stacked containers. Some single-well cars have capacity for two fully loaded containers.
Econo Stack or Twin Stack well car
Econo Stack (a brand name of Gunderson) well cars are a variation of conventional well cars which feature a bulkhead at each end; their main purpose is to give the double-stacked containers more support. A disadvantage is that they do not allow 53-foot containers to be stacked on top; however, 45-foot containers still fit and can be stacked on top. As the empty weight of bulkhead cars is significantly higher than that of other well cars, they are now unpopular with railroads.
Gallery
Usage
Australia – double-stack trains operate between Perth, Adelaide, Darwin and Parkes, NSW, with sufficient clearances. The Inland Railway between Melbourne and Brisbane was being built for operation of double-stacked trains using well cars.
China – double-stacked container trains run under 25 kV AC overhead lines using X2H and X2K type well cars manufactured by CRRC. Initial tests were done with a standard container and a reduced-height container on top, later increasing to a hi-cube with a standard container on top. Even after increasing the height of the overhead wire, it is not possible to run a stack of two hi-cube containers on the electrified lines, even in well cars.
Kenya – The Mombasa-Nairobi standard gauge railway operates double-stacked trains using X2K type well cars manufactured by CRRC, the first such trains being launched on October 1, 2018.
Panama – the Panama Canal Railway runs double-stack trains using well cars manufactured by Gunderson Inc.
Saudi Arabia – Saudi Railways Organization line to Dammam
United Kingdom – The small structure gauges, and consequently small loading gauges, on British railways mean that intermodal well wagons are required to transport taller intermodal containers on routes where the loading gauge is W9 or smaller.
Choke points
Low bridges and narrow tunnels in various locations prevent the operation of double-stack trains until costly upgrades are made. Some Class I railroad companies in the U.S. have initiated improvement programs to remove such obstructions. Examples include the Heartland Corridor (Norfolk Southern Railway) and National Gateway (CSX Transportation).
See also
Kangourou wagon
Lowmac
Pocket wagon
Slack action
Tiphook
Well wagon
References
Magazine articles
Mainline Modeler:
Fortenberry, Curt & Bill McKean. - "APL Container Car". - February 1987. - p.65-69.
Fortenberry, Curt & Robert L. Hundman. - "APL container car part II the brake system". - March 1987. - p.78-81.
Hundman, Robert L., & Curt Fortenberry. - "APL 45-foot container car". - May 1987. - p.54-57.
Model Railroader:
Durrenberger, Cyril. - "SP/ACF double stack cars". - October 1983. - p.83-93.
Model Railroading:
Bontrager, David A. - "Articulated double stacks: a prototype overview". - June 1993. p.24-29.
Bontrager, David A. - "The Newest Prototype Well Cars: An Abundance of Kitbashing Possibilities". August 1997. - p.46-49.
Casdorph, David G. and Ed McCaslin. - "Gunderson's Husky-Stack: The Prototype and Detailing A-Line's HO Model". - October 1995. - p.32-37.
Casdorph, David G. - "NSC 53' Drawbarred Well Car Roster and Pictorial". - August 2002. - p.30-33
Geiger, Doug. - "Thrall Double-Stacks: Three-Well DTTX Drawbar-Connected Car". - October 1994. - p.50-55.
Geiger, Doug. - "Gunderson Husky Stack Three-Well BN Drawbar-Connected Car". - July 1995. - p.48-53.
Geiger, Doug. - "Gunderson Maxi-Stack IIIs Part I: The Prototype". - December 1995. - p.58-63.
Geiger, Doug. - "Maxi-Stack Well Car Part I: The Prototype". - April 1997. - p.28-31.
Mansfield, Jim. - "Thrall Five-Unit Double-Stack Car - Series TWG50J". - October 1993. - p.19-23.
Mansfield, Jim. - "Thrall Five-Unit Double-Stack Car - Series APLX 5000". - November 1993. - p.24-25, 27-31.
Railroad Model Craftsman:
Panza, Jim & Chuck Yungkurth. - "Thrall's double-stack cars". - January 1989. - p.89-98.
Panza, Jim & Bruce Keating. - "The Gunderson Husky-Stack well car". - July 1992. - p.71-75.
Panza, Jim & William Halliar. - "Thrall stand-alone and drawbar connected well cars". - October 1992. - p.64-68.
External links
Freight Cars
http://people.hofstra.edu/geotrans/eng/ch3en/conc3en/pbdblstk.html AAR Plate "H" loading gauge diagrams compared to UIC (pdf & AutoCAD)
A Partnership of Two Old Rivals, Time magazine, June 7, 1954
Guide to Rail Cars
Association of American Railroads Mechanical Division, page 238
Greenbrier 53’ All-Purpose double-stack well car
Freight rolling stock
Intermodal transport | Well car | [
"Physics"
] | 2,029 | [
"Physical systems",
"Transport",
"Intermodal transport"
] |
14,447,758 | https://en.wikipedia.org/wiki/Generic%20Bootstrapping%20Architecture | Generic Bootstrapping Architecture (GBA) is a technology that enables the authentication of a user. This authentication is possible if the user owns a valid identity on an HLR (Home Location Register) or on an HSS (Home Subscriber Server).
GBA is standardized by 3GPP (http://www.3gpp.org/ftp/Specs/html-info/33220.htm). User authentication is instantiated by a shared secret: one copy resides in the smartcard, for example a SIM card inside a mobile phone, and the other on the HLR/HSS.
GBA authenticates by making a network component challenge the smartcard and verify that the answer is the one predicted by the HLR/HSS.
Instead of asking the service provider to trust the BSF and rely on it for every authentication request, the BSF establishes a shared secret between the SIM card and the service provider. This shared secret is limited in time and to a specific domain.
Strong points
This solution has some of the strong points of certificates and shared secrets without some of their weaknesses:
- There is no need for a user enrollment phase or secure deployment of keys, which makes this solution very low-cost compared to PKI.
- The authentication method is easy to integrate into terminals and service providers, as it is based on HTTP's well-known "Digest access authentication". Every web server already implements HTTP digest authentication, and the effort to implement GBA on top of digest authentication is minimal. For example, it was implemented on SimpleSAMLphp (http://rnd.feide.no/simplesamlphp) in about 500 lines of PHP code, of which only a few tens of lines are service-provider specific, making it easy to port to another web site.
- On the device side, what is needed is:
A web browser (in fact, an HTTP client) implementing digest authentication and the special case signalled by a "3gpp" string in the HTTP header; see the sketch after this list.
A means to dialog with the smartcard and sign the challenge sent by the BSF; either Bluetooth SAP or a Java or native application could be used to serve the request coming from the browser.
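As a sketch of what the device-side request on the Ua interface might look like, the snippet below advertises GBA support and authenticates with HTTP Digest. The URL, B-TID, and key value are placeholders, and using the B-TID as the username with the base64-encoded NAF key as the password follows 3GPP conventions that should be checked against TS 24.109/TS 33.222; this is not a complete GBA client.

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: in a real client the B-TID and Ks_NAF come from bootstrapping
# with the BSF and from the smartcard, not from constants.
b_tid = "AAAAAAAA@bsf.operator.example"
ks_naf_b64 = "c2VjcmV0LWtleS1mcm9tLXNtYXJ0Y2FyZA=="

resp = requests.get(
    "https://naf.operator.example/service",
    headers={"User-Agent": "example-client (3gpp-gba)"},  # signals GBA capability
    auth=HTTPDigestAuth(b_tid, ks_naf_b64),  # standard HTTP Digest exchange
)
print(resp.status_code)
```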
Technical overview
There are two ways to use GAA (Generic Authentication Architecture).
The first, GBA, is based on a shared secret between the client and server
The second, SSC, is based on public-private key pairs and digital certificates.
In the shared-secret case, the customer and the operator are first mutually authenticated through 3G Authentication and Key Agreement (AKA), and they agree on session keys that can afterwards be used between the client and the services the customer wants to use.
This is called bootstrapping.
After that, the services can retrieve the session keys from the operator, and the keys can be used in an application-specific protocol between the client and the services.
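The following is a simplified sketch of the idea behind deriving a per-service session key from the bootstrapped secret (Ks = CK || IK). The exact parameter encoding and key derivation function are specified in 3GPP TS 33.220 Annex B; the HMAC-SHA-256 construction and the function name below are illustrative assumptions only.

```python
import hashlib
import hmac

def derive_service_key(ck: bytes, ik: bytes, rand: bytes,
                       impi: bytes, naf_id: bytes) -> bytes:
    """Toy NAF-key derivation: bind the bootstrapped key Ks to one service."""
    ks = ck + ik   # bootstrapped master key
    s = b"gba-me"  # static label for ME-based GBA
    for param in (rand, impi, naf_id):  # service-specific inputs
        s += len(param).to_bytes(2, "big") + param
    return hmac.new(ks, s, hashlib.sha256).digest()

ks_naf = derive_service_key(b"\x01" * 16, b"\x02" * 16, b"\x03" * 16,
                            b"user@operator.example", b"naf.operator.example")
print(ks_naf.hex())
```

Because the derivation includes the NAF identity, each service provider receives a different key, so a compromise at one service exposes neither the master secret nor other services' keys.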
The GAA architecture consists of several network entities and the interfaces between them; in the original figure, optional entities and network borders are drawn with dotted lines. The User Equipment (UE) is, for example, the user's mobile phone. The UE and the Bootstrapping Server Function (BSF) mutually authenticate over the Ub [2] interface, using the Digest AKA protocol. The UE also communicates with the Network Application Functions (NAFs), which are the application servers, over the Ua [4] interface, which can use any application-specific protocol necessary.
The BSF retrieves subscriber data from the Home Subscriber Server (HSS) over the Zh [3] interface, which uses the Diameter Base Protocol. If there are several HSSs in the network, the BSF must first know which one to use; this can be done either by configuring a pre-defined HSS in the BSF or by querying the Subscriber Locator Function (SLF).
NAFs retrieve the session keys from the BSF over the Zn [5] interface, which also uses the Diameter Base Protocol. If the NAF is not in the home network, it must use a Zn-proxy to contact the BSF.
Uses
The SPICE project developed an extended use case named "split terminal", where a user on a PC can authenticate with their mobile phone: http://www.ist-spice.org/demos/demo3.htm. The NAF was developed on SimpleSAMLphp, and a Firefox extension was developed to process the GBA digest authentication request from the BSF. The Bluetooth SIM Access Profile was used between the Firefox browser and the mobile phone. Later, a partner developed a "zero installation" concept.
The research institute Fraunhofer FOKUS developed an OpenID extension for Firefox which uses GBA authentication (presentation at ICIN 2008 by Peter Weik).
The Open Mobile Terminal Platform (http://www.omtp.org) references GBA in its Advanced Trusted Environment: OMTP TR1 recommendation, first released in May 2008.
Despite the many advantages and potential uses of GBA, its implementation in handsets has been limited since its standardization in 2006. Most notably, GBA was implemented in Symbian-based handsets.
References
Cryptographic protocols
Mobile technology | Generic Bootstrapping Architecture | [
"Technology"
] | 1,099 | [
"nan"
] |
14,448,011 | https://en.wikipedia.org/wiki/Double-deck%20aircraft | A double-deck aircraft has two decks for passengers; the second deck may be only a partial deck, and may be above or below the main deck. Most commercial aircraft have one passenger deck and one cargo deck for luggage and ULD containers, but a few have two decks for passengers, typically above or below a third deck for cargo.
History
Many early flying boat airliners, such as the Boeing 314 Clipper and Short Sandringham, had two decks. Following World War II, the Stratocruiser, a partially double-decked derivative of the B-29 Superfortress, became popular with airlines around the world.
The first full double-deck aircraft was the French Breguet Deux-Ponts, in service from 1953. The first partial double-deck jet airliner was the widebody Boeing 747, in service from 1970, with the top deck smaller than the main deck. Boeing originally designed the distinctive 747 bubble top with air cargo usage in mind: the small top deck housed the cockpit and a few passengers, leaving the nose free for doors with unobstructed access to the full length of the hold. Most 747s are passenger jets, and a small percentage are cargo jets with nose doors.
The first full double-deck jet airliner is the Airbus A380, which has two passenger decks extending the full length of the fuselage, as well as a full-length lower third deck for cargo. It entered regular service in late-October 2007.
List of double-deck aircraft
Double-deck flying boats
Latécoère 521/522
Martin M-130
Latécoère 631
Sud-Est SE.200 Amphitrite
Boeing 314 Clipper
Dornier Do-X
Short Sandringham
Short Empire C-Class and the related G-class
Saunders-Roe Princess - did not enter service.
Partial second passenger deck
Caproni Ca.48/58
Extra seats on top of the passenger cabin.
Airbus A330 and Airbus A340
Optional lower deck lavatories and crew rest
Boeing 377 Stratocruiser
Lower deck could be configured for lounge areas or additional seating
Boeing 747
Partial upper deck lounge areas or seating
Optional upper deck crew rest and galleys
Boeing 767
Optional lower level crew rest area sleeps six
Boeing 777
Optional lower deck lavatories and galley
Optional upper deck crew rest
Junkers G.38
Ilyushin Il-86
Lower deck galley
Lower deck "self loading luggage storage"
Lockheed L-1011 Tristar
Lower deck galley
Lower deck lounge (Pacific Southwest Airlines) (LTU International)
McDonnell Douglas DC-10
Lower deck galleys
Tupolev Tu-114
Lower deck galleys.
Lower deck aircrew rest area.
Full second passenger deck
Breguet 761, 763 and 765
Airbus A380
Cargo aircraft with a separate passenger deck
Antonov An-225 Mriya
Antonov An-124 Ruslan
Lockheed C-5 Galaxy
Boeing C-97 Stratofreighter
Douglas C-124 Globemaster II
Short Belfast
Lockheed R6V Constitution
Blackburn Beverley - military transport, the main deck could be used for cargo or troops
Double-deck cargo aircraft
Aviation Traders Carvair
Armstrong Whitworth AW.660 Argosy
Bristol Freighter
Convair XC-99
Douglas C-124 Globemaster II
Canceled projects for double-deck passenger aircraft
Bach Super Transport
McDonnell Douglas MD-12
Sukhoi KR-860
Vickers VC-10 Superb: see
See also
Large aircraft
Wide-body aircraft
References
Aircraft configurations | Double-deck aircraft | [
"Engineering"
] | 706 | [
"Aircraft configurations",
"Aerospace engineering"
] |
14,448,037 | https://en.wikipedia.org/wiki/DaT%20scan | DaT Scan (DaT scan or Dopamine Transporter Scan) commonly refers to a diagnostic method, based on SPECT imaging, for investigating whether there is a loss of dopaminergic neurons in the striatum. The term may also refer to a brand name of the Ioflupane (123I) tracer used for the study. The scan is based on the use of the radiopharmaceutical Ioflupane (123I), which binds to dopamine transporters (DaT). The signal from them is then detected by single-photon emission computed tomography (SPECT), which uses special gamma cameras to create a pictographic representation of the distribution of dopamine transporters in the brain.
DaTSCAN is indicated in cases of tremor when its origin is uncertain. Although this method can distinguish essential tremor from Parkinson's syndrome, it is unable to distinguish between Parkinson's disease, Dementia with Lewy bodies, Parkinson's disease dementia, multiple system atrophy or progressive supranuclear palsy.
There is evidence that DaTSCAN is accurate in diagnosing early Parkinson's.
Procedure
At the beginning the patient takes two iodine tablets and waits for one hour. These tablets are important because they prevent the accumulation of radioactive substances in the thyroid gland. After one hour, the patient receives an injection in the shoulder containing the radiopharmaceutical, and then waits for four hours while the tracer concentrates in the brain. The patient is then scanned by a gamma camera positioned around the head. The whole examination lasts about 30–45 minutes and is non-invasive.
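As a rough worked example of the timescales involved (the 13.2-hour half-life of iodine-123 is a commonly cited value, not stated in this article), radioactive decay follows

\[
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad
N(4\,\mathrm{h}) \approx N_0 \times 0.5^{\,4/13.2} \approx 0.81\,N_0 ,
\]

so roughly four-fifths of the activity remains at scan time, while after two days less than a tenth is left.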
If a patient uses certain medications listed below, it is necessary to stop taking them for a few days or weeks before the DaTSCAN, but only after consultation with the patient's doctor.
The examination takes just a few hours, so patients do not need to stay in hospital overnight, but they do need to drink more fluids than usual and urinate more often, which speeds the elimination of the radioactive substances from the body.
Contraindications
pregnancy
breast-feeding
severe renal or hepatic insufficiency
allergy to iodine substances
certain medications – stimulants, drugs acting on noradrenaline, and some antidepressants
Differential diagnosis
Parkinson's disease, multiple system atrophy or progressive supranuclear palsy
Essential tremor
Lewy body disease
References
External links
European Parkinson's Disease Association
DaTSCAN
Patient's view
Neurology
Neuroimaging
Medical physics
Dopamine reuptake inhibitors
Parkinson's disease
3D nuclear medical imaging
Radiobiology | DaT scan | [
"Physics",
"Chemistry",
"Biology"
] | 550 | [
"Radiobiology",
"Radioactivity",
"Applied and interdisciplinary physics",
"Medical physics"
] |
14,448,292 | https://en.wikipedia.org/wiki/Wave%20radar | Wave radar is a type of radar for measuring wind waves. Several instruments based on a variety of different concepts and techniques are available, and these are all often called wave radars. This article (see also Grønlie 2004) gives a brief description of the most common ground-based radar remote sensing techniques.
Instruments based on radar remote sensing techniques have become of particular interest in applications where it is important to avoid direct contact with the water surface and avoid structural interference. A typical case is wave measurements from an offshore platform in deep water, where swift currents could make mooring a wave buoy enormously difficult. Another interesting case is a ship under way, where having instruments in the sea is highly impractical and interference from the ship's hull must be avoided.
Radar remote sensing
Terms and definitions
Basically, there are two different classes of radar remote sensors for ocean waves.
Direct sensors measure some relevant parameter of the wave system directly (such as surface elevation or water particle velocity).
Indirect sensors observe the surface waves via their interaction with some other physical process, such as the radar cross-section of the sea surface.
Microwave radars may be used in two different modes:
The near vertical mode. The radar echo is generated by specular reflections from the sea surface.
The low grazing angle mode. The radar echo is generated by Bragg scattering, hence wind generated surface ripple (capillary waves) must be present. The backscattered signal will be modulated by the large surface gravity waves and the gravity wave information is derived from the modulation of the backscattered signal. An excellent presentation of the theories of microwave remote sensing of the sea surface is given by Plant and Shuler (1980).
The radar footprint (the size of the surface area which is illuminated by the radar) must be small in comparison with all ocean wavelengths of interest. The radar spatial resolution is determined by the bandwidth of the radar signal (see radar signal characteristics) and the beamwidth of the radar antenna.
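As an illustration of this relationship (the standard pulse-radar resolution formula, not a figure from any particular instrument), the range resolution follows from the signal bandwidth B and the speed of light c:

\[
\Delta R = \frac{c}{2B}, \qquad \text{e.g. } B = 150\ \mathrm{MHz} \;\Rightarrow\; \Delta R = \frac{3\times10^{8}\ \mathrm{m/s}}{2 \times 1.5\times10^{8}\ \mathrm{Hz}} = 1\ \mathrm{m},
\]

small enough to resolve typical ocean wavelengths of tens to hundreds of metres.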
The beam of a microwave antenna diverges. Consequently, the resolution decreases with increasing range. For all practical purposes, the beam of an IR radar (laser) does not diverge. Therefore, its resolution is independent of range.
HF radars utilize the Bragg scattering mechanism and always operate at very low grazing angles. Due to the low frequency of operation, the radar waves are backscattered directly from the gravity waves, and surface ripple need not be present.
Radar transceivers may be coherent or non-coherent. Coherent radars measure Doppler modulation as well as amplitude modulation, while non-coherent radars only measure amplitude modulation. Consequently, a non-coherent radar echo contains less information about the sea surface properties. Examples of non-coherent radars are conventional marine navigation radars.
The radar transmitter waveform may be either unmodulated continuous wave, modulated or pulsed. An unmodulated continuous wave radar has no range resolution, but can resolve targets on the basis of different velocity, while a modulated or pulsed radar can resolve echoes from different ranges. The radar waveform plays a very important role in radar theory (Plant and Shuler, 1980).
Factors influencing performance
Mode of operation or measurement geometry (vertical or grazing)
Class of system (direct or indirect)
Frequency of operation
Radar waveform (unmodulated CW or modulated/pulsed)
Type of transceiver (coherent or non-coherent)
Radar antenna properties
Remote sensing techniques
An excellent survey of different radar techniques for remote sensing of waves is given by Tucker (1991).
Microwave rangefinders
Microwave rangefinders also operate in vertical mode at GHz frequencies and are not as affected by fog and water spray as the laser rangefinder. A continuous wave frequency modulated (CWFM) or pulsed radar waveform is normally used to provide range resolution. Since the beam diverges, the linear size of the footprint is directly proportional to range, while the area of the footprint is proportional to the square of range.
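Two standard relations, given here as an illustration rather than as the specification of any instrument named in this article, make both points concrete. For a CWFM (FMCW) radar sweeping a bandwidth B in time T, a target at range R produces a beat frequency f_b; and for an antenna of beamwidth θ, the footprint diameter d and area A grow with range:

\[
f_b = \frac{2RB}{cT}, \qquad d \approx 2R\tan\frac{\theta}{2} \approx R\theta, \qquad A \propto R^{2}.
\]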
One example of a microwave range finder is the Miros SM-094, which is designed to measure waves and water level, including tides. This sensor is used as an air gap (bridge clearance) sensor in NOAA's PORTS system. Another example is the WaveRadar REX, which is a derivative of a Rosemount tank radar.
From data on the elevation of the surface of the water at three or more locations, a directional spectrum of wave height can be computed. The algorithm is similar to the one which generates a directional spectrum from data on heave (vertical motion), pitch and roll at a single location, as provided by a disc-shaped wave buoy. An array of three vertical radars, having footprints at the vertices of a horizontal, equilateral triangle, can provide the necessary data on water surface elevation. “Directional WaveGuide” is a commercial radar system based on this technique. It is available from the Dutch companies Enraf and Radac.
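A minimal sketch of the spectral step involved, under linear wave theory and the conventional definition Hs = 4√m0 (function and parameter names here are illustrative, not from any vendor's software): given the surface-elevation time series a vertical radar records at one footprint, the significant wave height follows from the zeroth moment of the elevation spectrum.

```python
# A minimal sketch (assumptions mine, not from the article): significant
# wave height from one footprint's surface-elevation time series, using
# the standard spectral definition Hs = 4*sqrt(m0) from linear wave theory.
import numpy as np
from scipy.signal import welch

def significant_wave_height(elevation, fs):
    """elevation: surface elevation samples in metres; fs: sampling rate in Hz."""
    f, psd = welch(elevation, fs=fs, nperseg=512)  # elevation power spectrum
    m0 = np.trapz(psd, f)                          # zeroth spectral moment (variance)
    return 4.0 * np.sqrt(m0)                       # Hs in metres

# Synthetic example: an 8 s swell of 1 m amplitude plus measurement noise,
# sampled at 4 Hz for 20 minutes.
t = np.arange(0.0, 1200.0, 0.25)
eta = np.sin(2.0 * np.pi * t / 8.0) + 0.1 * np.random.randn(t.size)
print(significant_wave_height(eta, fs=4.0))        # roughly 2.8 m
```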
Marine navigation radars
Marine navigation radars (X band) provide sea clutter images which contain a pattern resembling a sea wave pattern. By digitizing the radar video signal it can be processed by a digital computer. Sea surface parameters may be calculated on the basis of these digitized images. The marine navigation radar operates in low grazing angle mode and wind generated surface ripple must be present.
The marine navigation radar is non-coherent and is a typical example of an indirect wave sensor, because there is no direct relation between wave height and radar back-scatter modulation amplitude. An empirical method of wave spectrum scaling is normally employed. Marine navigation radar based wave sensors are excellent tools for wave direction measurements. A marine navigation radar may also be a tool for surface current measurements; point measurements of the current vector as well as current maps up to a distance of a few km can be provided (Gangeskar, 2002).
Miros WAVEX has its main area of application as directional wave measurements from moving ships. Another example of a marine radar based system is OceanWaves WaMoS II.
The range gated pulsed Doppler microwave radar
The range gated pulsed Doppler microwave radar operates in low grazing angle mode. By using several antennas it may be used as a directional wave sensor, basically measuring the directional spectrum of the horizontal water particle velocity. The velocity spectrum is directly related to the wave height spectrum by a mathematical model based on linear wave theory and accurate measurements of the wave spectrum can be provided under most conditions. As measurements are taken at a distance from the platform on which it is mounted, the wave field is to a small degree disturbed by interference from the platform structure.
Miros Wave and current radar is the only available wave sensor based on the range gated pulsed Doppler radar technique. This radar also uses the dual frequency technique (see below) to perform point measurements of the surface current vector.
The dual frequency microwave radar
The dual frequency microwave radar transmits two microwave frequencies simultaneously. The frequency separation is chosen to give a “spatial beat” length which is in the range of the water waves of interest. The dual frequency radar may be considered a microwave equivalent of the high frequency (HF) radar (see below). The dual frequency radar is suitable for the measurement of surface current. As far as wave measurements are concerned, the back-scatter processes are too complicated (and not well understood) to allow useful measurement accuracy to be attained.
The HF radar
The HF radars CODAR SeaSonde and Helzel WERA are well established as powerful tools for sea current measurements up to a range of 300 km. They operate in the HF and low VHF frequency bands, corresponding to radar wavelengths in the range of 10 to 300 m. The Doppler shift of the first-order Bragg lines of the radar echo is used to derive sea current estimates in very much the same way as for the dual frequency microwave radar. Two radar installations are normally required, looking at the same patch of the sea surface from different angles. The latest generation of shore-based ocean radar can reach more than 200 km for ocean current mapping and more than 100 km for wave measurements (Helzel WERA). For all ocean radars, the accuracy in range is excellent. With shorter ranges, the range resolution gets finer. The angular resolution and accuracy depend on the antenna array configuration and the applied algorithms (direction finding or beam forming). The WERA system provides the option to use both techniques: the compact version with direction finding, or the array-type antenna system with beam forming methods.
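The underlying first-order Bragg relations, stated here as textbook formulas rather than taken from the cited systems: the resonant ocean wavelength is half the radar wavelength, the deep-water Bragg lines sit at a frequency fixed by gravity, and a radial surface current u_r shifts them:

\[
\lambda_{sea} = \frac{\lambda_{radar}}{2}, \qquad
f_B = \sqrt{\frac{g}{\pi \lambda_{radar}}}, \qquad
\Delta f = \frac{2 u_r}{\lambda_{radar}} .
\]

For a 10 MHz radar (λ_radar = 30 m), this gives f_B ≈ 0.32 Hz.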
Specialized X-Band
The FutureWaves technology was originally developed as an Environmental Ship and Motion Forecasting (ESMF) system for the US Navy's Office of Naval Research (ONR) by General Dynamics' Applied Physical Sciences Corporation. The technology was adapted for release in the commercial market and made its first public appearance at the 2017 Offshore Technology Conference in Houston, Texas.
This technology differs from existing wave forecasting systems by using a customized wave sensing radar capable of measuring backscatter Doppler out to ranges of approximately 5 km. The radar antenna is vertically polarized to enhance the sea-surface backscatter signal. It also uses an innovative radar signal processing scheme that addresses the inherently noisy backscatter signals through a mathematical process termed least squares inversion. This approach applies a highly over-determined filter to the radar data, and rejects radar scans that do not observe incoming waves. The result is an accurate representation of the propagating incident wave field that will force ship motions over a 2-3 minute window. The wave processing algorithms also enable real-time calculation of wave field two-dimensional power spectra and significant wave height similar to that provided by a wave buoy.
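A generic sketch of least squares inversion over over-determined data (the matrix setup, frequencies and noise level are illustrative assumptions, not APS's proprietary processing): model the radar record as a sum of sinusoidal wave components and recover their amplitudes by solving min ||Ax − d||².

```python
# Generic least squares inversion sketch: many noisy samples, few unknowns.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_waves = 400, 12                        # far more samples than unknowns
t = np.linspace(0.0, 120.0, n_obs)              # two minutes of radar samples
freqs = np.linspace(0.05, 0.25, n_waves)        # assumed wave frequencies (Hz)

# Design matrix: cosine and sine columns for each assumed wave component.
A = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
               np.sin(2 * np.pi * freqs * t[:, None])])

x_true = rng.normal(size=2 * n_waves)           # "true" component amplitudes
d = A @ x_true + 0.5 * rng.normal(size=n_obs)   # noisy backscatter observations

x_hat, *_ = np.linalg.lstsq(A, d, rcond=None)   # over-determined LS solution
print(np.abs(x_hat - x_true).max())             # small recovery error
```

Because the system is highly over-determined, the inversion averages down the noise in the individual radar samples, which is the point of the approach described above.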
It also uses a vessel motion prediction process that relies on a pre-calculated force/response database. Dynamic motional degrees of freedom are then represented as a lumped mechanical system whose future motions are predicted by numerically solving a multi-degree-of-freedom, forced, coupled differential equation with initial inertial state provided by vessel motion sensor outputs. The time-domain solution allows for nonlinear forcing mechanisms, such as quadratic roll damping and roll control systems, to be captured in the forecasting.
Finally, it uses the Gravity open architecture middleware solution to integrate the sensor feeds, processing subroutines and user displays. This open architecture approach allows for the implementation of customized operator displays along with physics based models of specific vessels and machinery (e.g. cranes) into the system.
References
External links
Microwave range finders:
Physical Oceanographic Real-Time System (PORTS)
NOAA Technical Report NOS CO-OPS 042; Microwave Air Gap-Bridge Clearance Sensor; Test, Evaluation, and Implementation Report
ESEAS RI, Assessment of accuracy and operational properties of different tide gauge sensors.
The Global Sea Level Observing System
Volume IV of the IOC Manual on Sea Level Measurement and Interpretation
The range gated pulsed Doppler microwave radar:
MIROS System Evaluation during Storm Wind Study II; F.W. Dobson, Fisheries and Oceans Canada, Bedford Institute of Oceanography, Dartmouth, NS, Canada; E. Dunlap ASA Consulting Ltd, Halifax, NS, Canada
X-band based wave sensors:
Radac level, tide and wave monitoring systems
WaMoS II (OceanWaves GmbH)
Remocean
Miros AS
FutureWaves
HF-Radar:
CODAR Ocean Sensors
WERA (Helzel Messtechnik GmbH)
Sea radars
Water waves | Wave radar | [
"Physics",
"Chemistry"
] | 2,320 | [
"Water waves",
"Waves",
"Physical phenomena",
"Fluid dynamics"
] |
14,449,116 | https://en.wikipedia.org/wiki/History%20of%20timekeeping%20devices | The history of timekeeping devices dates back to when ancient civilizations first observed astronomical bodies as they moved across the sky. Devices and methods for keeping time have gradually improved through a series of new inventions, starting with measuring time by continuous processes, such as the flow of liquid in water clocks, to mechanical clocks, and eventually repetitive, oscillatory processes, such as the swing of pendulums. Oscillating timekeepers are used in modern timepieces.
Sundials and water clocks were first used in ancient Egypt and later by the Babylonians, the Greeks and the Chinese. Incense clocks were being used in China by the 6th century. In the medieval period, Islamic water clocks were unrivalled in their sophistication until the mid-14th century. The hourglass, invented in Europe, was one of the few reliable methods of measuring time at sea.
In medieval Europe, purely mechanical clocks were developed after the invention of the bell-striking alarm, used to signal the correct time to ring monastic bells. The weight-driven mechanical clock controlled by the action of a verge and foliot was a synthesis of earlier ideas from European and Islamic science. Mechanical clocks were a major breakthrough, one notably designed and built by Henry de Vick in 1360, which established basic clock design for the next 300 years. Minor developments were added, such as the invention of the mainspring in the early 15th century, which allowed small clocks to be built for the first time.
The next major improvement in clock building, from the 17th century, was the discovery that clocks could be controlled by harmonic oscillators. Leonardo da Vinci had produced the earliest known drawings of a pendulum in 1493–1494, and in 1582 Galileo Galilei had investigated the regular swing of the pendulum, discovering that frequency was only dependent on length, not weight. The pendulum clock, designed and built by Dutch polymath Christiaan Huygens in 1656, was so much more accurate than other kinds of mechanical timekeepers that few verge and foliot mechanisms have survived. Other innovations in timekeeping during this period include inventions for striking clocks, the repeating clock and the deadbeat escapement.
Error factors in early pendulum clocks included temperature variation, a problem tackled during the 18th century by the English clockmakers John Harrison and George Graham. Following the Scilly naval disaster of 1707, after which governments offered a prize to anyone who could discover a way to determine longitude, Harrison built a succession of accurate timepieces, introducing the term chronometer. The electric clock, invented in 1840, was used to control the most accurate pendulum clocks until the 1940s, when quartz timers became the basis for the precise measurement of time and frequency.
The wristwatch, which had been recognised as a valuable military tool during the Boer War, became popular after World War I, in variations including non-magnetic, battery-driven, and solar powered, with quartz, transistors and plastic parts all introduced. Since the early 2010s, smartphones and smartwatches have become the most common timekeeping devices.
The most accurate timekeeping devices in practical use today are atomic clocks, which can be accurate to a few billionths of a second per year and are used to calibrate other clocks and timekeeping instruments.
Continuous timekeeping devices
Ancient civilizations observed astronomical bodies, often the Sun and Moon, to determine time. According to the historian Eric Bruton, Stonehenge is likely to have been the Stone Age equivalent of an astronomical observatory, used for seasonal and annual events such as equinoxes or solstices. As megalithic civilizations left no recorded history, little is known of their timekeeping methods. The Warren Field calendar monument is currently considered to be the oldest lunisolar calendar yet found.
Mesoamericans modified their usual vigesimal (base-20) counting system when dealing with calendars to produce a 360-day year. Aboriginal Australians understood the movement of objects in the sky well, and used their knowledge to construct calendars and aid navigation; most Aboriginal cultures had seasons that were well-defined and determined by natural changes throughout the year, including celestial events. Lunar phases were used to mark shorter periods of time; the Yaraldi of South Australia were one of the few peoples recorded as having a way to measure time during the day, which was divided into seven parts using the position of the Sun.
All timekeepers before the 13th century relied upon methods that used something that moved continuously, and none of these early methods ran at a perfectly steady rate. Devices and methods for keeping time have improved through a long series of new inventions and ideas.
Shadow clocks and sundials
The first devices used for measuring the position of the Sun were shadow clocks, which later developed into the sundial. The oldest known sundial dates from the 19th Dynasty, and was discovered in the Valley of the Kings in 2013. Obelisks could indicate whether it was morning or afternoon, as well as the summer and winter solstices. A kind of shadow clock, similar in shape to a bent T-square, was also developed. It measured the passage of time by the shadow cast by its crossbar, and was oriented eastward in the mornings, then turned around at noon so it could cast its shadow in the opposite direction.
A sundial is referred to in the Bible, in 2 Kings 20:9–11, where Hezekiah, king of Judah during the 8th century BC, is recorded as being healed by the prophet Isaiah and asking for a sign that he would recover.
A clay tablet from the late Babylonian period describes the lengths of shadows at different times of the year. The Babylonian writer Berossos is credited by the Greeks with the invention of a hemispherical sundial hollowed out of stone; the path of the shadow was divided into 12 parts to mark the time. Greek sundials evolved to become highly sophisticated—Ptolemy's Analemma, written in the 2nd century AD, used an early form of trigonometry to derive the position of the Sun from data such as the hour of day and the geographical latitude.
The Romans inherited the sundial from the Greeks. The first sundial in Rome arrived in 264 BC, looted from Catania in Sicily. It introduced the division of the day into the hours of the "horologium", where before the Romans had simply split the day into early morning and forenoon (mane and ante meridiem). Still, there were unexpected astronomical challenges: this clock gave the incorrect time for a century. The mistake was noticed only in 164 BC, when a Roman censor had the dial checked and adjusted for the appropriate latitude.
According to the German historian of astronomy Ernst Zinner, sundials were developed during the 13th century with scales that showed equal hours. The first based on polar time appeared in Germany; an alternative theory proposes that a Damascus sundial measuring in polar time can be dated to 1372. European treatises on sundial design appeared later.
An Egyptian method of determining the time during the night, used from at least 600 BC, was a type of plumb-line called a merkhet. A north–south meridian was created using two merkhets aligned with Polaris, the north pole star. The time was determined by observing particular stars as they crossed the meridian.
The Jantar Mantar in Jaipur, built in 1727 by Jai Singh II, includes the Vrihat Samrat Yantra, an 88-foot (27 m) tall sundial. It can tell local time to an accuracy of about two seconds.
Water clocks
The oldest description of a clepsydra, or water clock, is from the tomb inscription of an early 18th Dynasty Egyptian court official named Amenemhet, who is identified as its inventor. It is assumed that the object described in the inscription is a bowl with markings to indicate the time. The oldest surviving water clock was found in the tomb of pharaoh Amenhotep III (c. 1417–1379 BC). There are no recognised examples in existence of outflowing water clocks from ancient Mesopotamia, but written references have survived.
The introduction of the water clock to China, perhaps from Mesopotamia, occurred as far back as the 2nd millennium BC, during the Shang dynasty, and at the latest by the 1st millennium BC. Around 550 AD, Yin Kui (殷蘷) was the first in China to write of the overflow or constant-level tank in his book "Lou ke fa (漏刻法)". Around 610, two Sui dynasty inventors, Geng Xun (耿詢) and Yuwen Kai (宇文愷), created the first balance clepsydra, with standard positions for the steelyard balance. In 721 the mathematician Yi Xing and government official Liang Lingzan regulated the power of the water driving an astronomical clock, dividing the power into unit impulses so that motion of the planets and stars could be duplicated. In 976, the Song dynasty astronomer Zhang Sixun addressed the problem of the water in clepsydrae freezing in cold weather when he replaced the water with liquid mercury. A water-powered astronomical clock tower was built by the polymath Su Song in 1088, which featured the first known endless power-transmitting chain drive.
The Greek philosophers Anaxagoras and Empedocles both referred to water clocks that were used to enforce time limits or measure the passing of time. The Athenian philosopher Plato is supposed to have invented an alarm clock that used lead balls cascading noisily onto a copper platter to wake his students.
A problem with most clepsydrae was the variation in the flow of water due to the change in fluid pressure, which was addressed from 100 BC when the clock's water container was given a conical shape. They became more sophisticated when innovations such as gongs and moving mechanisms were included. There is strong evidence that the 1st century BC Tower of the Winds in Athens once had a water clock, and a wind vane, as well as the nine vertical sundials still visible on the outside. In Greek tradition, clepsydrae were used in court, a practise later adopted by the Ancient Romans.
Ibn Khalaf al-Muradi in medieval Al-Andalus described a water clock that employed both segmental and epicyclic gearing. Islamic water clocks, which used complex gear trains and included arrays of automata, were unrivalled in their sophistication until the mid-14th century. Liquid-driven mechanisms (using heavy floats and a constant-head system) were developed that enabled water clocks to work at a slower rate. Some have argued that the first known geared clock was instead invented by the great mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes' astronomical clock was also a cuckoo clock, with birds singing and moving every hour; it is considered the first carillon clock, as it played music at the same time as an automaton figure blinked its eyes, surprised by the singing birds. The Archimedes clock worked with a system of four weights, counterweights, and strings regulated by a system of floats in a water container, with siphons that regulated the automatic continuation of the clock. The principles of this type of clock are described by the mathematician and physicist Hero, who says that some of them worked with a chain that turned a gear in the mechanism.
The 12th-century Jayrun Water Clock at the Umayyad Mosque in Damascus was constructed by Muhammad al-Sa'ati, and was later described by his son Ridwan ibn al-Sa'ati in his On the Construction of Clocks and their Use (1203). A sophisticated water-powered astronomical clock, the so-called castle clock, was described by Al-Jazari in his treatise on machines, written in 1206. In 1235, a water-powered clock that "announced the appointed hours of prayer and the time both by day and by night" stood in the entrance hall of the Mustansiriya Madrasah in Baghdad.
Chinese incense clocks
Incense clocks were first used in China around the 6th century, mainly for religious purposes, but also for social gatherings or by scholars. Because the clocks frequently bore Devanagari characters, the American sinologist Edward H. Schafer speculated that incense clocks were invented in India. As incense burns evenly and without a flame, the clocks were safe for indoor use. To mark different hours, differently scented incenses (made from different recipes) were used.
The incense sticks used could be straight or spiralled; the spiralled ones were intended for long periods of use, and often hung from the roofs of homes and temples. Some clocks were designed to drop weights at even intervals.
Incense seal clocks had a disk etched with one or more grooves, into which incense was placed. The length of the trail of incense, directly related to the size of the seal, was the primary factor in determining how long the clock would last; burning for 12 hours required a correspondingly long incense path. The gradual introduction of metal disks, most likely beginning during the Song dynasty, allowed craftsmen to more easily create seals of different sizes, to design and decorate them more aesthetically, and to vary the paths of the grooves to allow for the changing length of the days in the year. As smaller seals became available, incense seal clocks grew in popularity and were often given as gifts.
Astrolabes
Sophisticated timekeeping astrolabes with geared mechanisms were made in Persia. Examples include those built by the polymath Abū Rayhān Bīrūnī in the 11th century and the astronomer Muhammad ibn Abi Bakr al‐Farisi in 1221. A brass and silver astrolabe (which also acts as a calendar) made in Isfahan by al‐Farisi is the earliest surviving machine with its gears still intact. Openings on the back of the astrolabe depict the lunar phases and give the Moon's age; within a zodiacal scale are two concentric rings that show the relative positions of the Sun and the Moon.
Muslim astronomers constructed a variety of highly accurate astronomical clocks for use in their mosques and observatories, such as the astrolabic clock by Ibn al-Shatir in the early 14th century.
Candle clocks and hourglasses
One of the earliest references to a candle clock is in a Chinese poem, written in 520 by You Jianfu, who wrote of the graduated candle being a means of determining time at night. Similar candles were used in Japan until the early 10th century.
The invention of the candle clock was attributed by the Anglo-Saxons to Alfred the Great, king of Wessex (r. 871–899), who used six candles marked at regular intervals, each made from 12 pennyweights of wax and of uniform height and thickness.
The 12th century Muslim inventor Al-Jazari described four different designs for a candle clock in his book Book of Knowledge of Ingenious Mechanical Devices. His so-called "scribe" candle clock was invented to mark the passing of 14 hours of equal length: a precisely engineered mechanism caused a candle of specific dimensions to be slowly pushed upwards, which caused an indicator to move along a scale.
The hourglass was one of the few reliable methods of measuring time at sea, and it has been speculated that it was used on board ships as far back as the 11th century, when it would have complemented the compass as an aid to navigation. The earliest unambiguous evidence of the use of an hourglass appears in the painting Allegory of Good Government, by the Italian artist Ambrogio Lorenzetti, from 1338.
The Portuguese navigator Ferdinand Magellan used 18 hourglasses on each ship during his circumnavigation of the globe in 1522. The hourglass's history in China is unknown, but it does not seem to have been used there before the mid-16th century, as the hourglass implies the use of glassblowing, then an entirely European and Western art.
From the 15th century onwards, hourglasses were used in a wide range of applications at sea, in churches, in industry, and in cooking; they were the first dependable, reusable, reasonably accurate, and easily constructed time-measurement devices. The hourglass took on symbolic meanings, such as that of death, temperance, opportunity, and Father Time, usually represented as a bearded, old man.
History of early oscillating devices in timekeepers
The English word clock first appeared in Middle English. The origin of the word is not known for certain; it may be a borrowing from French or Dutch, and can perhaps be traced to the post-classical Latin clocca ('bell'). 7th-century Irish and 9th-century Germanic sources recorded clock as meaning 'bell'.
Judaism, Christianity and Islam all had times set aside for prayer, although Christians alone were expected to attend prayers at specific hours of the day and night—what the historian Jo Ellen Barnett describes as "a rigid adherence to repetitive prayers said many times a day". The bell-striking alarms warned the monk on duty to toll the monastic bell. His alarm was a timer that used a form of escapement to ring a small bell. This mechanism was the forerunner of the escapement device found in the mechanical clock.
13th century
The first innovations to improve on the accuracy of the hourglass and the water clock occurred in the 10th century, when attempts were made to slow their rate of flow using friction or the force of gravity. The earliest depiction of a clock powered by a hanging weight is from the Bible of St Louis, an illuminated manuscript made between 1226 and 1234 that shows a clock being slowed by water acting on a wheel. The illustration seems to show that weight-driven clocks were invented in western Europe. A treatise written by Robertus Anglicus in 1271 shows that medieval craftsmen were attempting to design a purely mechanical clock (i.e. only driven by gravity) during this period. Such clocks were a synthesis of earlier ideas derived from European and Islamic science, such as gearing systems, weight drives, and striking mechanisms.
In 1250, the artist Villard de Honnecourt illustrated a device that was a step towards the development of the escapement. Another forerunner of the escapement used an early kind of verge mechanism to operate a knocker that continuously struck a bell.
14th century
The invention of the verge and foliot escapement in 1275 was one of the most important inventions in both the history of the clock and the history of technology. It was the first type of regulator in horology. A verge, or vertical shaft, is forced to rotate by a weight-driven crown wheel, but is stopped from rotating freely by a foliot. The foliot, which cannot vibrate freely, swings back and forth, which allows a wheel to rotate one tooth at a time. Although the verge and foliot was an advancement on previous timekeepers, it was impossible to avoid fluctuations in the beat caused by changes in the applied forces—the earliest mechanical clocks were regularly reset using a sundial.
At around the same time as the invention of the escapement, the Florentine poet Dante Alighieri used clock imagery to depict the souls of the blessed in Paradiso, the third part of the Divine Comedy, written in the early part of the 14th century. It may be the first known literary description of a mechanical clock. There are references to house clocks from 1314 onwards; by 1325 the development of the mechanical clock can be assumed to have occurred.
Large mechanical clocks were built that were mounted in towers so as to ring the bell directly. The tower clock of Norwich Cathedral, constructed in 1273 (a payment for a mechanical clock is recorded in that year), is the earliest such large clock known. The clock has not survived. The first clock known to strike regularly on the hour, a clock with a verge and foliot mechanism, is recorded in Milan in 1336. By 1341, clocks driven by weights were familiar enough to be adapted for grain mills, and by 1344 the clock in London's Old St Paul's Cathedral had been replaced by one with an escapement. The foliot was first illustrated by Dondi in 1364, and mentioned by the court historian Jean Froissart in 1369.
The most famous example of a timekeeping device during the medieval period was a clock designed and built by the clockmaker Henry de Vick in 1360, which was said to have varied by up to two hours a day. For the next 300 years, all the improvements in timekeeping were essentially developments based on the principles of de Vick's clock. Between 1348 and 1364, Giovanni Dondi dell'Orologio, the son of Jacopo Dondi, built a complex astrarium in Padua.
During the 14th century, striking clocks appeared with increasing frequency in public spaces, first in Italy, slightly later in France and England—between 1371 and 1380, public clocks were introduced in over 70 European cities. Salisbury Cathedral clock, dating from about 1386, is one of the oldest working clocks in the world, and may be the oldest; it still has most of its original parts. The Wells Cathedral clock, built in 1392, is unique in that it still has its original medieval face. Above the clock are figures which hit the bells, and a set of jousting knights who revolve around a track every 15 minutes.
Later developments
The invention of the mainspring in the early 15th century—a device first used in locks and later in guns—allowed small clocks to be built for the first time. The need for an escapement mechanism that steadily controlled the release of the stored energy led to the development of two devices: the stackfreed (which, although invented in the 15th century, can be documented no earlier than 1535) and the fusee, which originated from medieval weapons such as the crossbow. There is a fusee in the earliest surviving spring-driven clock, a chamber clock made for Philip the Good in 1430. Leonardo da Vinci, who produced the earliest known drawings of a pendulum in 1493–1494, illustrated a fusee in 1500, a quarter of a century after the coiled spring first appeared.
Clock towers in Western Europe in the Middle Ages struck the time. Early clock dials showed hours; a clock with a minutes dial is mentioned in a 1475 manuscript. During the 16th century, timekeepers became more refined and sophisticated, so that by 1577 the Danish astronomer Tycho Brahe was able to obtain the first of four clocks that measured in seconds, and in Nuremberg, the German clockmaker Peter Henlein was paid for making what is thought to have been the earliest example of a watch, made in 1524. By 1500, the use of the foliot in clocks had begun to decline. The oldest surviving spring-driven clock is a device made by a Bohemian clockmaker in 1525. The first person to suggest travelling with a clock to determine longitude, in 1530, was the Dutch instrument maker Gemma Frisius. The clock would be set to the local time of a starting point whose longitude was known, and the longitude of any other place could be determined by comparing its local time with the clock time.
The Ottoman engineer Taqi ad-Din described a weight-driven clock with a verge-and-foliot escapement, a striking train of gears, an alarm, and a representation of the Moon's phases in his book The Brightest Stars for the Construction of Mechanical Clocks, written around 1565. Jesuit missionaries brought the first European clocks to China as gifts.
The Italian polymath Galileo Galilei is thought to have first realized that the pendulum could be used as an accurate timekeeper after watching the motion of suspended lamps at Pisa Cathedral. In 1582, he investigated the regular swing of the pendulum, and discovered that this was only dependent on its length. Galileo never constructed a clock based on his discovery, but prior to his death he dictated instructions for building a pendulum clock to his son, Vincenzo.
Era of precision timekeeping
Pendulum clocks
The first accurate timekeepers depended on the phenomenon known as harmonic motion, in which the restoring force acting on an object moved away from its equilibrium position—such as a pendulum or an extended spring—acts to return the object to that position, and causes it to oscillate. Harmonic oscillators can be used as accurate timekeepers because the period of oscillation depends only on the physical characteristics of the oscillating system, not on the starting conditions or the amplitude of the motion, and so each oscillation takes the same time to complete.
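For the pendulum, the textbook small-angle result (an idealisation assuming small swings and no friction) makes Galileo's observation explicit: the period depends only on the length L and local gravity g, not on the bob's weight or the swing's amplitude:

\[
T = 2\pi\sqrt{\frac{L}{g}}, \qquad L = 0.994\ \mathrm{m} \;\Rightarrow\; T \approx 2\ \mathrm{s},
\]

the "seconds pendulum" that beats once per second.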
The period when clocks were controlled by harmonic oscillators was the most productive era in timekeeping. The first invention of this type was the pendulum clock, which was designed and built by Dutch polymath Christiaan Huygens in 1656. Early versions erred by less than one minute per day, and later ones only by 10 seconds, very accurate for their time. Dials that showed minutes and seconds became common after the increase in accuracy made possible by the pendulum clock. Brahe used clocks with minutes and seconds to observe stellar positions. The pendulum clock outperformed all other kinds of mechanical timekeepers to such an extent that these were usually refitted with a pendulum—a task that could be done without difficulty—so that few verge escapement devices have survived in their original form.
The first pendulum clocks used a verge escapement, which required wide swings of about 100° and so had short, light pendulums. The swing was reduced to around 6° after the invention of the anchor mechanism enabled the use of longer, heavier pendulums with slower beats that had less variation, as they more closely resembled simple harmonic motion, required less power, and caused less friction and wear. The first known anchor escapement clock was built by the English clockmaker William Clement in 1671 for King's College, Cambridge, now in the Science Museum, London. The anchor escapement originated with Hooke, although it has been argued that it was invented by Clement, or the English clockmaker Joseph Knibb.
The Jesuits made major contributions to the development of pendulum clocks in the 17th and 18th centuries, having had an "unusually keen appreciation of the importance of precision". In measuring an accurate one-second pendulum, for example, the Italian astronomer Father Giovanni Battista Riccioli persuaded nine fellow Jesuits "to count nearly 87,000 oscillations in a single day". They served a crucial role in spreading and testing the scientific ideas of the period, and collaborated with Huygens and his contemporaries.
Huygens first used a clock to calculate the equation of time (the difference between apparent solar time and the time given by a clock), publishing his results in 1665. The relationship enabled astronomers to use the stars to measure sidereal time, which provided an accurate method for setting clocks. The equation of time was engraved on sundials so that clocks could be set using the Sun. In 1720, Joseph Williamson claimed to have invented a clock fitted with a cam and differential gearing so that it indicated true solar time.
Other innovations in timekeeping during this period include the invention of the rack and snail striking mechanism for striking clocks by the English mechanician Edward Barlow, the invention by either Barlow or Daniel Quare, a London clock-maker, in 1676 of the repeating clock that chimes the number of hours or minutes, and the deadbeat escapement, invented around 1675 by the astronomer Richard Towneley.
Paris and Blois were the early centres of clockmaking in France, and French clockmakers such as Julien Le Roy, clockmaker of Versailles, were leaders in case design and ornamental clocks. Le Roy belonged to the fifth generation of a family of clockmakers, and was described by his contemporaries as "the most skillful clockmaker in France, possibly in Europe". He invented a special repeating mechanism which improved the precision of clocks and watches, a face that could be opened to view the inside clockwork, and made or supervised over 3,500 watches during his career of almost five decades, which ended with his death in 1759. The competition and scientific rivalry resulting from his discoveries further encouraged researchers to seek new methods of measuring time more accurately.
Any inherent errors in early pendulum clocks were smaller than other errors caused by factors such as temperature variation. In 1729 the Yorkshire carpenter and self-taught clockmaker John Harrison invented the gridiron pendulum, which used at least three metals of different lengths and expansion properties, connected so as to maintain the overall length of the pendulum when it is heated or cooled by its surroundings. In 1721 the clockmaker George Graham had compensated for temperature variation in an iron pendulum by using a bob made from a glass jar of mercury—a metal that is liquid at room temperature and expands faster than glass. More accurate versions of this innovation contained the mercury in thinner iron jars to make them more responsive. This type of temperature-compensating pendulum was improved still further when the mercury was contained within the rod itself, which allowed the two metals to be thermally coupled more tightly. In 1895, the invention of invar, an alloy made from iron and nickel that expands very little, largely eliminated the need for earlier inventions designed to compensate for the variation in temperature.
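The compensation principle can be stated simply (a standard idealisation, with expansion coefficients that are typical textbook values rather than figures from Harrison's designs): each rod of length L expands by ΔL = LαΔT, and the gridiron cancels the net change by opposing steel rods (α ≈ 12×10⁻⁶ K⁻¹) with brass rods (α ≈ 19×10⁻⁶ K⁻¹):

\[
\sum L_{steel}\,\alpha_{steel} = \sum L_{brass}\,\alpha_{brass}
\;\Rightarrow\;
\frac{\sum L_{brass}}{\sum L_{steel}} = \frac{\alpha_{steel}}{\alpha_{brass}} \approx \frac{12}{19} \approx 0.63 .
\]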
Between 1794 and 1795, in the aftermath of the French Revolution, the French government mandated the use of decimal time, with a day divided into 10 hours of 100 minutes each. A clock in the Palais des Tuileries kept decimal time as late as 1801.
Marine chronometer
After the Scilly naval disaster of 1707, in which four ships were wrecked as a result of navigational mistakes, the British government offered a prize of £20,000, equivalent to millions of pounds today, for anyone who could determine longitude to within half a degree at a latitude just north of the equator. The position of a ship at sea could be determined accurately enough if a navigator could refer to a clock that lost or gained less than about six seconds per day. Proposals were examined by a newly created Board of Longitude. Among the many people who attempted to claim the prize was the Yorkshire clockmaker Jeremy Thacker, who first used the term chronometer in a pamphlet published in 1714. Huygens had built the first sea clock, designed to remain horizontal aboard a moving ship, but it stopped working if the ship moved suddenly.
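The arithmetic behind the requirement is straightforward (a worked conversion, not text from the Longitude Act): the Earth turns 360° in 86,400 seconds, so each second of clock error corresponds to 1/240 of a degree of longitude, about a quarter of a nautical mile at the equator:

\[
\Delta\lambda = \frac{\Delta t}{86\,400\ \mathrm{s}} \times 360^{\circ}
= \frac{\Delta t}{240\ \mathrm{s/deg}}, \qquad
6\ \mathrm{s/day} \;\Rightarrow\; 0.025^{\circ} \approx 1.5\ \mathrm{nmi}\ \text{per day at the equator}.
\]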
In 1715, at the age of 22, John Harrison had used his carpentry skills to construct a wooden eight-day clock. His clocks had innovations that included the use of wooden parts to remove the need for additional lubrication (and cleaning), rollers to reduce friction, a new kind of escapement, and the use of two different metals to reduce the problem of expansion caused by temperature variation.
He travelled to London to seek assistance from the Board of Longitude in making a sea clock. He was sent to visit Graham, who assisted Harrison by arranging to finance his work to build a clock. After years of work, his device, now named "H1", was built, and in 1736 it was tested at sea. Harrison then went on to design and make two other sea clocks, "H2" (completed in around 1739) and "H3", both of which were ready by 1755.
Harrison made two watches, "H4" and "H5". Eric Bruton, in his book The History of Clocks and Watches, has described H4 as "probably the most remarkable timekeeper ever made". After the completion of its sea trials during the winter of 1761–1762, it was found to be three times more accurate than was needed for Harrison to be awarded the Longitude prize.
Electric clocks
In 1815, the prolific English inventor Francis Ronalds produced the forerunner of the electric clock, the electrostatic clock. It was powered by dry piles, a high-voltage battery with extremely long life, but with the disadvantage that its electrical properties varied with the air temperature and humidity. He experimented with ways of regulating the electricity, and his improved devices proved to be more reliable.
In 1840 the Scottish clock and instrument maker Alexander Bain first used electricity to sustain the motion of a pendulum clock, and so can be credited with the invention of the electric clock. On January 11, 1841, Bain and the chronometer maker John Barwise took out a patent describing a clock with an electromagnetic pendulum. The English scientist Charles Wheatstone, whom Bain met in London to discuss his ideas for an electric clock, produced his own version of the clock in November 1840, but Bain won a legal battle to establish himself as the inventor.
In 1857, the French physicist Jules Lissajous showed how an electric current can be used to vibrate a tuning fork indefinitely, and was probably the first to use the invention as a method for accurately measuring frequency. The piezoelectric properties of crystalline quartz were discovered by the French physicist brothers Jacques and Pierre Curie in 1880.
The most accurate pendulum clocks were controlled electrically. The Shortt–Synchronome clock, an electrical driven pendulum clock designed in 1921, was the first clock to be a more accurate timekeeper than the Earth itself.
A succession of innovations and discoveries led to the invention of the modern quartz timer. The vacuum tube oscillator was invented in 1912. An electrical oscillator was first used to sustain the motion of a tuning fork by the British physicist William Eccles in 1919; his achievement removed much of the damping associated with mechanical devices and maximised the stability of the vibration's frequency.
The first quartz crystal oscillator was built by the American engineer Walter G. Cady in 1921, and in October 1927 the first quartz clock was described by Joseph Horton and Warren Marrison at Bell Telephone Laboratories. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes, limited their practical use elsewhere. In 1932, a quartz clock able to measure small weekly variations in the rotation rate of the Earth was developed. Their inherent physical and chemical stability and accuracy have resulted in their subsequent proliferation, and since the 1940s they have formed the basis for precision measurements of time and frequency worldwide.
Development of the watch
The first wristwatches were made in the 16th century. An inventory made in 1572 of the watches acquired by Elizabeth I of England records that all of them were considered to be part of her jewellery collection. The first pocketwatches were inaccurate, as their size precluded them from having sufficiently well-made moving parts. Unornamented watches began to appear in 1625.
Dials that showed minutes and seconds became common after the increase in accuracy made possible by the balance spring (or hairspring). Invented separately in 1675 by Huygens and Hooke, it enabled the oscillations of the balance wheel to have a fixed frequency. The invention resulted in a great advance in the accuracy of the mechanical watch, from around half an hour to within a few minutes per day. Some dispute remains as to whether the balance spring was first invented by Huygens or by Hooke; both scientists claimed to have come up with the idea of the balance spring first. Huygens' design for the balance spring is the type used in virtually all watches up to the present day.
Thomas Tompion was one of the first clockmakers to recognise the potential of the balance spring and use it successfully in his pocket watches; the improved accuracy meant that watches could now keep time well enough for a second hand to be added to the face, a development that occurred during the 1690s. The concentric minute hand was an earlier invention, but a mechanism was devised by Quare that enabled the hands to be actuated together. Nicolas Fatio de Duillier, a Swiss natural philosopher, is credited with the design of the first jewel bearings in watches in 1704.
Other notable 18th-century English horologists include John Arnold and Thomas Earnshaw, who devoted their careers to constructing high-quality chronometers and so-called 'deck watches', smaller versions of the chronometer that could be kept in a pocket.
Military use of the watch
Watches were worn during the Franco-Prussian War (18701871), and by the time of the Boer War (18991902), watches had been recognised as a valuable tool. Early models were essentially standard pocket watches fitted to a leather strap, but, by the early 20th century, manufacturers began producing purpose-built wristwatches. In 1904, Alberto Santos-Dumont, an early aviator, asked his friend the French watchmaker Louis Cartier to design a watch that could be useful during his flights.
During World War I, wristwatches were used by artillery officers. The so-called trench watch, or 'wristlets' were practical, as they freed up one hand that would normally be used to operate a pocket watch, and became standard equipment. The demands of trench warfare meant that soldiers needed to protect the glass of their watches, and a guard in the form of a hinged cage was sometimes used. The guard was designed to allow the numerals to be read easily, but it obscured the hands—a problem that was solved after the introduction of shatter-resistant Plexiglass in the 1930s. Prior to the advent of its military use, the wristwatch was typically only worn by women, but during World War I they became symbols of masculinity and bravado.
Modern watches
Fob watches were starting to be replaced at the turn of the 20th century. The Swiss, who were neutral throughout World War I, produced wristwatches for both sides of the conflict. The introduction of the tank influenced the design of the Cartier Tank watch, and the design of watches during the 1920s was influenced by the Art Deco style. The automatic watch, first introduced with limited success in the 18th century, was reintroduced in the 1920s by the English watchmaker John Harwood. After Harwood went bankrupt in 1929, restrictions on automatic watches were lifted, and companies such as Rolex were able to produce them. In 1930, Tissot produced the first ever non-magnetic wristwatch.
The first battery-driven watches were developed in the 1950s. High quality watches were produced by firms such as Patek Philippe, an example being a Patek Philippe ref. 1518, introduced in 1941, possibly the most complicated wristwatch ever made in stainless steel, which fetched a world record price in 2016 when it was sold at auction for $11,136,642.
The manual winding Speedmaster Professional or "Moonwatch" was worn during the first United States spacewalk as part of NASA's Gemini 4 mission and was the first watch worn by an astronaut walking on the Moon during the Apollo 11 mission. In 1969, Seiko produced the world's first quartz wristwatch, the Astron.
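A worked illustration of how a quartz movement derives its one-second beat (the 32,768 Hz crystal is the later industry convention for quartz watches generally, not necessarily the Astron's original frequency): since 32,768 = 2¹⁵, a chain of 15 binary divide-by-two stages reduces the crystal frequency to exactly 1 Hz:

\[
\frac{32\,768\ \mathrm{Hz}}{2^{15}} = 1\ \mathrm{Hz}.
\]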
During the 1970s, the introduction of digital watches made using transistors and plastic parts enabled companies to reduce their work force. By the 1970s, many of those firms that maintained more complicated metalworking techniques had gone bankrupt.
Smartwatches, essentially wearable computers in the form of watches, were introduced to the market in the early 21st century.
Atomic clocks
Atomic clocks are the most accurate timekeeping devices in practical use today. Accurate to within a few seconds over many thousands of years, they are used to calibrate other clocks and timekeeping instruments. The U.S. National Bureau of Standards (NBS, now National Institute of Standards and Technology (NIST)) changed the way it based the time standard of the United States from quartz to atomic clocks in the 1960s.
The idea of using atomic transitions to measure time was first suggested by the British scientist Lord Kelvin in 1879, although it was only in the 1930s with the development of magnetic resonance that there was a practical method for measuring time in this way. A prototype ammonia maser device was built in 1948 at NIST. Although less accurate than existing quartz clocks, it served to prove the concept of an atomic clock.
The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by the English physicist Louis Essen in 1955 at the National Physical Laboratory in London. It was calibrated by the use of the astronomical time scale ephemeris time (ET).
In 1967 the International System of Units (SI) standardized its unit of time, the second, on the properties of caesium. The SI defined the second as 9,192,631,770 cycles of the radiation which corresponds to the transition between two hyperfine energy levels of the ground state of the 133Cs atom. The caesium atomic clock maintained by NIST is accurate to 30 billionths of a second per year. Atomic clocks have employed other elements, such as hydrogen and rubidium vapor, offering greater stability (in the case of hydrogen clocks) and smaller size, lower power consumption, and thus lower cost (in the case of rubidium clocks). Recent advances in clock technology have largely been based on trapped ion platforms, with the record for the lowest systematic uncertainty being traded between aluminum ion clocks and strontium optical lattice clocks. Next-generation clocks will likely be based on nuclear transitions in the 229mTh nucleus, as nuclei are shielded from external effects by the accompanying electron cloud, and the transition frequency is much higher than that of optical and ion clocks, allowing for much lower systematic uncertainty in the clock frequency.
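Put as a worked figure (my arithmetic, using the accuracy quoted above), an error of 30 billionths of a second accumulated over one year of about 3.16×10⁷ seconds corresponds to a fractional frequency uncertainty of

\[
\frac{30\times10^{-9}\ \mathrm{s}}{3.16\times10^{7}\ \mathrm{s}} \approx 1\times10^{-15}.
\]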
See also
Coordinated Universal Time (UTC)
Explanatory notes
Citations
References
External links
Relativity Science Calculator – Philosophic Question: are clocks and time separable?
Ancient Discoveries Islamic Science Part 4 clip from History Repeating of Islamic time-keeping inventions (YouTube).
Timekeeping devices
Timekeeping | History of timekeeping devices | [
"Physics",
"Technology"
] | 8,655 | [
"Physical quantities",
"Time",
"Timekeeping",
"Science and technology studies",
"History of technology",
"Spacetime",
"History of science and technology"
] |
14,449,395 | https://en.wikipedia.org/wiki/Waxtite | Waxtite, also WaxTite, is the trade name of the heat-sealed waxed-paper packaging system used by Will Keith Kellogg from 1914 around the outside of his company's cereal boxes. Subsequently, the Waxtite packaging was moved inside the box.
References
Packaging materials | Waxtite | [
"Physics"
] | 56 | [
"Materials stubs",
"Materials",
"Matter"
] |
14,449,729 | https://en.wikipedia.org/wiki/Vallum%20%28Hadrian%27s%20Wall%29 | The Vallum is a huge earthwork associated with Hadrian's Wall in England. Unique on any Roman frontier, it runs practically from coast to coast to the south of the wall. It was built a few years after the wall. Current opinion is that the Vallum demarcated the southern boundary of a military zone, bounded on the north by the wall.
The earliest surviving mention of the earthwork is by Bede, who refers to a vallum, or earthen rampart, as distinct from the wall, or murus; the term is still used despite the fact that the essential element is a ditch, or fossa.
Layout and course
The Vallum comprises a ditch that is nominally wide and deep, with a flat bottom, flanked by two mounds about 6 metres wide and high, set back some from the ditch edges. For a great deal of its length a third lower mound, the so-called marginal mound, occupies the south berm (flat area between mound and ditch), right on the southern lip of the ditch. The total width of the fortification (consisting from north to south of mound, berm, ditch, marginal mound, berm, mound) was thus about . In several places (for example at Heddon-on-the-Wall and Limestone Corner) the Vallum was cut through solid rock, sometimes for lengthy distances.
The distance of the Vallum from the Wall varies. In general there was a preference for the earthwork to run close to the rear of the wall where topography allowed. In the central sector the wall runs along the top of the crags of the Whin Sill, while the Vallum, laid out in long straight stretches, lies in the valley below to the south, as much as away.
History
The Vallum was constructed a few years after the wall was completed, as it deviates to the south around the first series of forts (including Chesters) but earlier than that at Carrawburgh, datable to c. 130 by a fragmentary inscription. There would have been a crossing point like a causeway or bridge to the south of each wall-fort; several such causeways are known, such as the one still visible with the base of an ornate arch at the fort of Condercum in Benwell, a western suburb of Newcastle. Causeways have also been detected to the south of several milecastles.
It is thought that the easternmost section of Hadrian's Wall between the forts of Pons Aelius (Newcastle upon Tyne) and Segedunum (Wallsend) was an addition to the original plan. The Vallum was not constructed behind this extra length of the wall and did not apparently even reach the fort at Newcastle; instead it seems it stopped in the western Newcastle suburb of Elswick. This was probably because from here on the Vallum's function as a southern barrier to the wall was performed by the River Tyne.
Sometime later in the 2nd century and certainly by the 3rd, the Vallum was "slighted", that is, the ramparts were broken through and the ditch filled in especially near the forts and the undefended settlements which grew up outside them. Archaeologists have speculated that either the Vallum was then deemed unnecessary because economic development and pacification of the frontier district had rendered it obsolete, or that it was proving to be a hindrance to military and authorised civilian traffic. Some have suggested that this coincided with the building of the Antonine Wall in Scotland and the temporary abandonment of Hadrian's Wall.
Purpose
Although there is no definitive historical evidence as to why the Roman army built this unusual barrier, modern archaeological opinion is that the Vallum established the southern boundary of an exclusion zone bounded on the north by the wall itself. The zone would have been "out-of-bounds" to civilians and those with no valid reason to be there.
Excavations
The first excavation was undertaken in 1893 at Great Hill (at Heddon-on-the-Wall), where it was observed that the Vallum ditch was cut through a seam of fire-clay which was deployed in both mounds. This excavation demonstrated that the main north and south mounds were contemporary and built using material dug from the ditch. In the late 20th century several excavations established that the marginal mound was also contemporary.
References
External links
Hadrian's Wall
Linear earthworks | Vallum (Hadrian's Wall) | [
"Engineering"
] | 883 | [
"Hadrian's Wall",
"Fortification lines"
] |
14,449,786 | https://en.wikipedia.org/wiki/Transport%20standards%20organisations | This article lists transport standards organisations, consortia and groups involved in producing and maintaining standards relevant to the global transport technology, journey planning and transport ticketing/retailing industries. Transport systems are inherently distributed systems with complex information requirements, and robust modern standards for transport data are important for their safe and efficient operation. The bodies covered include:
Formal standards development organisations;
Other international bodies developing definitive core specifications;
Other important international bodies;
Other National bodies developing definitive core specifications;
Other important National bodies
Formal standards development organisations
The formal development of international standards is organised in three tiers of Standards Development Organisations (SDOs), recognised by international agreements:
World: the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC)
Regional: regional standards bodies coordinate standardisation between geographically or politically connected regions with a need to harmonise products and practices; for example, in Europe, the European Committee for Standardization (CEN)
National: most nations have a coordinating body responsible for organising participation in CEN and ISO activities, for publishing ISO and CEN standards within the country, and for coordinating national standardisation activities. The national SDO in turn delegates responsibility as appropriate to the relevant trade associations, government departments and other stakeholders for a specific area of technical expertise. For example, in the UK the national SDO is the British Standards Institution (BSI).
The SDOs conduct their work through a system of working groups, each responsible for a different area of expertise; these evolve over time to accommodate changes in technology. Key current working groups for transport standards are outlined below.
CEN Working Groups and leaders for Transport Standards
CEN allocates responsibility for different areas of transport standardisation to working groups:
WG1 - Automatic Fee Collection and Access Control - CEN
WG2 - Freight and Fleet Management System - ISO
WG3 - Public Transport - ISO
WG4 - TTI – Traffic and Traveller Information - ISO
WG5 - TC - Traffic Control - ISO
WG6 - Parking Management - n/a
WG7/8 - Geographic Road Data Base: Road Traffic Data - ISO
WG9 - Dedicated Short Range Communications - CEN
WG10 - Man-machine Interface - n/a
WG11 - Subsystem- Intersystem Interfaces - ISO
WG12 - Automatic Vehicle and Equipment Identification - CEN
WG13 - System Architecture and Terminology - ISO
ISO Working Groups and leaders for Transport Standards
ISO Technical Committee 204 is responsible for Transport Information and Control Systems. It has a number of standing Working Groups, which set up Subgroups from time to time.
The current ISO TC204 working groups, their work programmes and the countries providing the secretariat are as follows:
WG1 Architecture - UK
WG2 Quality and Reliability Requirements - Japan
WG3 TICS Database Technology - Japan
WG4 Automatic Vehicle Identification - Norway
WG5 - Fee and Toll Collection - Holland
WG7 General Fleet Management and Commercial and Freight - Canada
WG8 Public Transport/Emergency - America
WG9 Integrated Transport Information, Management, and Control - Australia
WG10 Traveller Information Systems - UK
WG11 Route Guidance and Navigation Systems - Germany
WG14 Vehicle/Roadway Warning and Control Systems - Japan
WG15 Dedicated Short Range Communications for TICS Applications - Germany
WG16 Wide Area Communications/Protocols and Interfaces - America
For an up-to-date schedule of the remit of TC204, its current working groups and their points of contact, refer to the ISO listings for TC204.
The U.S. standards developing organization which is tasked with the domestic implementation of ISO TC204 Transport Standards, is the Telecommunications Industry Association.
Other international bodies developing definitive core specifications
As well as the formal SDOs, a number of other international bodies undertake work that is important for Transport and Transport Information standards
International Air Transport Association (IATA)
International Union of Railways (UIC)
Institute for Transportation and Development Policy (ITDP) which develops the BRT Standard
European Broadcasting Union (EBU) - See TPEG
World Wide Web Consortium (W3C)
OpenTravel Alliance (OTA)
Open Geospatial Consortium (OGC)
Organization for the Advancement of Structured Information Standards (OASIS)
railML.org Railway data standardisation body defining railML
Other important international bodies
Object Management Group (OMG)
EuroRoadS
Media Oriented Systems Transport (MOST)
European Railway Agency (ERA)
National bodies developing definitive core specifications
German organisations active in Transport Standards development
Verband Deutscher Verkehrsunternehmen (VDV)
UK organisations active in Transport Standards development
UK bodies developing definitive core specifications
Department for Transport (DfT)
Ordnance Survey (OS)
Rail Settlement Plan (RSP)
National Rail Enquiries (NRE)
Integrated Transport Smartcard Organisation (ITSO)
UTMC Development Group (UDG)
Real Time Information Group (RTIG)
Travel Information Highway (TIH)
Other important UK bodies and initiatives
Transport for London
National Traffic Control Centre (NTCC)
Association of Transport Operating Officers (ATCO)
Royal National Institute of Blind People (RNIB)
Royal National Institute for Deaf People (RNID)
Journey Solutions
Oyster card
US bodies developing definitive core Transit Standard specifications
National Transportation Communications for Intelligent Transportation System Protocol or NTCIP
See also
Standards organisations
References
Catalogue of standards for travel information and retailing. March 2007 CC-PR149-D005-0.6 UK Department of Transport
Information systems
Travel technology
Standards organizations
Transport organizations | Transport standards organisations | [
"Physics",
"Technology"
] | 1,097 | [
"Transport organizations",
"Physical systems",
"Transport",
"Information systems",
"Information technology"
] |
10,794,057 | https://en.wikipedia.org/wiki/Mesoporous%20silica | Mesoporous silica is a form of silica that is characterised by its mesoporous structure, that is, having pores that range from 2 nm to 50 nm in diameter. According to IUPAC's terminology, mesoporosity sits between microporous (<2 nm) and macroporous (>50 nm). Mesoporous silica is a relatively recent development in nanotechnology. The most common types of mesoporous nanoparticles are MCM-41 and SBA-15. Research continues on the particles, which have applications in catalysis, drug delivery and imaging. Mesoporous ordered silica films have also been obtained with different pore topologies.
A process for producing mesoporous silica was patented around 1970 but went almost unnoticed, and the work was only reproduced in 1997. Mesoporous silica nanoparticles (MSNs) were independently synthesized in 1990 by researchers in Japan. They were later also produced at Mobil Corporation laboratories and named Mobil Composition of Matter (or Mobil Crystalline Materials, MCM).
Six years later, silica nanoparticles with much larger (4.6 to 30 nanometer) pores were produced at the University of California, Santa Barbara. The material was named Santa Barbara Amorphous type material, or SBA-15. These particles also have a hexagonal array of pores.
The researchers who invented these types of particles planned to use them as molecular sieves. Today, mesoporous silica nanoparticles have many applications in medicine, biosensors, thermal energy storage, water/gas filtration and imaging.
Synthesis
Mesoporous silica nanoparticles are synthesized by reacting tetraethyl orthosilicate with a template made of micellar rods. The result is a collection of nano-sized spheres or rods that are filled with a regular arrangement of pores. The template can then be removed by washing with a solvent adjusted to the proper pH.
Mesoporous particles can also be synthesized using a simple sol-gel method such as the Stöber process, or a spray drying method. Tetraethyl orthosilicate is also used with an additional polymer monomer (as a template).
However, TEOS is not the most effective precursor for synthesizing such particles; a better precursor is (3-Mercaptopropyl)trimethoxysilane, often abbreviated to MPTMS. Use of this precursor drastically reduces the chance of aggregation and ensures more uniform spheres.
Drug delivery
The large surface area of the pores allows the particles to be filled with a drug or a cytotoxin. Like a Trojan Horse, the particles will be taken up by certain biological cells through endocytosis, depending on what chemicals are attached to the outside of the spheres. Some types of cancer cells will take up more of the particles than healthy cells will, giving researchers hope that MCM-41 will one day be used to treat certain types of cancer.
Ordered mesoporous silica (e.g. SBA-15, TUD-1, HMM-33, and FSM-16) also shows potential to boost the in vitro and in vivo dissolution of poorly water-soluble drugs. Many drug candidates coming from drug discovery suffer from poor water solubility, and insufficient dissolution of these hydrophobic drugs in the gastrointestinal fluids strongly limits their oral bioavailability. One example is itraconazole, an antimycotic known for its poor aqueous solubility. Upon introduction of an itraconazole-on-SBA-15 formulation into simulated gastrointestinal fluids, a supersaturated solution is obtained, giving rise to enhanced transepithelial intestinal transport. Efficient uptake of SBA-15-formulated itraconazole into the systemic circulation has also been demonstrated in vivo (in rabbits and dogs). This approach based on SBA-15 yields stable formulations and can be used for a wide variety of poorly water-soluble compounds.
Biosensors
The structure of these particles allows them to be filled with a fluorescent dye that would normally be unable to pass through cell walls. The MSN material is then capped off with a molecule that is compatible with the target cells. When the MSNs are added to a cell culture, they carry the dye across the cell membrane. These particles are optically transparent, so the dye can be seen through the silica walls. The dye in the particles does not have the same problem with self-quenching that a dye in solution has. The types of molecules that are grafted to the outside of the MSNs will control what kinds of biomolecules are allowed inside the particles to interact with the dye.
See also
Mesoporous material
Mesoporous silicates
References
Silicon dioxide
Silica | Mesoporous silica | [
"Materials_science"
] | 1,005 | [
"Mesoporous material",
"Porous media"
] |
10,794,084 | https://en.wikipedia.org/wiki/WinBUGS | WinBUGS is statistical software for Bayesian analysis using Markov chain Monte Carlo (MCMC) methods.
It is based on the BUGS (Bayesian inference Using Gibbs Sampling) project started in 1989. It runs under Microsoft Windows, though it can also be run on Linux or Mac using Wine.
It was developed by the BUGS Project, a team of British researchers at the MRC Biostatistics Unit, Cambridge, and Imperial College School of Medicine, London. Originally intended to solve problems encountered in medical statistics, it soon became widely used in other disciplines, such as ecology, sociology, and geology.
The last version of WinBUGS was version 1.4.3, released in August 2007. Development is now focused on OpenBUGS, an open-source version of the package. WinBUGS 1.4.3 remains available as a stable version for routine use, but is no longer being developed.
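WinBUGS models are written in the BUGS language itself; as a language-neutral illustration of the Gibbs sampling at the core of the software, here is a minimal sketch in Python of a Gibbs sampler for a normal model with unknown mean and precision. The conjugate priors and data values below are invented for illustration and are not taken from any WinBUGS example:

```python
import math
import random

# Minimal Gibbs sampler for y_i ~ Normal(mu, 1/tau), with the vague
# conjugate priors often written in BUGS models:
#   mu ~ Normal(0, precision 0.001), tau ~ Gamma(0.001, 0.001).
data = [2.3, 1.9, 2.8, 2.1, 2.5]
n, ybar = len(data), sum(data) / len(data)
mu0, prec0, a, b = 0.0, 0.001, 0.001, 0.001

mu, tau = 0.0, 1.0
mu_samples = []
for it in range(6000):
    # Full conditional for mu given tau: Normal.
    prec = prec0 + n * tau
    mean = (prec0 * mu0 + tau * n * ybar) / prec
    mu = random.gauss(mean, 1 / math.sqrt(prec))
    # Full conditional for tau given mu: Gamma(shape, rate); Python's
    # gammavariate takes (shape, scale), so pass scale = 1/rate.
    sse = sum((y - mu) ** 2 for y in data)
    tau = random.gammavariate(a + n / 2, 1 / (b + sse / 2))
    if it >= 1000:                       # discard burn-in
        mu_samples.append(mu)

print(f"Posterior mean of mu ~ {sum(mu_samples) / len(mu_samples):.2f}")
```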
References
Further reading
External links
WinBUGS Homepage
Statistical software
Monte Carlo software
Windows-only freeware
Bayesian statistics | WinBUGS | [
"Mathematics"
] | 210 | [
"Statistical software",
"Mathematical software"
] |
10,794,564 | https://en.wikipedia.org/wiki/HTC%20Shift | HTC Shift (code name: Clio) is an Ultra-Mobile PC by HTC.
Features
Dual Operating System
Microsoft Windows Vista Business 32-Bit (notebook mode)
SnapVUE (PDA mode)
Processor
Intel A110 Stealey CPU 800 MHz (for Windows Vista)
ARM11 CPU (for SnapVUE)
Memory and Storage
1 GB RAM (notebook mode)
64 MB RAM (PDA mode)
40/60 GB HDD
SD card slot
Intel GMA 950 graphics
Communications
Quad band GSM / GPRS / EDGE (data only): GSM 850, GSM 900, GSM 1800, GSM 1900
Triband UMTS / HSDPA (data only): UMTS 850, UMTS 1900, UMTS 2100
Wi-Fi 802.11 b/g
Bluetooth v2.0
USB port
7" display
Active TFT touchscreen, 16M colors
800 x 480 pixels (Wide-VGA), 7 inches
QWERTY keyboard
Handwriting recognition
Fingerprint Recognition
Ringtones
MP3
Dual speakers
Upgrading
In November 2011, a team from DistantEarth succeeded in loading the Windows 8 developer preview onto the HTC Shift.
References
HTC Shift forums on XDA-Developers
HTC Shift on pof blog: technical information about HTC Shift
HTC Source: a news blog dedicated to HTC devices
HTC Shift X9500 on Pocketables: Many photos, features, and reviews
TechCast Reviews the HTC Shift
Mobile computers
Shift | HTC Shift | [
"Technology"
] | 300 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
10,795,030 | https://en.wikipedia.org/wiki/Anatoxin-a | Anatoxin-a, also known as Very Fast Death Factor (VFDF), is a secondary, bicyclic amine alkaloid and cyanotoxin with acute neurotoxicity. It was first discovered in the early 1960s in Canada, and was isolated in 1972. The toxin is produced by multiple genera of cyanobacteria and has been reported in North America, South America, Central America, Europe, Africa, Asia, and Oceania. Symptoms of anatoxin-a toxicity include loss of coordination, muscular fasciculations, convulsions and death by respiratory paralysis. Its mode of action is through the nicotinic acetylcholine receptor (nAchR) where it mimics the binding of the receptor's natural ligand, acetylcholine. As such, anatoxin-a has been used for medicinal purposes to investigate diseases characterized by low acetylcholine levels. Due to its high toxicity and potential presence in drinking water, anatoxin-a poses a threat to animals, including humans. While methods for detection and water treatment exist, scientists have called for more research to improve reliability and efficacy. Anatoxin-a is not to be confused with guanitoxin (formerly anatoxin-a(S)), another potent cyanotoxin that has a similar mechanism of action to that of anatoxin-a and is produced by many of the same cyanobacteria genera, but is structurally unrelated.
History
Anatoxin-a was first discovered by P.R. Gorham in the early 1960s, after several herds of cattle died as a result of drinking water from Saskatchewan Lake in Ontario, Canada, which contained toxic algal blooms. It was isolated in 1972 by J.P. Devlin from the cyanobacteria Anabaena flos-aquae.
Occurrence
Anatoxin-a is a neurotoxin produced by multiple genera of freshwater cyanobacteria that are found in water bodies globally. Some freshwater cyanobacteria are known to be salt tolerant, and thus it is possible for anatoxin-a to be found in estuarine or other saline environments. Blooms of cyanobacteria that produce anatoxin-a among other cyanotoxins are increasing in frequency due to increasing temperatures, stratification, and eutrophication caused by nutrient runoff. These expansive cyanobacterial harmful algal blooms, known as cyanoHABs, increase the amount of cyanotoxins in the surrounding water, threatening the health of both aquatic and terrestrial organisms. Some species of cyanobacteria that produce anatoxin-a do not produce surface water blooms but instead form benthic mats. Many cases of anatoxin-a related animal deaths have occurred due to ingestion of detached benthic cyanobacterial mats that have washed ashore.
Anatoxin-a producing cyanobacteria have also been found in soils and aquatic plants. Anatoxin-a sorbs well to negatively charged sites in clay-like, organic-rich soils and weakly to sandy soils. One study found both bound and free anatoxin-a in 38% of aquatic plants sampled across 12 Nebraskan reservoirs, with much higher incidence of bound anatoxin-a than free.
Experimental studies
In 1977, Carmichael, Gorham, and Biggs experimented with anatoxin-a. They introduced toxic cultures of A. flos-aquae into the stomachs of two young male calves, and observed that muscular fasciculations and loss of coordination occurred in a matter of minutes, while death due to respiratory failure occurred anywhere between several minutes and a few hours. They also established that extensive periods of artificial respiration did not allow for detoxification to occur and natural neuromuscular functioning to resume. From these experiments, they calculated that the oral minimum lethal dose (MLD) (of the algae, not the anatoxin molecule), for calves is roughly 420 mg/kg body weight.
In the same year, Devlin and colleagues discovered the bicyclic secondary amine structure of anatoxin-a. They also performed experiments similar to those of Carmichael et al. on mice. They found that anatoxin-a kills mice 2–5 minutes after intraperitoneal injection preceded by twitching, muscle spasms, paralysis and respiratory arrest, hence the name Very Fast Death Factor. They determined the LD50 for mice to be 250 μg/kg body weight.
Electrophysiological experiments done by Spivak et al. (1980) on frogs showed that anatoxin-a is a potent agonist of the muscle-type (α1)2βγδ nAChR. Anatoxin-a induced depolarizing neuromuscular blockade, contracture of the frog's rectus abdominis muscle, depolarization of the frog sartorius muscle, desensitization, and alteration of the action potential. Later, Thomas et al. (1993), through their work with chicken α4β2 nAChR subunits expressed on mouse M10 cells and chicken α7 nAChRs expressed in oocytes from Xenopus laevis, showed that anatoxin-a is also a potent agonist of neuronal nAChRs.
Toxicity
Effects
Laboratory studies using mice showed that characteristic effects of acute anatoxin-a poisoning via intraperitoneal injection include muscle fasciculations, tremors, staggering, gasping, respiratory paralysis, and death within minutes. Zebrafish exposed to anatoxin-a contaminated water had altered heart rates.
There have been cases of non-lethal poisoning in humans who have ingested water from streams and lakes that contain various genera of cyanobacteria that are capable of producing anatoxin-a. The effects of non-lethal poisoning were primarily gastrointestinal: nausea, vomiting, diarrhea, and abdominal pain. A case of lethal poisoning was reported in Wisconsin after a teen jumped into a pond contaminated with cyanobacteria.
Exposure routes
Oral
Ingestion of drinking water or recreational water that is contaminated with anatoxin-a can pose fatal consequences since anatoxin-a was found to be quickly absorbed through the gastrointestinal tract in animal studies. Dozens of cases of animal deaths due to ingestion of anatoxin-a contaminated water from lakes or rivers have been recorded, and it is suspected to have also been the cause of death of one human. One study found that anatoxin-a is capable of binding to acetylcholine receptors and inducing toxic effects with concentrations in the nano-molar (nM) range if ingested.
Dermal
Dermal exposure is the most likely form of contact with cyanotoxins in the environment. Recreational exposure to river, stream, and lake waters contaminated with algal blooms has been known to cause skin irritation and rashes. The first study that looked at in vitro cytotoxic effects of anatoxin-a on human skin cell proliferation and migration found that anatoxin-a exerted no effect at 0.1 μg/mL or 1 μg/mL, and a weak toxic effect at 10 μg/mL only after an extended period of contact (48 hours).
Inhalation
No data on inhalation toxicity of anatoxin-a is currently available, though severe respiratory distress occurred in a water skier after they inhaled water spray containing a fellow cyanobacterial neurotoxin, saxitoxin. It is possible that inhalation of water spray containing anatoxin-a could pose similar consequences.
Mechanism of toxicity
Anatoxin-a is an agonist of both neuronal α4β2 and α7 nicotinic acetylcholine receptors (nAChRs) present in the CNS, as well as of the (α1)2βγδ muscle-type nAChRs present at the neuromuscular junction. (Anatoxin-a has an affinity for these muscle-type receptors about 20 times greater than that of acetylcholine.) The cyanotoxin has little effect on muscarinic acetylcholine receptors, for which its affinity is roughly 100-fold lower than for nAChRs. Anatoxin-a is also much less potent in the CNS than at neuromuscular junctions: in hippocampal and brain stem neurons, a 5 to 10 times greater concentration of anatoxin-a was necessary to activate nAChRs than was required in the PNS.
In normal circumstances, acetylcholine binds to nAChRs in the post-synaptic neuronal membrane, causing a conformational change in the extracellular domain of the receptor which in turn opens the channel pore. This allows Na+ and Ca2+ ions to move into the neuron, causing cell depolarization and inducing the generation of action potentials, which allows for muscle contraction. The acetylcholine neurotransmitter then dissociates from the nAChR and is rapidly cleaved into acetate and choline by acetylcholinesterase.
Anatoxin-a binding to these nAChRs causes the same effects in neurons. However, anatoxin-a binding is effectively irreversible, as the anatoxin-a–nAChR complex cannot be broken down by acetylcholinesterase. The nAChR is thus locked open, which leads to overstimulation due to the constant generation of action potentials.
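The competitive agonism described above can be pictured with a simple one-site receptor occupancy model. In the sketch below, the Kd values are hypothetical and are chosen only to reflect the roughly 20-fold affinity difference mentioned earlier:

```python
# Fractional receptor occupancy under a simple one-site binding model:
#   occupancy = [L] / ([L] + Kd)
# The Kd values are hypothetical, chosen only to illustrate the ~20-fold
# affinity difference between anatoxin-a and acetylcholine noted above.

def occupancy(ligand_nM: float, kd_nM: float) -> float:
    return ligand_nM / (ligand_nM + kd_nM)

KD_ACH_nM = 1000.0               # hypothetical Kd for acetylcholine
KD_ANATOXIN_nM = KD_ACH_nM / 20  # ~20-fold higher affinity, so lower Kd

for conc in (10.0, 50.0, 250.0):
    print(f"[L] = {conc:>5.0f} nM   "
          f"ACh occupancy = {occupancy(conc, KD_ACH_nM):.2f}   "
          f"anatoxin-a occupancy = {occupancy(conc, KD_ANATOXIN_nM):.2f}")
```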
Of the two enantiomers of anatoxin-a, the naturally occurring positive enantiomer, (+)-anatoxin-a, is 150-fold more potent than the synthetic negative enantiomer, (−)-anatoxin-a. This is because (+)-anatoxin-a, in its s-cis enone conformation, has a distance of 6.0 Å between its nitrogen and carbonyl group, which corresponds well to the 5.9 Å that separates the nitrogen and oxygen in acetylcholine.
Respiratory arrest, which results in a lack of an oxygen supply to the brain, is the most evident and lethal effect of anatoxin-a. Injections of mice, rats, birds, dogs, and calves with lethal doses of anatoxin-a have demonstrated that death is preceded by a sequence of muscle fasciculations, decreased movement, collapse, exaggerated abdominal breathing, cyanosis and convulsions. In mice, anatoxin-a also seriously impacted blood pressure and heart rate, and caused severe acidosis.
Cases of toxicity
Many cases of wildlife and livestock deaths due to anatoxin-a have been reported since its discovery. Domestic dog deaths due to the cyanotoxin, as determined by analysis of stomach contents, have been observed on the lower North Island of New Zealand in 2005, in eastern France in 2003, in California in the United States in 2002 and 2006, in Scotland in 1992, in Ireland in 1997 and 2005, and in Germany in 2017 and 2020. In each case, the dogs began showing muscle convulsions within minutes and were dead within a matter of hours. Numerous cattle fatalities arising from the consumption of water contaminated with cyanobacteria that produce anatoxin-a have been reported in the United States, Canada, and Finland between 1980 and the present.
A particularly interesting case of anatoxin-a poisoning is that of lesser flamingos at Lake Bogoria in Kenya. The cyanotoxin, which was identified in the stomachs and fecal pellets of the birds, killed roughly 30,000 flamingos in the second half of 1999, and continues to cause mass fatalities annually, devastating the flamingo population. The toxin is introduced into the birds via water contaminated with cyanobacterial mat communities that arise from the hot springs in the lake bed.
Synthesis
Laboratory synthesis
Cyclic expansion of tropanes
The first biologically derived starting material for tropane expansion into anatoxin-a was cocaine, which has similar stereochemistry to anatoxin-a. Cocaine is first converted into the endo isomer of a cyclopropane, which is then photolytically cleaved to obtain an alpha,beta-unsaturated ketone. Through the use of diethyl azodicarboxylate, the ketone is demethylated and anatoxin-a is formed. A similar, more recent synthesis pathway involves producing 2-tropinone from cocaine and treating the product with ethyl chloroformate, producing a bicyclic ketone. This product is combined with trimethylsilyldiazomethane, an organoaluminium Lewis acid and a trimethylsilyl enol ether to effect ring expansion of the tropinone. This method proceeds through several more steps, producing useful intermediates as well as anatoxin-a as a final product.
Cyclization of cyclooctenes
The first and most extensively explored approach used to synthesize anatoxin-a in vitro, cyclooctene cyclization uses 1,5-cyclooctadiene as its starting material. This is converted to a methylamine derivative and combined with hypobromous acid to form anatoxin-a. Another method developed in the same laboratory uses an aminoalcohol in conjunction with mercury(II) acetate and sodium borohydride. The product of this reaction was transformed into an alpha,beta-unsaturated ketone and oxidized by ethyl azodicarboxylate to form anatoxin-a.
Enantioselective enolization strategy
This method for anatoxin-a production was one of the first that does not rely on a structurally analogous natural product as its starting substance. Instead, a racemic mixture of 3-tropinone is used with a chiral lithium amide base, and additional ring-expansion reactions produce a ketone intermediate. Addition of an organocuprate to the ketone produces an enol triflate derivative, which is then subjected to hydrogenolysis and treated with a deprotecting agent to produce anatoxin-a. Similar strategies have also been developed and utilized by other laboratories.
Intramolecular cyclization of iminium ions
Iminium ion cyclization utilizes several different pathways to create anatoxin-a, but each of these produces and proceeds via a pyrrolidine iminium ion. The major differences between the pathways relate to the precursors used to produce the iminium ion and the total yield of anatoxin-a at the end of the process. These separate pathways include the production of alkyl iminium salts, acyl iminium salts and tosyl iminium salts.
Enyne metathesis
Enyne metathesis of anatoxin-a involves the use of a ring-closing mechanism and is one of the more recent advances in anatoxin-a synthesis. In all methods involving this pathway, pyroglutamic acid is used as a starting material in conjunction with a Grubbs catalyst. Similar to iminium cyclization, the first attempted synthesis of anatoxin-a using this pathway used a 2,5-cis-pyrrolidine as an intermediate.
Biosynthesis
Anatoxin-a is synthesized in vivo in the species Anabaena flos-aquae, as well as several other genera of cyanobacteria. Anatoxin-a and related chemical structures are produced using acetate and glutamate. Further enzymatic reduction of these precursors results in the formation of anatoxin-a. Homoanatoxin, a similar chemical, is produced by Oscillatoria formosa and utilizes the same precursor. However, homoanatoxin undergoes a methyl addition by S-adenosyl-L-methionine instead of an addition of electrons, resulting in a similar analogue. The biosynthetic gene cluster (BGC) for anatoxin-a was described from Oscillatoria PCC 6506 in 2009.
Stability and degradation
Anatoxin-a is unstable in water and other natural conditions, and in the presence of UV light undergoes photodegradation, being converted to the less toxic products dihydroanatoxin-a and epoxyanatoxin-a. The photodegradation of anatoxin-a is dependent on pH and sunlight intensity but independent of oxygen, indicating that the degradation by light is not achieved through the process of photo-oxidation.
Studies have shown that some microorganisms are capable of degrading anatoxin-a. A study done by Kiviranta and colleagues in 1991 showed that the bacterial genus Pseudomonas was capable of degrading anatoxin-a at a rate of 2–10 μg/ml per day. Later experiments done by Rapala and colleagues (1994) supported these results. They compared the effects of sterilized and non-sterilized sediments on anatoxin-a degradation over the course of 22 days, and found that after that time vials with the sterilized sediments showed similar levels of anatoxin-a as at the commencement of the experiment, while vials with non-sterilized sediment showed a 25-48% decrease.
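Assuming first-order kinetics (an assumption on our part; the studies above report only endpoint concentrations), the 25–48% losses observed over 22 days imply the following rough rate constants and half-lives:

```python
import math

# Back-of-envelope rate constants implied by the Rapala et al. result:
# a 25-48% decrease over 22 days in non-sterilised sediment. First-order
# decay is assumed here; the study itself reports only endpoint values.
DAYS = 22
for loss in (0.25, 0.48):
    k = -math.log(1 - loss) / DAYS      # per-day rate constant
    half_life = math.log(2) / k         # days
    print(f"{loss:.0%} loss -> k = {k:.4f}/day, half-life ~ {half_life:.0f} days")
```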
Detection
There are two categories of anatoxin-a detection methods. Biological methods have involved administration of samples to mice and other organisms more commonly used in ecotoxicological testing, such as brine shrimp (Artemia salina), larvae of the freshwater crustacean Thamnocephalus platyurus, and various insect larvae. Problems with this methodology include an inability to determine whether it is anatoxin-a or another neurotoxin that causes the resulting deaths. Large amounts of sample material are also needed for such testing. In addition to the biological methods, scientists have used chromatography to detect anatoxin-a. This is complicated by the rapid degradation of the toxin and the lack of commercially available standards for anatoxin-a.
Public health
Despite the relatively low frequency of anatoxin-a relative to other cyanotoxins, its high toxicity (the lethal dose is not known for humans, but is estimated to be less than 5 mg for an adult male) means that it is still considered a serious threat to terrestrial and aquatic organisms, most significantly to livestock and to humans. Anatoxin-a is suspected to have been involved in the death of at least one person. The threat posed by anatoxin-a and other cyanotoxins is increasing as both fertilizer runoff, leading to eutrophication in lakes and rivers, and higher global temperatures contribute to a greater frequency and prevalence of cyanobacterial blooms.
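To put the estimated human lethal dose on the same per-body-weight footing as the animal data quoted earlier, a simple conversion can be made; the 70 kg reference body weight below is an assumption, not a figure from the source:

```python
# Expressing the estimated human lethal dose (<5 mg, per the text) per kg
# of body weight, for comparison with the mouse intraperitoneal LD50 of
# 250 ug/kg quoted earlier. The 70 kg reference body weight is an
# assumption, not a value taken from the source.
est_lethal_mg = 5.0
body_weight_kg = 70.0

dose_ug_per_kg = est_lethal_mg / body_weight_kg * 1000
print(f"Estimated human lethal dose: ~{dose_ug_per_kg:.0f} ug/kg")  # ~71 ug/kg
```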
Water regulations
The World Health Organization in 1999 and EPA in 2006 both came to the conclusion that there was not enough toxicity data for anatoxin-a to establish a formal tolerable daily intake (TDI) level, though some places have implemented levels of their own.
United States
Drinking water advisory levels
Anatoxin-a is not regulated under the Safe Drinking Water Act, but states are allowed to create their own standards for contaminants that are unregulated; currently, four states have set drinking water advisory levels for anatoxin-a. On October 8, 2009 the EPA published the third Drinking Water Contaminant Candidate List (CCL), which included anatoxin-a (among other cyanotoxins), indicating that anatoxin-a may be present in public water systems but is not regulated by the EPA. Anatoxin-a's presence on the CCL means that it may need to be regulated by the EPA in the future, pending further information on its health effects in humans.
Recreational water advisory levels
In 2008 the state of Washington implemented a recreational advisory level for anatoxin-a of 1 μg/L in order to better manage algal blooms in lakes and protect users from exposure to the blooms.
Canada
The Canadian province of Québec has a drinking water Maximum Accepted Value for anatoxin-a of 3.7 μg/L.
New Zealand
New Zealand has a drinking water Maximum Accepted Value for anatoxin-a of 6 μg/L.
Water treatment
As of now, there is no official guideline level for anatoxin-a, although scientists estimate that a level of 1 μg/L would be sufficiently low. Likewise, there are no official guidelines regarding testing for anatoxin-a. Among methods of reducing the risk from cyanotoxins, including anatoxin-a, scientists look favorably on biological treatment methods because they do not require complicated technology, are low maintenance, and have low running costs. Few biological treatment options have been tested for anatoxin-a specifically, although a species of Pseudomonas, capable of biodegrading anatoxin-a at a rate of 2–10 μg/mL per day, has been identified. Biological (granular) activated carbon (BAC) has also been tested as a method of biodegradation, but it is inconclusive whether biodegradation occurred or whether anatoxin-a was simply adsorbing to the activated carbon. Others have called for additional studies to determine more about how to use activated carbon effectively.
Chemical treatment methods are more common in drinking water treatment compared to biological treatment, and numerous processes have been suggested for anatoxin-a. Oxidants such as potassium permanganate, ozone, and advanced oxidation processes (AOPs) have worked in lowering levels of anatoxin-a, but others, including photocatalysis, UV photolysis, and chlorination, have not shown great efficacy.
Directly removing the cyanobacteria in the water treatment process through physical treatment (e.g., membrane filtration) is another option because most of the anatoxin-a is contained within the cells when the bloom is growing. However, anatoxin-a is released from cyanobacteria into water when they senesce and lyse, so physical treatment may not remove all of the anatoxin-a present. Additional research needs to be done to find more reliable and efficient methods of both detection and treatment.
Laboratory uses
Anatoxin-a is a very powerful nicotinic acetylcholine receptor agonist and as such has been extensively studied for medicinal purposes. It is mainly used as a pharmacological probe to investigate diseases characterized by low acetylcholine levels, such as muscular dystrophy, myasthenia gravis, Alzheimer's disease, and Parkinson's disease. Anatoxin-a and less potent analogues are also being investigated as possible replacements for acetylcholine.
Genera of cyanobacteria that produce anatoxin-a
Anabaena (Dolichospermum)
Aphanizomenon
Cylindrospermopsis
Cylindrospermum
Lyngbya
Microcystis
Nostoc
Oscillatoria
Microcoleus (Phormidium)
Planktothrix
Raphidiopsis
Tychonema
Woronichinia
See also
Guanitoxin
Epibatidine
References
Further reading
External links
Very Fast Death Factor (Anatoxin-a) at The Periodic Table of Videos (University of Nottingham)
Molecule of the Month: Anatoxin at the School of Chemistry, Physics, and Environmental Studies, University of Sussex at Brighton
Neurotoxins
Nitrogen heterocycles
Alkaloids
Ketones
Cyanotoxins
Cycloalkenes
Amines
Heterocyclic compounds with 2 rings
Enones
Bacterial alkaloids
Nicotinic agonists | Anatoxin-a | [
"Chemistry"
] | 4,972 | [
"Biomolecules by chemical classification",
"Natural products",
"Ketones",
"Functional groups",
"Amines",
"Organic compounds",
"Neurochemistry",
"Neurotoxins",
"Bases (chemistry)",
"Alkaloids"
] |
10,795,214 | https://en.wikipedia.org/wiki/NGC%202371-2 | NGC 2371-2 is a dual-lobed planetary nebula located in the constellation Gemini. Visually, it appears as though it could be two separate objects; John Louis Emil Dreyer therefore gave the planetary nebula two entries in the New General Catalogue, so it may be referred to as NGC 2371, NGC 2372, or variations on this name. It has also been called the double bubble nebula.
The central star of the planetary nebula has a spectral type of [WO1], indicating a spectrum similar to that of an oxygen-rich Wolf–Rayet star.
Observations
NGC 2371-2 is in the constellation of Gemini which is visible in the latitudes between +90° and −60°. The planetary nebula appears southwest of Castor, and is located at a distance of 4400 light years.
At 13th magnitude, this nebula is well within the limits of most amateur telescopes. Like most planetary nebulae, this one responds well to both high magnification and narrow-band filters, especially an OIII emission filter. It is listed within the RASC's 110 Finest NGC List.
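Combining the figures above (an integrated apparent magnitude of 13 at a distance of about 4,400 light-years), the standard distance modulus gives a rough integrated absolute magnitude for the nebula; the sketch below ignores interstellar extinction:

```python
import math

# Distance modulus: m - M = 5 * log10(d / 10 pc), using the figures
# quoted above (apparent magnitude ~13, distance ~4,400 light-years;
# 1 pc ~ 3.2616 ly). Interstellar extinction is ignored.
m = 13.0
d_pc = 4400 / 3.2616

M = m - 5 * math.log10(d_pc / 10)
print(f"Integrated absolute magnitude M ~ {M:.1f}")  # ~2.3
```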
Gallery
See also
Gemini in Chinese astronomy
Cancer Minor (constellation) – Obsolete constellation inside modern Gemini
References
External links
The Hubble European Space Agency Information Centre Hubble picture and information on NGC 2371
Planetary nebulae
Gemini (constellation)
2371 | NGC 2371-2 | [
"Astronomy"
] | 274 | [
"Gemini (constellation)",
"Constellations"
] |
10,795,399 | https://en.wikipedia.org/wiki/NGC%203242 | NGC 3242 (also known as the Ghost of Jupiter, Eye Nebula or Caldwell 59) is a planetary nebula located in the constellation Hydra.
William Herschel discovered the nebula on February 7, 1785, and catalogued it as H IV.27. John Herschel observed it from the Cape of Good Hope, South Africa, in the 1830s, and numbered it as h 3248, and included it in the 1864 General Catalogue as GC 2102; this became NGC 3242 in J. L. E. Dreyer's New General Catalogue of 1888.
This planetary nebula is most frequently called the Ghost of Jupiter, or Jupiter's Ghost, because its apparent size is similar to that of the planet, but it is also sometimes referred to as the Eye Nebula. The nebula measures around two light-years from end to end, and contains a central white dwarf with an apparent magnitude of 11. The inner layers of the nebula were formed some 1,500 years ago. The two ends of the nebula are marked by FLIERs, lobes of fast-moving gas often tinted red in false-color pictures. NGC 3242 can easily be observed with amateur telescopes and appears bluish-green to most observers. Larger telescopes can distinguish the outer halo as well.
At the center of NGC 3242 is an O-type star with a spectral type of O(H).
Gallery
See also
List of NGC objects
Planetary nebula
References
External links
The Hubble European Space Agency Information Centre – Hubble picture and information on NGC 3242
NGC3242 on astro-pics.com
FLIERs in NGC 3242
Planetary nebulae
3242
059b
17850207
Hydra (constellation) | NGC 3242 | [
"Astronomy"
] | 335 | [
"Hydra (constellation)",
"Constellations"
] |
10,795,520 | https://en.wikipedia.org/wiki/National%20Centre%20for%20Text%20Mining | The National Centre for Text Mining (NaCTeM) is a publicly funded text mining (TM) centre. It was established to provide support, advice and information on TM technologies and to disseminate information within the larger TM community, while also providing services and tools in response to the requirements of the United Kingdom academic community.
The software tools and services which NaCTeM supplies allow researchers to apply text mining techniques to problems within their specific areas of interest – examples of these tools are highlighted below. In addition to providing services, the centre is also involved in, and makes significant contributions to, the text mining research community both nationally and internationally in initiatives such as Europe PubMed Central.
The centre is located in the Manchester Institute of Biotechnology and is operated and organised by the Department of Computer Science, University of Manchester. NaCTeM contributes expertise in natural language processing and information extraction, including named-entity recognition and the extraction of complex relationships (or events) that hold between named entities, along with parallel and distributed data mining systems in biomedical and clinical applications.
Services
TerMine
TerMine is a domain independent method for automatic term recognition which can be used to help locate the most important terms in a document and automatically rank them.
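TerMine is reported to be based on the C-value measure of termhood; the sketch below implements the basic C-value formula over toy frequency counts, with nesting detected by simple substring matching, which is a simplification of the full method:

```python
import math

# Basic C-value termhood scoring (Frantzi & Ananiadou), the measure that
# TerMine is reported to implement. The candidate terms and frequencies
# below are toy values, and nesting is detected by plain substring
# matching, a simplification of the full method (which also handles
# single-word terms and an NC-value context weighting).
freqs = {
    "adenoid cystic basal cell carcinoma": 2,
    "basal cell carcinoma": 5,
    "cell carcinoma": 9,
}

def c_value(term: str) -> float:
    f = freqs[term]
    length = len(term.split())
    # Longer candidate terms in which this term is nested:
    nests = [t for t in freqs if term in t and t != term]
    if not nests:
        return math.log2(length) * f
    return math.log2(length) * (f - sum(freqs[t] for t in nests) / len(nests))

for t in sorted(freqs, key=c_value, reverse=True):
    print(f"{c_value(t):6.2f}  {t}")
```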
AcroMine
AcroMine finds all known expanded forms of an acronym as they have appeared in Medline entries or, conversely, possible acronyms of a given expanded form as they have previously appeared in Medline, and disambiguates them.
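A common baseline for pairing acronyms with their expansions, not necessarily AcroMine's own algorithm, is Schwartz–Hearst-style right-to-left character matching; a minimal sketch:

```python
# Minimal acronym-expansion matcher in the spirit of the classic
# Schwartz-Hearst heuristic. AcroMine's actual method is more elaborate
# (it mines and disambiguates expansions across the whole of Medline).
def find_expansion(short: str, candidate: str) -> str | None:
    """Match acronym characters right-to-left against the candidate text;
    the acronym's first character must begin a word of the expansion."""
    s, l = len(short) - 1, len(candidate) - 1
    while s >= 0:
        c = short[s].lower()
        if not c.isalnum():
            s -= 1
            continue
        while l >= 0 and (candidate[l].lower() != c or
                          (s == 0 and l > 0 and candidate[l - 1].isalnum())):
            l -= 1
        if l < 0:
            return None          # not every acronym character was matched
        s -= 1
        l -= 1
    start = candidate.rfind(" ", 0, l + 2) + 1   # trim to a word boundary
    return candidate[start:]

print(find_expansion("nAChR", "nicotinic acetylcholine receptor"))
```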
Medie
Medie is an intelligent search engine for the semantic retrieval of sentences containing biomedical correlations from Medline abstracts.
Facta+
Facta+ is a Medline search engine for finding associations between biomedical concepts.
Facta+ Visualizer
Facta+ Visualizer is a web application that aids in understanding FACTA+ search results through intuitive graphical visualisation.
KLEIO
KLEIO is a faceted semantic information retrieval system over Medline abstracts.
Europe PMC EvidenceFinder
Europe PMC EvidenceFinder Europe PMC EvidenceFinder helps users to explore facts that involve entities of interest within the full text articles of the Europe PubMed Central database.
EUPMC Evidence Finder for Anatomical entities with meta-knowledge
EUPMC Evidence Finder for Anatomical entities with meta-knowledge is similar to the Europe PMC EvidenceFinder, allowing exploration of facts involving anatomical entities within the full-text articles of the Europe PubMed Central database. Facts can be filtered according to various aspects of their interpretation (e.g., negation, certainty level, novelty).
Info-PubMed
Info-PubMed provides information and graphical representation of biomedical interactions extracted from Medline using deep semantic parsing technology. This is supplemented with a term dictionary consisting of over 200,000 protein/gene names and identification of disease types and organisms.
Clinical Trial Protocols (ASCOT)
ASCOT is an efficient, semantically-enhanced search application, customised for clinical trial documents.
History of Medicine (HOM)
HOM is a semantic search system over historical medical document archives
Resources
BioLexicon
BioLexicon is a large-scale terminological resource for the biomedical domain.
GENIA
GENIA is a collection of reference materials for the development of biomedical text mining systems.
GREC
GREC is a semantically annotated corpus of Medline abstracts intended for training IE systems and/or resources which are used to extract events from biomedical literature.
Metabolite and Enzyme Corpus
This is a corpus of Medline abstracts annotated by experts with metabolite and enzyme names.
Anatomy Corpora
A collection of corpora manually annotated with fine-grained, species-independent anatomical entities, to facilitate the development of text mining systems that can carry out detailed and comprehensive analyses of biomedical scientific text.
Meta-knowledge corpus
This is an enrichment of the GENIA Event corpus, in which events are enriched with various levels of information pertaining to their interpretation. The aim is to allow the training of systems that can distinguish events conveying factual information from those reporting experimental analyses, definite information from speculated information, etc.
Projects
Argo
The objective of the Argo project is to develop a workbench for analysing (primarily annotating) textual data. The workbench, which is accessed as a web application, supports the combination of elementary text-processing components to form comprehensive processing workflows. It provides functionality to manually intervene in the otherwise automatic process of annotation by correcting or creating new annotations, and facilitates user collaboration by providing sharing capabilities for user-owned resources. Argo benefits users such as text-analysis designers by providing an integrated environment for the development of processing workflows; annotators/curators by providing manual annotation functionalities supported by automatic pre-processing and post-processing; and developers by providing a workbench for testing and evaluating text analytics.
Big Mechanism
Big mechanisms are large, explanatory models of complicated systems in which interactions have important causal effects. While the collection of big data is increasingly automated, the creation of big mechanisms remains a largely human effort, one that is becoming increasingly challenging owing to the fragmentation and distribution of knowledge. The ability to automate the construction of big mechanisms could have a major impact on scientific research. As one of a number of projects that make up the DARPA-funded Big Mechanism programme, this project aims to assemble an overarching big mechanism from the literature and prior experiments and to utilise this for the probabilistic interpretation of new patient panomics data. It will integrate machine reading of the cancer literature with probabilistic reasoning across cancer claims using specially designed ontologies, computational modelling of cancer mechanisms (pathways), automated hypothesis generation to extend knowledge of the mechanisms, and a 'Robot Scientist' that performs experiments to test the hypotheses. A repetitive cycle of text mining, modelling, experimental testing, and worldview updating is intended to lead to increased knowledge about cancer mechanisms.
Pathtext
Pathtext/Refine is a system designed to integrate a pathway visualiser, text mining systems and annotation tools.
COPIOUS
This project aims to produce a knowledge repository of Philippine biodiversity by combining the domain-relevant expertise and resources of Philippine partners with the text mining-based big data analytics of the University of Manchester's National Centre for Text Mining. The repository will be a synergy of different types of information, e.g., taxonomic, occurrence, ecological, biomolecular, biochemical, thus providing users with a comprehensive view on species of interest that will allow them to (1) carry out predictive analysis on species distributions, and (2) investigate potential medicinal applications of natural products derived from Philippine species.
Europe PMC Project
This is a collaboration with the Text-Mining group at the European Bioinformatics Institute (EBI) and Mimas (data centre), forming a work package in the Europe PubMed Central project (formerly UKPMC) hosted and coordinated by the British Library. Europe PMC, as a whole, forms a European version of the PubMed Central paper repository, in collaboration with the National Institutes of Health (NIH) in the United States. Europe PMC is funded by a consortium of key biomedical research funders. The contribution to this major project is in the application of text mining solutions to enhance information retrieval and knowledge discovery. As such, this is an application of technology developed in other NaCTeM projects, deployed on a large scale and in a prominent resource for the biomedicine community.
Mining Biodiversity
This project aims to transform the Biodiversity Heritage Library (BHL) into a next-generation social digital library resource to facilitate the study and discussion (via social media integration) of legacy science documents on biodiversity by a worldwide community and to raise awareness of the changes in biodiversity over time in the general public. The project integrates novel text mining methods, visualisation, crowdsourcing and social media into the BHL. The resulting digital resource will provide fully interlinked and indexed access to the full content of BHL library documents, via semantically enhanced and interactive browsing and searching capabilities, allowing users to locate precisely the information of interest to them in an easy and efficient manner.
Mining for Public Health
This project aims to conduct novel research in text mining and machine learning to transform the way in which evidence-based public health (EBPH) reviews are conducted. The aims of the project include developing new unsupervised text mining methods to derive term similarities, supporting screening during EBPH reviews, and creating new algorithms for ranking and visualising meaningful associations of multiple types in a dynamic and iterative manner. These newly developed methods will be evaluated in EBPH reviews, based on the implementation of a pilot, to ascertain the level of transformation in EBPH reviewing.
References
External links
http://www.nactem.ac.uk
Computational linguistics
Computer science organizations
Information retrieval organizations
Information technology organisations based in the United Kingdom
Research institutes in Manchester
Department of Computer Science, University of Manchester
Linguistics organizations | National Centre for Text Mining | [
"Technology"
] | 1,829 | [
"Computer science",
"Natural language and computing",
"Computational linguistics",
"Computer science organizations"
] |
10,795,528 | https://en.wikipedia.org/wiki/Cylindrospermopsin | Cylindrospermopsin (abbreviated to CYN, or CYL) is a cyanotoxin produced by a variety of freshwater cyanobacteria. CYN is a polycyclic uracil derivative containing guanidino and sulfate groups. It is also zwitterionic, making it highly water soluble. CYN is toxic to liver and kidney tissue and is thought to inhibit protein synthesis and to covalently modify DNA and/or RNA. It is not known whether cylindrospermopsin is a carcinogen, but it appears to have no tumour initiating activity in mice.
CYN was first discovered after an outbreak of a mystery disease on Palm Island, Queensland, Australia. The outbreak was traced back to a bloom of Cylindrospermopsis raciborskii in the local drinking water supply, and the toxin was subsequently identified. Analysis of the toxin led to a proposed chemical structure in 1992, which was revised after synthesis was achieved in 2000. Several analogues of CYN, both toxic and non-toxic, have been isolated or synthesised.
C. raciborskii has been observed mainly in tropical areas, but has also recently been discovered in temperate regions of Australia, North and South America, New Zealand and Europe. Although a CYN-producing strain of C. raciborskii has not been identified in Europe, several other cyanobacterial species occurring across the continent are able to synthesize the toxin.
Discovery
In 1979, 138 inhabitants of Palm Island, Queensland, Australia, were admitted to hospital, suffering various symptoms of gastroenteritis. All of these were children; in addition, 10 adults were affected but not hospitalised. Initial symptoms, including abdominal pain and vomiting, resembled those of hepatitis; later symptoms included kidney failure and bloody diarrhoea. Urine analysis revealed high levels of proteins, ketones and sugar in many patients, along with blood and urobilinogen in lesser numbers. The urine analysis, along with faecal microscopy and poison screening, could not provide a statistical link to the symptoms. All patients recovered within 4 to 26 days, and at the time there was no apparent cause for the outbreak. Initial thoughts on the cause included poor water quality and diet, however none were conclusive, and the illness was coined the “Palm Island Mystery Disease”.
At the time, it was noticed that this outbreak coincided with a severe algal bloom in the local drinking water supply, and soon after the focus turned to the dam in question. An epidemiological study of this “mystery disease” later confirmed that the Solomon Dam was implicated, as those that became ill had used water from the dam. It became apparent that a recent treatment of the algal bloom with copper sulfate caused lysis of the algal cells, releasing a toxin into the water.
A study of the dam revealed that periodic blooms of algae were caused predominantly by three strains of cyanobacteria: two of the genus Anabaena, and Cylindrospermopsis raciborskii, previously unknown in Australian waters. A mouse bioassay of the three demonstrated that although the two Anabaena strains were non-toxic, C. raciborskii was highly toxic. Later isolation of the compound responsible led to the identification of the toxin cylindrospermopsin.
A later report alternatively proposed that the excess copper in the water was the cause of the disease. The excessive dosing followed the use of least-cost contractors, unqualified in the field, to control the algae.
Chemistry
Structure determination
Isolation of the toxin using cyanobacteria cultured from the original Palm Island strain was achieved by gel filtration of an aqueous extract, followed by reverse-phase HPLC. Structure elucidation was achieved via mass spectrometry (MS) and nuclear magnetic resonance (NMR) experiments, and a structure (later proven slightly incorrect) was proposed (Figure 1).
This proposed structure possesses a tricyclic guanidine moiety (rings A, B and C), along with a uracil ring (D). The zwitterionic nature of the molecule makes it highly water-soluble, as the presence of charged areas within the molecule creates a dipole effect suited to the polar solvent. Sensitivity of key signals in the NMR spectrum to small changes in pH suggested that the uracil ring exists in a keto/enol tautomeric relationship, in which a hydrogen transfer results in two distinct structures (Figure 2). It was originally proposed that a hydrogen bond between the uracil and guanidine groups in the enol tautomer would make this the dominant form.
Analogues
A second metabolite of C. raciborskii was identified from extracts of the cyanobacteria after the observation of a frequently occurring peak accompanying that of CYN during UV and MS experiments. Analysis by MS and NMR methods concluded that this new compound was missing the oxygen adjacent to the uracil ring, and was named deoxycylindrospermopsin (Figure 3).
In 1999, an epimer of CYN, named 7-epicylindrospermopsin (epiCYN), was also identified as a minor metabolite from Aphanizomenon ovalisporum. This occurred whilst isolating CYN from cyanobacteria taken from Lake Kinneret in Israel. The proposed structure of this molecule differed from CYN only in the orientation of the hydroxyl group adjacent to the uracil ring (Figure 4).
Total synthesis
Synthetic approaches to CYN started with the piperidine ring (A), and progressed to annulation of rings B and C. The first total synthesis of CYN was reported in 2000 through a 20-step process.
Improvements to synthetic methods led to a revision of the stereochemistry of CYN in 2001. A synthetic process controlling each of the six stereogenic centres of epiCYN established that the original assignments of both CYN and epiCYN were in fact a reversal of the correct structures. An alternative approach by White and Hansen supported these absolute configurations (Figure 5). At the time of this correct assignment, it was suggested that the enol form was not dominant.
Stability
One of the key factors associated with the toxicity of CYN is its stability. Although the toxin has been found to degrade rapidly in an algal extract when exposed to sunlight, it is resistant to degradation by changes in pH and temperature, and shows no degradation in either the pure solid form or in pure water. As a result, in turbid and unmoving water the toxin can persist for long periods, and although boiling water will kill the cyanobacteria, it may not remove the toxin.
Toxicology
Toxic effects
Hawkins et al. demonstrated the toxic effects of CYN by mouse bioassay, using an extract of the original Palm Island strain. Acutely poisoned mice displayed anorexia, diarrhoea and gasping respiration. Autopsy results revealed haemorrhages in the lungs, livers, kidneys, small intestines and adrenal glands. Histopathology revealed dose-related necrosis of hepatocytes, lipid accumulation, and fibrin thrombi formation in blood vessels of the liver and lungs, along with varying epithelial cell necrosis in areas of the kidneys.
A more recent mouse bioassay of the effects of cylindrospermopsin revealed an increase in liver weight, with both lethal and non-lethal doses; in addition the livers appeared dark-coloured. Extensive necrosis of hepatocytes was visible in mice administered a lethal dose, and some localised damage was also observed in mice administered a non-lethal dose.
Toxicity
An initial estimate of the toxicity of CYN, made in 1985, was that the LD50 at 24 hours was 64±5 mg of freeze-dried culture/kg of mouse body weight on intraperitoneal injection. A further experiment in 1997 measured the LD50 as 52 mg/kg at 24 hours and 32 mg/kg at 7 days; however, the data suggested that another toxic compound was present in the isolate of sonicated cells used. Predictions made by Ohtani et al. about the 24-hour toxicity were considerably higher, and it was proposed that another metabolite was present to account for the relatively low 24-hour toxicity level measured.
Because the most likely human route of uptake of CYN is ingestion, oral toxicity experiments were conducted on mice. The oral LD50 was found to be 4.4-6.9 mg CYN/kg, and in addition to some ulceration of the oesophageal gastric mucosa, symptoms were consistent with that of intraperitoneal dosing. Stomach contents included culture material, which indicated that these LD50 figures might be overestimated.
Another proposed means of exposure to CYN relates to alterations in the gut microbiome by artificial sweeteners. A study of aspartame users conducted at Cedars-Sinai in Los Angeles by Ruchi Mathur detected CYN in the duodenum at levels four times above baseline, along with alterations in bacterial species.
Mechanism of action
Pathological changes associated with CYN poisoning were reported to be in four distinct stages: inhibition of protein synthesis, proliferation of membranes, lipid accumulation within cells, and finally cell death. Examination of mice livers removed at autopsy showed that on intraperitoneal injection of CYN, after 16 hours ribosomes from the rough endoplasmic reticulum (rER) had detached, and at 24 hours, marked proliferation of the membrane systems of the smooth ER and Golgi apparatus had occurred. At 48 hours, small lipid droplets had accumulated in the cell bodies, and at 100 hours, hepatocytes in the hepatic lobules were destroyed beyond function.
The process of protein synthesis inhibition has been shown to be irreversible; however, it is not conclusively the method of cytotoxicity of the compound. Froscio et al. proposed that CYN has at least two separate modes of action: the previously reported protein synthesis inhibition, and an as-yet unclear method of causing cell death. It has been shown that cells can survive for long periods (up to 20 hours) with 90% inhibition of protein synthesis, and still maintain viability. Since CYN is cytotoxic within 16–18 hours, it has been suggested that other mechanisms are the cause of cell death.
Cytochrome P450 has been implicated in the toxicity of CYN, as blocking the action of P450 reduces the toxicity of CYN. It has been proposed that an activated P450-derived metabolite (or metabolites) of CYN is the main cause of toxicity. Shaw et al. demonstrated that the toxin could be metabolised in vivo, resulting in bound metabolites in the liver tissue, and that damage was more prevalent in rat hepatocytes than other cell types.
Due to the structure of CYN, which includes sulfate, guanidine and uracil groups, it has been suggested that CYN acts on DNA or RNA. Shaw et al. reported covalent binding of CYN or its metabolites to DNA in mice, and DNA strand breakage has also been observed. Humpage et al. also supported this, and in addition postulated that CYN (or a metabolite) acts on either the spindle or centromeres during cell division, inducing loss of whole chromosomes.
The uracil group of CYN has been identified as a pharmacophore of the toxin. In two experiments, the vinylic hydrogen atom on the uracil ring was replaced with a chlorine atom to form 5-chlorocylindrospermopsin, and the uracil group was truncated to a carboxylic acid to form cylindrospermic acid (Figure 6). Both products were assessed as non-toxic, even at 50 times the LD50 of CYN. In the earlier determination of the structure of deoxycylindrospermopsin, a toxicity assessment of that compound was also carried out: mice injected intraperitoneally with deoxycylindrospermopsin at four times the 5-day median lethal dose of CYN showed no toxic effects. As this compound was shown to be relatively abundant, it was concluded that this analogue is comparatively non-toxic. Given that both CYN and epiCYN are toxic, the hydroxyl group on the uracil bridge can be considered necessary for toxicity. As yet, the relative toxicities of CYN and epiCYN have not been compared.
Biosynthesis
The cylindrospermopsin biosynthetic gene cluster (BGC) was described from Cylindrospermopsis raciborskii AWT205 in 2008.
Related toxic blooms and their impact
Since the Palm Island outbreak, several other species of cyanobacteria have been identified as producing CYN: Anabaena bergii, Anabaena lapponica, Aphanizomenon ovalisporum, Umezakia natans, Raphidiopsis curvata, and Aphanizomenon issatschenkoi. In Australia, three main toxic cyanobacteria exist: Anabaena circinalis, Microcystis species and C. raciborskii. Of these, the latter, which produces CYN, has attracted considerable attention, not only because of the Palm Island outbreak but also because the species is spreading to more temperate areas. Previously classed as exclusively tropical, it has recently been discovered in temperate regions of Australia, Europe, North and South America, and New Zealand.
In August 1997, three cows and ten calves died from cylindrospermopsin poisoning on a farm in northwest Queensland. A nearby dam containing an algal bloom was tested, and C. raciborskii was identified. Analysis by HPLC/mass spectrometry revealed the presence of CYN in a sample of the biomass. An autopsy of one of the calves reported a swollen liver and gall bladder, along with haemorrhages of the heart and small intestine. Histological examination of the hepatic tissue was consistent with that reported in CYN-affected mice. This was the first report of C. raciborskii causing mortality in animals in Australia.
The effect of a bloom of C. raciborskii on an aquaculture pond in Townsville, Australia was assessed in 1997. The pond contained Redclaw crayfish, along with a population of Lake Eacham Rainbowfish to control the excess food. Analysis revealed that the water contained both extracellular and intracellular CYN, and that the crayfish had accumulated this primarily in the liver but also in the muscle tissue. Examination of the gut contents revealed cyanobacterial cells, indicating that the crayfish had ingested intracellular toxin. An experiment using an extract of the bloom showed that extracellular toxin could also be taken up directly into the tissues. Such bioaccumulation, particularly in the aquaculture industry, is of concern, especially when humans are the end users of the product.
The impact of cyanobacterial blooms has been assessed in economic terms. In December 1991, the world's largest algal bloom occurred in Australia, where 1000 km of the Darling-Barwon River was affected. One million people-days of drinking water were lost, and the direct costs incurred totalled more than A$1.3 million. Moreover, 2000 site-days of recreation were also lost, and the economic cost was estimated at A$10 million, after taking into account indirectly affected industries such as tourism, accommodation and transport.
Current methods of analysis in water samples
Current methods include liquid chromatography coupled to mass spectrometry (LC-MS), mouse bioassay, protein synthesis inhibition assay, and reverse-phase HPLC-PDA (Photo Diode Array) analysis. A cell free protein synthesis assay has been developed which appears to be comparable to HPLC-MS.
See also
Cyanotoxin
Lyngbyatoxin
Microcystin
Nodularin
Saxitoxin
Guanitoxin
References
Neurotoxins
Nitrogen heterocycles
Bacterial alkaloids
Cyanotoxins
Guanidine alkaloids
Zwitterions
Total synthesis
Uracil derivatives
Protein synthesis inhibitors
Sulfate esters | Cylindrospermopsin | [
"Physics",
"Chemistry"
] | 3,436 | [
"Matter",
"Guanidine alkaloids",
"Alkaloids by chemical classification",
"Zwitterions",
"Chemical synthesis",
"Total synthesis",
"Neurochemistry",
"Neurotoxins",
"Ions"
] |
10,795,809 | https://en.wikipedia.org/wiki/Adakite | Adakites are volcanic rocks of intermediate to felsic composition that have geochemical characteristics of magma originally thought to have formed by partial melting of altered basalt that is subducted below volcanic arcs. Most magmas derived in subduction zones come from the mantle above the subducting plate when hydrous fluids are released from minerals that break down in the metamorphosed basalt, rise into the mantle, and initiate partial melting. However, Defant and Drummond recognized that when young oceanic crust (less than 25 million years old) is subducted, adakites are typically produced in the arc. They postulated that when young oceanic crust is subducted it is "warmer" (closer to the mid-ocean ridge where it formed) than crust that is typically subducted. The warmer crust enables melting of the metamorphosed subducted basalt rather than the mantle above. Experimental work by several researchers has verified the geochemical characteristics of "slab melts" and the contention that melts can form from young and therefore warmer crust in subduction zones.
The geochemical characteristics Defant and Drummond gave for adakites are as follows (a screening sketch is given after the list):
SiO2 greater than 56 wt %
Al2O3 greater than or equal to 15 wt %
MgO normally less than 3 wt %
Sr greater than 400 ppm
Y less than 18 ppm
Yb less than 1.9 ppm
87Sr/86Sr usually less than 0.7045
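The cut-offs above translate directly into a simple screening check. The sketch below is a hypothetical helper, not part of any published classification package; the key names and the dict-based interface are our own conventions (oxides in wt %, trace elements in ppm).

```python
def is_adakite_like(sample: dict) -> bool:
    """Screen a whole-rock analysis against the Defant and Drummond cut-offs."""
    return (
        sample["SiO2"] > 56.0
        and sample["Al2O3"] >= 15.0
        and sample["MgO"] < 3.0          # "normally" less than 3 wt %
        and sample["Sr"] > 400.0
        and sample["Y"] < 18.0
        and sample["Yb"] < 1.9
        and sample["Sr87_86"] < 0.7045   # "usually" less than this value
    )

# Example: a silica-rich sample with high Sr and low Y/Yb passes the screen.
print(is_adakite_like({"SiO2": 63.0, "Al2O3": 16.5, "MgO": 2.1,
                       "Sr": 650.0, "Y": 10.0, "Yb": 0.9,
                       "Sr87_86": 0.7038}))  # True
```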
Later Defant and Kepezhinskas reviewed the topic in some detail pointing out that adakites are found associated with many mineral deposits including gold and copper.
Drummond and Defant noted that Archean trondhjemites (which make up most of the ancient crust of continents) have similar geochemical characteristics to adakites. They suggested that the entire Archean crust may have been derived from the partial melting of subducted oceanic crust during the Archean (> 2.5 billion years ago), because on the early Earth the mantle was much hotter, and more oceanic crust was generated and subducted while still young. The proposal has been controversial and is still being argued among the scientific community. The alternative interpretation is that the continental crust was derived from the partial melting of lower crustal basalts. The same idea has also been postulated for the generation of adakites. However, this hypothesis does not explain the correlation between subducted young crust and adakite eruptions, nor the fact that the lower Yb and Y in adakites suggest that garnet is stable in the source. Garnet forms only under high pressures within the Earth and would not be stable in the lower crust below some island arcs that erupt adakites. See Martin et al. for a more recent summary.
Low magnesium adakites may be representative of relatively pure partial melting of a subducting basalt, whereas high-magnesium adakite or high-magnesium andesites may represent melt contamination with the peridotites of the overlying mantle wedge. Adakites have also been reported from the continent-continent collision zone beneath Tibet and Lesser Caucasus.
Examples
Adak Island, Alaska
Trans-Mexican Volcanic Belt, Mexico
Mindanao, the Philippines
References
Felsic rocks
Intermediate rocks
Volcanic rocks
Subduction | Adakite | [
"Chemistry"
] | 654 | [
"Felsic rocks",
"Intermediate rocks",
"Igneous rocks by composition"
] |
10,795,926 | https://en.wikipedia.org/wiki/PEAKS | PEAKS is a proteomics software program for tandem mass spectrometry designed for peptide sequencing, protein identification and quantification.
Description
PEAKS is commonly used for peptide identification (protein ID) through de novo peptide sequencing-assisted database searching. PEAKS also integrates PTM and mutation characterization through automatic peptide sequence tag based searching (SPIDER) and PTM identification.
PEAKS provides a complete sequence for each peptide, confidence scores on individual amino acid assignments, simple reporting for high-throughput analysis, amongst other information.
The software has the ability to compare results of multiple search engines. PEAKS inChorus will cross check test results automatically with other protein ID search engines, like Sequest, OMSSA, X!Tandem and Mascot. This approach guards against false positive peptide assignments.
PEAKS Q is an add-on tool for protein quantification, supporting labelled (ICAT, iTRAQ, SILAC, TMT, 18O, etc.) and label-free techniques.
SPIDER is a sequence tag based search tool within PEAKS, which deals with the possible overlaps between the de novo sequencing errors and the homology mutations. It reconstructs the real peptide sequence by combining both the de novo sequence tag and the homolog, automatically and efficiently.
A collection of algorithms used within the PEAKS software has been adapted and configured into a specialized project, PEAKS AB, the first method for automated monoclonal antibody sequencing.
Notes
Mass spectrometry software
Proteomic sequencing | PEAKS | [
"Physics",
"Chemistry",
"Biology"
] | 304 | [
"Spectrum (physical sciences)",
"Chemistry software",
"Proteomic sequencing",
"Molecular biology techniques",
"Mass spectrometry software",
"Mass spectrometry"
] |
10,796,040 | https://en.wikipedia.org/wiki/Parking%20guidance%20and%20information | Parking guidance and information (PGI) systems, or car park guidance systems, present drivers with dynamic information on parking within controlled areas. The systems combine traffic monitoring, communication, processing and variable message sign technologies to provide the service.
Modern parking lots utilize a variety of technologies to help motorists find unoccupied parking spaces, locate their car when returning, and improve their overall experience. These include adaptive lighting, sensors, parking space indicators mounted above every space (red for occupied, green for available, blue for spaces reserved for the disabled), and indoor positioning systems (IPS).
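A minimal sketch of the per-space indicator logic described above follows; the state names and the enum-based interface are our own assumptions rather than any particular vendor's design.

```python
from enum import Enum

class IndicatorColour(Enum):
    OCCUPIED = "red"
    AVAILABLE = "green"
    RESERVED_DISABLED = "blue"

def indicator_colour(sensor_detects_vehicle: bool, disabled_bay: bool) -> str:
    """Map one space's sensor reading to its overhead indicator colour."""
    if sensor_detects_vehicle:
        return IndicatorColour.OCCUPIED.value
    if disabled_bay:
        return IndicatorColour.RESERVED_DISABLED.value
    return IndicatorColour.AVAILABLE.value

print(indicator_colour(False, False))  # green: free general space
print(indicator_colour(False, True))   # blue: free space reserved for the disabled
print(indicator_colour(True, True))    # red: occupied
```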
PGI systems are a product of the worldwide initiative for the development of intelligent transportation systems in urban areas. PGI systems can assist in the development of a safe, efficient and environmentally friendly transportation network.
PGI systems are designed to aid in the search for vacant parking spaces by directing drivers to car parks where occupancy levels are low. The objective is to reduce search time, which in turn reduces congestion on the surrounding roads for other traffic with related benefits to air pollution with the ultimate aim of enhancement of the urban area.
Parking guidance systems have evolved considerably in recent years. Ultrasound and laser technologies provide information on the availability of parking spaces throughout the parking facility. At the same time, camera-based technologies now make it possible to read the license plate of the vehicle in each parking space. This adds value by allowing a specific vehicle to be identified in a specific parking space and, in some installations, by recording incidents occurring in that space. These methods increase security and revenue for parking operators.
Parking guidance system
Parking guidance systems (PGS) have different elements:
Ultrasound detectors
Camera-based sensors
Individual indicators
Zone Controllers
Data / Intermediate Controllers
Central Control System
Signs or displays
Kiosks
References
Civil engineering
Transportation engineering
Urban planning
Articles containing video clips | Parking guidance and information | [
"Engineering"
] | 373 | [
"Industrial engineering",
"Construction",
"Transportation engineering",
"Urban planning",
"Civil engineering",
"Architecture"
] |
10,796,093 | https://en.wikipedia.org/wiki/Hypersociability | In the context of transmedia storytelling, hypersociability is the encouraged involvement of media consumers in a story through ordinary social interaction. A story may be shared through discourse within a fan group. Hypersociability lessens the need for a publisher to offer fixed media. Instead, storytellers hope that fans will build on the story themselves either over the Internet or through direct conversation. The principle of hypersociability is most widely used in Japanese pop culture, examples of which include Yu-Gi-Oh! and Pokémon, which used multiplayer games separate from the original media. The Wachowskis deliberately incorporated elements of hypersociability for The Animatrix by seeking the help of Japanese animators.
Hypersociability can also occasionally refer to a symptom of Williams syndrome characterized by an unusual willingness to converse with others.
References
Internet culture
Hyperreality
Social influence
Storytelling | Hypersociability | [
"Technology"
] | 181 | [
"Hyperreality",
"Science and technology studies"
] |
10,796,362 | https://en.wikipedia.org/wiki/Immunoglobulin%20light%20chain | The immunoglobulin light chain is the small polypeptide subunit of an antibody (immunoglobulin).
A typical antibody is composed of two immunoglobulin (Ig) heavy chains and two Ig light chains.
In humans
There are two types of light chain in humans:
kappa (κ) chain, encoded by the immunoglobulin kappa locus (IGK@) on chromosome 2 (locus: 2p11.2)
lambda (λ) chain, encoded by the immunoglobulin lambda locus (IGL@) on chromosome 22 (locus: 22q11.2)
Antibodies are produced by B lymphocytes, each expressing only one class of light chain. Once set, light chain class remains fixed for the life of the B lymphocyte. In a healthy individual, the total kappa-to-lambda ratio is roughly 2:1 in serum (measuring intact whole antibodies) or 1:1.5 if measuring free light chains, with a highly divergent ratio indicative of neoplasm. The free light chain ratio ranges from 0.26 to 1.65. Both the kappa and the lambda chains can increase proportionately, maintaining a normal ratio. This is usually indicative of something other than a blood cell dyscrasia, such as kidney disease.
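As a concrete illustration, the quoted reference interval can be applied as a simple screening rule. The function below is a hypothetical sketch (names and interface are ours); real-world interpretation also weighs the absolute light chain levels and renal function.

```python
def flc_ratio_flag(kappa_mg_per_L: float, lambda_mg_per_L: float) -> str:
    """Flag a serum free light chain kappa/lambda ratio against 0.26-1.65."""
    ratio = kappa_mg_per_L / lambda_mg_per_L
    if 0.26 <= ratio <= 1.65:
        return f"ratio {ratio:.2f}: within the reference interval"
    return f"ratio {ratio:.2f}: outside the reference interval (possible clonality)"

print(flc_ratio_flag(12.0, 14.0))   # ratio 0.86: within the reference interval
print(flc_ratio_flag(150.0, 10.0))  # ratio 15.00: outside (kappa-restricted pattern)
```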
In other animals
The immunoglobulin light chain genes in tetrapods can be classified into three distinct groups: kappa (κ), lambda (λ), and sigma (σ). The divergence of the κ, λ, and σ isotypes preceded the radiation of tetrapods. The σ isotype was lost after the evolution of the amphibian lineage and before the emergence of the reptilian lineage.
Other types of light chains can be found in lower vertebrates, such as the Ig-Light-Iota chain of Chondrichthyes and Teleostei.
Camelids are unique among mammals as they also have fully functional antibodies which have two heavy chains, but lack the light chains usually paired with each heavy chain.
Sharks also possess, as part of their adaptive immune systems, a functional heavy-chain homodimeric antibody-like molecule referred to as IgNAR (immunoglobulin new antigen receptor). IgNAR is believed to have never had an associated light chain, in contrast with the understanding that the heavy-chain-only antibodies in camelids may have lost their light chain partners through evolution.
Structure
Only one type of light chain is present in a typical antibody, thus the two light chains of an individual antibody are identical.
Each light chain is composed of two tandem immunoglobulin domains:
one constant (CL) domain
one variable domain (VL) that is important for binding antigen
The approximate length of a light chain protein is from 211 to 217 amino acids. The constant region determines what class (kappa or lambda) the light chain is. The lambda class has 4 subtypes (1, 2, 3, and 7).
In pathology
Individual B-cells in lymphoid tissue possess either kappa or lambda light chains, but never both together.
Using immunohistochemistry, it is possible to determine the relative abundance of B-cells expressing kappa and lambda light chains. If the lymph node or similar tissue is reactive, or otherwise benign, it should possess a mixture of kappa positive and lambda positive cells. If, however, one type of light chain is significantly more common than the other, the cells are likely all derived from a small clonal population, which may indicate a malignant condition, such as B-cell lymphoma.
Free immunoglobulin light chains secreted by neoplastic plasma cells, such as in multiple myeloma, can be called Bence Jones protein when detected in the urine, although there is a trend to refer to these as urinary free light chains.
Increased levels of free Ig light chains have also been detected in various inflammatory diseases. In contrast to the increased levels seen in lymphoma patients, however, these Ig light chains are polyclonal. Recent studies have shown that these Ig light chains can bind to mast cells and, using their ability to bind antigen, facilitate activation of these mast cells. Activation of mast cells results in the release of various pro-inflammatory mediators which are believed to contribute to the development of the inflammatory disease. Recent studies have shown that Ig light chains activate not only mast cells but also dorsal root ganglia and neutrophils, expanding their possible role as mediators in inflammatory disease.
See also
Monoclonal antibody
References
External links
Educational Resource for Immunoglobulin Light Chains
Immune system | Immunoglobulin light chain | [
"Biology"
] | 980 | [
"Immune system",
"Organ systems"
] |
10,796,713 | https://en.wikipedia.org/wiki/Balanced%20flow | In atmospheric science, balanced flow is an idealisation of atmospheric motion. The idealisation consists in considering the behaviour of one isolated parcel of air having constant density, its motion on a horizontal plane subject to selected forces acting on it and, finally, steady-state conditions.
Balanced flow is often an accurate approximation of the actual flow, and is useful in improving the qualitative understanding and interpretation of atmospheric motion.
In particular, the balanced-flow speeds can be used as estimates of the wind speed for particular arrangements of the atmospheric pressure on Earth's surface.
The momentum equations in natural coordinates
Trajectories
The momentum equations are written primarily for the generic trajectory of a packet of flow travelling on a horizontal plane and taken at a certain elapsed time called t. The position of the packet is defined by the distance on the trajectory s=s(t) which it has travelled by time t. In reality, however, the trajectory is the outcome of the balance of forces upon the particle. In this section we assume to know it from the start for convenience of representation. When we consider the motion determined by the forces selected next, we will have clues of which type of trajectory fits the particular balance of forces.
The trajectory at a position s has one tangent unit vector s that invariably points in the direction of growing s, as well as one unit vector n, perpendicular to s, that points towards the local centre of curvature O.
The centre of curvature is found on the 'inner side' of the bend, and can shift across either side of the trajectory according to the shape of it.
The distance between the parcel position and the centre of curvature is the radius of curvature R at that position.
The radius of curvature approaches an infinite length at the points where the trajectory becomes straight and the positive orientation of n is not determined in this particular case (discussed in geostrophic flows).
The frame of reference (s,n) is shown by the red arrows in the figure. This frame is termed natural or intrinsic because the axes continuously adjust to the moving parcel, and so they are the most closely connected to its fate.
Kinematics
The velocity vector (V) is oriented like s and has intensity (speed) V = ds/dt. This speed is always a positive quantity, since any parcel moves along its own trajectory and, for increasing times (dt>0), the trodden length increases as well (ds>0).
The acceleration vector of the parcel is decomposed in the tangential acceleration parallel to s and in the centripetal acceleration along positive n. The tangential acceleration only changes the speed V and is equal to DV/Dt, where big d's denote the material derivative. The centripetal acceleration always points towards the centre of curvature O and only changes the direction s of the forward displacement while the parcel moves on.
Forces
In the balanced-flow idealization we consider a three-way balance of forces that are:
Pressure force. This is the action on the parcel arising from the spatial differences of atmospheric pressure p around it. (Temporal changes are of no interest here.) The spatial change of pressure is visualised through isobars, that are contours joining the locations where the pressure has a same value. In the figure this is simplistically shown by equally spaced straight lines. The pressure force acting on the parcel is minus the gradient vector of p (in symbols: grad p) – drawn in the figure as a blue arrow. At all points, the pressure gradient points to the direction of maximum increase of p and is always normal to the isobar at that point. Since the flow packet feels a push from the higher to the lower pressures, the effective pressure vector force is contrary to the pressure gradient, whence the minus sign before the gradient vector.
Friction. This is a force always opposing the forward motion, whereby the vector invariably acts in the negative direction s with an effect to reduce the speed. The friction at play in the balanced-flow models is the one exerted by the roughness of the Earth's surface on the air moving higher above. For simplicity, we here assume that the frictional force (per unit mass) adjusts to the parcel's speed proportionally through a constant coefficient of friction K. In more realistic conditions, the dependence of friction on the speed is non-linear except for slow laminar flows.
Coriolis force. This action, due to the Earth's rotation, tends to displace any body travelling in the northern (southern) hemisphere towards its right (left). Its intensity per unit mass is proportional to the speed V and increases in magnitude from the equator (where it is zero) towards the poles proportionally to the local Coriolis frequency f (a positive number north of the equator and negative south). Therefore, the Coriolis vector invariably points sideways, that is along the n axis. Its sign in the balance equation may change, since the positive orientation of n flips between right and left of the trajectory based solely on its curvature, while the Coriolis vector points to either side based on the packet's position on the Earth. The exact expression of the Coriolis force is a bit more complex than the product of the Coriolis parameter and parcel's velocity. However, this approximation is consistent with having neglected the curvature of the Earth's surface.
In the fictitious situation drawn in the figure, the pressure force pushes the parcel forward along the trajectory and inward with respect to the bend; the Coriolis force pushes inwards (outwards) of the bend in the northern (southern) hemisphere; and friction pulls (necessarily) rearwards.
Governing equations
For the dynamical equilibrium of the parcel, either component of acceleration times the parcel's mass is equal to the components of the external forces acting in the same direction.
As the equations of equilibrium for the parcel are written in natural coordinates, the component equations for the horizontal momentum per unit mass are expressed as follows:

$$\frac{DV}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial s} - KV \qquad\text{and}\qquad \frac{V^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial n} \pm fV,$$

in the forward and sideway directions respectively, where ρ is the density of air.
The terms can be broken down as follows (a toy numerical integration of this balance is sketched after the list):
$\frac{DV}{Dt}$ is the temporal rate of speed change to the parcel (tangential acceleration);
$-\frac{1}{\rho}\frac{\partial p}{\partial s}$ is the component of the pressure force per unit mass along the trajectory;
$-KV$ is the deceleration due to friction;
$\frac{V^2}{R}$ is the centripetal acceleration;
$-\frac{1}{\rho}\frac{\partial p}{\partial n}$ is the component of the pressure force per unit mass perpendicular to the trajectory;
$\pm fV$ is the Coriolis force per unit mass (the sign ambiguity depends on the mutual orientation of the force vector and n).
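To see how these three forces interact before a balance is reached, the toy integration below (our own construction, not from the article) releases a parcel from rest and lets it spiral into a steady state. Cartesian velocity components (u, v) are used for simplicity instead of the natural (s, n) coordinates, and the friction coefficient and pressure gradient are illustrative values.

```python
rho = 1.167                # air density, kg/m^3
f = 1.0e-4                 # Coriolis parameter, 1/s
K = 1.0e-5                 # linear friction coefficient, 1/s
dpdx, dpdy = 0.0, 1.0e-3   # pressure gradient components, Pa/m

u = v = 0.0
dt = 60.0                              # time step, s
for _ in range(int(5 * 86400 / dt)):   # integrate for five days
    du = -dpdx / rho + f * v - K * u
    dv = -dpdy / rho - f * u - K * v
    u, v = u + du * dt, v + dv * dt

# The parcel settles close to the steady balanced wind (about 8.5 m/s here),
# blowing nearly parallel to the isobars with a slight drift toward low pressure.
print(f"u = {u:.2f} m/s, v = {v:.2f} m/s")
```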
Steady-state assumption
In the following discussions, we consider steady-state flow.
The speed cannot thus change with time, and the component forces producing tangential acceleration need to sum up to zero.
In other words, active and resistive forces must balance out in the forward direction in order that $DV/Dt = 0$.
Importantly, no assumption is made yet on whether the right-hand side forces are of either significant or negligible magnitude there. Moreover, trajectories and streamlines coincide in steady-state conditions, and the pairs of adjectives tangential/normal and streamwise/cross-stream become interchangeable. An atmospheric flow in which the tangential acceleration is not negligible is called allisobaric.
The velocity direction can still change in space along the trajectory that, excluding inertial flows, is set by the pressure pattern.
General framework
The schematisations
Omitting specific terms in the tangential and normal balance equations, we obtain one of the five following idealized flows: antitriptic, geostrophic, cyclostrophic, inertial, and gradient flows.
By reasoning on the balance of the remaining terms, we can understand
what arrangement of the pressure field supports such flows;
along which trajectory the parcel of air travels; and
with which speed it does so.
The following yes/no table shows which contributions are considered in each idealisation.
The Ekman layer's schematisation is also mentioned for completeness, and is treated separately since it involves the internal friction of air rather than that between air and ground.
The limitations
Vertical differences of air properties
The equations were said to apply to parcels of air moving on horizontal planes.
Indeed, when one considers a column of atmosphere, it is seldom the case that the air density is the same all the way up, since temperature and moisture content, hence density, change with height.
Every parcel within such a column moves according to the air properties at its own height.
Homogeneous sheets of air may slide one over the other, so long as stable stratification of lighter air on top of heavier air leads to well-separated layers.
If some air happens to be heavier/lighter than that in the surroundings, though, vertical motions do occur and modify the horizontal motion in turn.
In nature downdrafts and updrafts can sometimes be more rapid and intense than the motion parallel to the ground.
The balanced-flow equations do not contain either a force representing the sinking/buoyancy action or the vertical component of velocity.
Consider also that the pressure is normally known through instruments (barometers) near the ground/sea level.
The isobars of the ordinary weather charts summarise these pressure measurements, adjusted to the mean sea level for uniformity of presentation, at one particular time.
Such values represent the weight of the air column overhead without indicating the details of the changes of the air's specific weight overhead.
Also, by Bernoulli's theorem, the measured pressure is not exactly the weight of the air column, should significant vertical motion of air occur.
Thus, the pressure force acting on individual parcels of air at different heights is not really known through the measured values.
When using information from a surface-pressure chart in balanced-flow formulations, the forces are best viewed as applied to the entire air column.
A difference of air speed within every air column invariably occurs near the ground/sea, however, even if the air density is the same everywhere and no vertical motion occurs.
There, the roughness of the contact surface slows down the air motion above, and this retarding effect peters out with height.
See, for example, planetary boundary layer.
Frictional antitriptic flow applies near the ground, while the other schematisations apply far enough from the ground not to feel its "braking" effect (free-air flow).
This is a reason to keep the two groups conceptually separated.
The transition from the near-surface to the free-air schematisations is bridged by Ekman-like schematisations, where air-to-air friction, Coriolis and pressure forces are in balance.
In summary, the balanced-flow speeds apply well to air columns that can be regarded as homogeneous (constant density, no vertical motion) or, at most, stably stratified (non-constant density, yet no vertical motion).
An uncertainty in the estimate arises if we are not able to verify these conditions to occur.
They also cannot describe the motion of the entire column from the contact surface with the Earth up to the outer atmosphere, because of the on-off handling of the friction forces.
Horizontal differences of air properties
Even if air columns are homogeneous with height, the density of each column can change from location to location, firstly since air masses have different temperatures and moisture content depending on their origin; and then since air masses modify their properties as they flow over Earth's surface.
For example, in extra-tropical cyclones the air circulating around a pressure low typically comes with a sector of warmer temperature wedged within colder air.
The gradient-flow model of cyclonic circulation does not allow for these features.
Balanced-flow schematisations can be used to estimate the wind speed in air flows covering several degrees of latitude of Earth's surface.
However, in this case assuming a constant Coriolis parameter is unrealistic, and the balanced-flow speed can be applied locally.
See Rossby waves as an example of when changes of latitude are dynamically effective.
Unsteadiness
The balanced-flow approach identifies typical trajectories and steady-state wind speeds derived from balance-giving pressure patterns.
In reality, pressure patterns and the motion of air masses are tied together, since accumulation (or density increase) of air mass somewhere increases the pressure on the ground and vice versa.
Any new pressure gradient will cause a new displacement of air, and thus a continuous rearrangement.
As weather itself demonstrates, steady-state conditions are exceptional.
Since friction, pressure gradient and Coriolis forces do not necessarily balance out, air masses actually accelerate and decelerate, so the actual speed depends on its past values too.
As seen next, the neat arrangement of pressure fields and flow trajectories, either parallel or at a right angle, in balanced-flow follows from the assumption of steady flow.
The steady-state balanced-flow equations do not explain how the flow was set in motion in the first place.
Also, if pressure patterns change quickly enough, balanced-flow speeds cannot help track the air parcels over long distances, simply because the forces that the parcel feels have changed while it is displaced.
The particle will end up somewhere else compared to the case that it had followed the original pressure pattern.
In summary, the balanced-flow equations give out consistent steady-state wind speeds that can estimate the situation at a certain moment and a certain place.
These speeds cannot be confidently used to understand where the air is moving to in the long run, because the forcing naturally changes or the trajectories are skewed with respect to the pressure pattern.
Antitriptic flow
Antitriptic flow describes a steady-state flow in a spatially varying pressure field when
the entire pressure gradient exactly balances friction alone; and:
all actions promoting curvature are neglected.
The name comes from the Greek words 'anti' (against, counter-) and 'triptein' (to rub) – meaning that this kind of flow proceeds by countering friction.
Formulation
In the streamwise momentum equation, friction balances the pressure gradient component without being negligible (so that K≠0).
The pressure gradient vector is only made by the component along the trajectory tangent s.
The balance in the streamwise direction determines the antitriptic speed as

$$V = -\frac{1}{K\rho}\frac{\partial p}{\partial s}.$$

A positive speed is guaranteed by the fact that antitriptic flows move along the downward slope of the pressure field, so that mathematically $\partial p/\partial s < 0$.
Provided the product KV is constant and ρ stays the same, p turns out to vary linearly with s and the trajectory is such that the parcel feels equal pressure drops while it covers equal distances.
(This changes, of course, when using a non-linear model of friction or a coefficient of friction that varies in space to allow for different surface roughness.)
In the cross-stream momentum equation, the Coriolis force and normal pressure gradient are both negligible, leading to no net bending action.
As the centrifugal term vanishes while the speed is non-zero, the radius of curvature goes to infinity, and the trajectory must be a straight line.
In addition, the trajectory is perpendicular to the isobars, since $\partial p/\partial n = 0$.
Since this condition occurs when the n direction is that of an isobar, s is perpendicular to the isobars.
Thus, antitriptic isobars need to be equispaced circles or straight lines.
Application
Antitriptic flow is probably the least used of the five balanced-flow idealizations, because the conditions are quite strict. However, it is the only one for which the friction underneath is regarded as a primary contribution.
Therefore, the antitriptic schematisation applies to flows that take place near the Earth's surface, in a region known as constant-stress layer.
In reality, the flow in the constant-stress layer has a component parallel to the isobars too, since it is often driven by the faster flow overhead.
This occurs owing to the so-called free-air flow at higher levels, which tends to be parallel to the isobars, and to the Ekman flow at intermediate levels, which causes a reduction of the free-air speed and a turning of direction while approaching the surface.
Because the Coriolis effects are neglected, antitriptic flow occurs either near the equator (irrespective of the motion's length scale) or elsewhere whenever the Ekman number of the flow is large (normally for small-scale processes), as opposed to geostrophic flows.
Antitriptic flow can be used to describe some boundary-layer phenomena such as sea breezes, Ekman pumping, and the low-level jet of the Great Plains.
Geostrophic flow
Geostrophic flow describes a steady-state flow in a spatially varying pressure field when
frictional effects are neglected; and:
the entire pressure gradient exactly balances the Coriolis force alone (resulting in no curvature).
This condition is called geostrophic equilibrium or geostrophic balance (also known as geostrophy).
The name 'geostrophic' stems from the Greek words 'ge' (Earth) and 'strephein' (to turn).
This etymology does not suggest turning of trajectories, rather a rotation around the Earth.
Formulation
In the streamwise momentum equation, negligible friction is expressed by K=0 and, for steady-state balance, negligible streamwise pressure force follows.
The speed cannot be determined by this balance.
However, $\partial p/\partial s = 0$ entails that the trajectory must run along isobars, else the moving parcel would experience changes of pressure like in antitriptic flows.
No bending is thus only possible if the isobars are straight lines in the first instance.
So, geostrophic flows take the appearance of a stream channelled along such isobars.
In the cross-stream momentum equation, non-negligible Coriolis force is balanced by the pressure force, in a way that the parcel does not experience any bending action.
Since the trajectory does not bend, the positive orientation of n cannot be determined for lack of a centre of curvature.
The signs of the normal vector components become uncertain in this case.
However, the pressure force must exactly counterbalance the Coriolis force anyway, so the parcel of air needs to travel with the Coriolis force contrary to the decreasing sideways slope of pressure.
Therefore, irrespective of the uncertainty in formally setting the unit vector n, the parcel always travels with the lower pressure at its left (right) in the northern (southern) hemisphere.
The geostrophic speed is

$$V = \frac{1}{\rho |f|}\left|\frac{\partial p}{\partial n}\right|.$$
The expression of geostrophic speed resembles that of antitriptic speed: here the speed is determined by the magnitude of the pressure gradient across (instead of along) the trajectory that develops along (instead of across) an isobar.
Application
Modelers, theoreticians, and operational forecasters frequently make use of geostrophic/quasi-geostrophic approximation.
Because friction is unimportant, the geostrophic balance fits flows high enough above the Earth's surface.
Because the Coriolis force is relevant, it normally fits processes with small Rossby number, typically having large length scales.
Geostrophic conditions are also realised for flows having small Ekman number, as opposed to antitriptic conditions.
It is frequent that the geostrophic conditions develop between a well-defined pair of pressure high and low; or that a major geostrophic stream is flanked by several higher- and lower-pressure regions at either side of it (see images).
Although the balanced-flow equations do not allow for internal (air-to-air) friction, the flow directions in geostrophic streams and nearby rotating systems are also consistent with shear contact between those.
The speed of a geostrophic stream is larger (smaller) than that in the curved flow around a pressure low (high) with the same pressure gradient: this feature is explained by the more general gradient-flow schematisation.
This helps use the geostrophic speed as a back-of-the-envelope estimate of more complex arrangements—see also the balanced-flow speeds compared below.
The etymology and the pressure charts shown suggest that geostrophic flows may describe atmospheric motion at rather large scales, although not necessarily so.
Cyclostrophic flow
Cyclostrophic flow describes a steady-state flow in a spatially varying pressure field when
the frictional and Coriolis actions are neglected; and:
the centripetal acceleration is entirely sustained by the pressure gradient.
Trajectories do bend. The name 'cyclostrophic' stems from the Greek words 'kyklos' (circle) and 'strephein' (to turn).
Formulation
Like in geostrophic balance, the flow is frictionless and, for steady-state motion, the trajectories follow the isobars.
In the cross-stream momentum equation, only the Coriolis force is discarded, so that the centripetal acceleration is just the cross-stream pressure force per unit mass:

$$\frac{V^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial n}.$$

This implies that the trajectory is subject to a bending action, and that the cyclostrophic speed is

$$V = \sqrt{-\frac{R}{\rho}\frac{\partial p}{\partial n}}.$$
So, the cyclostrophic speed is determined by the magnitude of the pressure gradient across the trajectory and by the radius of curvature of the isobar.
The flow is faster, the farther away from its centre of curvature, albeit less than linearly.
Another implication of the cross-stream momentum equation is that a cyclostrophic flow can only develop next to a low-pressure area.
This is implied in the requirement that the quantity under the square root is positive.
Recall that the cyclostrophic trajectory was found to be an isobar.
Only if the pressure increases from the centre of curvature outwards, the pressure derivative is negative and the square root is well defined – the pressure in the centre of curvature must thus be a low.
The above mathematics gives no clue whether the cyclostrophic rotation ends up being clockwise or anticlockwise; the eventual arrangement is a consequence of effects not allowed for in the relationship, namely the rotation of the parent cell.
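As a rough worked example, the cyclostrophic speed formula can be applied to a tornado-like vortex; the numbers below are illustrative assumptions of ours, not measurements from the studies cited in this article.

```python
import math

# A vortex with a 70 hPa pressure deficit across a 250 m radius of
# curvature, with near-surface air density 1.1 kg/m^3 (illustrative values).
rho = 1.1           # kg/m^3
R = 250.0           # radius of curvature, m
dp_dn = 70e2 / R    # mean cross-stream pressure slope, Pa/m

V = math.sqrt(R * dp_dn / rho)   # cyclostrophic speed
print(f"{V:.0f} m/s")            # ~80 m/s, tornado-strength winds
```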
Application
The cyclostrophic schematisation is realistic when Coriolis and frictional forces are both negligible, that is for flows having large Rossby number and small Ekman number.
Coriolis effects are ordinarily negligible in lower latitudes or on smaller scales.
Cyclostrophic balance can be achieved in systems such as tornadoes, dust devils and waterspouts.
Cyclostrophic speed can also be seen as one of the contribution of the gradient balance-speed, as shown next.
Among the studies using the cyclostrophic schematisation,
Rennó and Bluestein use the cyclostrophic speed equation to construct a theory for waterspouts;
and Winn, Hunyady, and Aulich use the cyclostrophic approximation to compute the maximum tangential winds of a large tornado which passed near Allison, Texas on 8 June 1995.
Inertial flow
Unlike all other flows, inertial balance implies a uniform pressure field.
In this idealisation:
the flow is frictionless;
no pressure gradient (and force) is present at all.
The only remaining action is the Coriolis force, which imparts curvature to the trajectory.
Formulation
As before, frictionless flow in steady-state conditions implies that $DV/Dt = 0$, so the speed stays constant.
However, in this case isobars are not defined in the first place.
We cannot draw any anticipation about the trajectory from the arrangement of the pressure field.
In the cross-stream momentum equation, after omitting the pressure force, the centripetal acceleration is the Coriolis force per unit mass.
The sign ambiguity disappears, because the bending is solely determined by the Coriolis force that sets unchallenged the side of curvature – so this force has always a positive sign.
The inertial rotation will be clockwise (anticlockwise) in the northern (southern) hemisphere.
The momentum equation

$$\frac{V^2}{R} = |f|\,V$$

gives us the inertial speed

$$V = |f|\,R.$$
The inertial speed's equation only helps determine either the speed or the radius of curvature once the other is given.
The trajectory resulting from this motion is also known as inertial circle.
The balance-flow model gives no clue on the initial speed of an inertial circle, which needs to be triggered by some external perturbation.
Application
Since atmospheric motion is due largely to pressure differences, inertial flow is not very applicable in atmospheric dynamics.
However, the inertial speed appears as a contribution to the solution of the gradient speed (see next).
Moreover, inertial flows are observed in the ocean streams, where flows are less driven by pressure differences than in air because of higher density—inertial balance can occur at depths such that the friction transmitted by the surface winds downwards vanishes.
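A quick worked example of such an inertial circle in the ocean (the parcel speed and latitude below are illustrative choices of ours):

```python
import math

# An ocean parcel drifting at 0.1 m/s at 45 degrees latitude, after the
# driving wind stress has ceased (illustrative values, not from the article).
omega = 7.292e-5                             # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(45))   # Coriolis parameter, 1/s

V = 0.1              # parcel speed, m/s
R = V / f            # radius of the inertial circle, m
T = 2 * math.pi / f  # time to complete one circle, s

print(f"R = {R:.0f} m, period = {T / 3600:.1f} h")  # ~970 m, ~16.9 h
```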
Gradient flow
Gradient flow is an extension of geostrophic flow as it accounts for curvature too, making this a more accurate approximation for the flow in the upper atmosphere.
However, mathematically gradient flow is slightly more complex, and geostrophic flow may be fairly accurate, so the gradient approximation is not as frequently mentioned.
Gradient flow is also an extension of the cyclostrophic balance, as it allows for the effect of the Coriolis force, making it suitable for flows with any Rossby number.
Finally, it is an extension of inertial balance, as it allows for a pressure force to drive the flow.
Formulation
Like in all but the antitriptic balance, frictional and pressure forces are neglected in the streamwise momentum equation, so that it follows from that the flow is parallel to the isobars.
Solving the full cross-stream momentum equation as a quadratic equation for V yields

$$V = \pm\frac{|f|R}{2} \pm \sqrt{\frac{f^2 R^2}{4} - \frac{R}{\rho}\frac{\partial p}{\partial n}}.$$

Not all solutions of the gradient wind speed yield physically plausible results: the right-hand side as a whole needs to be positive because of the definition of speed; and the quantity under the square root needs to be non-negative.
The first sign ambiguity follows from the mutual orientation of the Coriolis force and unit vector n, whereas the second follows from the square root.
The important cases of cyclonic and anticyclonic circulations are discussed next.
Pressure lows and cyclones
For regular cyclones (air circulation around pressure lows), the pressure force is inward (positive term) and the Coriolis force outward (negative term) irrespective of the hemisphere.
The cross-trajectory momentum equation is

$$\frac{V^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial n} - |f|\,V,$$

where $\partial p/\partial n < 0$ because n points towards the pressure low. Dividing both sides by |f|V, one recognizes that

$$\frac{V_{\text{geostrophic}}}{V} = 1 + \frac{V}{|f|R} > 1,$$
whereby the cyclonic gradient speed V is smaller than the corresponding geostrophic, less accurate estimate, and naturally approaches it as the radius of curvature grows (as the inertial velocity goes to infinity).
In cyclones, therefore, curvature slows down the flow compared to the no-curvature value of geostrophic speed.
See also the balanced-flow speeds compared below.
The positive root of the cyclone equation is

$$V = -\frac{|f|R}{2} + \sqrt{\frac{f^2 R^2}{4} - \frac{R}{\rho}\frac{\partial p}{\partial n}}.$$
This speed is always well defined as the quantity under the square root is always positive.
Pressure highs and anticyclones
In anticyclones (air circulation around pressure highs), the Coriolis force is always inward (and positive), and the pressure force outward (and negative) irrespective of the hemisphere.
The cross-trajectory momentum equation is

$$\frac{V^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial n} + |f|\,V,$$

where $\partial p/\partial n > 0$ because n points towards the pressure high. Dividing both sides by |f|V, we obtain

$$\frac{V_{\text{geostrophic}}}{V} = 1 - \frac{V}{|f|R} < 1,$$
whereby the anticyclonic gradient speed V is larger than the geostrophic value and approaches it as the radius of curvature becomes larger.
In anticyclones, therefore, the curvature of isobars speeds up the airflow compared to the (geostrophic) no-curvature value.
See also the balanced-flow speeds compared below.
There are two positive roots for V, but the only one consistent with the limit to geostrophic conditions is

$$V = \frac{|f|R}{2} - \sqrt{\frac{f^2 R^2}{4} - \frac{R}{\rho}\frac{\partial p}{\partial n}},$$

which requires

$$R \ge \frac{4}{\rho f^2}\frac{\partial p}{\partial n}$$

to be meaningful.
This condition can be translated into the requirement that, given a high-pressure zone with a constant pressure slope at a certain latitude, there must be a circular region around the high without wind.
On its circumference the air blows at half the corresponding inertial speed (at the cyclostrophic speed), and the radius is

$$R^* = \frac{4}{\rho f^2}\frac{\partial p}{\partial n},$$

obtained by solving the above inequality for R.
Outside this circle the speed decreases to the geostrophic value as the radius of curvature increases.
The width of this radius grows with the intensity of the pressure gradient.
Application
Gradient flow is useful in studying atmospheric flow rotating around high- and low-pressure centres with small Rossby numbers. This is the case where the radius of curvature of the flow about the pressure centres is small, and geostrophic flow no longer applies with a useful degree of accuracy.
Balanced-flow speeds compared
Each balanced-flow idealisation gives a different estimate for the wind speed in the same conditions.
Here we focus on the schematisations valid in the upper atmosphere.
Firstly, imagine that a sample parcel of air flows 500 meters above the sea surface, so that frictional effects are already negligible.
The density of (dry) air at 500 meters above mean sea level is 1.167 kg/m3, according to its equation of state.
Secondly, let the pressure force driving the flow be measured by a rate of change of 1 hPa/100 km (an average value).
Recall that it is not the value of the pressure to be important, but the slope with which it changes across the trajectory.
This slope applies equally well to the spacing of straight isobars (geostrophic flow) or of curved isobars (cyclostrophic and gradient flows).
Thirdly, let the parcel travel at a latitude of 45 degrees, either in the southern or northern hemisphere—so the Coriolis force is at play with a Coriolis parameter of 0.000115 Hz.
The balance-flow speeds also changes with the radius of curvature R of the trajectory/isobar.
In case of circular isobars, like in schematic cyclones and anticyclones, the radius of curvature is also the distance from the pressure low and high respectively.
Taking two of such distances R as 100 km and 300 km, the speeds are (in m/s):

  Schematisation              R = 100 km    R = 300 km
  Geostrophic                 7.45          7.45
  Cyclostrophic               9.26          16.03
  Inertial                    11.50         34.50
  Gradient (pressure low)     5.15          6.30
  Gradient (pressure high)    not defined   10.89
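The tabulated figures follow directly from the speed formulas derived above. The short sketch below (variable and function names are ours) recomputes them for the two radii:

```python
import math

rho = 1.167            # air density at 500 m, kg/m^3
dpdn = 100.0 / 100e3   # pressure slope of 1 hPa per 100 km, in Pa/m
f = 0.000115           # Coriolis parameter at 45 degrees latitude, 1/s

def geostrophic():
    return dpdn / (rho * f)

def cyclostrophic(R):
    return math.sqrt(R * dpdn / rho)

def inertial(R):
    return f * R

def gradient_low(R):
    # positive root of the cyclonic balance V^2/R + |f|V - dpdn/rho = 0
    return -f * R / 2 + math.sqrt((f * R / 2) ** 2 + R * dpdn / rho)

def gradient_high(R):
    # anticyclonic root consistent with the geostrophic limit; only
    # defined for R >= R* = 4*dpdn/(rho*f**2), about 260 km here
    disc = (f * R / 2) ** 2 - R * dpdn / rho
    return f * R / 2 - math.sqrt(disc) if disc >= 0 else float("nan")

for R in (100e3, 300e3):
    print(f"R = {R / 1000:.0f} km: geo = {geostrophic():.2f}, "
          f"cyclo = {cyclostrophic(R):.2f}, inertial = {inertial(R):.2f}, "
          f"low = {gradient_low(R):.2f}, high = {gradient_high(R):.2f}")
```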
The chart shows how the different speeds change in the conditions chosen above and with increasing radius of curvature.
The geostrophic speed (pink line) does not depend on curvature at all, and it appears as a horizontal line.
However, the cyclonic and anticyclonic gradient speeds approach it as the radius of curvature becomes indefinitely large—geostrophic balance is indeed the limiting case of gradient flow for vanishing centripetal acceleration (that is, for pressure and Coriolis force exactly balancing out).
The cyclostrophic speed (black line) increases from zero and its rate of growth with R is less than linear.
In reality an unbounded speed growth is impossible because the conditions supporting the flow change at some distance.
Also recall that the cyclostrophic conditions apply to small-scale processes, so extrapolation to higher radii is physically meaningless.
The inertial speed (green line), which is independent of the pressure gradient that we chose, increases linearly from zero and it soon becomes much larger than any other.
The gradient speed comes with two curves valid for the speeds around a pressure low (blue) and a pressure high (red).
The wind speed in cyclonic circulation grows from zero as the radius increases and is always less than the geostrophic estimate.
In the anticyclonic-circulation example, there is no wind within the distance of 260 km (point R*) – this is the area of no/low winds around a pressure high.
At that distance the first anticyclonic wind has the same speed as the cyclostrophic winds (point Q), and half of that of the inertial wind (point P).
Farther away from point R*, the anticyclonic wind slows down, remaining above the geostrophic value and gradually approaching it.
There is also another noteworthy point in the curve, labelled as S, where inertial, cyclostrophic and geostrophic speeds are equal.
The radius at S is always a fourth of R*, that is 65 km here.
Some limitations of the schematisations become also apparent.
For example, as the radius of curvature increases along a meridian, the corresponding change of latitude implies different values of the Coriolis parameter and, in turn, force.
Conversely, the Coriolis force stays the same if the radius is along a parallel.
So, in the case of circular flow, it is unlikely that the speed of the parcel does not change in time around the full circle, because the air parcel will feel the different intensity of the Coriolis force as it travels across different latitudes.
Additionally, the pressure fields quite rarely take the shape of neat circular isobars that keep the same spacing all around the circle.
Also, important differences of density occur in the horizontal plane as well, for example when warmer air joins the cyclonic circulation, thus creating a warm sector between a cold and a warm front.
See also
Secondary flow
References
Further reading
Holton, James R.: An Introduction to Dynamic Meteorology, 2004.
External links
American Meteorological Society Glossary of Terms
Met Office UK Pressure Charts in NE Atlantic and Europe
Plymouth State Weather Center Balanced Flows Tutorial
Atmospheric dynamics | Balanced flow | [
"Chemistry"
] | 6,883 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
10,797,093 | https://en.wikipedia.org/wiki/Karamata%27s%20inequality | In mathematics, Karamata's inequality, named after Jovan Karamata, also known as the majorization inequality, is a theorem in elementary algebra for convex and concave real-valued functions, defined on an interval of the real line. It generalizes the discrete form of Jensen's inequality, and generalizes in turn to the concept of Schur-convex functions.
Statement of the inequality
Let $I$ be an interval of the real line and let $f$ denote a real-valued, convex function defined on $I$. If $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ are numbers in $I$ such that $(x_1,\ldots,x_n)$ majorizes $(y_1,\ldots,y_n)$, then

$$f(x_1)+\cdots+f(x_n) \ge f(y_1)+\cdots+f(y_n). \qquad (1)$$

Here majorization means that $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ satisfy

$$x_1 \ge x_2 \ge \cdots \ge x_n \quad\text{and}\quad y_1 \ge y_2 \ge \cdots \ge y_n, \qquad (2)$$

and we have the inequalities

$$x_1+\cdots+x_i \ge y_1+\cdots+y_i \quad\text{for all } i \in \{1,\ldots,n-1\}, \qquad (3)$$

and the equality

$$x_1+\cdots+x_n = y_1+\cdots+y_n. \qquad (4)$$

If $f$ is a strictly convex function, then the inequality (1) holds with equality if and only if $x_i = y_i$ for all $i \in \{1,\ldots,n\}$.
Remarks
If the convex function $f$ is non-decreasing, then the proof of (1) below and the discussion of equality in the case of strict convexity show that the equality (4) can be relaxed to

$$x_1+\cdots+x_n \ge y_1+\cdots+y_n. \qquad (5)$$

The inequality (1) is reversed if $f$ is concave, since in this case the function $-f$ is convex.
Example
The finite form of Jensen's inequality is a special case of this result. Consider the real numbers $x_1,\ldots,x_n \in I$ and let

$$a := \frac{x_1+x_2+\cdots+x_n}{n}$$

denote their arithmetic mean. Then $(x_1,\ldots,x_n)$ majorizes the $n$-tuple $(a,a,\ldots,a)$, since the arithmetic mean of the $i$ largest numbers of $(x_1,\ldots,x_n)$ is at least as large as the arithmetic mean $a$ of all the $n$ numbers, for every $i \in \{1,\ldots,n-1\}$. By Karamata's inequality (1) for the convex function $f$,

$$f(x_1)+f(x_2)+\cdots+f(x_n) \ge f(a)+f(a)+\cdots+f(a) = n\,f(a).$$

Dividing by $n$ gives Jensen's inequality. The sign is reversed if $f$ is concave.
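As a quick numerical illustration (a sketch added here, not part of the original article), the inequality can be checked in Python for a concrete convex function and a concrete majorizing pair:

```python
def majorizes(x, y, tol=1e-12):
    """True if tuple x majorizes tuple y (after sorting in decreasing order)."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    partial = 0.0
    for xi, yi in zip(xs[:-1], ys[:-1]):
        partial += xi - yi
        if partial < -tol:                   # a partial-sum inequality (3) fails
            return False
    return abs(sum(xs) - sum(ys)) <= tol     # total sums must agree, as in (4)

f = lambda t: t * t                          # a strictly convex function
x = (5.0, 3.0, 0.0)
y = (4.0, 2.0, 2.0)                          # 5 >= 4, 5+3 >= 4+2, both sum to 8
assert majorizes(x, y)
assert sum(map(f, x)) >= sum(map(f, y))      # 34 >= 24, as (1) predicts
```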
Proof of the inequality
We may assume that the numbers are in decreasing order, as specified in (2).

If $x_i = y_i$ for all $i \in \{1,\ldots,n\}$, then the inequality (1) holds with equality, hence we may assume in the following that $x_i \neq y_i$ for at least one $i$.

If $x_i = y_i$ for an $i \in \{1,\ldots,n\}$, then the inequality (1) and the majorization properties (3) and (4) are not affected if we remove $x_i$ and $y_i$. Hence we may assume that $x_i \neq y_i$ for all $i \in \{1,\ldots,n\}$.

It is a property of convex functions that for two numbers $x \neq y$ in the interval $I$ the slope

$$\frac{f(x)-f(y)}{x-y}$$

of the secant line through the points $(x, f(x))$ and $(y, f(y))$ of the graph of $f$ is a monotonically non-decreasing function in $x$ for $y$ fixed (and vice versa). This implies that

$$c_{i+1} := \frac{f(x_{i+1})-f(y_{i+1})}{x_{i+1}-y_{i+1}} \le \frac{f(x_i)-f(y_i)}{x_i-y_i} =: c_i \qquad (6)$$

for all $i \in \{1,\ldots,n-1\}$. Define $A_0 = B_0 = 0$ and

$$A_i = x_1+\cdots+x_i, \qquad B_i = y_1+\cdots+y_i$$

for all $i \in \{1,\ldots,n\}$. By the majorization property (3), $A_i \ge B_i$ for all $i \in \{1,\ldots,n-1\}$, and by (4), $A_n = B_n$. Hence,

$$\sum_{i=1}^n \bigl(f(x_i)-f(y_i)\bigr) = \sum_{i=1}^n c_i\,(x_i-y_i) = \sum_{i=1}^n c_i\,\bigl((A_i-A_{i-1})-(B_i-B_{i-1})\bigr) = c_n\,(A_n-B_n) + \sum_{i=1}^{n-1} (c_i-c_{i+1})\,(A_i-B_i) \ge 0, \qquad (7)$$

which proves Karamata's inequality (1).

To discuss the case of equality in (1), note that $x_1 > y_1$ by (3) and our assumption $x_i \neq y_i$ for all $i$. Let $i$ be the smallest index such that $(x_i, y_i) \neq (x_{i+1}, y_{i+1})$, which exists due to (4). Then $A_i > B_i$. If $f$ is strictly convex, then there is strict inequality in (6), meaning that $c_{i+1} < c_i$. Hence there is a strictly positive term in the sum on the right-hand side of (7), and equality in (1) cannot hold.

If the convex function $f$ is non-decreasing, then $c_n \ge 0$. The relaxed condition (5) means that $A_n \ge B_n$, which is enough to conclude that $c_n(A_n-B_n) \ge 0$ in the last step of (7).

If the function $f$ is strictly convex and non-decreasing, then $c_n > 0$. It only remains to discuss the case $A_n > B_n$. However, then there is a strictly positive term on the right-hand side of (7), and equality in (1) cannot hold.
References
External links
An explanation of Karamata's inequality and majorization theory can be found here.
Inequalities
Convex analysis
Articles containing proofs | Karamata's inequality | [
"Mathematics"
] | 665 | [
"Mathematical theorems",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Articles containing proofs",
"Mathematical problems"
] |
10,797,278 | https://en.wikipedia.org/wiki/Cockade%20of%20Spain | The Cockade of Spain is a national symbol that arose after the French Revolution, by pleating a golden pin over the former red ribbon, colors of the ancient Royal Bend of Castile. The resulting insignia is a circle that symbolizes the colors of the Spanish flag: Red and Yellow, being carried as individual representation in case of distinctions or prizes or by other types of events. At the moment it is not used in Spain, except as a roundel for the identification of Spanish Armed Forces aircraft.
Gallery
See also
Roundel of the Spanish Republican Air Force
References
Antonio Cánovas del Castillo, De la escarapela roja y las banderas y divisas utilizadas en España
National symbols of Spain
Cockades | Cockade of Spain | [
"Mathematics"
] | 148 | [
"Cockades",
"Symbols"
] |
10,798,680 | https://en.wikipedia.org/wiki/4-Nitrophenol | 4-Nitrophenol (also called p-nitrophenol or 4-hydroxynitrobenzene) is a phenolic compound that has a nitro group at the opposite position of the hydroxyl group on the benzene ring.
Properties
4-Nitrophenol is a slightly yellow, crystalline material that is moderately toxic.
It shows two polymorphs in the crystalline state. The alpha-form is colorless pillars, unstable at room temperature, and stable toward sunlight. The beta-form is yellow pillars, stable at room temperature, and gradually turns red upon irradiation of sunlight. Usually 4-nitrophenol exists as a mixture of these two forms.
In solution, 4-nitrophenol has a dissociation constant (pKa) of 7.15 at 25 °C.
Preparation
From phenol
4-Nitrophenol can be prepared by nitration of phenol using dilute nitric acid at room temperature. The reaction produces a mixture of 2-nitrophenol and 4-nitrophenol.
Uses
pH indicator
4-Nitrophenol can be used as a pH indicator. A solution of 4-nitrophenol appears colorless below pH 5.4 and yellow above pH 7.5. This color-changing property makes this compound useful as a pH indicator.
The yellow color of the 4-nitrophenolate form (or 4-nitrophenoxide) is due to a maximum of absorbance at 405 nm (ε = 18.3 to 18.4 mM−1 cm−1 in strong alkali). In contrast, 4-nitrophenol has a weak absorbance at 405 nm (ε = 0.2 mM−1 cm−1).
The isosbestic point for 4-nitrophenol/4-nitrophenoxide is at 348 nm, with ε = 5.4 mM−1 cm−1.
Other uses
4-Nitrophenol is an intermediate in the synthesis of paracetamol. It is reduced to 4-aminophenol, then acetylated with acetic anhydride.
4-Nitrophenol is used as the precursor for the preparation of phenetidine and acetophenetidine, indicators, and raw materials for fungicides. Bioaccumulation of this compound rarely occurs.
In peptide synthesis, carboxylate ester derivatives of 4-nitrophenol may serve as activated components for construction of amide moieties.
Uses of derivatives
In the laboratory, it is used to detect the presence of alkaline phosphatase activity by hydrolysis of PNPP. In basic conditions, the presence of hydrolytic enzymes will turn the contents of the reaction vessel yellow.
4-Nitrophenol is a product of the enzymatic cleavage of several synthetic substrates such as 4-nitrophenyl phosphate (used as a substrate for alkaline phosphatase), 4-nitrophenyl acetate (for carbonic anhydrase), 4-nitrophenyl-β-D-glucopyranoside and other sugar derivatives which are used to assay various glycosidase enzymes. Amounts of 4-nitrophenol produced by a particular enzyme in the presence of its corresponding substrate can be measured with a spectrophotometer at or around 405 nm and used as a proxy measurement for the amount of the enzyme activity in the sample.
Accurate measurement of enzyme activity requires that the 4-nitrophenol product is fully deprotonated, existing as 4-nitrophenolate, given the weak absorbance of 4-nitrophenol at 405 nm. Complete ionization of the alcohol functional group affects the conjugation of the pi bonds on the compound. A lone pair from the oxygen can be delocalized via conjugation to the benzene ring and nitro group. Since the length of conjugated systems affects the color of organic compounds, this ionization change causes the 4-nitrophenol to turn yellow when fully deprotonated and existing as 4-nitrophenolate.
A common mistake in measuring enzyme activity using these substrates is to perform the assays at neutral or acidic pH without considering that only part of the chromophoric product is ionized. The problem can be overcome by stopping the reaction with sodium hydroxide (NaOH) or other strong base, which converts all product into 4-nitrophenoxide; final pH must be > ca. 9.2 to ensure more than 99% of the product is ionised. Alternatively enzyme activity can be measured at 348 nm, the isosbestic point for 4-nitrophenol/4-nitrophenoxide.
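Since only the phenolate absorbs strongly at 405 nm, the required pH can be checked with a short Henderson–Hasselbalch calculation using the pKa and molar absorptivities quoted above; this is an illustrative sketch, and the function names are assumptions:

```python
PKA = 7.15            # dissociation constant of 4-nitrophenol at 25 degC
EPS_PHENOLATE = 18.3  # mM^-1 cm^-1 at 405 nm (fully ionized, strong alkali)
EPS_PHENOL = 0.2      # mM^-1 cm^-1 at 405 nm (neutral form)

def fraction_ionized(ph):
    """Fraction present as the yellow 4-nitrophenolate anion."""
    return 1.0 / (1.0 + 10.0 ** (PKA - ph))

def epsilon_405(ph):
    """Effective molar absorptivity of the mixture at 405 nm [mM^-1 cm^-1]."""
    a = fraction_ionized(ph)
    return a * EPS_PHENOLATE + (1.0 - a) * EPS_PHENOL

for ph in (5.4, 7.15, 9.2):
    print("pH %.2f: %.3f ionized, eps(405 nm) = %.2f mM^-1 cm^-1"
          % (ph, fraction_ionized(ph), epsilon_405(ph)))
```

At pH 9.2 the computed ionized fraction is about 0.991, consistent with the >99% rule of thumb above.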
Toxicity
4-Nitrophenol irritates the eyes, skin, and respiratory tract, and may also cause inflammation of those parts. It has a delayed interaction with blood and forms methaemoglobin, which is responsible for methemoglobinemia, potentially causing cyanosis, confusion, and unconsciousness. When ingested, it causes abdominal pain and vomiting. Prolonged contact with skin may cause an allergic response. The genotoxicity and carcinogenicity of 4-nitrophenol are not known. The LD50 in mice is 282 mg/kg and in rats is 202 mg/kg (p.o.).
See also
Nitrophenols
References
Nitrophenols
PH indicators
4-Hydroxyphenyl compounds | 4-Nitrophenol | [
"Chemistry",
"Materials_science"
] | 1,165 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
10,798,875 | https://en.wikipedia.org/wiki/Starlite | Starlite is an intumescent material said to be able to withstand and insulate from extreme heat. It was invented by British hairdresser and amateur chemist Maurice Ward (1933–2011) during the 1970s and 1980s, and received significant publicity after coverage of the material aired in 1990 on the BBC science and technology show Tomorrow's World. The name Starlite was coined by Ward's granddaughter Kimberly.
The American company Thermashield, LLC, says it acquired the rights to Starlite in 2013 and replicated it. It is the only company to have publicly demonstrated the technology and had samples tested by third parties. Thermashield's Starlite has successfully passed femtosecond laser testing at the Georgia Institute of Technology and ASTM D635-15 standard testing.
Properties
Live demonstrations on Tomorrow's World and BBC Radio 4 showed that an egg coated in Starlite could remain raw, and cold enough to be picked up with a bare hand, even after five minutes in the flame of an oxyacetylene blowtorch. It would also prevent a blowtorch from damaging a human hand. When heat is applied, the material chars, which creates an expanding low density carbon foam which is very thermally resistant. Even the application of a plasma torch, capable of cutting eighteen-inch thick steel plate, has little impact on Starlite. It was reported that it took nine seconds to heat a warhead to 900 °C, but a thin layer of the compound prevented the temperature from rising above 40 °C. Starlite was also claimed to have been able to withstand a laser beam that could produce a temperature of 10,000 °C.
Starlite reacts more efficiently as more heat is applied. The MOD's report, as published in Jane's International Defence Review 4/1993, speculated this was due to particle scatter of an ablative layer, thereby increasing the reflective properties of the compound. Testing continues for thermal conductivity and capacity under different conditions. Starlite may become contaminated with dust residue and so degrade with use. Keith Lewis, a retired MOD officer, noted that the material guards only against thermal damage and not the physical damage caused by an explosion, which can destroy the insulating layer.
Materials scientist Mark Miodownik described Starlite as a type of intumescent paint, and one of the materials he would most like to see for himself. He also admitted some doubt about the commercial potential of Starlite. Its main use appears to be as a flame retardant. Testing of modern composite materials enhanced with Starlite could expand the range of potential uses and applications of this substance.
Composition
Starlite's composition is a closely guarded secret. "The actual composition of Starlite is known only to Maurice and one or two members of his family," former Chief Scientific Adviser to the Ministry of Defence Sir Ronald Mason averred. It is said to contain a variety of organic polymers and co-polymers with both organic and inorganic additives, including borates and small quantities of ceramics and other special barrier ingredients—up to 21 in all. Perhaps uniquely for a material said to be heat-proof, it is said to be not entirely inorganic but up to 90 per cent organic. Nicola McDermott, Ward's youngest daughter, stated that Starlite is 'natural' and edible, and that it has been fed to dogs and horses without ill effects.
The American company Thermashield, LLC, which owns the Starlite formula, stated in a radio interview that Starlite is not made from household ingredients and there is no PVA glue, baking soda or baking powder in it.
Commercialisation
Ward allowed various organisations such as the Atomic Weapons Establishment and ICI to conduct tests on samples, but did not permit them to retain samples for fear of reverse engineering. Ward maintained that his invention was worth billions. Sir Ronald Mason told a reporter in 1993, "I started this path with Maurice very sceptical. I’m totally convinced of the reality of the claims." He further states, "We don't still quite understand how it works, but that it works is undoubtedly the case."
NASA became involved in Starlite in 1994, and NASA engineer Rosendo 'Rudy' Naranjo talked about its potential in a Dateline NBC report. The Dateline reporter stated that Starlite could perhaps help with the fragile Space Shuttle heat shield. Naranjo said of their discussions with Ward, "We have done a lot of evaluation and … we know all the tremendous possibilities that this material has."
Boeing, which was the main contractor for the Space Shuttles in 1994, became interested in the potential of Starlite to eliminate flammable materials in their jets.
By the time of Ward's death in 2011 there appeared to have been no commercialisation of Starlite, and the formulation of the material had not been released to the public.
According to a 2016 broadcast of the BBC programme The Naked Scientists, Ward took his secrets with him when he died.
According to a 2020 BBC Online release in the BBC Reel category, Thermashield, LLC had purchased all of Ward's notes, equipment and other related materials and is working towards a viable commercial product.
Replication
A YouTube user, NightHawkInLight, attempted in 2018 to create materials that replicated the properties of Starlite. Observing that the mechanism that generates an expanding carbon foam in Starlite is similar to black snake fireworks, NightHawkInLight concocted a formula using cornstarch, baking soda, and PVA glue. After drying, the hardened material creates a thin layer of carbon foam on the surface when exposed to high heat, insulating the material from further heat transfer. He later improved it by taking out the PVA glue and baking soda, and adding in flour, sugar and borax. Using borax and flour makes it less expensive, mold and insect resistant, and able to work when dry.
Several experiments testing the replication and variant recipes show that they can handle lasers, thermite, torches, etc. But the replication recipe failed when it was used to make a crucible for an induction furnace.
See also
Lost inventions
Firepaste
References
External links
Organic polymers
Biomaterials
Brand name materials
Lost inventions
Firestops | Starlite | [
"Physics",
"Chemistry",
"Biology"
] | 1,284 | [
"Biomaterials",
"Organic polymers",
"Organic compounds",
"Materials",
"Matter",
"Medical technology"
] |
10,799,117 | https://en.wikipedia.org/wiki/Fashion%20design | Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends and has varied over time and place. "A fashion designer creates clothing, including dresses, suits, pants, and skirts, and accessories like shoes and handbags, for consumers. They can specialize in clothing, accessory, or jewelry design, or may work in more than one of these areas."
Fashion designers
Fashion designers work in a variety of ways when designing their pieces and accessories such as rings, bracelets, necklaces and earrings. Due to the time required to put a garment out on the market, designers must anticipate changes to consumer desires. Fashion designers are responsible for creating looks for individual garments, involving shape, color, fabric, trimming, and more.
Fashion designers attempt to design clothes that are functional as well as aesthetically pleasing. They consider who is likely to wear a garment and the situations in which it will be worn, and they work with a wide range of materials, colors, patterns, and styles. Though most clothing worn for everyday wear falls within a narrow range of conventional styles, unusual garments are usually sought for special occasions such as evening wear or party dresses.
Some clothes are made specifically for an individual, as in the case of haute couture or bespoke tailoring. Today, most clothing is designed for the mass market, especially casual and everyday wear, which are commonly known as ready to wear or fast fashion.
Structure
There are different lines of work for designers in the fashion industry. Fashion designers who work full-time for a fashion house, as 'in-house designers', own the designs and may either work alone or as part of a design team. Freelance designers who work for themselves sell their designs to fashion houses, directly to shops, or to clothing manufacturers. Quite a few fashion designers choose to set up their own labels, which offers them full control over their designs. Others are self-employed and design for individual clients. Other high-end fashion designers cater to specialty stores or high-end fashion department stores. These designers create original garments, as well as those that follow established fashion trends. Most fashion designers, however, work for apparel manufacturers, creating designs of men's, women's, and children's fashions for the mass market. Large designer brands that have a 'name' as their brand such as Abercrombie & Fitch, Justice, or Juicy are likely to be designed by a team of individual designers under the direction of a design director.
Designing a garment
Garment design includes components of "color, texture, space, lines, pattern, silhouette, shape, proportion, balance, emphasis, rhythm, and harmony". All of these elements come together to design a garment by creating visual interest for consumers.
Fashion designers work in various ways, some start with a vision in their head and later move into drawing it on paper or on a computer, while others go directly into draping fabric onto a dress form, also known as a mannequin. The design process is unique to the designer and it is rather intriguing to see the various steps that go into the process. Designing a garment starts with patternmaking. The process begins with creating a sloper or base pattern. The sloper will fit the size of the model a designer is working with or a base can be made by utilizing standard size charting.
Three major manipulations within patternmaking include dart manipulation, contouring, and added fullness. Dart manipulation allows for a dart to be moved on a garment in various places but does not change the overall fit of the garment. Contouring allows for areas of a garment to fit closer to areas of the torso such as the bust or shoulders. Added fullness increases the length or width of a pattern to change the frame as well as fit of the garment. The fullness can be added on one side, unequal, or equally to the pattern.
A designer may choose to work with certain apps that can help connect all their ideas together and expand their thoughts to create a cohesive design. When a designer is completely satisfied with the fit of the toile (or muslin), they will consult a professional pattern maker who will then create the finished, working version of the pattern out of paper or using a computer program. Finally, a sample garment is made up and tested on a model to make sure it is an operational outfit. Fashion design is expressive, the designers create art that may be functional or non-functional.
Technology within fashion
Over the years, there has been an increase in the use of technology within fashion design. Iris van Herpen, a Dutch designer, incorporated 3D printing in her Crystallization collection.
Software can aid designers in the product development stage. Designers can use artificial intelligence and virtual reality to prototype clothing. 3D modeling within software allows for initial sampling and development stages for partnerships with suppliers before the garments are produced.
History
Modern Western fashion design is often considered to have started in the 19th century with Charles Frederick Worth who was the first designer to have his label sewn into the garments that he created. Before the former draper set up his maison couture (fashion house) in Paris, clothing design and creation of the garments were handled largely by anonymous seamstresses. At the time high fashion descended from what was popularly worn at royal courts. Worth's success was such that he was able to dictate to his customers what they should wear, instead of following their lead as earlier dressmakers had done. The term couturier was in fact first created in order to describe him. While all articles of clothing from any time period are studied by academics as costume design, only clothing created after 1858 is considered fashion design.
It was during this period that many design houses began to hire artists to sketch or paint designs for garments. Rather than going straight into manufacturing, the images were shown to clients to gain approval, which saved time and money for the designer. If the client liked their design, the patrons commissioned the garment from the designer, and it was produced for the client in the fashion house. This designer-patron construct launched designers sketching their work rather than putting the completed designs on models.
Types of fashion
Garments produced by clothing manufacturers fall into three main categories, although these may be split up into additional, different types.
Haute couture
Until the 1950s, fashion clothing was predominantly designed and manufactured on a made-to-measure or haute couture basis (French for high-sewing), with each garment being created for a specific client. A couture garment is made to order for an individual customer, and is usually made from high-quality, expensive fabric, sewn with extreme attention to detail and finish, often using time-consuming, hand-executed techniques. Look and fit take priority over the cost of materials and the time it takes to make. Due to the high cost of each garment, haute couture makes little direct profit for the fashion houses, but is important for prestige and publicity.
Ready-to-wear (prêt-à-porter)
Ready-to-wear, or prêt-à-porter, clothes are a cross between haute couture and mass market. They are not made for individual customers, but great care is taken in the choice and cut of the fabric. Clothes are made in small quantities to guarantee exclusivity, so they are rather expensive. Ready-to-wear collections are usually presented by fashion houses each season during a period known as Fashion Week. This takes place on a citywide basis and occurs twice a year. The main seasons of Fashion Week include; spring/summer, fall/winter, resort, swim, and bridal.
Half-way garments are an alternative to ready-to-wear, "off-the-peg", or prêt-à-porter fashion. Half-way garments are intentionally unfinished pieces of clothing that encourage co-design between the "primary designer" of the garment, and what would usually be considered, the passive "consumer". This differs from ready-to-wear fashion, as the consumer is able to participate in the process of making and co-designing their clothing. During the Make{able} workshop, Hirscher and Niinimaki found that personal involvement in the garment-making process created a meaningful "narrative" for the user, which established a person-product attachment and increased the sentimental value of the final product.
Otto von Busch also explores half-way garments and fashion co-design in his thesis, "Fashion-able, Hacktivism and engaged Fashion Design".
Mass market
Currently, the fashion industry relies more on mass-market sales. The mass market caters for a wide range of customers, producing ready-to-wear garments using trends set by the famous names in fashion. They often wait around a season to make sure a style is going to catch on before producing their versions of the original look. To save money and time, they use cheaper fabrics and simpler production techniques which can easily be done by machines. The end product can, therefore, be sold much more cheaply.
There is a type of design called "kitsch", originating from the German word kitschig, meaning "trashy" or "not aesthetically pleasing". Kitsch can also refer to "wearing or displaying something that is therefore no longer in fashion".
Income
The median annual wage for salaried fashion designers was $79,290 in May 2023, approximately $38.12 per hour. The middle 50 percent earned an average of $76,700. The lowest 10 percent earned $37,090 and the highest 10 percent earned $160,850. The highest employment lies within Apparel, Piece Goods, and Notions Merchant Wholesalers, at 5.4 percent of the industry, averaging 7,820 workers. The lowest employment is within Apparel Knitting Mills, at 0.46 percent of the industry employed, which averages to 30 workers within that specialty. In 2016, 23,800 people were counted as fashion designers in the United States.
Geographically, the largest employment state of Fashion designers is New York with an employment of 7,930. New York is considered a hub for fashion designers due to a large percentage of luxury designers and brands.
Fashion industry
Fashion today is a global industry, and most major countries have a fashion industry. Seven countries have established an international reputation in fashion: the United States, France, Italy, United Kingdom, Japan, Germany and Belgium. The "big four" fashion capitals of the fashion industry are New York City, Paris, Milan, and London.
United States
The United States is home to the largest, wealthiest, and most multi-faceted fashion industry. Most fashion houses in the United States are based in New York City, with a high concentration centered in the Garment District neighborhood. On the US west coast, there is also to a lesser extent a significant number of fashion houses in Los Angeles, where a substantial percentage of high fashion clothing manufactured in the United States is actually made. Miami has also emerged as a new fashion hub, especially in regards to swimwear and other beach-oriented fashion. A semi-annual event held every February and September, New York Fashion Week is the oldest of the four major fashion weeks held throughout the world. Parsons The New School for Design, located in the Greenwich Village neighborhood of Lower Manhattan in New York City, is considered one of the top fashion schools in the world. There are numerous fashion magazines published in the United States and distributed to a global readership. Examples include Vogue, Harper's Bazaar, and Cosmopolitan.
American fashion design is highly diverse, reflecting the enormous ethnic diversity of the population, but is largely dominated by a clean-cut, urban, hip aesthetic, and often favors a more casual style, reflecting the athletic, health-conscious lifestyles of the suburban and urban middle classes. The annual Met Gala ceremony in Manhattan is widely regarded as the world's most prestigious haute couture fashion event and is a venue where fashion designers and their creations are celebrated. Social media is also a place where fashion is presented most often. Some influencers are paid huge amounts of money to promote a product or clothing item, where the business hopes many viewers will buy the product off the back of the advertisement. Instagram is the most popular platform for advertising, but Facebook, Snapchat, Twitter and other platforms are also used. In New York, the LGBT fashion design community contributes very significantly to promulgating fashion trends, and drag celebrities have developed a profound influence upon New York Fashion Week.
Prominent American brands and designers include Calvin Klein, Ralph Lauren, Coach, Nike, Vans, Marc Jacobs, Tommy Hilfiger, DKNY, Tom Ford, Caswell-Massey, Michael Kors, Levi Strauss and Co., Estée Lauder, Revlon, Kate Spade, Alexander Wang, Vera Wang, Victoria's Secret, Tiffany and Co., Converse, Oscar de la Renta, John Varvatos, Anna Sui, Prabal Gurung, Bill Blass, Halston, Carhartt, Brooks Brothers, Stuart Weitzman, Diane von Furstenberg, J. Crew, American Eagle Outfitters, Steve Madden, Abercrombie and Fitch, Juicy Couture, Thom Browne, Guess, Supreme, and The Timberland Company.
Belgium
In the late 1980s and early 1990s, Belgian fashion designers brought a new fashion image that mixed East and West, and brought a highly individualised, personal vision on fashion. Well known Belgian designers are the Antwerp Six: Ann Demeulemeester, Dries Van Noten, Dirk Bikkembergs, Dirk Van Saene, Walter Van Beirendonck and Marina Yee, as well as Martin Margiela, Raf Simons, Kris Van Assche, Bruno Pieters, Anthony Vaccarello.
United Kingdom
London has long been the capital of the United Kingdom fashion industry and has a wide range of foreign designs which have integrated with modern British styles. Typical British design is smart but innovative yet recently has become more and more unconventional, fusing traditional styles with modern techniques. Vintage styles play an important role in the British fashion and styling industry. Stylists regularly 'mix and match' the old with the new, which gives British style a unique, bohemian aesthetic. Irish fashion (both design and styling) is also heavily influenced by fashion trends from Britain. Well-known British designers include Thomas Burberry, Alfred Dunhill, Paul Smith, Vivienne Westwood, Stella McCartney, Jimmy Choo, John Galliano, John Richmond, Alexander McQueen, Matthew Williamson, Gareth Pugh, Hussein Chalayan and Neil Barrett.
France
Most French fashion houses are in Paris, which is the capital of French fashion. Traditionally, French fashion is chic and stylish, defined by its sophistication, cut, and smart accessories. French fashion is internationally acclaimed.
Spain
Madrid and Barcelona are the main fashion centers in Spain. Spanish fashion is often more conservative and traditional, but also more 'timeless', than other fashion cultures. Spaniards are known not to take great risks when dressing. Nonetheless, many fashion brands and designers come from Spain.
The most notable luxury houses are Loewe and Balenciaga. Famous designers include Manolo Blahnik, Elio Berhanyer, Cristóbal Balenciaga, Paco Rabanne, Adolfo Domínguez, Manuel Pertegaz, Jesús del Pozo, Felipe Varela and Agatha Ruiz de la Prada.
Spain is also home to large fashion brands such as Zara, Massimo Dutti, Bershka, Pull&Bear, Mango, Desigual, Pepe Jeans and Camper.
Germany
Berlin is the centre of fashion in Germany (prominently displayed at Berlin Fashion Week), while Düsseldorf holds Europe's largest fashion trade fairs with Igedo. Other important centres of the scene are Munich, Hamburg, and Cologne. German fashion is known for its elegant lines as well as unconventional young designs and the great variety of styles.
India
Most of the Indian fashion houses are in Mumbai, and Lakme Fashion Week is considered one of the premier fashion events in the country. Lakme Fashion Week takes place twice a year in the populous city of Mumbai. The first show occurs during April, featuring summer collections. The second show takes place in August to showcase the winter collection. Lakme, a cosmetic brand for Indian women, hosts the event. This fashion week started in 1999 and originally partnered with the FDCI, the Fashion Design Council of India, then later switched to a sponsorship with Lakme.
Italy
Milan is Italy's fashion capital, although most of the older Italian couturiers are based in Rome. Milan and Florence are the main exhibition venues for Italian collections. Italian fashion features casual and glamorous elegance. Milan Fashion Week takes place twice a year, in February and September; it puts fashion in the spotlight and celebrates it in the heart of Milan with fashion lovers, buyers and media.
Japan
Most Japanese fashion houses are in Tokyo which is home to Tokyo Fashion Week, Asia's largest fashion week. The Japanese look is loose and unstructured (often resulting from complicated cutting), colors tend to the sombre and subtle, and richly textured fabrics. Famous Japanese designers include Kenzo Takada, Issey Miyake, Yohji Yamamoto and Rei Kawakubo.
China
Chinese clothing has historically been associated with lower quality both inside and outside China, leading to a stigma on Chinese brands. Due to government censorship, Chinese citizens were only able to access fashion magazines in the 1990s. However, as more and more Chinese designers matriculate from the world's top fashion schools, Chinese designers such as Shushu/Tong and Rui Zhou have made their way into the world's top fashion weeks, and Shanghai has become a fashion hub in China. In the early 2020s, Gen Z shoppers pioneered the guochao () movement, a trend of preferring homegrown designers which incorporate aspects of Chinese history and culture. Hong Kong clothing brand Shanghai Tang's design concept is inspired by Chinese clothing and set out to rejuvenate Chinese fashion of the 1920s and 30s, with a modern twist of the 21st century and its usage of bright colours. Additionally, a revival in interest in traditional Han clothing has led to interest in haute couture clothing with historical Chinese details, particularly around Chinese New Year.
Soviet Union
Fashion in the Soviet Union largely followed general trends of the Western world. However, the state's socialist ideology consistently moderated and influenced these trends. In addition, shortages of consumer goods meant that the general public did not have ready access to pre-made fashion.
Switzerland
Most of the Swiss fashion houses are in Zürich. The Swiss look is casual elegant and luxurious with a slight touch of quirkiness. Additionally, it has been greatly influenced by the dance club scene.
Mexico
In the development of Mexican indigenous dress, fabrication was determined by the materials and resources available in specific regions, impacting the "fabric, shape and construction of a people's clothing". Textiles were created from plant fibers including cotton and agave. Class status differentiated what fabric was worn. Mexican dress was influenced by geometric shapes used to create the silhouettes. The huipil, a blouse characterized by a "loose, sleeveless tunic made of two or three joined webs of cloth sewn lengthwise", is an important historical garment, often seen today. After the Spanish Conquest, traditional Mexican clothing shifted to take on a Spanish resemblance.
Mexican indigenous groups rely on specific embroidery and colors to differentiate themselves from each other.
Mexican Pink is a significant color in the identity of Mexican art, design and general spirit. The term "Rosa Mexicano", as described by Ramón Valdiosera, was established in New York by prominent figures such as Dolores del Río.
When newspapers and magazines such as El Imparcial and El Mundo Ilustrado circulated in Mexico, fashion became a significant movement, as they informed the large cities, such as Mexico City, of European fashions. This encouraged the founding of department stores, changing the existing pace of fashion. With access to European fashion and dress, those with high social status relied on adopting those elements to distinguish themselves from the rest. Juana Catarina Romero was a successful entrepreneur and pioneer in this movement.
Fashion design terms
A fashion designer conceives garment combinations of line, proportion, color, and texture. While sewing and pattern-making skills are beneficial, they are not a pre-requisite of successful fashion design. Most fashion designers are formally trained or apprenticed.
A technical designer works with the design team and the factories overseas to ensure correct garment construction, appropriate fabric choices and a good fit. The technical designer fits the garment samples on a fit model, and decides which fit and construction changes to make before mass-producing the garment.
A pattern maker (also referred as pattern master or pattern cutter) drafts the shapes and sizes of a garment's pieces. This may be done manually with paper and measuring tools or by using a CAD computer software program. Another method is to drape fabric directly onto a dress form. The resulting pattern pieces can be constructed to produce the intended design of the garment and required size. Formal training is usually required for working as a pattern marker.
A tailor makes custom designed garments made to the client's measure; especially suits (coat and trousers, jacket and skirt, et cetera). Tailors usually undergo an apprenticeship or other formal training.
A textile designer designs fabric weaves and prints for clothes and furnishings. Most textile designers are formally trained as apprentices and in school.
A stylist co-ordinates the clothes, jewelry, and accessories used in fashion photography and catwalk presentations. A stylist may also work with an individual client to design a coordinated wardrobe of garments. Many stylists are trained in fashion design, the history of fashion, and historical costume, and have a high level of expertise in the current fashion market and future market trends. However, some simply have a strong aesthetic sense for pulling great looks together.
A fashion buyer selects and buys the mix of clothing available in retail shops, department stores, and chain stores. Most fashion buyers are trained in business and/or fashion studies.
A seamstress sews ready-to-wear or mass-produced clothing by hand or with a sewing machine, either in a garment shop or as a sewing machine operator in a factory. She (or he) may not have the skills to make (design and cut) the garments, or to fit them on a model.
A dressmaker specializes in custom-made women's clothes: day, cocktail, and evening dresses, business clothes and suits, trousseaus, sports clothes, and lingerie.
A fashion forecaster predicts what colours, styles and shapes will be popular ("on-trend") before the garments are on sale in stores.
A model wears and displays clothes at fashion shows and in photographs.
A fit model aids the fashion designer by wearing and commenting on the fit of clothes during their design and pre-manufacture. Fit models need to be a particular size for this purpose.
A fashion journalist writes fashion articles describing the garments presented or fashion trends, for magazines or newspapers.
A fashion photographer produces photographs about garments and other fashion items along with models and stylists for magazines or advertising agencies.
See also
Fashion
Fashion design copyright
History of western fashion
List of fashion designers
List of fashion education programs
List of fashion topics
List of individual dresses
Runway (fashion)
Deconstruction (fashion)
Sustainable fashion
Textile design
Western dress codes
References
Bibliography
Breward, Christopher, The culture of fashion: a new history of fashionable dress, Manchester: Manchester University Press, 2003,
Hollander, Anne, Seeing through clothes, Berkeley: University of California Press, 1993,
Hollander, Anne, Sex and suits: the evolution of modern dress, New York: Knopf, 1994,
Hollander, Anne, Feeding the eye: essays, New York: Farrar, Straus, and Giroux, 1999,
Hollander, Anne, Fabric of vision: dress and drapery in painting, London: National Gallery, 2002,
Kawamura, Yuniya, Fashion-ology: an introduction to Fashion Studies, Oxford and New York: Berg, 2005,
Lipovetsky, Gilles (translated by Catherine Porter), The empire of fashion: dressing modern democracy, Woodstock: Princeton University Press, 2002,
McDermott, Kathleen, Style for all: why fashion, invented by kings, now belongs to all of us (An illustrated history), 2010, — Many hand-drawn color illustrations, extensive annotated bibliography and reading guide
Mckay Rosenberg, Dawn, Fashion designer job description: Salary, skills, & more. Retrieved May 10, 2021, from https://www.thebalancecareers.com/fashion-designer-526016
Perrot, Philippe (translated by Richard Bienvenu), Fashioning the bourgeoisie: a history of clothing in the nineteenth century, Princeton NJ: Princeton University Press, 1994,
Steele, Valerie, Paris fashion: a cultural history, (2. ed., rev. and updated), Oxford: Berg, 1998,
Steele, Valerie, Fifty years of fashion: new look to now, New Haven: Yale University Press, 2000,
Steele, Valerie, Encyclopedia of clothing and fashion, Detroit: Thomson Gale, 2005
Strijbos, Bram. (2021, May 10). All the news about Milan Fashion week on FashionUnited. Retrieved May 10, 2021, from https://fashionweekweb.com/milan-fashion-week
Sterlacci, Francesca. (n.d.). What is a fashion designer? Retrieved May 10, 2021, from https://fashion-history.lovetoknow.com/fashion-clothing-industry/what-is-fashion-designer
Design occupations
Arts occupations | Fashion design | [
"Engineering"
] | 5,316 | [
"Design occupations",
"Design",
"Fashion design"
] |
10,799,349 | https://en.wikipedia.org/wiki/GIS%20and%20hydrology | Geographic information systems (GISs) have become a useful and important tool in the field of hydrology to study and manage Earth's water resources. Climate change and greater demands on water resources require a more knowledgeable disposition of arguably one of our most vital resources. Because water in its occurrence varies spatially and temporally throughout the hydrologic cycle, its study using GIS is especially practical. Whereas previous GIS systems were mostly static in their geospatial representation of hydrologic features, GIS platforms are becoming increasingly dynamic, narrowing the gap between historical data and current hydrologic reality.
The elementary water cycle has inputs equal to outputs plus or minus the change in storage. Hydrologists make use of this hydrologic budget when they study a watershed. The inputs in a hydrologic budget include precipitation, surface flow, and groundwater flow; outputs consist of evapotranspiration, infiltration, surface runoff, and surface/groundwater flows. All of these quantities can be measured or estimated based on environmental data, and their characteristics can be graphically displayed and studied using GIS, as in the budget sketch below.
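As a minimal sketch (naming and units are assumptions, not taken from any cited source), the budget reduces to a one-line balance:

```python
def storage_change(precipitation, inflow, evapotranspiration, runoff, outflow):
    """Change in watershed storage; all terms in consistent depth units (mm)."""
    inputs = precipitation + inflow                   # water entering the watershed
    outputs = evapotranspiration + runoff + outflow   # water leaving it
    return inputs - outputs

# Example month: 130 mm of inputs against 95 mm of outputs
print(storage_change(120, 10, 60, 25, 10))  # -> 35 mm added to storage
```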
GIS in surface water
In the field of hydrological modeling, analysis generally begins with the sampling and measurement of existing hydrologic areas. In this stage of research, the scale and accuracy of measurements are key issues. Data may either be collected in the field or through online research. The United States Geological Survey (USGS) is a publicly available source of remotely sensed hydrological data. Historical and real-time streamflow data are also available via the internet from sources such as the National Weather Service (NWS) and the United States Environmental Protection Agency (EPA). A benefit of using GIS software for hydrological modeling is that digital visualizations of data can be linked to real-time data. GIS revolutionized the curation, manipulation, and input of data for complex computational hydrologic models.
For surface water modeling, digital elevation models are often layered with hydrographic data in order to determine the boundaries of a watershed; a minimal flow-direction sketch follows this paragraph. Understanding these boundaries is integral to understanding where precipitation runoff will flow. For example, in the event of snowmelt, the amount of snowfall can be input into GIS to predict the amount of water that will travel downstream. This information has applications in local government asset management, agriculture and environmental science.
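The flow-direction sketch below is a hedged illustration, not the workflow of any particular GIS package: the classic D8 scheme assigns each DEM cell a flow direction toward its steepest downhill neighbor, which is a common first step in delineating watershed boundaries from a DEM. All names are assumptions.

```python
import numpy as np

def d8_flow_direction(dem):
    """Per interior cell, the (drow, dcol) offset of steepest descent."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    direction = np.zeros((rows, cols, 2), dtype=int)   # (0, 0) marks pits/edges
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Elevation drop per unit distance toward each of the 8 neighbors
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                     for dr, dc in offsets]
            best = int(np.argmax(drops))
            if drops[best] > 0:                        # a downhill neighbor exists
                direction[r, c] = offsets[best]
    return direction

dem = np.array([[5., 5., 5.],
                [5., 3., 4.],
                [5., 1., 5.]])                         # water drains toward the 1
print(d8_flow_direction(dem)[1, 1])                    # -> [1 0] (due south)
```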
Another useful application for GIS regards flood risk assessment. Using digital elevation models combined with peak discharge data can predict which areas of a floodplain will be submerged depending on the amount of rainfall. In a study of the Illinois River watershed, Rabie (2014) found that a decently accurate flood risk map could be generated using only DEMs and stream gauge data. Analysis based on these two parameters alone does not account for manmade developments including levees or drainage systems, and therefore should not be considered a comprehensive result.
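In the same spirit, a first-cut inundation map from the two parameters mentioned (a DEM and a gauge-derived water surface) reduces to a threshold operation; as the study cautions, this sketch ignores levees and drainage systems, and the names are assumptions:

```python
import numpy as np

def inundated_mask(dem, water_surface_elevation):
    """Boolean raster of cells at or below the flood water surface."""
    return dem <= water_surface_elevation

dem = np.array([[12.0, 11.5, 11.0],
                [11.8, 10.9, 10.4],
                [11.2, 10.6, 10.1]])   # cell elevations in meters
stage = 11.0                           # peak water surface from a stream gauge
print(inundated_mask(dem, stage))      # True where the floodplain is submerged
```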
GIS in groundwater
The use of GIS to analyze groundwater falls into the field of hydrogeology. Since 98% of available freshwater on Earth is groundwater, the need to effectively model and manage these resources is apparent. As the demand for groundwater continues to increase with the world’s growing population, it is vital that these resources be properly managed. Indeed, when groundwater usage is not monitored sufficiently, it may result in damage to aquifers or groundwater-related subsidence, as occurred in the Ogallala aquifer in the United States. In some cases, GIS can be used to analyze drainage and groundwater data in order to select suitable sites for groundwater recharge.
See also
GIS in environmental contamination
Geographic information system
ArcGIS
GIS and aquatic science
References
Girish Kumar, M., Bali, R. and Agarwal, A.K (2009). GIS Integration of remote sensing and electrical data for hydrological exploration- A case study of Bhakar watershed, India. Hydrological Sciences Journal 54 (5) pp 949–960.
Dingman, S. Lawrence, Physical Hydrology, Prentice-Hall, 2nd Edition, 2002
Fetter, C.W. Applied Hydrogeology, Prentice-Hall, 4th Edition, 2001
Maidment, David R., ed. Arc Hydro: GIS for Water Resources, ESRI Press, 2002
External links
Spatial Hydrology
GIS Lounge
ArcNews Online
US Army Geospatial Center — For information on OCONUS surface water and groundwater.
Applications of geographic information systems
Hydrology | GIS and hydrology | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 888 | [
"Hydrology",
"Environmental engineering"
] |
10,799,951 | https://en.wikipedia.org/wiki/Power%20supply%20rejection%20ratio | In electronic systems, power supply rejection ratio (PSRR), also supply-voltage rejection ratio (kSVR; SVR), is a term widely used to describe the capability of an electronic circuit to suppress any power supply variations to its output signal.
In the specifications of operational amplifiers, the PSRR is defined as the ratio of the change in supply voltage to the equivalent (differential) output voltage it produces, often expressed in decibels. An ideal op-amp would have infinite PSRR, as the device should have no change to the output voltage with any changes to the power supply voltage. The output voltage will depend on the feedback circuit, as is the case of regular input offset voltages. But testing is not confined to DC (zero frequency); often an operational amplifier will also have its PSRR given at various frequencies (in which case the ratio is one of RMS amplitudes of sinewaves present at a power supply compared with the output, with gain taken into account). Unwanted oscillation, including motorboating, can occur when an amplifying stage is too sensitive to signals fed via the power supply from a later power amplifier stage.
Some manufacturers specify PSRR in terms of the offset voltage it causes at the amplifier's inputs; others specify it in terms of the output; there is no industry standard for this issue. The following formula assumes it is specified in terms of input:

$$\mathrm{PSRR} = 20 \log_{10}\!\left(\frac{\Delta V_{\mathrm{supply}}}{\Delta V_{\mathrm{out}}}\, A_v\right)\ \mathrm{dB},$$

where $A_v$ is the voltage gain.
For example: an amplifier with a PSRR of 100 dB in a circuit to give 40 dB closed-loop gain would allow about 1 millivolt of power supply ripple to be superimposed on the output for every 1 volt of ripple in the supply. This is because
$$100\ \mathrm{dB} - 40\ \mathrm{dB} = 60\ \mathrm{dB}.$$

And since that's 60 dB of rejection, the sign is negative, so:

$$1\ \mathrm{V} \times 10^{-60/20} = 1\ \mathrm{mV}.$$
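The arithmetic can be packaged in a small helper (a sketch; the function name is an assumption):

```python
def output_ripple(supply_ripple_v, psrr_db, closed_loop_gain_db):
    """Supply ripple appearing at the output, for input-referred PSRR."""
    rejection_db = psrr_db - closed_loop_gain_db  # net supply-to-output rejection
    return supply_ripple_v * 10.0 ** (-rejection_db / 20.0)

print(output_ripple(1.0, psrr_db=100.0, closed_loop_gain_db=40.0))  # -> 0.001 V
```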
Note:
The PSRR doesn't necessarily have the same poles as A(s), the open-loop gain of the op-amp, but generally tends to also worsen with increasing frequency (e.g. http://focus.ti.com/lit/ds/symlink/opa2277.pdf).
For amplifiers with both positive and negative power supplies (with respect to earth, as op-amps often have), the PSRR for each supply voltage may be separately specified (sometimes written: PSRR+ and PSRR−), but normally the PSRR is tested with opposite polarity signals applied to both supply rails at the same time (otherwise the common-mode rejection ratio (CMRR) will affect the measurement of the PSRR).
For voltage regulators the PSRR is occasionally quoted (confusingly; to refer to output voltage change ratios), but often the concept is transferred to other terms relating changes in output voltage to input: Ripple rejection (RR) for low frequencies, line transient response for high frequencies, and line regulation for DC.
References
External links
Operational Amplifier Power Supply Rejection Ratio (PSRR) and Supply Voltages by Analog Devices, Inc. Definition and measurement of PSRR.
Application Note on PSRR Testing of Linear Voltage Regulators, by Florian Hämmerle (OMICRON Lab) and Steven Sandler (Picotest)
Introduction to System Design Using Integrated Circuits, via Google Books
Electronics concepts
Power supplies
Engineering ratios | Power supply rejection ratio | [
"Mathematics",
"Engineering"
] | 666 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
10,800,208 | https://en.wikipedia.org/wiki/Zuckerman%20functor | In mathematics, a Zuckerman functor is used to construct representations of real reductive Lie groups from representations of Levi subgroups. They were introduced by Gregg Zuckerman (1978). The Bernstein functor is closely related.
Notation and terminology
G is a connected reductive real affine algebraic group (for simplicity; the theory works for more general groups), and g is the Lie algebra of G.
K is a maximal compact subgroup of G.
A (g,K)-module is a vector space with compatible actions of g and K, on which the action of K is K-finite. A representation of K is called K-finite if every vector is contained in a finite-dimensional representation of K.
WK is the subspace of K-finite vectors of a representation W of K.
R(g,K) is the Hecke algebra of G of all distributions on G with support in K that are left and right K-finite. This is a ring which does not have an identity but has an approximate identity, and the approximately unital R(g,K)-modules are the same as (g,K)-modules.
L is a Levi subgroup of G, the centralizer of a compact connected abelian subgroup, and l is the Lie algebra of L.
Definition
The Zuckerman functor Γ is defined by

$$\Gamma^{\mathfrak{g},K}_{\mathfrak{g},L\cap K}(W) = \operatorname{Hom}_{R(\mathfrak{g},L\cap K)}\bigl(R(\mathfrak{g},K),\, W\bigr)_{K\text{-finite}}$$

and the Bernstein functor Π is defined by

$$\Pi^{\mathfrak{g},K}_{\mathfrak{g},L\cap K}(W) = R(\mathfrak{g},K) \otimes_{R(\mathfrak{g},L\cap K)} W.$$
References
David A. Vogan, Representations of real reductive Lie groups,
Anthony W. Knapp, David A. Vogan, Cohomological induction and unitary representations; review by Dan Barbasch
David A. Vogan, Unitary Representations of Reductive Lie Groups. (AM-118) (Annals of Mathematics Studies)
Gregg J. Zuckerman, Construction of representations via derived functors, unpublished lecture series at the Institute for Advanced Study, 1978.
Representation theory
Functors | Zuckerman functor | [
"Mathematics"
] | 387 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Functors",
"Category theory",
"Representation theory"
] |
10,800,676 | https://en.wikipedia.org/wiki/Project%20Clear%20Vision | Project Clear Vision was a covert examination of Soviet-made biological bomblets conducted by the Battelle Memorial Institute under contract with the CIA. The legality of this project under the Biological Weapons Convention (BWC) of 1972 is disputed.
History
The operation
Project Clear Vision was conducted between 1997 and 2000, during the Clinton Administration. The project's stated goal was to assess the efficacy of bio-agent dissemination from bomblets. The program received criticism due to suspicions that its findings could possibly be used in a covert US bioweapons program.
Reportage
The secret project was disclosed in a September 2001 article in The New York Times. Reporters Judith Miller, Stephen Engelberg and William J. Broad collaborated to write the article. Shortly after the article appeared, the authors published a book that further elaborated the story. The 2001 book, Germs: Biological Weapons and America's Secret War, and the article are the only publicly available sources concerning Project Clear Vision and its sister projects, Bacchus and Jefferson.
Legality
As signatory to the BWC, the United States is committed to refrain from the development of bioweapons. Moreover, the US did not disclose the secret project in its annual confidence-building measure (CBM) declarations. The US maintains that the program was fully consistent with the BWC because the project was defensive in nature.
References
Further reading
Miller, Judith, Engelberg, Stephen and Broad, William J. Germs: Biological Weapons and America's Secret War, (Google Books), Simon and Schuster, 2002, ().
Arms control
United States biological weapons program
Military projects of the United States | Project Clear Vision | [
"Engineering"
] | 335 | [
"Military projects of the United States",
"Military projects"
] |
10,801,086 | https://en.wikipedia.org/wiki/Ocean%20Tracking%20Network | The Ocean Tracking Network (OTN) is a global network research and monitoring effort using implanted acoustic transmitters to study fish migration patterns. It is based at Dalhousie University in Nova Scotia. The technology used by the Ocean Tracking Network comes from the Pacific Ocean Shelf Tracking Project (POST) and the Tagging of Pacific Pelagics (TOPP) project.
History
The Ocean Tracking Network (OTN) began at Dalhousie University in 2008. Sara Iverson is the current science director of OTN.
Operations
OTN operates a fleet of autonomous vehicles—Teledyne Webb Slocum gliders and Liquid Robotics Wave Gliders. The TWS gliders are electrically powered and collect physical, biological and chemical information. The LRW gliders are solar and wave powered; they each gather data on weather and sea surface conditions. Additionally, OTN maintains a rental fleet of Innovasea Vemco acoustic receiver units for use by those in academia, government, non-profits and industry.
Funding
The program received an initial $35 million in funding to support global monitoring infrastructure, governance, and operations from the Canadian Foundation for Innovation's (CFI) International Joint Venture Fund (IJVF), and in 2022 received an additional $38.5 million in funding. The Natural Sciences and Engineering Research Council of Canada (NSERC) gave $10 million in network funding; additional funding was received from the Social Sciences and Humanities Research Council (SSHRC) and international partnerships.
OTN and the Prince William Sound Science Center formed a partnership in 2013, to support the science center’s Pacific Ocean Shelf Tracking (POST) project.
Partnerships
The European Tracking Network (ETN) is a main partner of the OTN.
References
External links
Fisheries databases
Acoustics
Sound | Ocean Tracking Network | [
"Physics"
] | 366 | [
"Classical mechanics",
"Acoustics"
] |
10,801,115 | https://en.wikipedia.org/wiki/Project%20Bacchus | Project Bacchus was a covert investigation by the Defense Threat Reduction Agency to determine whether it is possible to construct a bioweapons production facility with off-the-shelf equipment.
History
The project
Project Bacchus operated from 1999 to 2000 to investigate whether would-be terrorists could build an anthrax production facility and remain undetected. During the two-year simulation, the facility was constructed and successfully produced an anthrax-like bacterium. The participating scientists were able to produce a substantial quantity of highly refined bacterial particles.
Reportage
The secret Project Bacchus was disclosed in a September 2001 article in The New York Times. Reporters Judith Miller, Stephen Engelberg and William J. Broad collaborated on the article. Shortly after it appeared, they published a book containing further details. The book, Germs: Biological Weapons and America's Secret War, and the article are the only publicly available sources concerning Project Bacchus and its sister projects, Clear Vision and Jefferson.
References
Further reading
Tucker, Jonathan B. "Biological Threat Assessment: Is the Cure Worse Than the Disease?", Arms Control Today, October 2004, accessed January 6, 2009.
Miller, Judith, Engelberg, Stephen, and Broad, William J. Germs: Biological Weapons and America's Secret War, Simon & Schuster, 2002.
-- " U.S. Germ Warfare Research Pushes Treaty Limits", The New York Times, September 4, 2001, accessed January 6, 2009.
Bioterrorism | Project Bacchus | [
"Engineering",
"Biology"
] | 324 | [
"Military projects of the United States",
"Bioterrorism",
"Military projects",
"Biological warfare"
] |
10,801,720 | https://en.wikipedia.org/wiki/Nature%20deficit%20disorder | Nature-deficit disorder is the idea that human beings, especially children, are spending less time outdoors than they have in the past, and the belief that this change results in a wide range of behavioral problems.
This disorder is not recognized in any of the medical manuals for mental disorders, such as the ICD-10 or the DSM-5.
This term was coined by Richard Louv in 2005. Louv does not intend the term "disorder" to represent an actual illness but instead intends the term to act as a metaphor describing the costs of alienation from nature. Louv claims that causes for nature-deficit disorder include parental fears and restricted access to natural areas.
Elizabeth Dickinson has criticized the term as a misdiagnosis that obscures the problems of dysfunctional cultural practices.
Research
Nature-deficit disorder is unrecognized by most medical institutions. Some preliminary research shows that lack of time outdoors does have negative effects on children's mental well-being.
Most research relating to nature-deficit disorder does not mention it by name, though studies on the impact of natural environments, particularly urban green space, on mental and physical wellbeing often support its claims.
A study on Italian undergraduate students showed that recovery from mental fatigue is quicker in natural environments than in urban ones.
In Edinburgh, UK, a professionally analyzed survey examined the effects of greenspace exposure on primary school children. It found that more exposure helps increase self-esteem in young children.
Increased urbanization in the Netherlands was studied in correlation with an increase in various mental and physical health issues; the researchers found fewer disease clusters in areas with more greenspace.
Causes
Researchers have not assessed the causes of nature-deficit disorder. However, Richard Louv has proposed some causes:
Parents are keeping children indoors in order to keep them safe from danger. Louv believes that growing parental fear of "stranger danger", heavily fueled by the media, may be the leading cause in nature-deficit disorder, as parents may be protecting children to such an extent that it disrupts the child's ability to connect to nature.
Dr. Rhonda Clements surveyed 830 mothers, mostly born between 1960 and 1980, and asked how much time they spent in nature as children; 76% of the mothers said they were outdoors every day, Monday through Sunday, but when the same question was asked about their children, only 26% said their children spent time outside every day. When asked why their children were not enjoying the outdoors as often, the parents said that safety, injury, and fear of crime were the reasons that restricted their children from more outdoor play. This research did not, however, address causes of nature-deficit disorder per se, instead focusing solely on changes in outdoor play.
Loss of natural surroundings in a child's neighborhood and city. Many parks and nature preserves have restricted access and "do not walk off the trail" signs. Environmentalists and educators add to the restriction, telling children "look don't touch". While they are protecting the natural environment, Louv questions the cost of that protection on our children's relationship with nature, which profoundly shapes their ecocultural identities.
Redlining in the U.S. has led to more low-income and marginalized communities to have limited access to greenspace. One review suggested that nature-deficit disorder may have an increased impact on these communities, although there has been inadequate research to determine any such effects conclusively.
Effects
Because nature-deficit disorder is not meant to be a medical diagnosis (and is not recognized as one), researchers have not assessed the effects of nature-deficit disorder.
Louv believes that the effects of nature-deficit disorder on our children will have "profound implications, not only for the health of future generations but for the health of the Earth itself".
Organizations
The Children & Nature Network was created to encourage and support the people and organizations working to reconnect children with nature. Richard Louv is a co-founder of the Children & Nature Network.
The No Child Left Inside Coalition works to get children outside and actively learning. They hope to address the problem of nature-deficit disorder. They are now working on the No Child Left Inside Act, which would increase environmental education in schools. The coalition claims the problem of nature-deficit disorder could be helped by "igniting students' interest in the outdoors" and encouraging them to explore the natural world in their own lives.
In Colombia, OpEPA (Organización para la Educación y Protección Ambiental) has been working to increase time spent outdoors since 1998. OpEPA's mission is to reconnect children and youth to the Earth so they can act with environmental responsibility. OpEPA works by linking three levels of education (intellectual, experiential and emotional/spiritual) into outdoor experiences. Developing and training educators in the use of inquiry-based learning, learning by play and experiential education is a key component to empower educators to engage in nature education.
Critique
Elizabeth Dickinson, a business communication professor at the University of North Carolina at Chapel Hill, studied nature-deficit disorder through a case study at the North Carolina Educational State Forest system (NCESF), a forest conservation education program. Dickinson argues that it is what Louv's narrative is missing that prevents nature-deficit disorder from effecting meaningful change. She attributes the problems described by nature-deficit disorder as coming not from a lack of children outside or in nature, but from adults' own "psyche and dysfunctional cultural practices". According to Dickinson, "in the absence of deeper cultural examination and alternative practices, nature deficit disorder is a misdiagnosis—a problematic contemporary environmental discourse that can obscure and mistreat the problem."
Dickinson analyzed the language and discourses used at the NCESF (educators' messages, education and curriculum materials, forest service messages and literature, and the forests themselves) and compared them to Louv's discussion of nature-deficit disorder in his writings. She concluded that both Louv and the NCESF (which loosely support each other) perpetuate the problematic idea that humans are outside of nature, and they use techniques that appear to get children more connected to nature but that may not.
She suggests making it clear that modern culture's disassociation with nature has occurred gradually over time, rather than very recently. Dickinson thinks that many people idealize their own childhoods without seeing the dysfunction that has existed for multiple generations. She warns against viewing the cure to nature-deficit disorder as an outward entity: "nature". Instead, Dickinson states that a path of inward self-assessment "with nature" (rather than "in nature") and alongside meaningful time spent in nature is the key to solving the social and environmental problems of which nature-deficit disorder is a symptom. In addition, she advocates allowing nature education to take on an emotional pedagogy rather than a mainly scientific one, as well as experiencing nature as it is before ascribing names to everything.
See also
Biophilia hypothesis
Ecopsychology
Environmental psychology
Plant blindness
Wilderness therapy
References
Further reading
Louv, Richard. (2011) The Nature Principle: Human Restoration and the End of Nature-Deficit Disorder. Algonquin Books. 303pp.
Louv, Richard. (2005) Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder (Paperback edition). Algonquin Books. 335pp.
Louv, Richard, Web of Life: Weaving the Values That Sustain Us.
External links
Richard Louv's website
Children & Nature Network
An interview with Richard Louv about the need to get kids out into nature, by David Roberts, The Grist: Environmental News and Commentary, 30 Mar 2006.
Saving kids from nature-deficit disorder – May 25, 2005, NPR
Public School Insights' Interview with Richard Louv – April 22, 2008
Chicago Wilderness Leave No Child Inside Initiative
Planet Ark's Research Report on Children & Nature in Australia
Nature Play: Nurturing Children and Strengthening Conservation through Connections to the Land
Natural environment
Developmental psychology
Environmental psychology
Outdoor education
Childhood
Biophilia hypothesis | Nature deficit disorder | [
"Biology",
"Environmental_science"
] | 1,654 | [
"Behavior",
"Environmental psychology",
"Developmental psychology",
"Behavioural sciences",
"Biophilia hypothesis",
"Biological hypotheses",
"Environmental social science"
] |
10,803,719 | https://en.wikipedia.org/wiki/String%20operations | In computer science, in the area of formal language theory, frequent use is made of a variety of string functions; however, the notation used is different from that used for computer programming, and some commonly used functions in the theoretical realm are rarely used when programming. This article defines some of these basic terms.
Strings and languages
A string is a finite sequence of characters.
The empty string is denoted by ε.
The concatenation of two strings s and t is denoted by s ⋅ t, or shorter by st.
Concatenating with the empty string makes no difference: s ⋅ ε = s = ε ⋅ s.
Concatenation of strings is associative: (s ⋅ t) ⋅ u = s ⋅ (t ⋅ u).
For example, (‹b› ⋅ ‹l›) ⋅ (ε ⋅ ‹ah›) = ‹blah›.
A language is a finite or infinite set of strings.
Besides the usual set operations like union, intersection etc., concatenation can be applied to languages:
if both S and T are languages, their concatenation is defined as the set of concatenations of any string from S and any string from T, formally S ⋅ T = { s ⋅ t : s ∈ S, t ∈ T }.
Again, the concatenation dot is often omitted for brevity.
The language consisting of just the empty string, { ε }, is to be distinguished from the empty language { }.
Concatenating any language with the former doesn't make any change: L ⋅ { ε } = L = { ε } ⋅ L,
while concatenating with the latter always yields the empty language: L ⋅ { } = { } = { } ⋅ L.
Concatenation of languages is associative: (S ⋅ T) ⋅ U = S ⋅ (T ⋅ U).
For example, abbreviating D = { ‹0›, ‹1›, ‹2›, ‹3›, ‹4›, ‹5›, ‹6›, ‹7›, ‹8›, ‹9› }, the set of all three-digit decimal numbers is obtained as D ⋅ D ⋅ D. The set of all decimal numbers of arbitrary length is an example for an infinite language.
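The concatenation of finite languages can be modeled directly in code. The following sketch (an illustration added here, not part of the cited material) represents languages as Python sets of strings and mirrors the formal definition S ⋅ T = { s ⋅ t : s ∈ S, t ∈ T }:
def concat(S, T):
    # Concatenation of two finite languages, each modeled as a set of strings.
    return {s + t for s in S for t in T}

D = {str(d) for d in range(10)}          # the language of single decimal digits
three_digit = concat(concat(D, D), D)    # D . D . D: all three-digit strings
assert len(three_digit) == 1000
assert concat(D, {""}) == D              # { epsilon } is the neutral element
assert concat(D, set()) == set()         # { } yields the empty language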
Alphabet of a string
The alphabet of a string is the set of all of the characters that occur in a particular string. If s is a string, its alphabet is denoted by Alph(s).
The alphabet of a language L is the set of all characters that occur in any string of L, formally:
Alph(L) = ⋃ { Alph(s) : s ∈ L }.
For example, the set { ‹a›, ‹c›, ‹o› } is the alphabet of the string ‹cacao›,
and the above D is the alphabet of the above language D ⋅ D ⋅ D, as well as of the language of all decimal numbers.
String substitution
Let L be a language, and let Σ be its alphabet. A string substitution or simply a substitution is a mapping f that maps characters in Σ to languages (possibly in a different alphabet). Thus, for example, given a character a ∈ Σ, one has f(a)=La where La ⊆ Δ* is some language whose alphabet is Δ. This mapping may be extended to strings as
f(ε)=ε
for the empty string ε, and
f(sa)=f(s)f(a)
for string s ∈ L and character a ∈ Σ. String substitutions may be extended to entire languages as f(L) = ⋃ { f(s) : s ∈ L }.
Regular languages are closed under string substitution. That is, if each character in the alphabet of a regular language is substituted by another regular language, the result is still a regular language.
Similarly, context-free languages are closed under string substitution.
A simple example is the conversion fuc(.) to uppercase, which may be defined e.g. as follows: each lowercase letter is mapped to the singleton language of its uppercase counterpart, fuc(‹a›) = { ‹A› }, ..., fuc(‹z›) = { ‹Z› }; the German ß, having no uppercase counterpart, is mapped to fuc(‹ß›) = { ‹SS› }; each digit is dropped, fuc(‹0›) = ... = fuc(‹9›) = { ε }; and each punctuation character is mapped to the empty language, e.g. fuc(‹!›) = { }.
For the extension of fuc to strings, we have e.g.
fuc(‹Straße›) = {‹S›} ⋅ {‹T›} ⋅ {‹R›} ⋅ {‹A›} ⋅ {‹SS›} ⋅ {‹E›} = {‹STRASSE›},
fuc(‹u2›) = {‹U›} ⋅ {ε} = {‹U›}, and
fuc(‹Go!›) = {‹G›} ⋅ {‹O›} ⋅ {} = {}.
For the extension of fuc to languages, we have e.g.
fuc({ ‹Straße›, ‹u2›, ‹Go!› }) = { ‹STRASSE› } ∪ { ‹U› } ∪ { } = { ‹STRASSE›, ‹U› }.
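A minimal Python sketch of the fuc substitution above (an added illustration; the treatment of uppercase input letters as mapping to themselves is an assumption consistent with the ‹Straße› example):
def fuc_char(a):
    # Character-level substitution: each character maps to a language (set of strings).
    if a == "ß":
        return {"SS"}        # the German ß has no single uppercase counterpart
    if a.isalpha():
        return {a.upper()}   # letters map to their uppercase form
    if a.isdigit():
        return {""}          # digits map to { epsilon }
    return set()             # punctuation maps to the empty language { }

def fuc_string(s):
    # Extension to strings: f(sa) = f(s) . f(a), starting from f(epsilon) = { epsilon }.
    result = {""}
    for a in s:
        result = {u + v for u in result for v in fuc_char(a)}
    return result

def fuc_lang(L):
    # Extension to languages: the union of the images of all member strings.
    return set().union(*(fuc_string(s) for s in L))

assert fuc_string("Straße") == {"STRASSE"}
assert fuc_string("u2") == {"U"}
assert fuc_string("Go!") == set()
assert fuc_lang({"Straße", "u2", "Go!"}) == {"STRASSE", "U"}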
String homomorphism
A string homomorphism (often referred to simply as a homomorphism in formal language theory) is a string substitution such that each character is replaced by a single string. That is, f(a) = s, where s is a string, for each character a.
String homomorphisms are monoid morphisms on the free monoid, preserving the empty string and the binary operation of string concatenation. Given a language L, the set f(L) is called the homomorphic image of L. The inverse homomorphic image of a string s is defined as
f−1(s) = { w | f(w) = s },
while the inverse homomorphic image of a language L is defined as
f−1(L) = { s | f(s) ∈ L }.
In general, f(f−1(L)) ≠ L, while one does have
f(f−1(L)) ⊆ L
and
L ⊆ f−1(f(L))
for any language L.
The class of regular languages is closed under homomorphisms and inverse homomorphisms.
Similarly, the context-free languages are closed under homomorphisms and inverse homomorphisms.
A string homomorphism is said to be ε-free (or e-free) if f(a) ≠ ε for all characters a in the alphabet. Simple single-letter substitution ciphers are examples of (ε-free) string homomorphisms.
An example string homomorphism guc can also be obtained by defining guc similar to the above substitution: guc(‹a›) = ‹A›, ..., guc(‹0›) = ε, but letting guc be undefined on punctuation chars.
Examples for inverse homomorphic images are
guc−1({ ‹SSS› }) = { ‹sss›, ‹sß›, ‹ßs› }, since guc(‹sss›) = guc(‹sß›) = guc(‹ßs›) = ‹SSS›, and
guc−1({ ‹A›, ‹bb› }) = { ‹a› }, since guc(‹a›) = ‹A›, while ‹bb› cannot be reached by guc.
For the latter language, guc(guc−1({ ‹A›, ‹bb› })) = guc({ ‹a› }) = { ‹A› } ≠ { ‹A›, ‹bb› }.
The homomorphism guc is not ε-free, since it maps e.g. ‹0› to ε.
A very simple string homomorphism example that maps each character to just a character is the conversion of an EBCDIC-encoded string to ASCII.
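For a finite alphabet and a length bound, inverse homomorphic images can be computed by brute force. A sketch (added illustration; guc is reimplemented here under the same assumptions as above, raising an error on punctuation, where guc is undefined):
from itertools import product

def guc(s):
    # The homomorphism from the text: letters to uppercase, digits to epsilon, ß to SS.
    out = []
    for a in s:
        if a == "ß":
            out.append("SS")
        elif a.isalpha():
            out.append(a.upper())
        elif a.isdigit():
            out.append("")               # digits map to the empty string
        else:
            raise ValueError("guc is undefined on punctuation characters")
    return "".join(out)

def inverse_image(L, alphabet, max_len):
    # guc^-1(L) = { s | guc(s) in L }, restricted to strings up to max_len over alphabet.
    result = set()
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            try:
                if guc(s) in L:
                    result.add(s)
            except ValueError:
                pass                     # skip strings on which guc is undefined
    return result

assert inverse_image({"SSS"}, "sß", 3) == {"sss", "sß", "ßs"}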
String projection
If s is a string, and Σ is an alphabet, the string projection of s is the string that results by removing all characters that are not in Σ. It is written as πΣ(s). It is formally defined by removal of characters from the right hand side:
πΣ(ε) = ε, and
πΣ(sa) = πΣ(s)a if a ∈ Σ, while πΣ(sa) = πΣ(s) if a ∉ Σ.
Here ε denotes the empty string. The projection of a string is essentially the same as a projection in relational algebra.
String projection may be promoted to the projection of a language. Given a formal language L, its projection is given by
πΣ(L) = { πΣ(s) | s ∈ L }.
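A direct Python sketch of string and language projection (added illustration):
def project(s, sigma):
    # pi_sigma(s): keep exactly the characters of s that belong to the alphabet sigma.
    return "".join(a for a in s if a in sigma)

def project_lang(L, sigma):
    # pi_sigma(L) = { pi_sigma(s) | s in L }.
    return {project(s, sigma) for s in L}

assert project("abcabc", {"a", "b"}) == "abab"
assert project_lang({"abc", "ccc"}, {"a", "b"}) == {"ab", ""}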
Right and left quotient
The right quotient of a character a from a string s is the truncation of the character a in the string s, from the right hand side. It is denoted as s / a. If the string does not have a on the right hand side, the result is the empty string. Thus:
(sb) / a = s if b = a, and ε if b ≠ a.
The quotient of the empty string may be taken:
ε / a = ε.
Similarly, given a subset S of a monoid M, one may define the quotient subset as
S / a = { s ∈ M | sa ∈ S }.
Left quotients may be defined similarly, with operations taking place on the left of a string.
Hopcroft and Ullman (1979) define the quotient L1/L2 of the languages L1 and L2 over the same alphabet as L1/L2 = { s | st ∈ L1 for some t ∈ L2 }.
This is not a generalization of the above definition, since, for a string s and distinct characters a, b, Hopcroft's and Ullman's definition implies { sa } / { b } = { }, yielding the empty language, rather than { ε }.
The left quotient (when defined similar to Hopcroft and Ullman 1979) of a singleton language L1 and an arbitrary language L2 is known as Brzozowski derivative; if L2 is represented by a regular expression, so can be the left quotient.
Syntactic relation
The right quotient of a subset S of a monoid M defines an equivalence relation, called the right syntactic relation of S. It is given by
s ~S t if and only if S / s = S / t.
The relation is clearly of finite index (has a finite number of equivalence classes) if and only if the family of right quotients is finite; that is, if
{ S / m : m ∈ M }
is finite. In the case that M is the monoid of words over some alphabet, S is then a regular language, that is, a language that can be recognized by a finite-state automaton. This is discussed in greater detail in the article on syntactic monoids.
Right cancellation
The right cancellation of a character a from a string s is the removal of the first occurrence of the character a in the string s, starting from the right hand side. It is denoted as s ÷ a and is recursively defined as
(sb) ÷ a = s if b = a, and (s ÷ a)b if b ≠ a.
The empty string is always cancellable:
ε ÷ a = ε.
Clearly, right cancellation and projection commute:
πΣ(s) ÷ a = πΣ(s ÷ a).
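The character-level right quotient and right cancellation are equally short in code (added illustration):
def right_quotient(s, a):
    # s / a: truncate a trailing a; if s does not end in a, the result is the empty string.
    return s[:-1] if s.endswith(a) else ""

def right_cancel(s, a):
    # s ÷ a: remove the first occurrence of a counted from the right; unchanged if absent.
    i = s.rfind(a)
    return s if i == -1 else s[:i] + s[i + 1:]

def project(s, sigma):
    # pi_sigma(s), as in the previous section.
    return "".join(c for c in s if c in sigma)

assert right_quotient("blah", "h") == "bla"
assert right_quotient("blah", "x") == ""
assert right_cancel("banana", "a") == "banan"
assert right_cancel("blah", "x") == "blah"
# right cancellation and projection commute:
assert project(right_cancel("banana", "a"), {"a", "b"}) == right_cancel(project("banana", {"a", "b"}), "a")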
Prefixes
The prefixes of a string is the set of all prefixes to a string, with respect to a given language:
PrefL(s) = { t | s = tu for some string u },
where s ∈ L.
The prefix closure of a language is
Pref(L) = ⋃ { PrefL(s) : s ∈ L }.
Example: Pref({ ‹abc› }) = { ε, ‹a›, ‹ab›, ‹abc› }.
A language is called prefix closed if Pref(L) = L.
The prefix closure operator is idempotent:
Pref(Pref(L)) = Pref(L).
The prefix relation is a binary relation ⊑ such that s ⊑ t if and only if s ∈ Pref({ t }). This relation is a particular example of a prefix order.
See also
Comparison of programming languages (string functions)
Levi's lemma
String (computer science) — definition and implementation of more basic operations on strings
Notes
References
(See chapter 3.)
Formal languages
Relational algebra
Operations | String operations | [
"Mathematics",
"Technology"
] | 1,926 | [
"Sequences and series",
"String (computer science)",
"Mathematical structures",
"Formal languages",
"Mathematical logic",
"Fields of abstract algebra",
"Computer science",
"Mathematical relations",
"Relational algebra"
] |
10,803,998 | https://en.wikipedia.org/wiki/Bcfg2 | Bcfg2 (pronounced "bee-config") is a configuration management tool developed in the Mathematics and Computer Science Division of Argonne National Laboratory. Bcfg2 aids in the infrastructure management lifecycle – configuration analysis, service deployment, and configuration auditing. It includes tools for visualizing configuration information, as well as reporting tools that help administrators understand configuration patterns in their environments.
Bcfg2 differs from similar configuration management tools due to its auditing capability. One of the stated design goals for Bcfg2 is to determine if interactive (direct) changes have been made to a machine and report on these extra changes. The client can optionally remove any additional configuration.
Overview
Bcfg2 is written in Python and enables system administrators to manage the configuration of a large number of computers using a central configuration model. Bcfg2 operates using a simple model of system configuration, modeling intuitive items like packages, services and configuration files (as well as the dependencies between them). This model of system configuration is used for verification and validation, allowing robust auditing of deployed systems. The Bcfg2 configuration specification is written using a declarative XML model. The entire specification can be validated using widely available XML schema validators along with the custom schemas included in Bcfg2.
Built to be cross-platform, Bcfg2 works on most Unix-like operating systems.
Architecture
Bcfg2 is based on a client-server architecture. The client is responsible for interpreting (but not processing) the configuration served by the server. This configuration is literal, so no client-side processing of the configuration is required. After completion of the configuration process, the client uploads a set of statistics to the server.
The Bcfg2 Client
The Bcfg2 client performs all client configuration or reconfiguration operations. It renders a declarative configuration specification, provided by the Bcfg2 server, into a set of configuration operations which will attempt to change the client's state into that described by the configuration specification.
The operation of the Bcfg2 client is intended to be as simple as possible. Conceptually, the sole purpose of the client is to reconcile the differences between the current client state and the state described in the specification received from the Bcfg2 server.
The Bcfg2 Server
The Bcfg2 server is responsible for taking a network description and turning it into a series of configuration specifications for particular clients. It also manages probed data and tracks statistics for clients.
Server operation
The Bcfg2 server takes information from two sources when generating client configuration specifications. The first is a pool of metadata that describes clients as members of an aspect-based classing system. That is, clients are defined in terms of aspects of their abstract behavior. The other is a file system repository that contains mappings from metadata to literal configuration. These are combined to form the literal configuration specifications for clients.
An example of abstract configuration entries:
<Package name="openssh-server"/>
<Path name="/etc/motd"/>
An example of literal configuration entries:
<Package name="openssh-server" version="auto" type="deb"/>
<Path name="/etc/motd">Hello from Bcfg2
</Path>
See also
Comparison of open source configuration management software
Puppet
OpenLMI
References
Further reading
External links
Configuration management
Linux configuration utilities
MacOS
Linux package management-related software
Unix package management-related software
Software using the BSD license | Bcfg2 | [
"Engineering"
] | 738 | [
"Systems engineering",
"Configuration management"
] |
10,804,088 | https://en.wikipedia.org/wiki/Natural%20magic | Natural magic in the context of Renaissance magic is that part of the occult which deals with natural forces directly, as opposed to ceremonial magic which deals with the summoning of spirits. Natural magic sometimes makes use of physical substances from the natural world such as stones or herbs.
Natural magic so defined includes astrology, alchemy, and disciplines that we would today consider fields of natural science, such as astronomy and chemistry (which developed and diverged from astrology and alchemy, respectively, into the modern sciences they are today) or botany (from herbology). The Jesuit scholar Athanasius Kircher wrote that "there are as many types of natural magic as there are subjects of applied sciences".
Heinrich Cornelius Agrippa discusses natural magic in his Three Books of Occult Philosophy (1533), where he calls it "nothing else but the highest power of natural sciences". The Italian Renaissance philosopher Giovanni Pico della Mirandola, who founded the tradition of Christian Kabbalah, argued that natural magic was "the practical part of natural science" and was lawful rather than heretical.
See also
References
Further reading
External links
History of science
Renaissance
Magic (supernatural) | Natural magic | [
"Technology"
] | 240 | [
"History of science",
"History of science and technology"
] |
10,804,230 | https://en.wikipedia.org/wiki/Ly49 | Ly49 is a family of membrane C-type lectin-like receptors expressed mainly on NK cells but also on other immune cells (some CD8+ and CD3+ T lymphocytes, intestinal epithelial lymphocytes (IELs), NKT cells, uterine NK (uNK) cells, macrophages or dendritic cells). Their primary role is to bind MHC-I molecules to distinguish between healthy self cells and infected or altered cells. The Ly49 family is encoded by the Klra gene cluster and includes genes for both inhibitory and activating paired receptors, though most of them are inhibitory. Inhibitory Ly49 receptors play a role in the recognition of self cells and thus maintain self-tolerance and prevent autoimmunity by suppressing NK cell activation. Activating receptors, on the other hand, recognise ligands from cancer or virally infected cells (induced-self hypothesis) and respond when cells lack or have abnormal expression of MHC-I molecules (missing-self hypothesis), which activates cytokine production and the cytotoxic activity of NK and immune cells.
Ly49 receptors are expressed in some mammals, including rodents, cattle and some primates, but not in humans. Only one human gene homologous to rodent Ly49 receptors is found in the human genome, KLRA1P (LY49L); however, it represents a non-functional pseudogene. Killer cell immunoglobulin-like receptors (KIR) serve the same function in humans: they have a different molecular structure but recognise HLA class I molecules as ligands and include both inhibitory (mainly) and activating receptors.
Function
Role in NK cells
The function of NK cells is the killing of virally infected or cancerous cells. Therefore, they must have a precisely regulated system of self-cell recognition to prevent the destruction of healthy cells. They express several types of inhibitory and activating receptors on their surface, including the Ly49 receptor family, which have roles in NK cell licensing and in antiviral and antitumor immunity.
NK cells are activated when the signal from activating receptors outweighs inhibitory signals. This could happen when activating receptors recognise viral proteins presented on the infected cell surface (induced-self theory). Some Ly49 receptors have evolved to recognise specific viral proteins; for example, Ly49H binds to the murine cytomegalovirus (MCMV) glycoprotein m157. Mouse strains without Ly49H are more susceptible to MCMV infection. In addition, these Ly49H-positive NK cells have properties of MCMV-specific memory NK cells and react better during secondary MCMV infections.
Another example of NK cell activation is the recognition of tumor cells, which stop expressing MHC I molecules in order to avoid killing by cytotoxic T lymphocytes. Inhibitory receptors of NK cells then receive no signal, and the cells are activated via their activating receptors. This mechanism describes the missing-self hypothesis.
In order to be fully functional and have cytotoxic activity, NK cells need to receive signals from self-MHC I molecules through inhibitory Ly49 receptors in rodents (KIR in humans), especially during their development. This educational process prevents the generation of autoreactive NK cells and was called "NK cell licensing" by Yokoyama and colleagues. If inhibitory Ly49 receptors miss the signal from MHC I during development, the cells are unlicensed (uneducated) and don't react to stimulation of activating receptors. This hyporesponsive state is not permanent, however, and the cells can be re-educated under certain conditions. Moreover, it has been shown that uneducated cells can be activated by certain acute viral infections or by some tumors, and kill these cells more efficiently than educated cells.
Receptor types
Inhibitory receptors
Inhibitory receptors play a role in NK cell licensing and are important for the recognition and tolerance of self cells.
Stimulation of inhibitory receptors leads to phosphorylation of the immunoreceptor tyrosine-based inhibitory motif (ITIM), located in the cytoplasmic part of these receptors. The phosphorylated Ly49 molecule recruits the Src homology 2 (SH2) domain-containing protein phosphatase SHP-1, which dephosphorylates downstream signaling molecules and thus prevents cell activation.
Inhibitory receptors include Ly49A, B, C, E, G, Q.
Activating receptors
Activating receptors are involved in antiviral and antitumor immunity.
They signal through an immunoreceptor tyrosine-based activation motif (ITAM), which is part of the associated adaptor molecule DAP-12, attached via an arginine residue in the transmembrane segment of Ly49. After stimulation of the receptor and phosphorylation of the ITAM, an SH2 domain-containing protein kinase is recruited, starting a kinase signaling cascade that activates cell effector functions.
Activating receptors include Ly49D, H, L.
References
Cell biology
Immune receptors | Ly49 | [
"Biology"
] | 1,059 | [
"Cell biology"
] |
10,805,442 | https://en.wikipedia.org/wiki/B-cell%20activating%20factor | B-cell activating factor (BAFF) also known as tumor necrosis factor ligand superfamily member 13B and CD257 among other names, is a protein that in humans is encoded by the TNFSF13B gene. BAFF is also known as B Lymphocyte Stimulator (BLyS) and TNF- and APOL-related leukocyte expressed ligand (TALL-1) and the Dendritic cell-derived TNF-like molecule (CD257 antigen; cluster of differentiation 257).
Structure and function
BAFF is a cytokine that belongs to the tumor necrosis factor (TNF) ligand family. This cytokine is a ligand for receptors TNFRSF13B/TACI, TNFRSF17/BCMA, and TNFRSF13C/BAFF-R. This cytokine is expressed in B cell lineage cells, and acts as a potent B cell activator. It has also been shown to play an important role in the proliferation and differentiation of B cells.
BAFF is a 285-amino acid long peptide glycoprotein which undergoes glycosylation at residue 124. It is expressed as a membrane-bound type II transmembrane protein on various cell types including monocytes, dendritic cells and bone marrow stromal cells. The transmembrane form can be cleaved from the membrane, generating a soluble protein fragment. BAFF steady-state concentrations depend on B cells and also on the expression of BAFF-binding receptors. BAFF is the natural ligand of three nonconventional tumor necrosis factor receptors named BAFF-R (BR3), TACI (transmembrane activator and calcium modulator and cyclophilin ligand interactor), and BCMA (B-cell maturation antigen), all of which have differing binding affinities for it. These receptors are expressed mainly on mature B lymphocytes and their expression varies in dependence of B cell maturation (TACI is also found on a subset of T-cells and BCMA on plasma cells). BAFF-R is involved in the positive regulation during B cell development. TACI binds worst since its affinity is higher for a protein similar to BAFF, called a proliferation-inducing ligand (APRIL). BCMA displays an intermediate binding phenotype and will work with either BAFF or APRIL to varying degrees. Signaling through BAFF-R and BCMA stimulates B lymphocytes to undergo proliferation and to counter apoptosis. All these ligands act as homotrimers (i.e. three of the same molecule) interacting with homotrimeric receptors, although BAFF has been known to be active as either a hetero- or homotrimer (can aggregate into 60-mer depending on the primary structure of the protein).
Interactions
B-cell activating factor has been shown to interact with TNFRSF13B, TNFSF13, TNFRSF13C, and TNFRSF17.
Interaction between BAFF and BAFF-R activates classical and noncanonical NF-κB signaling pathways. This interaction triggers signals essential for the formation and maintenance of B cells, and is thus important for B-cell survival.
Recombinant production
Human BLyS has been expressed and purified in E. coli. The BLyS protein in the engineered bacteria can constitute as much as 50% of the bacteria's total protein content and still retains activity after a purification procedure.
Clinical significance
As an immunostimulant, BAFF (BLyS, TALL-1) is necessary for maintaining normal immunity. An inadequate level of BAFF fails to activate B cells to produce enough immunoglobulin and leads to immunodeficiency.
An excessive level of BAFF causes abnormally high antibody production, resulting in systemic lupus erythematosus, rheumatoid arthritis, and many other autoimmune diseases. Overexpression of BAFF also correlates with enhanced humoral immunity against malaria infection.
Belimumab (Benlysta) is a monoclonal antibody developed by Human Genome Sciences and GlaxoSmithKline, with significant discovery input by Cambridge Antibody Technology, which specifically recognizes and inhibits the biological activity of B-Lymphocyte stimulator (BLyS) and is in clinical trials for treatment of Systemic lupus erythematosus and other autoimmune diseases.
BAFF has been found in renal transplant biopsies with acute rejection, and correlates with the appearance of C4d. Increased levels of BAFF may initiate alloreactive B cell and T cell immunity and may therefore promote allograft rejection. A lower level of BAFF transcripts (or a higher level of soluble BAFF) indicates a higher risk of producing donor-specific antibodies in the investigated patients. Donor-specific antibodies bind with high affinity to the vascular endothelium of the graft and activate complement. This process results in neutrophil infiltration, hemorrhage, fibrin deposition and platelet aggregation. Targeting BAFF-R interactions may provide new therapeutic possibilities in transplantation.
Blisibimod, a fusion protein inhibitor of BAFF, is in development by Anthera Pharmaceuticals, also primarily for the treatment of systemic lupus erythematosus.
BAFF may also be a new mediator of food-related inflammation. Higher levels of BAFF are present in non-atopic compared with atopic patients, and there is no correlation between BAFF and IgE, suggesting that BAFF might be particularly involved in non-IgE-mediated reactions. In patients with celiac disease, serum BAFF levels are reduced after a gluten-free diet. The same reduction could be present in the recently defined "non-celiac gluten sensitivity" (a reaction to gluten which provokes almost the same symptoms as celiac disease and could involve up to 20% of apparently healthy individuals). BAFF is also a specific inducer of insulin resistance and can be a strong link between inflammation and diabetes or obesity. BAFF gives the organism a sort of danger signal, and usually, according to evolutionary theories, every human being responds to danger by activating thrifty genes in order to store fat and avoid starvation. BAFF shares many activities with PAF (platelet-activating factor), and both are markers of non-IgE-mediated reactions in food reactivity.
References
Further reading
External links
Proteins | B-cell activating factor | [
"Chemistry"
] | 1,361 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
10,805,529 | https://en.wikipedia.org/wiki/TNFSF9 | Tumor necrosis factor ligand superfamily member 9 also known as 4-1BB ligand or 4-1BBL or CD137L is a protein that in humans is encoded by the TNFSF9 gene.
4-1BBL is a type 2 transmembrane glycoprotein that is found on APCs (antigen-presenting cells) and binds to 4-1BB (also known as CD137), which is expressed on activated T lymphocytes. The 4-1BB/4-1BBL complex belongs to the TNFR:TNF superfamily.
Structure of 4-1BB/4-1BBL complex
The 4-1BB/4-1BBL complex consists of three monomeric 4-1BBs bound to a trimeric 4-1BBL. Each 4-1BB monomer binds to two 4-1BBLs via cysteine-rich domains (CRDs). The interaction between 4-1BB and the second 4-1BBL is required to stabilize their interactions. According to a detailed study of the 4-1BB/4-1BBL binding interface, the link with 4-1BBL is largely made up of amino acids from the dynamic loops of CRD2 and the β sheet of CRD3 of 4-1BB. CRD2 amino acids (T61, Q67, and K69) interact with the AA′ loop (Y110 and G114) and the intra-H-strand loop (Q227 and Q230) of 4-1BBL to form various hydrogen bond interactions.
Application to cancer immunotherapy
Studies on the poorly immunogenic Ag104A sarcoma and the extremely tumorigenic P815 mastocytoma provided the first systematic proof that anti-4-1BB antibodies have potent anti-tumor effects. Administration of anti-4-1BB to mice with the aforementioned tumors was shown to substantially inhibit tumor growth by increasing CTL activity. In the years that followed, further studies verified the ability of 4-1BB signaling to inhibit tumor growth.
The interaction between 4-1BB and 4-1BBL provides costimulatory signals to a variety of T cells, which can be exploited for cancer immunotherapy. The 4-1BB/4-1BBL complex, together with a signal provided by a T-cell receptor, can provide costimulatory signals to CD4+ and CD8+ T cells in mice, leading to their activation. The activation of CD8+ T cells is essential in antitumor immunity. The 4-1BB/4-1BBL complex, with the help of T-cell receptor signals, can also co-stimulate human CD28− T cells and trigger an increase in CD28− T cells. Unlike the activation of CD8+ T cells, the proliferation of CD28− T cells can negatively affect cancer and other disease states. Therefore, this pathway can be targeted for immunotherapy.
See also
CD137
References
External links | TNFSF9 | [
"Chemistry",
"Biology"
] | 641 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
10,805,909 | https://en.wikipedia.org/wiki/Subgroups%20of%20cyclic%20groups | In abstract algebra, every subgroup of a cyclic group is cyclic. Moreover, for a finite cyclic group of order n, every subgroup's order is a divisor of n, and there is exactly one subgroup for each divisor. This result has been called the fundamental theorem of cyclic groups.
Finite cyclic groups
For every finite group G of order n, the following statements are equivalent:
G is cyclic.
For every divisor d of n, G has at most one subgroup of order d.
If either (and thus both) are true, it follows that there exists exactly one subgroup of order d, for any divisor of n.
This statement is known by various names such as characterization by subgroups. (See also cyclic group for some characterization.)
There exist finite groups other than cyclic groups with the property that all proper subgroups are cyclic; the Klein group is an example. However, the Klein group has more than one subgroup of order 2, so it does not meet the conditions of the characterization.
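As a worked illustration (added here, not part of the cited text), take the cyclic group of order 12. The divisors of 12 are 1, 2, 3, 4, 6 and 12, and the fundamental theorem yields exactly one subgroup of each of these orders:
\[
\begin{aligned}
|H| = 1 &: \{0\}, \\
|H| = 2 &: \langle 6 \rangle = \{0, 6\}, \\
|H| = 3 &: \langle 4 \rangle = \{0, 4, 8\}, \\
|H| = 4 &: \langle 3 \rangle = \{0, 3, 6, 9\}, \\
|H| = 6 &: \langle 2 \rangle = \{0, 2, 4, 6, 8, 10\}, \\
|H| = 12 &: \langle 1 \rangle = \mathbb{Z}/12\mathbb{Z}.
\end{aligned}
\]
In general, the unique subgroup of order d in \(\mathbb{Z}/n\mathbb{Z}\) is \(\langle n/d \rangle\).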
The infinite cyclic group
The infinite cyclic group is isomorphic to the additive subgroup Z of the integers. There is one subgroup dZ for each integer d (consisting of the multiples of d), and with the exception of the trivial group (generated by d = 0) every such subgroup is itself an infinite cyclic group. Because the infinite cyclic group is a free group on one generator (and the trivial group is a free group on no generators), this result can be seen as a special case of the Nielsen–Schreier theorem that every subgroup of a free group is itself free.
The fundamental theorem for finite cyclic groups can be established from the same theorem for the infinite cyclic groups, by viewing each finite cyclic group as a quotient group of the infinite cyclic group.
Lattice of subgroups
In both the finite and the infinite case, the lattice of subgroups of a cyclic group is isomorphic to the dual of a divisibility lattice. In the finite case, the lattice of subgroups of a cyclic group of order n is isomorphic to the dual of the lattice of divisors of n, with a subgroup of order n/d for each divisor d. The subgroup of order n/d is a subgroup of the subgroup of order n/e if and only if e is a divisor of d. The lattice of subgroups of the infinite cyclic group can be described in the same way, as the dual of the divisibility lattice of all positive integers. If the infinite cyclic group is represented as the additive group on the integers, then the subgroup generated by d is a subgroup of the subgroup generated by e if and only if e is a divisor of d.
Divisibility lattices are distributive lattices, and therefore so are the lattices of subgroups of cyclic groups. This provides another alternative characterization of the finite cyclic groups: they are exactly the finite groups whose lattices of subgroups are distributive. More generally, a finitely generated group is cyclic if and only if its lattice of subgroups is distributive, and an arbitrary group is locally cyclic if and only if its lattice of subgroups is distributive. The additive group of the rational numbers provides an example of a group that is locally cyclic, and that has a distributive lattice of subgroups, but that is not itself cyclic.
References
Theorems in group theory
Articles containing proofs | Subgroups of cyclic groups | [
"Mathematics"
] | 709 | [
"Articles containing proofs"
] |
10,806,161 | https://en.wikipedia.org/wiki/Medical%20calculator | A medical calculator is a type of medical computer software whose purpose is to allow easy calculation of various scores and indices, presenting the user with a friendly interface that hides the complexity of the formulas. Most offer further information such as result interpretation guides and medical literature references. Generally, such calculators are intended for use by health care professionals, and use by the general public may be discouraged.
Medical calculators arose because modern medicine makes frequent use of scores and indices that put physicians' memory and calculation skills to the test. The advent of personal computers, the Internet and the Web, and more recently personal digital assistants (PDAs) has formed an environment conducive to their development, spread and use.
Types
Online
Various websites, including Wikipedia, are available that provide calculations from a browser-based input form. Websites that offer this ability include MDCalc.
Hardware
Purpose-built devices for specific medical calculations are available from various commercial sources. There are two ways to build such a calculator: it can look up an answer in a large array of precomputed data, or it can compute the answer using a mathematical equation.
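As an illustrative sketch of the equation-based approach (added here; the body-mass-index formula is used only as a familiar example, and the function names are hypothetical):
def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by the square of height in meters.
    return weight_kg / height_m ** 2

def interpret_bmi(value):
    # Conventional interpretation bands for a BMI value.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

score = bmi(70, 1.75)                           # about 22.9
print(round(score, 1), interpret_bmi(score))    # -> 22.9 normal weight
A lookup-based calculator would instead index a precomputed table of results by the rounded input values.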
Apps
Software-based medical calculators are available for various platforms, including the iPhone and Android. Handheld battery-powered portable units are available and can be manufactured in smaller quantities than before thanks to one-time programmable (OTP) chips.
References
Medical equipment
Medical software | Medical calculator | [
"Biology"
] | 284 | [
"Medical software",
"Medical equipment",
"Medical technology"
] |
1,517,190 | https://en.wikipedia.org/wiki/Pulsed%20power | Pulsed power is the science and technology of accumulating energy over a relatively long period of time and releasing it instantly, thus increasing the instantaneous power. Pulsed power systems are used in applications such as food processing, water treatment, weapons, and medicine.
Overview
Energy is typically stored within electrostatic fields (capacitors), magnetic fields (inductors), as mechanical energy (using large flywheels connected to special-purpose high-current alternators), or as chemical energy (high-current lead-acid batteries, or explosives). By releasing the stored energy over a very short interval (a process that is called energy compression), a huge amount of peak power can be delivered to a load. For example, if one joule of energy is stored within a capacitor and then evenly released to a load over one second, the average power delivered to the load would only be 1 watt. However, if all of the stored energy were released within one microsecond, the average power over one second would still be one watt, but the instantaneous peak power would be one megawatt, a million times greater.
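Written out, the arithmetic of this example (using only the figures given above) is
\[
P_{\text{avg}} = \frac{E}{t} = \frac{1\,\mathrm{J}}{1\,\mathrm{s}} = 1\,\mathrm{W},
\qquad
P_{\text{peak}} = \frac{1\,\mathrm{J}}{10^{-6}\,\mathrm{s}} = 10^{6}\,\mathrm{W} = 1\,\mathrm{MW}.
\]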
Maximum Power Records
Single-pulse energies as high as 100 MJ, powers as high as a few hundred terawatts, voltages between 10 kV and 50 MV, and currents between 1 kA and 10 MA had been achieved as of 2006.
Usage
The railgun is one example of a pulsed power application; it is still at the research stage due to its complexity.
See also
"kicker"
(EMP)
, "Z machine"
Manufacturers
ABB Pulsed Power, manufacturer of semiconductor-based replacements for thyratrons
References
Power (physics) | Pulsed power | [
"Physics",
"Mathematics"
] | 348 | [
"Force",
"Physical quantities",
"Quantity",
"Power (physics)",
"Energy (physics)",
"Pulsed power",
"Wikipedia categories named after physical quantities"
] |
1,517,197 | https://en.wikipedia.org/wiki/Navy%20Precision%20Optical%20Interferometer | The Navy Precision Optical Interferometer (NPOI) is an American astronomical interferometer, with the world's largest baselines, operated by the Naval Observatory Flagstaff Station (NOFS) in collaboration with the Naval Research Laboratory (NRL) and Lowell Observatory. The NPOI primarily produces space imagery and astrometry, the latter a major component required for the safe position and navigation of all manner of vehicles for the DoD. The facility is located at Lowell's Anderson Mesa Station on Anderson Mesa, southeast of Flagstaff, Arizona (US). Until November 2011, the facility was known as the Navy Prototype Optical Interferometer (NPOI). Subsequently, the instrument was temporarily renamed the Navy Optical Interferometer, and now permanently, the Kenneth J. Johnston Navy Precision Optical Interferometer (NPOI) – reflecting both the operational maturity of the facility, and paying tribute to its principal driver and retired founder, Kenneth J. Johnston.
The NPOI project was initiated by the United States Naval Observatory (USNO) in 1987. Lowell joined the project the following year when the USNO decided to build the NPOI at Anderson Mesa. The first phase of construction was completed in 1994, which allowed the interferometer to see its first fringes, or light combined from multiple sources, that year. The Navy began regular science operations in 1997. The NPOI has been continuously upgraded and expanded since then, and has been operational for a decade. The workings of NPOI as a classic interferometer are described at Scholarpedia and at the NPOI site.
Description
The NPOI is an astronomical interferometer laid out in a three-arm "Y" configuration of equally spaced arms. There are two types of stations that can be used in the NPOI. Astrometric stations, used to measure the positions of celestial objects very accurately, are fixed units, with one on each arm and one at the center. Imaging stations can be moved to one of nine positions on each arm, and up to six can be used at one time to perform standard observing. Light from either type of station is first directed into the feed system, which consists of long pipes that have been evacuated of all air. These lead to a switchyard of mirrors, where the light is directed into the six Long Delay Lines, another set of long pipes that compensate for the different distances to each station. The light is then sent into the Beam Combining Facility (BCF), where it enters the Fast Delay Lines. This third set of evacuated pipes contains mechanisms that move mirrors back and forth with a very high degree of accuracy. These compensate for the movement of the mirrors as they track an object across the sky, and for other effects. Finally, the light leaves the pipes inside the BCF and goes to the Beam Combining Table, where the light is combined in a way that allows images to be formed.
Both types of station have three elements: a siderostat, a Wide Angle Star Acquisition (WASA) camera, and a Narrow Angle Tracking (NAT) mirror. The first is a precisely ground flat mirror. The WASA cameras control the aiming of the mirror at the celestial target. The reflected light from the siderostat is directed through a telescope which narrows the beam down to the diameter of the pipes. The light then hits the mirror of the NAT, which compensates for atmospheric effects and directs the light into the feed system.
In 2009 NOFS began final plans for NPOI to incorporate four 1.8-meter aperture optical-infrared telescopes into the array, which were accepted by the Navy in 2010, and assigned to the Naval Observatory Flagstaff Station. They were originally intended to be "outrigger" telescopes for the W. M. Keck Observatory in Hawaii, but were never installed and incorporated into Keck's interferometer. Three telescopes are being prepared for near-immediate installation, while the fourth is currently at Mount Stromlo Observatory in Australia and will be incorporated at some point in the future. The new telescopes will help with faint object imaging and improved absolute astrometry, due to their greater light-gathering abilities compared to the existing siderostats.
NOFS operates and leads the science for the Navy Precision Optical Interferometer, as noted, in collaboration with Lowell Observatory and the Naval Research Laboratory at Anderson Mesa. NOFS funds all principal operations, and from this contracts Lowell Observatory to maintain the Anderson Mesa facility and make the observations for NOFS to conduct the primary astrometry. The Naval Research Laboratory (NRL) also provides funds to contract Lowell Observatory's and NRL's implementation of additional, long-baseline siderostat stations, facilitating NRL's primary scientific work, synthetic imaging (both celestial and of orbital satellites). When complete by 2013, NPOI will run the longest baseline interferometer in the world. The three institutions – USNO, NRL, and Lowell – each provide an executive to sit on an Operational Advisory Panel (OAP), which collectively guides the science and operations of the interferometer. The OAP commissioned the chief scientist and director of the NPOI to effect the science and operations for the Panel; this manager is a senior member of the NOFS staff and reports to the NOFS Director.
NPOI is an example of the Michelson Interferometer design, with the principal science managed by NOFS. Lowell Observatory and NRL join in the scientific efforts through their fractions of time to use the interferometer; science time is 85% Navy (NOFS and NRL) and 15% Lowell. NPOI is one of the few major instruments globally which can conduct optical interferometry. NOFS has used NPOI to conduct a wide and diverse series of scientific studies, beyond just the study of absolute astrometric positions of stars; additional NOFS science at NPOI includes the study of binary stars, Be stars, oblate stars, rapidly rotating stars, those with starspots, and the imaging of stellar disks (the first in history) and flare stars. In 2007–2008, NRL with NOFS used NPOI to obtain first-ever closure phase image precursors of satellites orbiting in geostationary orbit.
Installation plans for a 1m Array have been developed by NRL and Lowell Observatory, based on the funded science performed.
Discussion
Optical interferometers are extremely complex, unfilled-aperture photon-collecting telescopes in the visual (sometimes the near infrared, too), which produce synthesized images and fringe data "on the fly" (unlike radio interferometers, which are privileged to record the data for later synthesis), essentially by taking an inverse Fourier transform of the incoming data. Astrometry is done by precisely measuring delay line additions while fringing, to match the light path differences from baseline ends. Using essentially trigonometry, the angle and position of where the array is "pointed" can be determined, thus inferring a precise position on the sphere of the sky.
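Schematically (a simplified illustration added here, not drawn from the cited sources), for a two-element interferometer with baseline length \(B\) and a source at angle \(\theta\) from the baseline, the internal delay \(d\) needed to equalize the two light paths satisfies
\[
d = B \cos\theta, \qquad \theta = \arccos\!\left(\frac{d}{B}\right),
\]
so precisely measuring the delay at which fringes appear, for a known baseline, fixes the angle between the baseline and the source direction, which is the basis of interferometric astrometry.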
Only a few exist that can be considered operational. To date NPOI has produced the highest resolution optical images of any astronomical instrument, though this may change when the CHARA array and Magdalena Ridge Observatory Interferometer begin optical-band operations. The first astronomical object imaged (resolved) by NPOI was Mizar; since then, a significant amount of astrometry, reference tie frame, rapid rotator star, and Be stellar disk study has been performed. NPOI is capable of determining positions of celestial objects to within a few milli-arcseconds, in part due to the optical anchoring of its components using a complex metrology array of lasers that connect main optical elements to each other and to bedrock.
Many specialized lasers are also used to align the long train of optics. The current NPOI siderostat array remains the world's only long-baseline (437-meter) optical interferometer that can simultaneously co-phase six elements. NPOI is expected to grow significantly in capability with the pending addition of four 1.8-meter aperture IR/Optical telescopes into the current array. The enhanced array will also employ adaptive optics techniques. This layout and increased sparse aperture will permit significant improvements to the science capability, from a tenfold increase in measuring ever-fainter wide-angle astrometry targets, to improved positional determination for numerous binary and flare stars. When the 1.8m telescope addition are complete, NPOI also will undertake additional studies of dust and proto-planetary disks, and planetary systems and their formation.
See also
List of astronomical interferometers at visible and infrared wavelengths
List of telescope types
References
External links
United States Naval Observatory Flagstaff Station
Additional USNOFS Information
Lowell Observatory NPOI site
Astrometry
Optical telescopes
Interferometric telescopes
Military installations in Arizona
United States Naval Observatory | Navy Precision Optical Interferometer | [
"Astronomy"
] | 1,790 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
1,517,620 | https://en.wikipedia.org/wiki/Steady-state%20economy | A steady-state economy is an economy made up of a constant stock of physical wealth (capital) and a constant population size. In effect, such an economy does not grow in the course of time. The term usually refers to the national economy of a particular country, but it is also applicable to the economic system of a city, a region, or the entire world. Early in the history of economic thought, the 18th-century classical economist Adam Smith developed the concept of a stationary state of an economy: Smith believed that any national economy in the world would sooner or later settle in a final state of stationarity.
Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly. As Daly's concept of a steady-state includes the ecological analysis of natural resource flows through the economy, his concept differs from the original classical concept of a stationary state. One other difference is that Daly recommends immediate political action to establish the steady-state economy by imposing permanent government restrictions on all resource use, whereas economists of the classical period believed that the final stationary state of any economy would evolve by itself without any government intervention.
Critics of the steady-state economy usually object to it by arguing that resource decoupling, technological development, and the operation of market mechanisms are capable of overcoming resource scarcity, pollution, or population overshoot. Proponents of the steady-state economy, on the other hand, maintain that these objections remain insubstantial and mistaken — and that the need for a steady-state economy is becoming more compelling every day.
A steady-state economy is not to be confused with economic stagnation: Whereas a steady-state economy is established as the result of deliberate political action, economic stagnation is the unexpected and unwelcome failure of a growth economy. An ideological contrast to the steady-state economy is formed by the concept of a post-scarcity economy.
Definition and vision
Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly — to such an extent that even his boldest critics recognize the prominence of his work.
Herman Daly defines his concept of a steady-state economy as an economic system made up of a constant stock of physical wealth (capital) and a constant stock of people (population), both stocks to be maintained by a flow of natural resources through the system. The first component, the constant stocks, is similar to the concept of the stationary state, originally used in classical economics; the second component, the flow of natural resources, is a new ecological feature, presently also used in the academic discipline of ecological economics. The durability of both of the constant stocks is to be maximized: The more durable the stock of capital is, the smaller the flow of natural resources is needed to maintain the stock; likewise, a 'durable' population means a population enjoying a high life expectancy — something desirable by itself — maintained by a low birth rate and an equally low death rate. Taken together, higher durability translates into better ecology in the system as a whole.
Daly's concept of a steady-state economy is based on the vision that man's economy is an open subsystem embedded in a finite natural environment of scarce resources and fragile ecosystems. The economy is maintained by importing valuable natural resources from the input end and exporting valueless waste and pollution at the output end in a constant and irreversible flow. Any subsystem of a finite nongrowing system must itself at some point also become nongrowing and start maintaining itself in a steady-state as far as possible. This vision is opposed to mainstream neoclassical economics, where the economy is represented by an isolated and circular model with goods and services exchanging endlessly between companies and households, without exhibiting any physical contact to the natural environment.
In the early 2010s, reviewers sympathetic towards Daly's concept of a steady-state economy concurred in the judgement that although his concept remains beyond what is politically feasible at present, there is room for mainstream thinking and collective action to approach the concept in the future. In 2022, a study (chapters 4–5) described degrowth toward a steady-state economy as possible and probably beneficial. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation."
Historical background
For centuries, economists and other scholars have considered matters of natural resource scarcity and limits to growth, from the early classical economists in the 18th and 19th centuries down to the ecological concerns that emerged in the second half of the 20th century and developed into the formation of ecological economics as an independent academic subdiscipline in economics.
Concept of the stationary state in classical economics
From Adam Smith and onwards, economists in the classical period of economic theorising described the general development of society in terms of a contrast between the scarcity of arable agricultural land on the one hand, and the growth of population and capital on the other hand. The incomes from gross production were distributed as rents, profits and wages among landowners, capitalists and labourers respectively, and these three classes were incessantly engaged in the struggle for increasing their own share. The accumulation of capital (net investments) would sooner or later come to an end as the rate of profit fell to a minimum or to nil. At that point, the economy would settle in a final stationary state with a constant population size and a constant stock of capital.
Adam Smith's concept
Adam Smith's magnum opus on The Wealth of Nations, published in 1776, laid the foundation of classical economics in Britain. Smith thereby disseminated and established a concept that has since been a cornerstone in economics throughout most of the world: In a liberal capitalist society, provided with a stable institutional and legal framework, an 'invisible hand' will ensure that the enlightened self-interest of all members of society will contribute to the growth and prosperity of society as a whole, thereby leading to an 'obvious and simple system of natural liberty'.
Smith was convinced of the beneficial effect of the enlightened self-interest on the wealth of nations; but he was less certain this wealth would grow forever. Smith observed that any country in the world found itself in either a 'progressive', a 'stationary', or a 'declining' state: Although England was wealthier than its North American colonies, wages were higher in the latter place as wealth in North America was growing faster than in England; hence, North America was in the 'cheerful and hearty' progressive state. In China, on the other hand, wages were low, the condition of poor people was scantier than in any nation in Europe, and more marriages were contracted here because the 'horrid' killing of newborn babies was permitted and even widely practised; hence, China was in the 'dull' stationary state, although it did not yet seem to be declining. In nations situated in the 'melancholic' declining state, the higher ranks of society would fall down and settle for occupation amid the lower ranks, while the lowest ranks would either subsist on a miserable and insufficient wage, resort to begging or crime, or slide into starvation and early death. Bengal and some other English settlements in the East Indies possibly found themselves in this state, Smith reckoned.
Smith pointed out that as wealth was growing in any nation, the rate of profit would tend to fall and investment opportunities would diminish. In a nation that had thereby reached this 'full complement of riches', society would finally settle in a stationary state with a constant stock of people and capital. In an 18th-century anticipation of The Limits to Growth (see below), Smith described the state as follows:
According to Smith, Holland seemed to be approaching this stationary state, although at a much higher level than in China. Smith believed the laws and institutions of China prevented this country from achieving the potential wealth its soil, climate and situation might have admitted of. Smith was unable to provide any contemporary examples of a nation in the world that had in fact reached the full complement of riches and thus had settled in stationarity, because, as he conjectured, "... perhaps no country has ever yet arrived at this degree of opulence."
David Ricardo's concept
In the early 19th century, David Ricardo was the leading economist of the day and the champion of British laissez-faire liberalism. He is known today for his free trade principle of comparative advantage, and for his formulation of the controversial labor theory of value. Ricardo replaced Adam Smith's empirical reasoning with abstract principles and deductive argument. This new methodology would later become the norm in economics as a science.
In Ricardo's times, Britain's trade with the European continent was somewhat disrupted during the Napoleonic Wars that had raged since 1803. The Continental System brought into effect a large-scale embargo against British trade, whereby the nation's food supply came to rely heavily on domestic agriculture to the benefit of the landowning classes. When the wars ended with Napoleon's final defeat in 1815, the landowning classes dominating the British parliament had managed to tighten the existing Corn Laws in order to retain their monopoly status on the home market during peacetime. The controversial Corn Laws were a protectionist two-sided measure of subsidies on corn exports and tariffs on corn imports. The tightening was opposed by both the capitalist and the labouring classes, as the high price of bread effectively reduced real profits and real wages in the economy. Such was the political setting when Ricardo published his treatise On the Principles of Political Economy and Taxation in 1817.
According to Ricardo, the limits to growth were ever present due to scarcity of arable agricultural land in the country. In the wake of the wartime period, the British economy seemed to be approaching the stationary state as population was growing, plots of land with lower fertility were put into agricultural use, and the rising rents of the rural landowning class were crowding out the profits of the urban capitalists. This was the broad outline of Ricardo's controversial land rent theory. Ricardo believed that the only way for Britain to avoid the stationary state was to increase her volume of international trade: The country should export more industrial products and start importing cheap agricultural products from abroad in turn. However, this course of development was impeded by the Corn Laws that seemed to be hampering both the industrialisation and the internationalization of the British economy. In the 1820s, Ricardo and his followers – Ricardo himself died in 1823 – directed much of their fire at the Corn Laws in order to have them repealed, and various other free trade campaigners borrowed indiscriminately from Ricardo's doctrines to suit their agenda.
The Corn Laws were not repealed until 1846. In the meantime, the British economy kept growing, a fact that effectively undermined the credibility and thrust of Ricardian economics in Britain; but Ricardo had by now established himself as the first stationary state theorist in the history of economic thought.
Ricardo's preoccupation with class conflict anticipated the work of Karl Marx (see below).
John Stuart Mill's concept
John Stuart Mill was the leading economist, philosopher and social reformer in mid-19th century Britain. His economics treatise on the Principles of Political Economy, published in 1848, attained status as the standard textbook in economics throughout the English-speaking world until the turn of the century.
A champion of classical liberalism, Mill believed that an ideal society should allow all individuals to pursue their own good without any interference from others or from government. Also a utilitarian philosopher, Mill regarded the 'Greatest Happiness Principle' as the ultimate ideal for a harmonious society:
Mill's concept of the stationary state was strongly coloured by these ideals. Mill conjectured that the stationary state of society was not too far away in the future:
Contrary to both Smith and Ricardo before him, Mill took an optimistic view on the future stationary state. Mill could not "... regard the stationary state of capital and wealth with the unaffected aversion so generally manifested toward it by political economists of the old school." Instead, Mill attributed many important qualities to this future state; he even believed the state would bring about "... a very considerable improvement on our present condition." According to Mill, the stationary state was at one and the same time inevitable, necessary and desirable: It was inevitable, because the accumulation of capital would bring about a falling rate of profit that would diminish investment opportunities and hamper further accumulation; it was also necessary, because mankind had to learn how to reduce its size and its level of consumption within the boundaries set by nature and by employment opportunities; finally, the stationary state was desirable, as it would ease the introduction of public income redistribution schemes, create more equality and put an end to man's ruthless struggle to get by — instead, the human spirit would be liberated to the benefit of more elevated social and cultural activities, 'the graces of life'.
Hence, Mill was able to express all of his liberal ideals for mankind through his concept of the stationary state. It has been argued that Mill essentially made a quality-of-life argument for the stationary state.
Main developments in economics since Mill
When the influence of John Stuart Mill and his Principles declined, the classical-liberalist period of economic theorising came to an end. By the end of the 19th century, Marxism and neoclassical economics had emerged to dominate economics:
Although a classical economist in his own right, Karl Marx abandoned the earlier concept of a stationary state and replaced it with his own unique vision of historical materialism, according to which human societies pass through several 'modes of production', eventually leading to communism. In each mode of production, man's increasing mastery over nature and the 'productive forces' of society develop to a point where the class conflict bursts into revolutions, followed by the establishment of a new mode of production. In opposition to his liberalist predecessors in the field, Marx did not regard natural resource scarcity as a factor constraining future economic growth; instead, the capitalist mode of production was to be overturned before the productive forces of society could fully develop, bringing about an abundance of goods in a new society based on the principle of "from each according to ability, to each according to need" — that is, communism. The assumption, based on technological optimism, was that communism would overcome any resource scarcity ever to be encountered. For ideological reasons, then, orthodox Marxism has mostly been opposed to any concern with natural resource scarcity ever since Marx's own day. However, the march of history has been hard on this ideology: By 1991, German sociologist Reiner Grundmann was able to make the rather sweeping observation that "Orthodox Marxism has vanished from the scene, leftism has turned green, and Marxists have become ecologists."
In neoclassical economics, on the other hand, the preoccupation with society's long term growth and development inherent in classical economics was abandoned altogether; instead, economic analysis came to focus on the study of the relationship between given ends and given scarce means, forming the concept of general equilibrium theory within an essentially static framework. Hence, neoclassical economics achieved greater generality, but only by asking easier questions; and any concern with natural resource scarcity was neglected. For this reason, modern ecological economists have deplored the simplified and ecologically harmful features of neoclassical economics: It has been argued that neoclassical economics has become a pseudoscience of choice between anything in general and nothing in particular, while neglecting the preferences of future generations; that the very terminology of neoclassical economics is so ecologically illiterate as to rarely even refer to natural resources or ecological limits; and that neoclassical economics has developed to become a dominant free market ideology legitimizing an ideal of society resembling a perpetual motion machine of economic growth at intolerable environmental and human costs.
Taken together, it has been argued that "... if Judeo-Christian monotheism took nature out of religion, Anglo-American economists (after about 1880) took nature out of economics." Almost one century later, Herman Daly has reintegrated nature into economics in his concept of a steady-state economy (see below).
John Maynard Keynes's concept of reaching saturation
John Maynard Keynes was the paradigm founder of modern macroeconomics, and is widely considered today to be the most influential economist of the 20th century. Keynes rejected the basic tenet of classical economics that free markets would lead to full employment by themselves. Consequently, he recommended government intervention to stimulate aggregate demand in the economy, a macroeconomic policy now known as Keynesian economics. Keynes also believed that capital accumulation would reach saturation at some point in the future.
In his essay from 1930 on Economic Possibilities for our Grandchildren, Keynes ventured to look one hundred years ahead into the future and predict the standard of living in the 21st century. Writing at the beginning of the Great Depression, Keynes rejected the prevailing "bad attack of economic pessimism" of his own time and foresaw that by 2030, the grandchildren of his generation would live in a state of abundance, where saturation would have been reached. People would find themselves liberated from such economic activities as saving and capital accumulation, and be able to get rid of 'pseudo-moral principles' — avarice, exaction of interest, love of money — that had characterized capitalistic societies so far. Instead, people would devote themselves to the true art of life, to live "wisely and agreeably and well." Mankind would finally have solved "the economic problem," that is, the struggle for existence.
The similarity between John Stuart Mill's concept of the stationary state (see above) and Keynes's predictions in this essay has been noted. It has been argued that although Keynes was right about future growth rates, he underestimated the inequalities prevailing today, both within and across countries. He was also wrong in predicting that greater wealth would induce more leisure time; in fact, the reverse trend seems to be true.
In his magnum opus on The General Theory of Employment, Interest and Money, Keynes looked only one generation ahead into the future and predicted that state intervention balancing aggregate demand would by then have caused capital accumulation to reach the point of saturation. The marginal efficiency of capital as well as the rate of interest would both be brought down to zero, and — if population was not increasing rapidly — society would finally "... attain the conditions of a quasi-stationary community where change and progress would result only from changes in technique, taste, population and institutions ..." Keynes believed this development would bring about the disappearance of the rentier class, something he welcomed: Keynes argued that rentiers incurred no sacrifice for their earnings, and their savings did not lead to productive investments unless aggregate demand in the economy was sufficiently high. "I see, therefore, the rentier aspect of capitalism as a transitional phase which will disappear when it has done its work."
Post-war economic expansion and emerging ecological concerns
The economic expansion following World War II took place while mainstream economics largely neglected the importance of natural resources and environmental constraints in the development. Addressing this discrepancy, ecological concerns emerged in academia around 1970. Later on, these concerns developed into the formation of ecological economics as an academic subdiscipline in economics.
Post-war economic expansion and the neglect of mainstream economics
After the ravages of World War II, the industrialised part of the world experienced almost three decades of unprecedented and prolonged economic expansion. This expansion — known today as the Post–World War II economic expansion — was brought about by international financial stability, low oil prices and ever increasing labour productivity in manufacturing. During the era, all the advanced countries that founded — or later joined — the OECD enjoyed robust and sustained growth rates as well as full employment. In the 1970s, the expansion came to an end with the collapse of the Bretton Woods monetary system and the 1973 oil crisis, which resulted in the 1973–75 recession.
Throughout this era, mainstream economics — dominated by both neoclassical economics and Keynesian economics — developed theories and models where natural resources and environmental constraints were neglected. Conservation issues related specifically to agriculture and forestry were left to specialists in the subdiscipline of environmental economics at the margins of the mainstream. As the theoretical framework of neoclassical economics — namely general equilibrium theory — was uncritically adopted and maintained by even environmental economics, this subdiscipline was rendered largely unable to consider important issues of concern to environmental policy.
In the years around 1970, the widening discrepancy between an ever-growing world economy on the one hand, and a mainstream economics discipline not taking into account the importance of natural resources and environmental constraints on the other hand, was finally addressed — indeed, challenged — in academia by a few unorthodox economists and researchers.
Emerging ecological concerns
During the short period of time from 1966 to 1972, four works were published addressing the importance of natural resources and the environment to human society:
In his 1966 philosophical-minded essay on The Economics of the Coming Spaceship Earth, economist and systems scientist Kenneth E. Boulding argued that mankind would soon have to adapt to economic principles much different than the past 'open earth' of illimitable plains and exploitative behaviour. On the basis of the thermodynamic principle of the conservation of matter and energy, Boulding developed the view that the flow of natural resources through the economy is a rough measure of the Gross national product (GNP); and, consequently, that society should start regarding the GNP as a cost to be minimized rather than a benefit to be maximized. Therefore, mankind would have to find its place in a cyclical ecological system without unlimited reservoirs of anything, either for extraction or for pollution — like a spaceman on board a spaceship. Boulding was not the first to make use of the 'Spaceship Earth' metaphor, but he was the one who combined this metaphor with the analysis of natural resource flows through the economy.
In his 1971 magnum opus on The Entropy Law and the Economic Process, Romanian American economist Nicholas Georgescu-Roegen integrated the thermodynamic concept of entropy with economic analysis, and argued that all natural resources are irreversibly degraded when put to use in economic activity. What happens in the economy is that all matter and energy is transformed from states available for human purposes (valuable natural resources) to states unavailable for human purposes (valueless waste and pollution). In the history of economic thought, Georgescu-Roegen was also the first economist of some standing to theorise on the premise that all of earth's mineral resources will eventually be exhausted at some point (see below).
Also in 1971, pioneering ecologist and general systems analyst Howard T. Odum published his book on Environment, Power and Society, where he described human society in terms of ecology. He formulated the maximum power principle, according to which all organisms, ecosystems and human societies organise themselves in order to maximize their use of available energy for survival. Odum pointed out that those human societies with access to the higher quality of energy sources enjoyed an advantage over other societies in the Darwinian evolutionary struggle. Odum later co-developed the concept of emergy (i.e., embodied energy) and made other valuable contributions to ecology and systems analysis. His work provided the biological term 'ecology' with its broader societal meaning used today.
In 1972, environmental scientist and systems analyst Donella Meadows and her team of researchers had their study on The Limits to Growth published by the Club of Rome. The Meadows team modelled aggregate trends in the world economy and made the projection — not prediction — that by the mid to latter part of the 21st century, industrial production per capita, food supply per capita and world population would all reach a peak, and then rapidly decline in a vicious overshoot-and-collapse trajectory. Due to its dire pessimism, the study was scorned and dismissed by most mainstream economists at the time of its publication. However, well into the 21st century, several independent researchers have confirmed that world economic trends so far do indeed match up to the original 'standard run' projections made by the Meadows team, indicating that a global collapse may still loom large in the not too distant future.
Taken together, these four works were seminal in bringing about the formation of ecological economics later on.
Formation of ecological economics as an academic subdiscipline
Although most of the theoretical and foundational work behind ecological economics was in place by the early 1970s, a long gestation period elapsed before this new academic subdiscipline in economics was properly named and institutionalized. Ecological economics was formally founded in 1988 as the culmination of a series of conferences and meetings through the 1980s, where key scholars interested in the ecology-economy interdependency were interacting with each other. The most important people involved in the establishment were Herman Daly and Robert Costanza from the US; AnnMari Jansson from Sweden; and Juan Martínez-Alier from Spain (Catalonia). Since 1989, the discipline has been organised in the International Society for Ecological Economics that publishes the journal of Ecological Economics.
When the ecological economics subdiscipline was established, Herman Daly's 'preanalytic vision' of the economy was widely shared among the members who joined in: The human economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment), and any subsystem of a fixed nongrowing system must itself at some point also become nongrowing. Indeed, it has been argued that the subdiscipline itself was born out of frustration with the unwillingness of the established disciplines to accept this vision. However, ecological economics has since been overwhelmed by the influence and domination of neoclassical economics and its everlasting free market orthodoxy. This development has been deplored by activist ecological economists as an 'incoherent', 'shallow' and overly 'pragmatic' slide.
Herman Daly's concept of a steady-state economy
In the 1970s, Herman Daly became the world's leading proponent of a steady-state economy. Throughout his career, Daly published several books and articles on the subject. He also helped to found the Center for the Advancement of the Steady-State Economy (CASSE). He received several prizes and awards in recognition of his work.
According to two independent comparative studies of Daly's American school of steady-state economics versus the later, competing school of degrowth from continental Europe, no differences of analytical substance exist between the two schools; only, Daly's bureaucratic — or even technocratic — top-down management of the economy compares poorly with the more radical grassroots appeal of degrowth, as championed by French political scientist Serge Latouche (see below).
The premise underlying Daly's concept of a steady-state economy is that the economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment). The economy is maintained by importing low-entropy matter-energy (resources) from nature; these resources are put through the economy, being transformed and manufactured into goods along the way; eventually, the throughput of matter-energy is exported to the environment as high-entropy waste and pollution. Recycling of material resources is possible, but only by using up some energy resources as well as an additional amount of other material resources; and energy resources, in turn, cannot be recycled at all, but are dissipated as waste heat. Out of necessity, then, any subsystem of a fixed nongrowing system must itself at some point also become nongrowing.
Daly argues that nature has provided basically two sources of wealth at man's disposal, namely a stock of terrestrial mineral resources and a flow of solar energy. An 'asymmetry' between these two sources of wealth exists in that we may — within some practical limits — extract the mineral stock at a rate of our own choosing (that is, rapidly), whereas the flow of solar energy is reaching earth at a rate beyond human control. Since the Sun will continue to shine on earth at a fixed rate for billions of years to come, it is the terrestrial mineral stock — and not the Sun — that constitutes the crucial scarcity factor regarding man's economic future.
Daly points out that today's global ecological problems are rooted in man's historical record: Until the Industrial Revolution that took place in Britain in the second half of the 18th century, man lived within the limits imposed by what Daly terms a 'solar-income budget': The Palaeolithic tribes of hunter-gatherers and the later agricultural societies of the Neolithic and onwards subsisted primarily — though not exclusively — on earth's biosphere, powered by an ample supply of renewable energy, received from the Sun. The Industrial Revolution changed this situation completely, as man began extracting the terrestrial mineral stock at a rapidly increasing rate. The original solar-income budget was thereby broken and supplemented by the new, but much scarcer source of wealth. Mankind still lives in the after-effect of this revolution.
Daly cautions that more than two hundred years of worldwide industrialisation is now confronting mankind with a range of problems pertaining to the future existence and survival of our species:
Following the work of Nicholas Georgescu-Roegen, Daly argues that the laws of thermodynamics restrict all human technologies and apply to all economic systems:
This view on the role of technology in the economy was later termed 'entropy pessimism' (see below).
In Daly's view, mainstream economists tend to regard natural resource scarcity as only a relative phenomenon, while human needs and wants are granted absolute status: It is believed that the price mechanism and technological development (however defined) are capable of overcoming any scarcity ever to be faced on earth; it is also believed that all human wants could and should be treated alike as absolutes, from the most basic necessities of life to the extravagant and insatiable craving for luxuries. Daly terms this belief 'growthmania', which he finds pervasive in modern society. In opposition to the dogma of growthmania, Daly submits that "... there is such a thing as absolute scarcity, and there is such a thing as purely relative and trivial wants". Once it is recognised that scarcity is imposed by nature in an absolute form by the laws of thermodynamics and the finitude of earth; and that some human wants are only relative and not worthy of satisfying; then we are all well on the way to the paradigm of a steady-state economy, Daly concludes.
Consequently, Daly recommends that a system of permanent government restrictions on the economy is established as soon as possible, a steady-state economy. Whereas the classical economists believed that the final stationary state would settle by itself as the rate of profit fell and capital accumulation came to an end (see above), Daly wants to create the steady-state politically by establishing three institutions of the state as a superstructure on top of the present market economy:
The first institution is to correct inequality to some extent by putting minimum and maximum limits on incomes and maximum limits on wealth, and then redistributing accordingly.
The second institution is to stabilise the population by issuing transferable reproduction licenses to all fertile women at a level corresponding with the general replacement fertility in society.
The third institution is to stabilise the level of capital by issuing and selling depletion quotas that impose quantitative restrictions on the flow of resources through the economy. Quotas effectively minimise the throughput of resources necessary to maintain any given level of capital (as opposed to taxes, that merely alter the prevailing price structure).
The purpose of these three institutions is to stop and prevent further growth by combining what Daly calls "a nice reconciliation of efficiency and equity" and providing "the ecologically necessary macrocontrol of growth with the least sacrifice in terms of microlevel freedom and variability."
Among the generation of his teachers, Daly ranks Nicholas Georgescu-Roegen and Kenneth E. Boulding as the two economists he has learned the most from. However, both Georgescu-Roegen and Boulding have assessed that a steady-state economy may serve only as a temporary societal arrangement for mankind when facing the long-term issue of global mineral resource exhaustion: Even with a constant stock of people and capital, and a minimised (yet constant) flow of resources put through the world economy, earth's mineral stock will still be exhausted, although at a slower rate than is presently the situation (see below).
Responding specifically to the criticism levelled at him by Georgescu-Roegen, Daly concedes that a steady-state economy will serve only to postpone, and not to prevent, the inevitable mineral resource exhaustion: "A steady-state economy cannot last forever, but neither can a growing economy, nor a declining economy". A frank and committed Protestant, Daly further argues that...
Later, several other economists in the field have agreed that not even a steady-state economy can last forever on earth.
Ecological reasons for a steady-state economy
In 2021, a study examined whether current trends confirm the projections of The Limits to Growth. It concluded that global GDP could begin to decline within a decade: if this does not happen through a deliberate transition, it will happen through ecological disaster.
Planetary boundaries
The world's mounting ecological problems have stimulated interest in the concept of a steady-state economy. Since the 1990s, most metrics have provided evidence that the volume of the world economy already far exceeds critical global limits to economic growth. According to the ecological footprint measure, Earth's carrying capacity — that is, Earth's long-term capacity to sustain human populations and consumption levels — was exceeded by some 30 percent in 1995. By 2018, this figure had increased to some 70 percent. In 2020, a multinational team of scientists published a study arguing that overconsumption is the biggest threat to sustainability. According to the study, a drastic change of lifestyle is necessary to resolve the ecological crisis. In the words of one of the authors, Julia Steinberger: "To protect ourselves from the worsening climate crisis, we must reduce inequality and challenge the notion that riches, and those who possess them, are inherently good." The research was published on the website of the World Economic Forum, whose leader, Professor Klaus Schwab, has called for a "great reset of capitalism".
In effect, mankind is confronted by an ecological crisis, in which humans are living outside of planetary boundaries, with significant effects on human health and wellbeing. The significant impact of human activities on Earth's ecosystems has motivated some geologists to propose that the present epoch be named the Anthropocene. The following issues have raised much concern worldwide:
Pollution and global warming
Air pollution emanating from motor vehicles and industrial plants is damaging public health and increasing mortality rates. The concentration of carbon dioxide and other greenhouse gases in the atmosphere is the apparent source of global warming and climate change. Extreme regional weather patterns and rising sea levels caused by the warming degrade living conditions in many — if not all — parts of the world. The warming already poses a security threat to many nations and works as a so-called 'threat multiplier' to geopolitical stability. Even worse, the loss of Arctic permafrost may be triggering a massive release of methane and other greenhouse gases from thawing soils in the region, thereby overwhelming political action to counter climate change. If critical temperature thresholds are crossed, Earth's climate may transition from an 'icehouse' to a 'greenhouse' state for the first time in 34 million years.
One of the most commonly proposed solutions to the climate crisis is the transition to renewable energy, but this transition has environmental impacts of its own. Proponents of theories such as degrowth, the steady-state economy and the circular economy present these impacts as evidence that technological measures alone are not enough to achieve sustainability, and that consumption must also be limited.
In 2019, a new report, "Plastic and Climate", was published. According to the report, plastic would contribute the equivalent of 850 million tons of carbon dioxide (CO2) in greenhouse gases to the atmosphere in 2019. On current trends, annual emissions will grow to 1.34 billion tons by 2030. By 2050, plastic could account for 56 billion tons of greenhouse-gas emissions, as much as 14 percent of Earth's remaining carbon budget, and this is apart from the harm plastic does to phytoplankton. The report says that only solutions involving a reduction in consumption can solve the problem, while measures such as biodegradable plastics, ocean cleanup, and the use of renewable energy in the plastics industry can do little, and in some cases may even worsen the problem. Another report, covering the full range of environmental and health effects of plastic, reaches the same conclusion.
Depletion of non-renewable minerals
Non-renewable mineral reserves are currently extracted at high and unsustainable rates from Earth's crust. Remaining reserves are likely to become ever more costly to extract in the near future, and will reach depletion at some point. The era of relatively peaceful economic expansion that has prevailed globally since World War II may be interrupted by unexpected supply shocks or simply be succeeded by the peaking depletion paths of oil and other valuable minerals. In 2020, for the first time, the rate of use of natural resources exceeded 110 billion tons per year.
Economist Jason Hickel has written critically about the ideology of green growth, the idea that economic expansion can continue as capitalism and its systems grow, because growth can be made compatible with the planet's ecology. This contrasts with no-growth or degrowth economics, in which the sustainability and stability of the economy are prioritized over the unchecked profit of those in power. Models of community development have found that failing to account for sustainability in the early stages leads to failure in the long term; these models contradict green-growth theory and do not support the idea that natural resources will simply expand with the economy. Additionally, those living in poorer areas tend to be exposed to higher levels of toxins and pollutants as a result of systemic environmental racism. Increasing natural resources and increasing local involvement in their distribution are potential solutions to alleviate pollution and address poverty in these areas.
Net depletion of renewable resources
Use of renewable resources in excess of their replenishment rates is undermining ecological stability worldwide. Between 2000 and 2012, deforestation resulted in the equivalent of some 14 percent of Earth's original forest cover being cut down. Tropical rainforests have been subject to deforestation at a rapid pace for decades — especially in west and central Africa and in Brazil — mostly due to subsistence farming, population pressure, and urbanization. Population pressures also strain the world's soil systems, leading to land degradation, mostly in developing countries. Global erosion rates on conventional cropland are estimated to exceed soil creation rates by more than ten times. Widespread overuse of groundwater results in water deficits in many countries. By 2025, water scarcity could impact the living conditions of two-thirds of the world's population.
Loss of biodiversity
The destructive impact of human activity on wildlife habitats worldwide is accelerating the extinction of rare species, thereby substantially reducing Earth's biodiversity. The natural nitrogen cycle is heavily overloaded by industrial nitrogen fixation and use, thereby disrupting most known types of ecosystems. The accumulating plastic debris in the oceans decimates aquatic life. Ocean acidification due to the excess concentration of carbon dioxide in the atmosphere is resulting in coral bleaching and impedes shell formation in shell-bearing organisms. Arctic sea ice decline caused by global warming is endangering the polar bear.
In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. The main conclusions:
Over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate.
The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers, in turn, are caused by societal behaviors, from consumption to governance.
Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals relating to poverty, hunger, health, water, cities, climate, oceans and land. It can cause problems with humanity's food, water and air supply.
To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. On page 8 of the summary, the authors state that one of the main measures is "enabling visions of a good quality of life that do not entail ever-increasing material consumption".
These mounting concerns have prompted an increasing number of academics and other writers — beside Herman Daly — to point to limits to economic growth, and to question — and even oppose — the prevailing ideology of infinite economic growth.
In September 2019, one day before the Global Climate Strike of 20 September 2019, an article was published in The Guardian summarizing a large body of research and arguing that limiting consumption is necessary for saving the biosphere.
Steady-state economy and well-being
Apart from the reasons linked to resource depletion and the carrying capacity of the ecological system, there are other reasons to limit consumption: overconsumption hurts the well-being of those who consume too much.
During the same period in which humanity's ecological footprint came to exceed the sustainable level, GDP more than tripled from 1950 onwards, while one measure of well-being, the genuine progress indicator, has been falling since 1978. This is one of the reasons for pursuing a steady-state economy.
In some cases, reducing consumption can increase the standard of living. In Costa Rica, GDP is about four times smaller than in many countries in Western Europe and North America, but people live longer and better. An American study shows that once income exceeds $75,000, further increases in income do not increase well-being. To better measure well-being, the New Economics Foundation has launched the Happy Planet Index.
The food industry is a large sector of consumption, responsible for 37% of global greenhouse-gas emissions, and studies show that people waste a fifth of food products through disposal or overconsumption alone. By the time food reaches the consumer, 9% (160 million tons) goes uneaten, and a further 10% is lost to overconsumption, meaning that consumers eat more than their calorie requirements. Overconsumption not only drives losses and overproduction at the production stage, but also entails excess intake of energy and protein, with harmful effects on the body such as obesity.
A report from the Lancet Commission makes the same point. The experts write: "Until now, undernutrition and obesity have been seen as polar opposites of either too few or too many calories [...] In reality, they are both driven by the same unhealthy, inequitable food systems, underpinned by the same political economy that is single-focused on economic growth, and ignores the negative health and equity outcomes. Climate change has the same story of profits and power". Obesity was already a medical problem in ancient Rome for people who overconsumed food and worked too little, and its impact slowly grew through history. As of 2012, mortality from obesity was three times higher than from hunger, reaching 2.8 million people per year by 2017.
Cycling reduces greenhouse-gas emissions while also counteracting the effects of a sedentary lifestyle. As of 2002, a sedentary lifestyle claimed 2 million lives per year. The World Health Organization stated that "60 to 85% of people in the world—from both developed and developing countries—lead sedentary lifestyles, making it one of the more serious yet insufficiently addressed public health problems of our time." By 2012, according to a study published in The Lancet, the number had reached 5.3 million.
Reducing the use of screens can help fight many diseases, among them depression, the leading cause of disability globally. It can also lower greenhouse-gas emissions. As of 2018, 3.7% of global emissions came from digital technologies, more than from aviation; the figure is expected to reach 8% by 2025, equal to the emissions from cars.
Reducing light pollution can reduce greenhouse-gas emissions and improve health.
In September 2019, one day before the Global Climate Strike of 20 September 2019, an article was published in The Guardian summarizing much research and arguing that limiting consumption is also necessary for the health of overconsumers themselves: it can increase empathy, improve contact with other people, and more.
Connection with other ideologies and movements
The concept of a steady-state economy is connected to other concepts that can be generally defined as ecological economics and anti-consumerism, because it serves as their common end goal: these ideologies do not call for poverty, but aim for a level of consumption that is best for both people and the environment.
Degrowth
The Center for the Advancement of the Steady State Economy (CASSE) defines a steady-state economy not merely as an economy with some constant level of consumption, but as an economy in which the best possible level of consumption is maintained constantly. To define what that level is, it considers not only ecology, but also living standards. The Center writes: "In cases where the benefits of growth outweigh the costs (for example, where people are not consuming enough to meet their needs), growth or redistribution of resources may be required. In cases where the size of the economy has surpassed the carrying capacity of the ecosystems that contain it (a condition known as overshoot), degrowth may be required before establishing a steady state economy that can be maintained over the long term".
In February 2020, the same organization proposed the slogan "Degrowth Toward a Steady State Economy" because it can unite degrowthers and steady staters. The statement mentions that "[i]n 2018 the nascent DegrowUS adopted the mission statement, 'Our mission is a democratic and just transition to a smaller, steady state economy in harmony with nature, family, and community.'"
In his article on Economic de-growth vs. steady-state economy, Christian Kerschner has integrated the strategy of declining-state, or degrowth, with Herman Daly's concept of the steady-state economy to the effect that degrowth should be considered a path taken by the rich industrialized countries leading towards a globally equitable steady-state economy. This ultra-egalitarian path will then make ecological room for poorer countries to catch up and combine into a final world steady-state, maintained at some internationally agreed upon intermediate and 'optimum' level of activity for some period of time — although not forever. Kerschner admits that this goal of a world steady-state may remain unattainable in the foreseeable future, but such seemingly unattainable goals could stimulate visions about how to better approach them.
The concept of overdevelopment by Leopold Kohr
In 1977, Leopold Kohr published a book named The Overdeveloped Nations: The Diseconomies of Scale, dealing primarily with overconsumption. This book is the basis for the theory of overdevelopment, which holds that the rich countries of the global north are too developed, increasing the ecological footprint of humanity and creating many problems in overdeveloped and underdeveloped countries alike.
Conceptual and ideological disagreements
Several conceptual and ideological disagreements presently exist concerning the steady-state economy in particular and the dilemma of growth in general. The following issues are considered below: The role of technology; resource decoupling and the rebound effect; a declining-state economy; the possibility of having capitalism without growth; and the possibility of pushing some of the terrestrial limits into outer space.
In 2019, a study was published presenting an overview of the attempts to achieve constant economic growth without environmental destruction, together with their results. It shows that as of 2019 the attempts had not been successful; it does not give a clear answer as to whether future attempts might succeed.
Herman Daly's approach to these issues is presented throughout the text.
Role of technology
Technology is usually defined as the application of scientific method in the production of goods or in other social achievements. Historically, technology has mostly been developed and implemented in order to improve labour productivity and increase living standards. In economics, disagreement presently exists regarding the role of technology when considering its dependency on natural resources:
In neoclassical economics, on the one hand, the role of 'technology' is usually represented as yet another factor of production contributing to economic growth, like land, labour and capital contribute. However, in neoclassical production functions, where the output of produced goods is related to the inputs provided by the factors of production, no mention is made of the contribution of natural resources to the production process. Hence, 'technology' is reified as a separate, self-contained device, capable of contributing to production without receiving any natural resource inputs beforehand. This representation of 'technology' also prevails in standard mainstream economics textbooks on the subject.
In ecological economics, on the other hand, 'technology' is represented as the way natural resources are transformed in the production process. Hence, Herman Daly argues that the role of technology in the economy cannot be properly conceptualized without taking into account the flow of natural resources necessary to support the technology itself: An internal combustion engine runs on fuels; machinery and electric devices run on electricity; all capital equipment is made out of material resources to begin with. In physical terms, any technology — useful though it is — works largely as a medium for transforming valuable natural resources into material goods that eventually end up as valueless waste and pollution, thereby increasing the entropy — or disorder — of the world as a whole. This view of the role of technology in the economy has been termed 'entropy pessimism'.
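The contrast can be stated in conventional notation; the formulation below is illustrative and not drawn from the sources discussed here. A neoclassical production function of the Cobb–Douglas type relates output to capital and labour alone, whereas an ecologically extended function also requires a flow of natural resources as an input:

\[
Y = A\,K^{\alpha}L^{1-\alpha}
\qquad\text{versus}\qquad
Y = A\,K^{\alpha}L^{\beta}R^{1-\alpha-\beta},
\]

where Y denotes output, K capital, L labour, R the flow of natural resources, and A the state of technology. In the first form, output can in principle grow indefinitely while R never appears; in the second, output falls to zero whenever R = 0, however advanced the technology. This is a compact expression of the two positions described above.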
From the ecological point of view, it has been suggested that the disagreement boils down to a matter of teaching some elementary physics to the uninitiated neoclassical economists and other technological optimists. From the neoclassical point of view, leading growth theorist and Nobel Prize laureate Robert Solow has defended his much criticised position by replying in 1997 that 'elementary physics' has not by itself prevented growth in the industrialized countries so far.
Resource decoupling and the rebound effect
Resource decoupling occurs when economic activity becomes less intensive ecologically: A declining input of natural resources is needed to produce one unit of output on average, measured by the ratio of total natural resource consumption to gross domestic product (GDP). Relative resource decoupling occurs when natural resource consumption declines relative to GDP, that is, when resource use grows more slowly than the economy as a whole. Absolute resource decoupling occurs when natural resource consumption declines in absolute terms, even while GDP is growing.
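Stated compactly (with notation chosen here for illustration rather than taken from the literature cited), write R for total natural resource consumption and Y for GDP, so that resource intensity is the ratio R/Y:

\[
\text{relative decoupling:}\quad \frac{d}{dt}\!\left(\frac{R}{Y}\right) < 0,
\qquad
\text{absolute decoupling:}\quad \frac{dR}{dt} < 0 \ \text{ while } \ \frac{dY}{dt} > 0.
\]

Relative decoupling is thus compatible with ever-rising total resource use, so long as GDP rises faster than resource consumption.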
In the history of economic thought, William Stanley Jevons was the first economist of some standing to analyse the occurrence of resource decoupling, although he did not use this term. In his 1865 book on The Coal Question, Jevons argued that an increase in energy efficiency would by itself lead to more, not less, consumption of energy: Due to the income effect of the lowered energy expenditures, people would be rendered better off and demand even more energy, thereby outweighing the initial gain in efficiency. This mechanism is known today as the Jevons paradox or the rebound effect. Jevons's analysis of this seeming paradox formed part of his general concern that Britain's industrial supremacy in the 19th century would soon be set back by the inevitable exhaustion of the country's coal mines, whereupon the geopolitical balance of power would tip in favour of countries abroad possessing more abundant mines.
In 2009, two separate studies were published that — among other things — addressed the issues of resource decoupling and the rebound effect: German scientist and politician Ernst Ulrich von Weizsäcker published Factor Five: Transforming the Global Economy through 80% Improvements in Resource Productivity, co-authored with a team of researchers from The Natural Edge Project. British ecological economist Tim Jackson published Prosperity Without Growth, drawing extensively from an earlier report authored by him for the UK Sustainable Development Commission. Consider each in turn:
Ernst Ulrich von Weizsäcker argues that a new economic wave of innovation and investment — based on increasing resource productivity, renewable energy, industrial ecology and other green technology — will soon kick off a 'Green Kondratiev' cycle, named after the Russian economist Nikolai Kondratiev. This new long-term cycle is expected to bring about as much as an 80 percent increase in resource productivity, or what amounts to a 'Factor Five' improvement of the gross input per output ratio in the economy, and reduce environmental impact accordingly, von Weizsäcker promises. Regarding the adverse rebound effect, von Weizsäcker notes that "... efforts to improve efficiency have been fraught with increasing overall levels of consumption." As remedies, von Weizsäcker recommends three separate approaches: Recycling of and imposing restrictions on the use of materials; establishing capital funds from natural resource proceeds for reinvestments in order to compensate for the future bust caused by depletion; and finally, taxing resource consumption so as to balance it with the available supplies.
Tim Jackson points out that according to empirical evidence, the world economy has indeed experienced some relative resource decoupling: In the period from 1970 to 2009, the 'energy intensity' — that is, the energy content embodied in world GDP — decreased by 33 percent; but as the world economy also kept growing, carbon dioxide emissions from fossil fuels increased by 80 percent during the same period of time. Hence, no absolute energy resource decoupling materialized. Regarding key metal resources, the development was even worse in that not even relative resource decoupling has materialized in the period from 1990 to 2007: The extraction of iron ore, bauxite, copper and nickel was rising faster than world GDP to the effect that "resource efficiency is going in the wrong direction," mostly due to emerging economies — notably China — building up their infrastructure. Jackson concludes his survey by noting that the 'dilemma of growth' is evident when any resource efficiency squeezed out of the economy will sooner or later be pushed back up again by a growing GDP. Jackson further cautions that "simplistic assumptions that capitalism's propensity for efficiency will stabilize the climate and solve the problem of resource scarcity are almost literally bankrupt."
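The arithmetic behind these figures follows from the identity relating emissions, intensity and output. The decomposition below is illustrative: it treats carbon intensity as moving in step with energy intensity, and the GDP growth factor of roughly 2.7 is derived here rather than quoted from Jackson:

\[
E = \frac{E}{Y}\cdot Y
\quad\Longrightarrow\quad
\frac{E_{2009}}{E_{1970}} = \frac{e_{2009}}{e_{1970}}\cdot\frac{Y_{2009}}{Y_{1970}},
\qquad
1.8 \approx 0.67 \times 2.7,
\]

where E denotes emissions, Y world GDP and e = E/Y the intensity. A 33 percent fall in intensity was thus overwhelmed by a roughly 2.7-fold growth in world GDP: relative decoupling without absolute decoupling.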
Herman Daly has argued that the best way to increase natural resource efficiency (decouple) and to prevent the occurrence of any rebound effects is to impose quantitative restrictions on resource use by establishing a cap and trade system of quotas, managed by a government agency. Daly believes this system features a unique triple advantage:
Absolute and permanent limits are set on the extraction rate of, use of and pollution with the resources flowing through the economy; as opposed to taxes that merely alter the prevailing price structure without stopping growth; and as opposed to pollution standards and control which are both costly and difficult to enact and enforce.
More efficiency and recycling efforts are induced by the higher resource prices resulting from the restrictions (quota prices plus regular extraction costs).
No rebound effects are able to appear, as any temporary excess demand will result only in inflation or shortages, or both — and not in increased supply, which is to remain constant and limited on a permanent basis.
For all its merits, Daly himself points to the existence of physical, technological and practical limitations to how much efficiency and recycling can be achieved by this proposed system. The idea of absolute decoupling ridding the economy as a whole of any dependence on natural resources is ridiculed polemically by Daly as 'angelizing GDP': It would work only if we ascended to become angels ourselves.
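Daly's quota mechanism can be illustrated with a minimal numerical sketch. The following Python snippet is a hypothetical toy model under invented parameters, not an implementation drawn from Daly's work: it assumes a linear demand curve and shows how, with supply fixed at the cap, excess demand raises the quota price (inducing efficiency and recycling) rather than the quantity supplied (blocking any rebound in physical throughput).

```python
# Hypothetical toy model of a depletion quota (illustrative only).
# Demand for a resource is assumed linear: quantity = intercept - slope * price,
# where price = extraction cost + quota price. All parameter values are invented.

def quota_price(cap, intercept=100.0, slope=2.0, extraction_cost=10.0):
    """Quota price that clears the market when total supply is fixed at `cap`.

    If the cap exceeds unconstrained demand, the quota is not binding and its
    price is zero; otherwise the price rises until demand equals the cap.
    """
    market_price = (intercept - cap) / slope  # price at which demand == cap
    return max(0.0, market_price - extraction_cost)

if __name__ == "__main__":
    for cap in (80, 60, 40, 20):  # progressively tighter caps
        print(f"cap={cap:>3}  quota price={quota_price(cap):6.2f}")
    # Tightening the cap raises the quota price while physical throughput
    # stays at the cap: higher prices induce efficiency and recycling, but
    # cannot call forth extra supply, so there is no rebound in quantity.
```

On this stylized picture, a tax would alter the price while leaving the quantity free to rebound, whereas the cap fixes the quantity and lets the price absorb the pressure; this is the contrast Daly draws between quotas and taxes.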
Declining-state economy
A declining-state economy is an economy made up of a declining stock of physical wealth (capital) or a declining population size, or both. A declining-state economy is not to be confused with a recession: Whereas a declining-state economy is established as the result of deliberate political action, a recession is the unexpected and unwelcome failure of a growing or a steady economy.
Proponents of a declining-state economy generally believe that a steady-state economy is not far-reaching enough for the future of mankind. Some proponents may even reject modern civilization as such, either partly or completely, whereby the concept of a declining-state economy begins bordering on the ideology of anarcho-primitivism, on radical ecological doomsaying or on some variants of survivalism.
Romanian American economist Nicholas Georgescu-Roegen was the teacher and mentor of Herman Daly and is presently considered the main intellectual figure influencing the degrowth movement that formed in France and Italy in the early 2000s. In his paradigmatic magnum opus on The Entropy Law and the Economic Process, Georgescu-Roegen argues that the carrying capacity of earth — that is, earth's capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse. In effect, Georgescu-Roegen points out that the arguments advanced by Herman Daly in support of his steady-state economy apply with even greater force in support of a declining-state economy: When the overall purpose is to ration and stretch mineral resource use for as long a time into the future as possible, zero economic growth is more desirable than growth is, true; but negative growth is better still! Instead of Daly's steady-state economics, Georgescu-Roegen proposed his own so-called 'minimal bioeconomic program', featuring restrictions even more severe than those propounded by his former student Daly (see above).
American political advisor Jeremy Rifkin, French champion of the degrowth movement Serge Latouche and Austrian degrowth theorist Christian Kerschner — who all take their cue from Georgescu-Roegen's work — have argued in favour of declining-state strategies. Consider each in turn:
In his book on Entropy: A New World View, Jeremy Rifkin argues that the impending exhaustion of earth's mineral resources will mark the decline of the industrial age, followed by the advent of a new solar age based on renewable solar power. Due to the diffuse, low-intensity character of solar radiation, he argues, this source of energy is incapable of sustaining industrialism, whether capitalist or socialist. Consequently, Rifkin advocates an anarcho-primitivist future solar economy — or what he terms an 'entropic society' — based on anti-consumerism, deindustrialization, counterurbanization, organic farming and prudential restraints on childbirth. Rifkin cautions that the transition to the solar age is likely to become a troublesome phase in the history of mankind, as the present world economy is so dependent on non-renewable mineral resources.
In his manifesto on Farewell to Growth, Serge Latouche develops a strategy of so-called 'ecomunicipalism' to initiate a 'virtuous cycle of quiet contraction', or degrowth, of economic activity at the local level of society: Consumption patterns and addiction to work should be reduced; systems of fair taxation and consumption permits should redistribute the gains from economic activity within and among countries; obsolescence and waste should be reduced, and products designed so as to make recycling easier. This bottom-up strategy opposes overconsumption in rich countries as well as the aspiration of poor, emerging countries to emulate that overconsumption. Instead, the purpose of degrowth is to establish a convivial and sustainable society where people can live better lives whilst working and consuming less. Latouche further cautions that "the very survival of humanity ... means that ecological concerns must be a central part of our social, political, cultural and spiritual preoccupation with human life."
Herman Daly on his part is not opposed to the concept of a declining-state economy; but he does point out that the steady-state economy should serve as a preliminary first step on a declining path, once the optimal levels of population and capital have been properly defined. This first step is an important one, even though Daly concedes that it is 'difficult, probably impossible' to define such optimum levels; more than that, in his final analysis Daly agrees with his teacher and mentor Georgescu-Roegen that no defined optimum will be able to last forever (see above).
Capitalism without growth
Several radical critics of capitalism have questioned the possibility of ever imposing a steady-state or a declining-state (degrowth) system as a superstructure on top of capitalism. Taken together, these critics point to the following growth dynamics inherent in capitalism:
Economic activity is generally guided by the profit motive, a competitive work ethos and the drive to accumulate capital and wealth for its own sake to gratify personal ambition, provide social prestige — or simply to get rich in a hurry. Psychologically, these drives in the work sphere repress and distort biological and social homeostasis in most people.
Employments and incomes depend directly on sales revenues, that is, on people spending money on the consumption of goods and services for sale on the market. This dependency creates a pecuniary incentive to increase sales as much as possible. To this end, much cunning advertising is devised to manipulate human wants and prop up consumption patterns, often resulting in lavish and wasteful consumerism.
The financial system is based on fractional-reserve banking, enabling commercial banks to hold reserves in amounts that are less than their deposit liabilities. This credit creation is multiplying the monetary base supplied by the central bank in order to assist private corporations expanding their activities.
Technological development exhibits a strong labour-saving bias, creating the need to provide new employment elsewhere in the economy for workers displaced by the introduction of new technology.
Private corporations generally resist government regulations and restrictions that impede profits and deter investment opportunities. Attempts to downscale the economy would rapidly degenerate into economic crisis and political instability on this count alone.
Governments need tax revenues to service their debt obligations, run their institutions and finance their welfare programmes for the benefit of the public. Tax revenues are collected from general economic activity.
In the capitalist world economy, globalisation intensifies competition everywhere, both within and between countries. National governments are compelled to compete and struggle with each other to provide employment, investments, tax revenues and wealth for their own populations.
— In short: There is no end to the systemic and ecologically harmful growth dynamics in modern capitalism, radical critics assert.
Fully aware of the massive growth dynamics of capitalism, Herman Daly on his part poses the rhetorical question whether his concept of a steady-state economy is essentially capitalistic or socialistic. In his answer, written in 1980, Daly concludes by inviting all people — both liberal supporters and radical critics of capitalism — to join him in his effort to develop a steady-state economy.
Pushing some of the terrestrial limits into outer space
Ever since the beginning of the modern Space Age in the 1950s, some space advocates have pushed for space habitation, frequently in the form of colonization, with some arguing that it could alleviate human overpopulation and overconsumption and mitigate the human impact on the environment on Earth (if not for other reasons).
In the 1970s, physicist and space activist Gerard K. O'Neill developed an ambitious plan to build human settlements in outer space to solve the problems of overpopulation and limits to growth on earth without recourse to political repression. According to O'Neill's vision, mankind could — and indeed should — expand on this man-made frontier to many times the current world population and generate large amounts of new wealth in space. Herman Daly countered O'Neill's vision by arguing that a space colony would become subject to much harsher limits to growth — and hence, would have to be secured and managed with much more care and discipline — than a steady-state economy on the large and resilient earth. Although the number of individual colonies supposedly could be increased without end, living conditions in any one particular colony would become very restricted nonetheless. Therefore, Daly concluded: "The alleged impossibility of a steady-state on earth provides a poor intellectual launching pad for space colonies."
By the 2010s, O'Neill's old vision of space colonisation had long since been turned upside down in many places: Instead of dispatching colonists from earth to live in remote space settlements, some ecology-minded space advocates conjecture that resources could be mined from asteroids in space and transported back to earth for use here. This new vision has the same double advantage of (partly) mitigating ecological pressures on earth's limited mineral reserves while also boosting exploration and colonisation of space. The building up of industrial infrastructure in space would be required for the purpose, as well as the establishment of a complete supply chain up to the level of self-sufficiency and then beyond, eventually developing into a permanent extraterrestrial source of wealth to provide an adequate return on investment for stakeholders. In the future, such an 'exo-economy' (off-planet economy) could possibly even serve as the first step towards mankind's cosmic ascension to a 'Type II' civilisation on the hypothetical Kardashev scale, in case such an ascension will ever be accomplished.
However, it is as yet uncertain whether an off-planet economy of the type specified will develop in due time to match both the volume and the output mix needed to fully replace earth's dwindling mineral reserves. Sceptics like Herman Daly and others point to the exorbitant earth-to-orbit launch costs of any space mission, the inaccurate identification of target asteroids suitable for mining, and remote in situ ore extraction difficulties as obvious barriers to success: investing a lot of terrestrial resources in order to recover only a few resources from space in return is not worthwhile in any case, regardless of the scarcities, technologies and other mission parameters involved in the venture. In addition, even if an off-planet economy could somehow be established at some future point, one long-term predicament would then loom large regarding the continuous mining and transportation of massive volumes of materials from space back to earth: how to keep that volume flowing on a steady and permanent basis in the face of the astronomically long distances and time scales ever present in space. In the worst case, all of these obstacles could forever prevent any substantial pushing of limits into outer space — and then limits to growth on earth will remain the only limits of concern throughout mankind's entire span of existence.
Implementation
Today, no state officially implements a steady-state economy, but some measures do limit growth or aim at a steady per capita level of consumption of certain products:
Phase-outs of lightweight plastic bags, which reduce the consumption of bags and limit the number of bags used per capita.
Reducing the consumption of energy is a popular measure implemented by many, generally under the headings of energy efficiency and energy saving. A coalition named the "3% Club for Energy Efficiency" was formed with a target of increasing energy efficiency by 3% per year. According to the International Energy Agency, energy efficiency can deliver more than 40% of the reduction in greenhouse-gas emissions needed to reach the target of the Paris Agreement.
At the 2019 UN Climate Action Summit, a coalition named "Action Towards Climate Friendly Transport" was created; its main targets include city planning that reduces the need for transport and a shift to non-motorized transport systems. Such measures reduce the consumption of fuel.
An increasingly popular method is reduce, reuse and recycle: for example, reusing clothes through the second-hand market or renting them. The second-hand clothing market was worth $24 billion as of 2018 and is expected to become more profitable than the fast fashion market within the next few years; H&M is among the companies trying to implement this approach.
Some countries have adopted alternative measures of success to gross domestic product:
Bhutan measures success by Gross National Happiness, a measure that has been partly adopted in other countries.
Other popular measures include Gross National Well-being, the Better Life Index and the Social Progress Index. As of 2014, the Happy Planet Index is used in 153 countries, and the OECD Better Life Index in the 36 OECD member countries.
Ecuador and Bolivia have included in their constitutions the ideology of Sumak Kawsay (Buen Vivir), which "incorporates ideas of de-growth", i.e. contains some principles of the steady-state economy.
See also
History of economic thought
Circular economy
Classical economics
Creative destruction
Criticism of capitalism
Degrowth
Ecological economics
Economic equilibrium
Post-growth
The Limits to Growth
Prosperity Without Growth
Market failure: Ecological market failure
Environmentalism
Ecological footprint
Planetary boundaries
Planned economy
Sustainability: Carrying capacity
Human overpopulation
Jevons paradox
Peak minerals
Kenneth E. Boulding
Herman Daly
Nicholas Georgescu-Roegen: Criticising Daly's steady-state economics
Sea level rise
References
External links
Websites
CASSE, Center for the Advancement of the Steady State Economy.
ISEE, The International Society for Ecological Economics.
Global Footprint Network. Advancing the Science of Sustainability.
Steady State Revolution. Fighting for a Sustainable World with a Steady State Economy.
Post Growth Institute. Creating global prosperity without economic growth.
Articles
Interviews and other material related to Herman Daly
(Lengthy interview spanning fifteen web pages)
(Excerpt from his Steady-state economics)
(Essay summarizing his views)
Economics of sustainability
Economic growth
Demographic economic problems
Human overpopulation
Human impact on the environment
Global environmental issues
Ecological economics
Environmental social science
Green politics
Waste minimisation
Energy conservation
Natural resource management
Schools of economic thought
Economic systems
Degrowth
Future problems
Ecological economics concepts | Steady-state economy | [
"Environmental_science"
] | 14,336 | [
"Degrowth",
"Environmental social science",
"Environmental ethics"
] |
1,517,677 | https://en.wikipedia.org/wiki/Majolica | In different periods of time and in different countries, the term majolica has been used for two distinct types of pottery.
Firstly, from the mid-15th century onwards, maiolica was a type of pottery reaching Italy from Spain, Majorca and beyond. This was made by a tin-glaze process (dip, dry, paint, fire), resulting in an opaque white glazed surface decorated with brush-painting in metal oxide enamel colour(s). During the 17th century, the English added the letter j to their alphabet. Maiolica thereafter was commonly anglicized to majolica.
Secondly, from mid- to late 19th century, majolica was made by a simpler process (painting and then firing) whereby coloured lead silicate glazes were applied directly to an article, then fired. This resulted in brightly coloured, hard-wearing, inexpensive wares that were both useful and decorative, often with a naturalistic style. This type of majolica was introduced to the public at the 1851 Great Exhibition in London, later widely copied and mass-produced. Minton & Co., who developed the coloured lead glazes product, also developed and exhibited at the 1851 Exhibition a tin-glazed product in imitation of Italian maiolica which also became known as majolica.
Terminology
The notes in this article append tin-glazed to the word meaning 'opaque white tin-glaze, painted in enamels', and coloured glazes to the word meaning 'coloured lead glazes, applied direct to the biscuit'.
Mintons' description
Leon Arnoux, the artistic and technical director of Mintons, wrote in 1852, "We understand by majolica a pottery formed of a calcareous clay gently fired, and covered with an opaque enamel composed of sand, lead, and tin...".
Arnoux was describing the Minton & Co. tin-glazed product made in imitation of Italian maiolica both in process and in styles. Tin-glaze is simply plain lead glaze with a little tin oxide added. His description is often referenced, in error, as a definition of Minton's other new product, the much copied and later mass-produced ceramic sensation of the Victorian era, Minton's coloured lead glazes, Palissy ware. The 16th-century French pottery of Bernard Palissy was well known and much admired. Mintons adopted the name 'Palissy ware' for their new coloured glazes product, but this soon became known also as majolica.
Majolica described according to design as opposed to process
Some authors describe Minton majolica as falling into two main design styles: wares inspired by the natural world (naturalistic), and those inspired by historical wares (revivalist).
Minton Archive first design for majolica
Thomas Kirkby's design G144 in the Minton Archive is inscribed "This is the First Design for Majolica...". The design is Italian Renaissance in style. Close-up images illustrate a design suited for fine brushwork on flat surfaces. The design is for Minton's rare tin-glaze majolica imitation of Italian tin-glaze maiolica. Minton's designs for Palissy ware, also known as majolica, were suited for 'thick' painting of coloured lead glazes onto surfaces moulded in relief to make best use of the intaglio effect.
Coloured glazes earthenware
Earthenware coated with coloured lead glazes applied directly to an unglazed body has from the mid-19th century onwards been called majolica, e.g.: 20th-century majolica, Mexican majolica, Sarreguemines majolica, Palissy majolica, majolica-glazed Parian ware. The science involved in the development of multiple temperature-compatible coloured lead glazes was complex, but the process itself was simple (paint, fire). This majolica is the vibrantly coloured, frequently naturalistic style of earthenware developed and named Palissy ware by Minton & Co. and introduced to the public at the 1851 Great Exhibition; it was mass-produced throughout Europe and America and is widely available. In English this majolica is never spelt with an i in place of the j. It is, however, pronounced both with a hard j as in major and with a soft j as in maiolica. In some other languages, such as French and Italian, the spelling with i is indeed used for both coloured glazes earthenware and tin-glazed earthenware.
Biscuitware was painted with thick coloured lead glazes simultaneously, then fired. The process requires just two stages and skill in painting. When fired in the kiln, every colour fuses to the body, usually without running into each other. The ceramic technology, which transformed the fortunes of Mintons, was developed by art director Leon Arnoux.
Tin-glazed earthenware
Tin-glazed earthenware having an opaque white glaze with painted overglaze decoration of metal oxide enamel colour(s) is known as maiolica. It reached Italy by the mid-15th century. It is frequently prone to flaking and somewhat delicate. The word is also spelt with a j, majolica. In contemporary England the use of maiolica spelt with an i tends to be restricted to Renaissance Italian maiolica. In the US majolica spelt with a j is used for both coloured glazes majolica and tin-glazed. In France and other countries, tin-glazed maiolica developed also as faience, and in the UK and Netherlands as delftware. In France, Germany, Italy, Spain, Mexico and Portugal, tin-glazed wares are called faïence, Fayence, maiolica, mayólica, talavera, and faiança respectively.
Ware dipped (or coated) in tin glaze, set aside to dry, brush-painted on the unfired glaze, then fired. Process requires four separate stages and high skill in painting.
Majolica types, detail
Examples showing detail of coloured glazes majolica (paint, fire) versus tin-glazed majolica (dip, dry, paint, fire).
Collectors of majolica
Famous collectors of majolica include William Randolph Hearst, Mortimer L. Schiff, Alfred Pringsheim, Robert Strauss, and Robert Lehman.
In contemporary fiction
The Majolica Murders by Deborah Morgan
See also
Lustreware
Talavera de la Reina pottery
Tin-glazing
Victorian majolica
Citations
General and cited references
Arnoux, Leon, British Manufacturing Industries, Gutenberg, 1877.
Atterbury, Paul, and Batkin, Maureen, Dictionary of Minton, Antique Collectors' Club, 1990.
External links
The Majolica Society
Potteries Museum and Art Gallery, Stoke-on-Trent, UK
Victoria and Albert Museum, Majolica
The Minton Archive "Magnificent Majolica" archive patterns for Minton tin-glazed majolica.
American pottery
Austrian pottery
Ceramic glazes
English pottery
French pottery
Pottery
Types of pottery decoration | Majolica | [
"Chemistry"
] | 1,478 | [
"Ceramic glazes",
"Coatings"
] |
1,517,731 | https://en.wikipedia.org/wiki/EPICS | The Experimental Physics and Industrial Control System (EPICS) is a set of software tools and applications used to develop and implement distributed control systems to operate devices such as particle accelerators, telescopes and other large scientific facilities. The tools are designed to help develop systems which often feature large numbers of networked computers delivering control and feedback. They also provide SCADA capabilities.
History
EPICS was initially developed as the Ground Test Accelerator Controls System (GTACS) at Los Alamos National Laboratory (LANL) in 1988 by Bob Dalesio, Jeff Hill, et al. In 1989, Marty Kraimer from Argonne National Laboratory (ANL) came to work alongside the GTA controls team for 6 months, bringing his experience from his work on the Advanced Photon Source (APS) Control System to the project. The resulting software was renamed EPICS and was presented at the International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) in 1991.
EPICS was originally available under a commercial license, with enhanced versions sold by Tate & Kinetic Systems. Licenses for collaborators were free, but required a legal agreement with LANL and APS. An EPICS community was established and development grew as more facilities joined in with the collaboration. In February 2004, EPICS became freely distributable after its release under the EPICS Open License.
It is now used and developed by over 50 large science institutions worldwide, as well as by several commercial companies.
Architecture
EPICS uses client–server and publish–subscribe techniques to communicate between computers. Servers, the “input/output controllers” (IOCs), collect experiment and control data in real time, using the measurement instruments attached to them. This information is then provided to clients, using the high-bandwidth Channel Access (CA) or the recently added pvAccess networking protocols that are designed to suit real-time applications such as scientific experiments.
IOCs hold and interact with a database of "records", which represent either devices or aspects of the devices to be controlled. IOCs can be hosted by stock-standard servers or PCs or by VME, MicroTCA, and other standard embedded system processors. For "hard real-time" applications the RTEMS or VxWorks operating systems are normally used, whereas "soft real-time" applications typically run on Linux or Microsoft Windows.
Data held in the records are represented by unique identifiers known as Process Variables (PVs). These PVs are accessible over the network channels provided by the CA/pvAccess protocol.
Many record types are available for various types of input and output (e.g., analog or binary) and to provide functional behaviour such as calculations. It is also possible to create custom record types. Each record consists of a set of fields, which hold the record's static and dynamic data and specify behaviour when various functions are requested locally or remotely. Most record types are listed in the EPICS record reference manual.
Graphical user interface packages are available, allowing users to view and interact with PV data through typical display widgets such as dials and text boxes. Examples include EDM (Extensible Display Manager), MEDM (Motif/EDM), and CSS.
Any software that implements the CA/pvAccess protocol can read and write PV values. Extension packages are available to provide support for MATLAB, LabVIEW, Perl, Python, Tcl, ActiveX, etc. These can be used to write scripts to interact with EPICS-controlled equipment.
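As an illustration of how such client-side scripting typically looks, here is a minimal sketch using the third-party pyepics bindings for Channel Access; the PV names (DEMO:TEMPERATURE, DEMO:SETPOINT) are hypothetical examples, not real process variables.

```python
import epics  # pyepics: Python bindings for EPICS Channel Access

# One-shot read and write of Process Variables by name.
temperature = epics.caget("DEMO:TEMPERATURE")
epics.caput("DEMO:SETPOINT", 21.5)

# A PV object keeps the channel open and supports monitoring.
pv = epics.PV("DEMO:TEMPERATURE")

def on_change(pvname=None, value=None, **kwargs):
    # Called by pyepics on every value update pushed by the IOC.
    print(f"{pvname} changed to {value}")

pv.add_callback(on_change)
```

Because Channel Access uses publish–subscribe semantics, the callback is driven by updates from the IOC rather than by client polling.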
Facilities using EPICS
Commercial Users
BiRa Systems
Ciemat
CosyLab
GLResearch
idt
Mobiis
Nusano, Inc
Observatory Sciences
Osprey Distributed Control Systems
Varian Medical Systems
Pyramid Technical Consultants
See also
TANGO control system
SCADA—Supervisory Control And Data Acquisition
References
External links
EPICS Record Reference Manual
Science software
Physics software
Experimental particle physics
Industrial automation software | EPICS | [
"Physics"
] | 798 | [
"Computational physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Physics software"
] |
1,517,745 | https://en.wikipedia.org/wiki/Monocrystalline%20whisker | A monocrystalline whisker is a filament of material that is structured as a single, defect-free crystal. Some typical whisker materials are graphite, alumina, iron, silicon carbide and silicon. Single-crystal whiskers of these (and some other) materials are known for having very high tensile strength (on the order of 10–20 GPa). Whiskers are used in some composites, but large-scale fabrication of defect-free whiskers is very difficult.
Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known, and were featured regularly in science fiction as materials for fabrication of space elevators, arcologies, and other large structures. Despite showing great promise for a range of applications, their usage has been hindered by concerns over their effects on health when inhaled.
See also
Whisker (metallurgy) – Self-organizing metallic whisker-shaped structures that cause problems with electronics.
Laser-heated pedestal growth
References
"Mechanical and Physical Properties of Whiskers", CRC Handbook of Chemistry and Physics, 55th edition.
Materials | Monocrystalline whisker | [
"Physics"
] | 251 | [
"Materials stubs",
"Materials",
"Matter"
] |
1,517,834 | https://en.wikipedia.org/wiki/Axial%20pen%20force | In graphonomics, Axial pen force is the component of the normal pen force that is parallel to the pen. It is dependent upon pen tilt. In the special case of a perfectly vertical orientation of the writing instrument the axial pen force
equals the normal pen force.
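As a sketch of the geometry (an illustrative formalization, not taken from the linked taxonomy): assuming the writing surface is horizontal and the pen is tilted by an angle θ from the vertical, the axial component of the normal pen force F_N is

```latex
F_{\mathrm{axial}} = F_N \cos\theta ,
```

which reduces to F_axial = F_N at θ = 0, the perfectly vertical case mentioned above.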
See also
Graphonomics
External links
http://hwr.nici.ru.nl/~miami/taxonomy/node54.html
http://hwr.nici.kun.nl/~miami/taxonomy/node57.html
Pen ergonomics
Penmanship
Force | Axial pen force | [
"Physics",
"Mathematics"
] | 115 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics stubs",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
1,517,848 | https://en.wikipedia.org/wiki/International%20Centre%20for%20Diffraction%20Data | The International Centre for Diffraction Data (ICDD) maintains a database of powder diffraction patterns, the Powder Diffraction File (PDF), including the d-spacings (related to angle of diffraction) and relative intensities of observable diffraction peaks. Patterns may be experimentally determined, or computed based on crystal structure and Bragg's law. It is most often used to identify substances based on x-ray diffraction data, and is designed for use with a diffractometer. The PDF contains more than a million unique material data sets. Each data set contains diffraction, crystallographic and bibliographic data, as well as experimental, instrument and sampling conditions, and select physical properties in a common standardized format.
The organization was founded in 1941 as the Joint Committee on Powder Diffraction Standards. In 1978, the current name was adopted to highlight the global commitment of this scientific endeavor.
The ICDD is a nonprofit scientific organization working in the field of X-ray analysis and materials characterization. It produces materials databases, characterization tools, and educational materials, as well as organizing and supporting global workshops, clinics and conferences.
Products and services of the ICDD include the paid, subscription-based Powder Diffraction File databases (PDF-2, PDF-4+, PDF-4+/Web, PDF-4/Minerals, PDF-4/Organics, PDF-4/Axiom, and ICDD Server Edition), educational workshops, clinics, and symposia. It is a sponsor of the Denver X-ray Conference and the Pharmaceutical Powder X-ray Diffraction Symposium. It also publishes the journals Advances in X-ray Analysis and Powder Diffraction.
In 2019, Materials Data, also known as MDI, merged with ICDD. Materials Data creates JADE software used to collect, analyze, and simulate XRD data and solve issues in an array of materials science projects.
In 2020, the ICDD and the Cambridge Crystallographic Data Centre, which curates and maintains the Cambridge Structural Database, announced a data partnership.
See also
Powder diffraction
Crystallography
References
External links
History, contents & use of the PDF
Materials Data
Advances in X-ray Analysis—Technical articles on x-ray methods and analyses
Powder Diffraction Journal—quarterly journal published by the JCPDS-International Centre for Diffraction Data through the Cambridge University Press
Denver X-ray Conference—World's largest X-ray conference on the latest advancements in XRD and XRF
PPXRD-16 —Pharmaceutical Powder X-ray Diffraction Symposium
Crystallography organizations
Diffraction
Optics institutions
Organizations established in 1941 | International Centre for Diffraction Data | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy"
] | 550 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Diffraction",
"Crystallography",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
"Crystallography organizations"
] |
1,517,935 | https://en.wikipedia.org/wiki/Space%20Mirror%20Memorial | The Space Mirror Memorial, which forms part of the larger Astronauts Memorial, is a National Memorial on the grounds of the John F. Kennedy Space Center Visitor Complex on Merritt Island, Florida. It is maintained by the Astronauts Memorial Foundation (AMF), whose offices are located in the NASA Center for Space Education next door to the Visitor Complex. The memorial was designed in 1987 by Holt Hinshaw Pfau Jones, and dedicated on May 9, 1991, to remember the lives of the men and women who have died in the various space programs of the United States, particularly those of NASA. The Astronauts Memorial has been designated by the U.S. Congress "as the national memorial to astronauts who die in the line of duty" (Joint Resolution 214, 1991).
In addition to 20 NASA career astronauts, the memorial includes the names of a U.S. Air Force X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, a civilian spaceflight participant who died in the Challenger disaster, and an Israeli astronaut who was killed in the Columbia disaster.
In July 2019, the AMF unanimously voted to include private astronauts on the memorial, recognizing the important contributions made to the American space program by private spaceflight crew members. The first private astronaut to be added to the wall was Scaled Composites pilot Michael T. Alsbury, who died in the crash of SpaceShipTwo VSS Enterprise on October 31, 2014. His name was added to the memorial on January 25, 2020.
Memorial elements
The primary feature of the memorial is the Space Mirror, a flat expanse of polished black granite, 42.5 feet high by 50 feet wide (approximately 13 by 15 metres), divided into 90 smaller panels. The names of the 25 astronauts who have died are scattered over the mirror, with names of astronauts who died in the same incident grouped on the same panel, or pairs of adjacent panels. The names are cut completely through the surface, exposing a translucent backing, and filled with translucent acrylic, which is then backlit with LED lights, causing the names to glow and appear to float in a reflection of the sky.
Near the Space Mirror is a granite wall, bearing pictures and brief biographies of those listed on the Mirror. The Space Mirror Memorial was designed by Wes Jones of Holt Hinshaw Pfau Jones and was commissioned after he won an international design competition.
Defunct Sun tracking mechanism
The memorial as built incorporated motors and jackscrews to constantly track the Sun across the sky in both pan and tilt axes. Parabolic reflectors on the back side of the mirror would then direct the sunlight through the acrylic panels to brilliantly illuminate the honorees' names with sunlight. Supplemental floodlights were used when the sunlight was inadequate.
In 1997, the tracking system failed, allowing part of the monument to strike a steel beam on an adjacent platform. Insurance paid $375,000 for repair work, but later, the mechanism again ground to a halt, due to further problems with the slewing ring.
Estimated cost of repairs was around $700,000, and the Astronauts Memorial Foundation unanimously decided the money would be better spent on educational programs instead. The floodlights were repositioned and are kept burning 24 hours a day to illuminate the memorial.
Memorial funding
The Space Mirror Memorial cost $6.2 million.
The memorial was to be partially funded by the sales of "Space Shots" trading cards. An agreement was made for 25% of Space Shots profits, in exchange for guaranteeing a $160,000 loan. A projected $400,000 was owed to the Foundation, which was never paid.
The Space Mirror Memorial and the Astronauts Memorial Foundation are funded in part by a specialty vehicle registration plate issued by the state of Florida. Called the Challenger plate, it was first issued in 1987, and was the first specialty plate issued by the state. The third edition, introduced in 2004, includes Columbia in the text, and is now termed the Challenger/Columbia plate. License plates brought in $377,000 in 2009.
One quarter of the revenue from the Apollo 11 Fiftieth Anniversary commemorative coins will go to the Astronauts Memorial Foundation.
Honorees
Only those killed during human spaceflight missions or during training for such missions sponsored by the United States are eligible for inclusion in the memorial. For a comprehensive list of space disasters, see List of space disasters.
The people honored on the memorial are:
Theodore Freeman, one of the NASA Astronaut Group 3 recruits from 1963, died in a T-38 training accident on October 31, 1964.
Elliot See and Charles Bassett were killed in a T-38 accident on February 28, 1966, when their aircraft crashed into McDonnell Building 101 on a foggy day. They were originally slated to be the crew of Gemini 9. Bassett was another Group 3 recruit, whereas See was an Astronaut Group 2 recruit from 1962.
Gus Grissom, Ed White, and Roger Chaffee were in the Apollo 1 capsule for plugs-out test on January 27, 1967, when a short circuit ignited flammable materials in the pressurized pure-oxygen atmosphere. The astronauts died of carbon monoxide poisoning before ground crews could reach them. Grissom, one of the Mercury Seven astronauts, had flown twice before. White conducted the first US spacewalk on Gemini 4. Chaffee, a rookie, was a Group 3 recruit.
Clifton Williams died in a T-38 training crash on October 5, 1967. Another Group 3 recruit, he was in the Apollo astronaut rotation, and would have been on the crew of Apollo 12. He was also memorialized by a fourth star on the official Apollo 12 mission patch.
Michael J. Adams died in an X-15 crash on November 15, 1967. He was not a NASA astronaut recruit, but made the memorial by virtue of having earned the Astronaut Badge according to the USAF standard by reaching just over 50 miles in altitude on his fatal flight. He was also in the United States Air Force's Manned Orbiting Laboratory program.
Robert H. Lawrence Jr., died on December 8, 1967, when the F-104 he was in as an instructor pilot for a flight test trainee crashed and his ejection seat parachute failed to open. He was in the Manned Orbiting Laboratory program at the time, and could have been among the first African-American astronauts had he survived to take NASA's offer for all under-35 MOL candidates to join their space program when MOL was scrapped in 1969.
On January 28, 1986, the Space Shuttle Challenger broke apart 73 seconds after liftoff on mission STS-51-L due to a defect in one of the solid rocket boosters. All seven crew members—Francis "Dick" Scobee, Michael J. Smith, Ronald McNair, Gregory Jarvis, Judith Resnik, Ellison Onizuka, and Christa McAuliffe—died. Scobee, McNair, Resnik, and Onizuka had flown before. Resnik was the second American woman in space, after Sally Ride. McAuliffe was participating via the Teacher in Space Project.
M. L. "Sonny" Carter died on April 5, 1991, in the crash of Atlantic Southeast Airlines Flight 2311. Carter was a passenger traveling on NASA business. He had flown on STS-33 and was in training for STS-42 at the time.
On February 1, 2003, the Space Shuttle Columbia disintegrated on re-entry at the end of mission STS-107 due to damage during ascent. The crew was Rick Husband, William C. McCool, David M. Brown, Kalpana Chawla, Michael P. Anderson, Laurel Clark and Ilan Ramon. Husband, Chawla and Anderson were veterans. Ramon was a pilot in the Israeli Air Force.
On October 31, 2014, SpaceShipTwo broke apart during its fourth powered flight, killing co-pilot Michael T. Alsbury and severely injuring the pilot. Both were flying for Scaled Composites on a mission for Virgin Galactic.
Astronauts Memorial Foundation
The Astronauts Memorial Foundation was founded in the summer of 1986, shortly after the Challenger disaster (January 28, 1986), by architect Alan Helman, then Congressman and astronaut Bill Nelson, Leland McKee, business director, Martin Marietta (now Lockheed Martin), Randy Berridge, executive with AT&T, Florida Governor Bob Graham, Ralph Turlington, Florida Commissioner of Education, Senator and Astronaut Jake Garn and other Central Florida and national leaders. On September 4, 1986, Alan Helman and Leland McKee were presented a resolution by Governor Bob Graham and the Florida cabinet fully endorsing the efforts of The Astronauts Memorial Foundation. This included the fundraising efforts of the 67-county Challenger Run Walk a Thon and the specialty license plate. The Astronauts Memorial Foundation license plate, designed by artist Robert McCall, was Florida's first vanity plate, sold starting in December 1986. The plate went on to raise millions for educational purposes in the State of Florida. Other educational efforts continue to this day.
The president of the Astronaut Memorial Foundation was Stephen Feldman from 1999 to 2012. He was paid $303,000 annually. This was criticized as being the highest among 100 of Brevard County non-profits. His salary represented 18.3 percent of the fund's $1.8 million budget in fiscal year 2009. He defended his salary by saying that he was the sole fundraiser and the chief financial officer for the foundation.
Thad Altman became President and CEO of the Astronauts Memorial Foundation in August 2012. The Board of Directors include Eileen Collins - Chairman, Jack Kirschenbaum - Vice Chairman, Gregory H. Johnson - Treasurer, Sheryl L. Chaffee - Secretary.
Gallery
See also
Fallen Astronaut, a memorial to deceased astronauts and cosmonauts placed on the Moon during the 1971 Apollo 15 mission.
List of national memorials of the United States
References
External links
Map:
The Astronauts Memorial Foundation official website
Places of Commemoration: Search for Identity and Landscape Design, Volume 19, Joachim Wolschke-Bulmahn, Dumbarton Oaks, 2001, pages 185-214. .
Congressional Record, 30 April 1991, page 9600, H2578-79. Joint Resolution 214.
Astronaut Memorial Space Mirror
History of spaceflight
Human spaceflight
Space program fatalities
Space Shuttle program
Mirrors
Monuments and memorials in Florida
Tourist attractions in Brevard County, Florida
Merritt Island, Florida
1991 sculptures
Buildings and structures in Merritt Island, Florida
Granite sculptures in Florida
1991 establishments in Florida
National memorials of the United States
Kalpana Chawla
Monuments and memorials to explorers
Gus Grissom
Ed White (astronaut)
Astronauts in art | Space Mirror Memorial | [
"Engineering"
] | 2,166 | [
"Space program fatalities",
"Space programs"
] |
1,518,042 | https://en.wikipedia.org/wiki/MOATA | MOATA was a 100 kW thermal Argonaut class reactor built at the Australian Atomic Energy Commission (later ANSTO) Research Establishment at Lucas Heights, Sydney. MOATA went critical at 5:50am on 10 April 1961 and ended operations on 31 May 1995. MOATA was the first reactor to be decommissioned in Australia in 2009.
Background
The design of the university training reactor MOATA was based on the Argonaut research reactor developed by the Argonne National Laboratory in the mid-1950s, in the United States. Moata is an Aboriginal word meaning "gentle-fire" or "fire-stick".
MOATA was designed and manufactured by the Advanced Technology Laboratories and first went critical on 10 April 1961.
The purpose of the reactor was for training nuclear scientists in reactor operations and neutron physics. However, by the mid-1970s, its official envelope was expanded to include activation analysis and neutron radiography, soil analysis, and nuclear medical research.
Decommissioning
The reactor was shut down in 1995, as it was no longer possible to justify its continued operation economically. Experimental data on nuclear fuel and moderator systems had been accumulated during its lifetime. By 2009, the reactor had been completely dismantled, and the site is now fully restored. It was the first reactor to be decommissioned in Australia.
In 1995, the spent fuel from the reactor was unloaded, and in 2006 it was shipped to the United States under the US Department of Energy's Foreign Research Reactor Spent Nuclear Fuel Acceptance Program.
References
Argonaut class reactor
Science and technology in New South Wales
Defunct nuclear reactors | MOATA | [
"Physics"
] | 319 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
1,518,329 | https://en.wikipedia.org/wiki/Gheorghe%20%C8%9Ai%C8%9Beica | Gheorghe Țițeica (; 4 October 1873 – 5 February 1939) publishing as George or Georges Tzitzéica) was a Romanian mathematician who made important contributions in geometry. He is recognized as the founder of the Romanian school of differential geometry.
Education
He was born in Turnu Severin, western Oltenia, the son of Anca (née Ciolănescu) and Radu Țiței, originally from Cilibia, in Buzău County. His name was registered as Țițeica, a combination of his parents' surnames. He showed an early interest in science, as well as music and literature. Țițeica was an accomplished violinist, having studied music since childhood: music was to remain his hobby. While studying at the Carol I High School in Craiova, he contributed to the school's magazine, writing the columns on mathematics and literary criticism. After graduation in 1892, he obtained a scholarship at the preparatory school in Bucharest, where he also was admitted as a student in the Mathematics Department of University of Bucharest's Faculty of Sciences. His teachers there included David Emmanuel, Spiru Haret, Constantin Gogu, Dimitrie Petrescu, and Iacob Lahovary. In June 1895, he graduated with a Bachelor of Mathematics.
In the summer of 1896, after a stint as a substitute teacher at the Bucharest theological seminary, Țițeica passed his exams for promotion to a secondary school position, becoming teacher in Galați.
In 1897, on the advice of teachers and friends, Țițeica completed his studies at a preparatory school in Paris. Among his mates were Henri Lebesgue and Paul Montel. After ranking first in his class and earning a second undergraduate degree from the Sorbonne in 1897, he was admitted at the École Normale Supérieure, where he took classes with Paul Appell, Gaston Darboux, Édouard Goursat, Charles Hermite, Gabriel Koenigs, Émile Picard, Henri Poincaré, and Jules Tannery. Țițeica chose Darboux to be his thesis advisor; after working for two years on his doctoral dissertation, titled Sur les congruences cycliques et sur les systèmes triplement conjugués, he defended it on 30 June 1899 before a board of examiners consisting of Darboux (as chair), Goursat, and Koenigs.
Career
Upon his return to Romania, Țițeica was appointed assistant professor at the University of Bucharest. He was promoted to full professor on 3 May 1903, retaining this position until his death in 1939. He also taught mathematics at the Polytechnic University of Bucharest, starting in 1928. In 1913, at age 40, Țițeica was elected as a permanent member of the Romanian Academy, replacing Spiru Haret. Later he was appointed in leading roles: in 1922, vice-president of the scientific section, in 1928, vice-president and in 1929 secretary general. Țițeica was also president of the , of the Romanian Association of Science, and of the Association of the development and the spreading of science. He was a vice-president of the Polytechnics Association of Romania and member of the High Council of Public Teaching.
Țițeica was the president of the geometry section at the International Congress of Mathematicians (ICM) in Toronto (1924), Zürich (1932), and Oslo (1936). With 5 invited ICM talks (1908, 1912, 1924, 1932, and 1936), he is in a tie for 7th place among mathematicians with the most invited ICM talks.
He was elected correspondent of the Association of Sciences of Liège and doctor honoris causa of the University of Warsaw. In 1926, 1930, and 1937 he gave a series of lectures as titular professor at the Faculty of Sciences in Sorbonne. He also gave many lectures at the Free University of Brussels (1926) and the University of Rome (1927).
His Ph.D. students include Dan Barbilian and Grigore Moisil.
Scientific work
Țițeica wrote about 400 articles, of which 96 are scientific papers, most addressing problems of differential geometry. His bibliography includes over 200 published papers and books, which appeared in many editions. Carrying on the researches of the American geometer of German origin Ernest Wilczynski, Țițeica discovered a new class of surfaces and a new class of curves which now carry his name. His contributions represent the beginning of a new chapter in mathematics, namely, affine differential geometry. He also studied webs in n-dimensional space, defined through Laplace equations. He investigated what is now known as the Tzitzeica equation, which was further generalized by Robin Bullough and Roger Dodd (the Tzitzéica–Bullough–Dodd equation).
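For orientation, in one common normalization (sign and variable conventions vary across the literature), the Tzitzeica equation reads

```latex
u_{xy} = e^{u} - e^{-2u},
```

a nonlinear equation that originally arose in Țițeica's study of surfaces in affine differential geometry and was later taken up in soliton theory.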
He is also known for a result on the geometry of circles and triangles in the plane, referred to as Țițeica's 5 lei coin problem, a problem he proposed (and solved) at the contest in Galați in 1908. The problem was posed independently by Roger Arthur Johnson in 1916, and the resulting configuration is also referred to as the Johnson circles.
Private life and legacy
Țițeica married Florence Thierin (1882–1965) and the couple had three children — Radu (1905–1987), Gabriela (1907–1987), and Șerban (1908–1985) — all of whom pursued careers in academia; the youngest one became a renowned quantum physicist. The family lived in a 19th-century house on Dionisie Lupu Street, close to Lahovari Plaza, in Sector 1 of Bucharest; Țițeica moved there around 1913, when he was elected to the academy. A commemorative plaque was affixed to the house by the city administration in 1998. He died in 1939 in Bucharest and was buried in the city's Bellu Cemetery.
A high school in Drobeta-Turnu Severin and a gymnasium in Craiova bear his name, and so does a street in Sector 2 of Bucharest. The Romanian Academy offers an annual "Gheorghe Țițeica Prize" for achievements in mathematics. The logo of the 40th International Mathematical Olympiad, held in Bucharest in 1999, was inspired by Țițeica's 5 lei coin problem.
In 1961, Poșta Română issued a 1.55 lei stamp in his honor (Scott #1415); he also figures on a 2 lei stamp from 1945 commemorating the founding of Gazeta Matematică in 1895 (Scott #596).
Publications
References
1873 births
1939 deaths
People from Drobeta-Turnu Severin
19th-century Romanian mathematicians
20th-century Romanian mathematicians
Differential geometers
Geometers
Romanian science communicators
Romanian scientists
Romanian inventors
Carol I National College alumni
University of Bucharest alumni
University of Paris alumni
École Normale Supérieure alumni
Titular members of the Romanian Academy
Academic staff of the University of Bucharest
Academic staff of the Politehnica University of Bucharest
Burials at Bellu Cemetery
Romanian schoolteachers | Gheorghe Țițeica | [
"Mathematics"
] | 1,424 | [
"Geometers",
"Geometry"
] |
1,518,490 | https://en.wikipedia.org/wiki/Civil%20time | In modern usage, civil time refers to statutory time as designated by civilian authorities. Modern civil time is generally national standard time in a time zone at a fixed offset from Coordinated Universal Time (UTC), possibly adjusted by daylight saving time during part of the year. UTC is calculated by reference to atomic clocks and was adopted in 1972. Older systems use telescope observations.
In traditional astronomical usage, civil time was mean solar time reckoned from midnight. Before 1925, the astronomical time 00:00:00 meant noon, twelve hours after the civil time 00:00:00 which meant midnight. HM Nautical Almanac Office in the United Kingdom used Greenwich Mean Time (GMT) for both conventions, leading to ambiguity, whereas the Nautical Almanac Office at the United States Naval Observatory used GMT for the pre-1925 convention and Greenwich Civil Time (GCT) for the post-1924 convention until 1952. In 1928, the International Astronomical Union introduced the term Universal Time for GMT beginning at midnight.
In modern usage, GMT is no longer a formal standard reference time: it is now a name for the time zone UTC+00:00. Universal Time is now determined by reference to distant celestial objects: UTC is derived from International Atomic Time (TAI), and is adjusted by leap seconds to compensate for variations in the rotational velocity of the Earth. Civil times around the world are all defined by reference to UTC. In many jurisdictions, legislation has not been updated and still refers to GMT; this is taken to mean UTC+00:00.
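As a concrete illustration of how civil time is derived from UTC in software, here is a minimal sketch using Python's standard-library zoneinfo module (available since Python 3.9); the two zone names are arbitrary examples.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# A single instant, expressed in UTC.
instant = datetime(2021, 7, 1, 12, 0, tzinfo=timezone.utc)

# The same instant rendered as civil time in two zones. In July,
# London is on British Summer Time (UTC+01:00) because of daylight
# saving time, while Auckland is on UTC+12:00 (southern winter).
for zone in ("Europe/London", "Pacific/Auckland"):
    local = instant.astimezone(ZoneInfo(zone))
    print(zone, local.isoformat(), "offset:", local.utcoffset())
```

The zone database applies both the fixed statutory offset and any daylight saving rule in force at the given instant.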
History
The division of the day into times of day has existed since the beginning of the calendar.
Twelve hours: Babylonian and Roman division of the day
People in Antiquity divided the day into twelve hours, but these were reckoned from sunrise rather than midnight. Babylonian hours were of equal length, while Roman temporal hours varied depending on the season.
The Horae, literally "the hours," were the original Greek goddesses who oversaw regulated life. They were the patron goddesses of the various times of day. In Greek tradition, the twelve hours were counted from just before sunrise to just after sunset.
Roman daytimes were called hora (hours), with the morning hour as hora prima. The night was divided into four sections called vigilia (night watch), two before midnight and two after. The Romans originally counted the morning hours backwards: "3 a. m." or "3 hours ante meridiem" meant "three hours before noon", in contrast to the modern meaning "three hours after midnight".
This ancient division has survived in the Liturgy of the Hours: Prime, Terce, Sext, and Nones are named after the first, third, sixth and ninth hours of the day. The Matins, the nocturnal prayer is, according to the Rule of Saint Benedict, to be prayed at the "eighth hour of the night", which corresponds to about 2 am.
The Spanish siesta derives its name from the Latin hora sexta for the sixth hour (noon).
Middle East
In Semitic language cultures, the day traditionally begins at nightfall. This is still important today for the beginning of Shabbat and Islamic holidays.
A division of days has survived from Persian, following the Babylonian beginning of the day: The rōsgār (times of day) are hāwan (morning), uapihwin (afternoon), usērin (evening), ēbsrūsrim (sunset to midnight), and ushahin (midnight to dawn). The last two are collectively called shab (night).
Middle Ages and early modern times
The modern division of the day into twenty-four hours of equal length (Italian hours) first appeared in the 14th century with the invention of the mechanical wheel clock and its widespread use in turret clocks.
With the onset of industrialization, working hours became tied to the clock rather than to daylight.
See also
:Category:Time by country
:Category:Time zones
References
External links
Time Scales
Time scales | Civil time | [
"Physics",
"Astronomy"
] | 814 | [
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
1,518,715 | https://en.wikipedia.org/wiki/Agricultural%20emissions%20research%20levy | The agricultural emissions research levy was a controversial tax proposal in New Zealand. It was first proposed in 2003 and would collect an estimated $8.4 million annually from livestock farmers (out of an estimated annual $50–125 million in costs to the public which is caused by farm animals' emissions of greenhouse gases such as methane), and which would have been used to fund research on the livestock industry's emissions of greenhouse gases, to further the nation's compliance with the Kyoto Protocol.
History
In May 2003 a report prepared for the Ministry of Agriculture and Fisheries (O'Hara report) identified that although some funding for agricultural emissions was being provided by FRST and MAF, "The level of investment in abatement research by other public and private sources has been low". The report assessed that a minimum of $4.5 million (optimally $8.4 million) of additional funding would be needed to fund the recommended research program.
In 2003, the tax was opposed by MPs of the ACT Party and the National Party, but eventually they proposed an alternative solution, as described below. Shane Ardern, a National Party MP, drove a tractor up the steps of Parliament as part of a protest against the tax.
In 2004, a consortium of the livestock industry agreed to pay for a portion of this research (just not via taxation), and the government reserved the right to reconsider the tax if they or the industry withdrew from the agreement.
In New Zealand, farm animals account for approximately 50% of greenhouse gas emissions, according to two official estimates, and the Kyoto treaty may compel New Zealand to pay penalties if gas levels are not brought down. Research shows that the world's livestock are a significant contributor to global emissions (New Zealand exports a significant share of its dairy and meat, as noted in Economy of New Zealand).
In 2004, whilst the Labour Party's coalition still led parliament, New Zealand's livestock farmers agreed to contribute to related scientific research, and to fund an unspecified portion of the costs of the Pastoral Greenhouse Gas Research Consortium.
In September 2009, the National-led government announced that a push would be made for the formation of a Global Alliance to investigate methods of reducing greenhouse gas emissions due to agriculture. Simon Upton, a former National Party MP and Minister for the Environment, was appointed as a special envoy to liaise with other countries on the issue.
Controversy
The tax was described by livestock farmers and other critics as a "flatulence tax" or "fart tax" (though these nicknames are misleading, since most ruminant methane, produced by bacteria in the first stomach (the rumen), is released by burping rather than by flatulence), and the president of the Federated Farmers contended that the government was trying to make the livestock industry pay for the "largesse" of others.
In contrast, those who endorse such taxes contend that if one consumes more of the products that increase healthcare costs (in a system where citizens share each other's medical costs), or that damage the environment, or if one's animals constantly require antibiotics to ameliorate disease-prone conditions (antibiotics which breed super-bugs that may also attack humans), then one would merely be paying for one's own largesse and for the costs to society that one's habits cause; on this view, one should pay commensurately more as one does or consumes more of what harms others in society (see also Pigovian tax).
See also
Climate change in New Zealand
Agriculture in New Zealand
Livestock's Long Shadow – Environmental Issues and Options; Climate change and agriculture: livestock
References
External links
Agricultural Emissions Research Funding – discussion document
Department of Chemistry, University of Otago – "Methane – and lots of hot air" (a Kiwi Professor of Chemistry's Flatulence humour)
Flatulence
Climate change in New Zealand
Taxation in New Zealand
Agriculture in New Zealand
Climate change and agriculture
Climate change policy
Emissions reduction
Controversies in New Zealand
2000s in New Zealand | Agricultural emissions research levy | [
"Chemistry"
] | 836 | [
"Greenhouse gases",
"Emissions reduction"
] |
1,518,742 | https://en.wikipedia.org/wiki/Helly%E2%80%93Bray%20theorem | In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray.
Let F and F1, F2, ... be cumulative distribution functions on the real line. The Helly–Bray theorem states that if Fn converges weakly to F, then

$$\int_{\mathbb{R}} g(x)\,dF_n(x) \longrightarrow \int_{\mathbb{R}} g(x)\,dF(x) \quad \text{as } n \to \infty$$

for each bounded, continuous function g: R → R, where the integrals involved are Riemann–Stieltjes integrals.
Note that if X and X1, X2, ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E(Xn) → E(X), since g(x) = x is not a bounded function.
In fact, a stronger and more general theorem holds. Let P and P1, P2, ... be probability measures on some set S. Then Pn converges weakly to P if and only if

$$\int_S f\,dP_n \longrightarrow \int_S f\,dP \quad \text{as } n \to \infty$$

for all bounded, continuous and real-valued functions f on S. (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals.)
The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3).
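The theorem's content can be illustrated numerically. The following sketch is illustrative only (the chosen distributions and function names are assumptions, not part of the theorem): it estimates E[g(Xn)] by Monte Carlo for a sequence of distributions converging weakly to the standard normal, using the bounded continuous function g(x) = tanh(x), and shows the gap to E[g(X)] shrinking.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # A bounded, continuous test function.
    return np.tanh(x)

def sample_fn(n, size):
    # X_n = X + U/n, with X ~ N(0,1) and U uniform on [-1, 1],
    # converges weakly (in distribution) to N(0,1) as n grows.
    return rng.standard_normal(size) + rng.uniform(-1.0, 1.0, size) / n

size = 1_000_000
limit = g(rng.standard_normal(size)).mean()   # Monte Carlo estimate of E[g(X)]
for n in (1, 4, 16, 64):
    gap = abs(g(sample_fn(n, size)).mean() - limit)
    print(f"n = {n:2d}, |E[g(X_n)] - E[g(X)]| ~ {gap:.4f}")
```

As the theorem predicts, the gap shrinks toward the Monte Carlo noise floor as n increases; by contrast, the unbounded choice g(x) = x would carry no such guarantee.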
References
Probability theorems | Helly–Bray theorem | [
"Mathematics"
] | 278 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
1,518,773 | https://en.wikipedia.org/wiki/Humanzee | The humanzee (sometimes chuman, manpanzee or chumanzee) is a hypothetical hybrid of chimpanzee and human, thus a form of human–animal hybrid. Serious attempts to create such a hybrid were made by Soviet biologist Ilya Ivanovich Ivanov in the 1920s, and possibly by researchers in China in the 1960s, though neither succeeded.
Etymology
The portmanteau humanzee for a human–chimpanzee hybrid appears to have entered usage in the 1980s.
Possibility
The possibility of hybrids between humans and other apes has been entertained since at least the medieval period; Saint Peter Damian (11th century) claimed to have been told of the offspring of a human woman who had mated with a non-human ape, and so did Antonio Zucchelli, an Italian Franciscan capuchin friar who was a missionary in Africa from 1698 to 1702, and Sir Edward Coke in "The Institutes of the Lawes of England".
Chimpanzees and humans are closely related. A difference in chromosome number between the parent animals decreases the probability of viable offspring, and viable hybrids rarely occur in the first cross. Evolutionary biologists have found evidence that ancient hybridization between the human and Pan troglodytes lineages contributed to some varieties of archaic humans. Chimpanzees and bonobos are separate species, but hybridization between them has been documented. Genetic similarity, and thus the chance of successful hybridization, is not always correlated with visual appearance. Domestication and backcrossing have been found to increase fertility in subsequent generations.
All great apes have similar genetic chromosome structure. Humans have one pair fewer chromosomes than other apes, as humans have 23 chromosome pairs, while all other apes have 24, with ape chromosomes 12 and 13 fused in the human genome into the large human chromosome 2 (which contains remnants of the centromere and telomeres of the ancestral 12 and 13). Chromosomes 6, 13, 19, 21, 22, and X are structurally the same in all great apes. Chromosomes 3, 11, 14, 15, 18, and 20 match among gorillas, chimpanzees, and humans. Chimpanzees and humans match on 1, 2p, 2q, 5, 7–10, 12, 16, and Y as well. Some older references include Y as a match among gorillas, chimpanzees, and humans, but chimpanzees, bonobos, and humans have recently been found to share a large transposition from chromosome 1 to Y not found in other apes.
The degree of chromosomal similarity among apes is roughly equivalent to that found in equines. Interfertility of horses and donkeys is common, although sterility of the offspring (mules) is more common. Complexities and partial sterility pertain to horse–zebra hybrids, or zorses, whose chromosomal disparity is very wide, with horses typically having 32 chromosome pairs and zebras between 16 and 23 depending on species. The Przewalski's horse (Equus ferus przewalskii) with 33 chromosome pairs, and the domestic horse (E. f. caballus) with 32 pairs, have been found to be interfertile, and produce semi-fertile offspring: male hybrids can breed with female domestic horses.
In 1977, researcher J. Michael Bedford discovered that human sperm could penetrate the protective outer membranes of a gibbon egg. Bedford's paper also stated that human spermatozoa would not even attach to the zona surface of non-hominoid primates (baboon, rhesus monkey, and squirrel monkey), concluding that although the specificity of human spermatozoa is not confined to Homo sapiens sapiens alone, it is probably restricted to the Hominoidea. Likewise, among closely related species, human sperm has been found to bind to gorilla oocytes with almost the same ease as to human ones.
Hybridization between members of different, but related genera is sometimes possible, as in the case of cama (camel and llama), wholphin (common bottlenose dolphin and false killer whale), and some felid hybrids. Even hybridization between different families, as in the case of the sturddlefish, is possible (albeit exceedingly rare) provided the parent species are genetically similar enough to one another.
Reports of attempted hybridization
There have been no scientifically verified specimens of a human–chimpanzee hybrid, but there have been substantiated reports of unsuccessful attempts to create one in the Soviet Union in the 1920s, and various unsubstantiated reports on similar attempts during the second half of the 20th century.
Ilya Ivanov was the first person to attempt to create a human–chimpanzee hybrid by artificial insemination. Ivanov outlined his idea as early as 1910 in a presentation to the World Congress of Zoologists in Graz. In the 1920s, Ivanov carried out a series of experiments, culminating in inseminating three female chimpanzees with human sperm, but he failed to achieve a pregnancy. These initial experiments took place in French Guinea. (For comparison with known cama statistics, in the case of male camel–female guanaco cross the probability that insemination would lead to pregnancy was approximately 1/6.) In 1929, he attempted to organize a set of experiments involving nonhuman ape sperm and human volunteers, but was delayed by the death of his last orangutan. The next year, he fell under political criticism from the Soviet government and was sentenced to exile in the Kazakh SSR; he worked there at the Kazakh Veterinary-Zootechnical Institute and died of a stroke two years later.
In the 1970s, a performing chimpanzee named Oliver was popularized as a possible "mutant" or even a human–chimpanzee hybrid. Claims that Oliver had 47 chromosomes—midpoint between the normal 46 for humans and 48 for chimpanzees—were disproven after an examination of his genetic material at the University of Chicago in 1996. Oliver's cranial morphology, ear shape, freckles, and baldness fall within the range of variability exhibited by the common chimpanzee. Results of further studies with Oliver were published in the American Journal of Physical Anthropology.
In the 1980s, there were reports of an experiment in human–chimpanzee crossbreeding conducted in China in 1967, and on the planned resumption of such experiments. In 1981, Ji Yongxiang, head of a hospital in Shenyang, was reported as claiming to have been part of a 1967 experiment in Shenyang in which a chimpanzee female had been impregnated with human sperm. According to this account, the experiment was cut short by the Cultural Revolution, with the responsible scientists sent off to farm labour and the three-months pregnant chimpanzee dying from neglect. According to Timothy McNulty of Chicago Tribune, the report was based on an article in the Wenhui Bao newspaper of Shanghai. Li Guong of the genetics research bureau at the Chinese Academy of Sciences was cited as confirming both the existence of the experiment prior to the Cultural Revolution and the plans to resume testing.
In 2019, unconfirmed reports surfaced that a team of researchers led by Juan Carlos Izpisua Belmonte from the Salk Institute for Biological Studies in the U.S. successfully produced the first human-monkey chimera. Belmonte and others had previously produced pig and sheep embryos containing a small percentage of human cells. As with those embryos, the human-monkey chimeras were reportedly only allowed to develop for a few weeks. Although development was stopped prior to the formation of a nervous system or organs, avoiding more severe ethical concerns, the research was reportedly carried out in China to avoid legal issues. Due to the much larger evolutionary distance between humans and monkeys versus humans and chimpanzees, it is considered unlikely that true human-monkey hybrids could be brought to term. However, it is feasible that human-compatible organs for transplantation could be grown in these chimeras.
Evidence for early hominin hybridization
There is evidence for a complex speciation process for the Pan–Homo split which may include hybridization, or what is known as reticulate evolution. Different chromosomes appear to have split at different times, suggesting that large-scale hybridization may have taken place over a period of as much as four million years leading up to the emergence of the distinct human and chimpanzee lineages as late as six million years ago.
The similarity of the X chromosome in humans and chimpanzees might suggest hybridization taking place as late as four million years ago. However, other mechanisms such as natural selection on the X chromosome in the chimpanzee–human last common ancestor may also explain the apparent short divergence time in the X chromosome.
It is hypothesized that the peculiar features of Homo naledi may be due to them being descendants of a relatively recent hybridization event between Homo and Australopithecus.
See also
Great ape personhood
Human–animal hybrid
Human evolution
References
Further reading
Human hybrids
Human subject research
Chimpanzees
Intergeneric hybrids
Hypothetical life forms | Humanzee | [
"Biology"
] | 1,893 | [
"Biological hypotheses",
"Intergeneric hybrids",
"Hypothetical life forms",
"Hybrid organisms"
] |
1,519,181 | https://en.wikipedia.org/wiki/Polaritonics | Polaritonics is an intermediate regime between photonics and sub-microwave electronics (see Fig. 1). In this regime, signals are carried by an admixture of electromagnetic and lattice vibrational waves known as phonon-polaritons, rather than currents or photons. Since phonon-polaritons propagate with frequencies in the range of hundreds of gigahertz to several terahertz, polaritonics bridges the gap between electronics and photonics. A compelling motivation for polaritonics is the demand for high speed signal processing and linear and nonlinear terahertz spectroscopy. Polaritonics has distinct advantages over electronics, photonics, and traditional terahertz spectroscopy in that it offers the potential for a fully integrated platform that supports terahertz wave generation, guidance, manipulation, and readout in a single patterned material.
Polaritonics, like electronics and photonics, requires three elements: robust waveform generation, detection, and guidance and control. Without all three, polaritonics would be reduced to just phonon-polaritons, just as electronics and photonics would be reduced to just electromagnetic radiation. These three elements can be combined to enable device functionality similar to that in electronics and photonics.
Illustration
To illustrate the functionality of polaritonic devices, consider the hypothetical circuit in Fig. 2 (right). The optical excitation pulses that generate phonon-polaritons, in the top left and bottom right of the crystal, enter normal to the crystal face (into the page). The resulting phonon-polaritons will travel laterally away from the excitation regions. Entrance into the waveguides is facilitated by reflective and focusing structures. Phonon-polaritons are guided through the circuit by terahertz waveguides carved into the crystal. Circuit functionality resides in the interferometer structure at the top and the coupled waveguide structure at the bottom of the circuit. The latter employs a photonic bandgap structure with a defect (yellow) that could provide bistability for the coupled waveguide.
Waveform generation
Phonon-polaritons generated in ferroelectric crystals propagate nearly laterally to the excitation pulse due to the high dielectric constants of ferroelectric crystals, facilitating easy separation of phonon-polaritons from the excitation pulses that generated them. Phonon-polaritons are therefore available for direct observation, as well as coherent manipulation, as they move from the excitation region into other parts of the crystal. Lateral propagation is paramount to a polaritonic platform in which generation and propagation take place in a single crystal. A full treatment of the Cherenkov-radiation-like terahertz wave response reveals that in general, there is also a forward propagation component that must be considered in many cases.
Signal detection
Direct observation of phonon-polariton propagation was made possible by real-space imaging, in which the spatial and temporal profiles of phonon-polaritons are imaged onto a CCD camera using Talbot phase-to-amplitude conversion. This was the first time that electromagnetic waves were imaged directly; the recorded wavefronts appear much like the ripples that spread across a pond when a rock falls through the surface (see Fig. 3). Real-space imaging is the preferred detection technique in polaritonics, though other more conventional techniques like optical Kerr-gating, time resolved diffraction, interferometric probing, and terahertz field induced second-harmonic generation are useful in some applications where real-space imaging is not easily employed. For example, patterned materials with feature sizes on the order of a few tens of micrometres cause parasitic scattering of the imaging light. Phonon-polariton detection is then only possible by focusing a more conventional probe, like those mentioned before, into an unblemished region of the crystal.
Guidance and control
The last element requisite to polaritonics is guidance and control. Complete lateral propagation parallel to the crystal plane is achieved by generating phonon-polaritons in crystals of thickness on the order of the phonon-polariton wavelength. This forces propagation to take place in one or more of the available slab waveguide modes. However, dispersion in these modes can be radically different from that in bulk propagation, and in order to exploit this, the dispersion must be understood.
Control and guidance of phonon-polariton propagation may also be achieved by guided wave, reflective, diffractive, and dispersive elements, as well as photonic and effective index crystals that can be integrated directly into the host crystal. However, lithium niobate, lithium tantalate, and other perovskites are impermeable to the standard techniques of material patterning. In fact, the only etchant known to be even marginally successful is hydrofluoric acid (HF), which etches slowly and predominantly in the direction of the crystal optic axis.
Laser Micromachining
Femtosecond laser micromachining is used for device fabrication by milling 'air' holes and/or troughs into ferroelectric crystals, translating the crystal through the focus region of a femtosecond laser beam. The advantages of femtosecond laser micromachining for a wide range of materials have been well documented. In brief, free electrons are created within the beam focus through multiphoton excitation. Because the peak intensity of a femtosecond laser pulse is many orders of magnitude higher than that from longer pulse or continuous wave lasers, the electrons are rapidly excited and heated to form a quantum plasma. Particularly in dielectric materials, the electrostatic instability, induced by the plasma, of the remaining lattice ions results in ejection of these ions and hence ablation of the material, leaving a material void in the laser focus region. Also, since the pulse duration and ablation time scales are much faster than the thermalization time, femtosecond laser micromachining does not suffer from the adverse effects of a heat-affected zone, like cracking and melting in regions neighboring the intended damage region.
See also
Electronics
Photonics
Polariton
Spintronics
Polariton laser
External references
David W. Ward: Polaritonics: An Intermediate Regime between Electronics and Photonics, Ph.D. Thesis, Massachusetts Institute of Technology, 2005. This is the main reference for this article.
External links
The research group at MIT that invented polaritonics.
References
Photonics
Nanoelectronics
Solid state engineering | Polaritonics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,388 | [
"Electronic engineering",
"Nanoelectronics",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
1,519,267 | https://en.wikipedia.org/wiki/Copper%E2%80%93copper%28II%29%20sulfate%20electrode | The copper–copper(II) sulfate electrode is a reference electrode of the first kind, based on the redox reaction with participation of the metal (copper) and its salt, copper(II) sulfate.
It is used for measuring electrode potential and is the most commonly used reference electrode for testing cathodic protection corrosion control systems. The corresponding equation can be presented as follows:
Cu2+ + 2e− → Cu0(metal)
This reaction is characterized by fast, reversible electrode kinetics, meaning that a sufficiently high current can be passed through the electrode with 100% efficiency of the redox reaction (dissolution of the metal or cathodic deposition of copper ions).
The Nernst equation below shows the dependence of the potential of the copper–copper(II) sulfate electrode on the activity or concentration of copper ions:

$$E = E^{0} + \frac{RT}{2F}\,\ln a_{\mathrm{Cu^{2+}}}$$
Commercial reference electrodes consist of a plastic tube holding the copper rod and saturated solution of copper sulfate. A porous plug on one end allows contact with the copper sulfate electrolyte. The copper rod protrudes out of the tube. A voltmeter negative lead is connected to the copper rod.
The potential of a copper–copper sulfate electrode is +0.314 volt with respect to the standard hydrogen electrode. The copper–copper(II) sulfate electrode is also used as one of the half-cells in the galvanic Daniell–Jakobi cell.
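As a numerical illustration of the Nernst relation above, the sketch below computes the electrode potential as the copper-ion activity is varied. The standard potential used (+0.337 V for the Cu2+/Cu couple) and the function name are assumptions for the example, not values taken from this article's sources.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
N = 2          # electrons transferred in Cu2+ + 2e- -> Cu

def electrode_potential(activity, e_standard=0.337, temperature=298.15):
    """Nernst potential (V vs SHE) of the Cu2+/Cu couple at a given activity."""
    return e_standard + (R * temperature) / (N * F) * math.log(activity)

# Lower copper-ion activity shifts the potential in the negative direction.
for a in (1.0, 0.1, 0.01):
    print(a, round(electrode_potential(a), 3))
```

Each tenfold decrease in activity shifts the potential by about 29.6 mV at 25 °C, which follows directly from the RT/2F prefactor.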
Applications
Copper coulometer
Notes
References
E. Protopopoff and P. Marcus, Potential Measurements with Reference Electrodes, Corrosion: Fundamentals, Testing, and Protection, Vol 13A, ASM Handbook, ASM International, 2003, p 13-16
A.W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd Ed., 2001, NACE International.
Electrodes
Corrosion prevention | Copper–copper(II) sulfate electrode | [
"Chemistry"
] | 367 | [
"Corrosion prevention",
"Electrodes",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Physical chemistry stubs"
] |
1,519,526 | https://en.wikipedia.org/wiki/Antitropical%20distribution | Antitropical (alternatives include biantitropical or amphitropical) distribution is a type of disjunct distribution where a species or clade exists at comparable latitudes across the equator but not in the tropics. For example, a species may be found north of the Tropic of Cancer and south of the Tropic of Capricorn, but not in between. With increasing time since dispersal, the disjunct populations may remain the same variety or species, or diverge into distinct clades. How organisms reach the opposite hemisphere when they cannot normally survive in the intervening tropics depends on the species: plants may have their seeds spread by wind, animals, or other means and then germinate upon reaching a suitable climate, while marine life may cross the tropical regions in a larval state or via deep ocean currents, which are much colder than surface waters. For the American amphitropical distribution, dispersal is generally agreed to be more likely than vicariance from a previous distribution that included the tropics of North and South America.
Known cases
Plants
Phacelia crenulata – scorpionweed
Bowlesia incana – American Bowlesia
Osmorhiza berteroi and Osmorhiza depauperata – sweet cecily species.
Ruppia megacarpa
Solenogyne
For a list of American amphitropically distributed plants (237 vascular plants), see the tables in the open access paper Simpson et al. 2017 or their working group on figshare
Animals
Scylla serrata – mud crab
Freshwater crayfish
Ground beetle genus Bembidion
Bryophytes and lichens
Tetraplodon fuegianus - dung moss
See also
Rapoport's rule
References
Biogeography | Antitropical distribution | [
"Biology"
] | 368 | [
"Biogeography"
] |
1,519,594 | https://en.wikipedia.org/wiki/Lebesgue%20point | In mathematics, given a locally Lebesgue integrable function $f$ on $\mathbb{R}^k$, a point $x$ in the domain of $f$ is a Lebesgue point if

$$\lim_{r \to 0^+} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y) - f(x)|\,\mathrm{d}y = 0.$$

Here, $B(x,r)$ is a ball centered at $x$ with radius $r > 0$, and $|B(x,r)|$ is its Lebesgue measure. The Lebesgue points of $f$ are thus points where $f$ does not oscillate too much, in an average sense.
The Lebesgue differentiation theorem states that, given any $f \in L^1(\mathbb{R}^k)$, almost every $x$ is a Lebesgue point of $f$.
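For instance, every point of continuity of $f$ is a Lebesgue point: if $|f(y) - f(x)| < \varepsilon$ for all $y \in B(x, r_0)$, then for every $r < r_0$

$$\frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y) - f(x)|\,\mathrm{d}y < \varepsilon,$$

so the limit is $0$. Conversely, $x = 0$ is not a Lebesgue point of the step function $f = \mathbf{1}_{[0,\infty)}$ on $\mathbb{R}$, since the average of $|f(y) - f(0)|$ over $B(0,r)$ equals $1/2$ for every $r > 0$.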
References
Mathematical analysis | Lebesgue point | [
"Mathematics"
] | 97 | [
"Mathematical analysis"
] |
1,519,744 | https://en.wikipedia.org/wiki/Fade%20%28audio%20engineering%29 | In audio engineering, a fade is a gradual increase or decrease in the level of an audio signal. The term can also be used for film cinematography or theatre lighting in much the same way (see fade (filmmaking) and fade (lighting)).
In sound recording and reproduction a song may be gradually reduced to silence at its end (fade-out), or may gradually increase from silence at the beginning (fade-in). Fading-out can serve as a recording solution for pieces of music that contain no obvious ending. Quick fade-ins and -outs can also be used to change the characteristics of a sound, such as to soften the attack in vocal plosives and percussion sounds.
Professional turntablists and DJs in hip hop music use faders on a DJ mixer, notably the horizontal crossfader, in a rapid fashion while simultaneously manipulating two or more record players (or other sound sources) to create scratching and develop beats. Club DJs in house music and techno use DJ mixers, two or more sound sources (two record players, two iPods, etc.) along with a skill called beatmatching (aligning the beats and tempos of two records) to make seamless dance mixes for dancers at raves, nightclubs and dance parties.
History
Origins and examples
Possibly the earliest example of a fade-out ending can be heard in Joseph Haydn's Symphony No. 45, nicknamed the "Farewell" Symphony on account of the fade-out ending. The symphony which was written in 1772 used this device as a way of courteously asking Haydn's patron Prince Nikolaus Esterházy, to whom the symphony was dedicated, to allow the musicians to return home after a longer than expected stay. This was expressed by the players extinguishing their stand candles and leaving the stage one by one during the final adagio movement of the symphony, leaving only two muted violins playing. Esterházy appears to have understood the message, allowing the musicians to leave.
Gustav Holst's "Neptune, the mystic", part of the orchestral suite The Planets written between 1914 and 1916, is another early example of music having a fade-out ending during performance. Holst stipulates that the women's choruses are "to be placed in an adjoining room, the door of which is to be left open until the last bar of the piece, when it is to be slowly and silently closed", and that the final bar (scored for choruses alone) is "to be repeated until the sound is lost in the distance". Although commonplace today, the effect bewitched audiences in the era before widespread recorded sound—after the initial 1918 run-through, Holst's daughter Imogen (in addition to watching the charwomen dancing in the aisles during "Jupiter") remarked that the ending was "unforgettable, with its hidden chorus of women's voices growing fainter and fainter ... until the imagination knew no difference between sound and silence".
The technique of ending a spoken or musical recording by fading out the sound goes back to the earliest days of recording. In the era of mechanical (pre-electrical) recording, this could only be achieved by either moving the sound source away from the recording horn, or by gradually reducing the volume at which the performer(s) were singing, playing or speaking. With the advent of electrical recording, smooth and controllable fadeout effects could be easily achieved by simply reducing the input volume from the microphones using the fader on the mixing desk. The first experimental study of the effect showed that a fade-out, compared with a cold ending of the same musical piece, prolonged the perceived duration by 2.4 seconds. This is called the "Pulse Continuity Phenomenon" and was measured by a tapping-along task that gauged participants' perception of pulsation.
An 1894 78 rpm record called "The Spirit of '76" features a narrated musical vignette with martial fife-and-drum that gets louder as it nears the listener, and quieter as it moves away. There are early examples that appear to bear no obvious relationship to movement. One is "Barkin' Dog" (1919) by the Ted Lewis Jazz Band. Another contender is "America" (1918), a patriotic piece by the chorus of evangelist Billy Sunday. By the early 1930s, longer songs were being put on both sides of records, with the piece fading out at the end of side one and fading back in at the beginning of side two. Records at the time held only about two to five minutes of music per side. The segue allowed for longer songs (such as Count Basie's "Miss Thing"), symphonies and live concert recordings.
However, shorter songs continued to use the fade-out for unclear reasons—for example, Fred Astaire's movie theme "Flying Down to Rio" (1933). Even using fade-out as a segue device does not seem obvious, though we certainly take it for granted today. It is possible that movies were an influence here. Fade-ins and fade-outs are often used as cinematic devices that begin and end scenes; film language that developed at the same time as these early recordings. The term fade-out itself is of cinematic origin, appearing in print around 1918. And jazz, a favorite of early records, was a popular subject of early movies too. The same could be said for radio productions. Within a single programme of a radio production, many different types of fade can be applied. When mixing from speech to music, there are a few ways that fade can be used. Here are three examples.
Straight: the introduction has become a musical link between the music/speech that follows, additionally the first notes of the intro can be emphasized to make it pop out more.
Cutting the introduction: since the first word of the vocals has to follow promptly after the cue light, part of the introduction can be cut to move the recording onward.
Introduction under speech: The music is placed at the specified time on the cue, the level must be low in order for the vocals to be audible. Here the fade-up generally occurs just before the final words in order for the cue to be given. In stage productions the closing music is played from a predetermined time and fades up at the closing words in order to fit in exactly with the remaining program time.
Though relatively rare, songs can fade out then fade back in. Some examples of this are "Helter Skelter" and "Strawberry Fields Forever" by The Beatles, "Suspicious Minds" by Elvis Presley, "Shine On Brightly" by Procol Harum, "Sunday Bloody Sunday" by John Lennon and Yoko Ono, "That Joke Isn't Funny Anymore" by The Smiths, "Thank You" by Led Zeppelin, "In Every Dream Home A Heartache" by Roxy Music, "It's Only Money, Pt. 2" by Argent, "The Great Annihilator" by Swans, "(Reprise) Sandblasted Skin" by Pantera, "Illumination Theory" and "At Wit's End" by Dream Theater, "Future" by Paramore, "Doomsday" by MF Doom, "Outro" by M83, "Cold Desert" by Kings of Leon, and "The Edge Of The World" by DragonForce.
Contemporary
No modern recording can be reliably identified as "the first" to use the technique. In 2003, on the (now-defunct) website Stupid Question, John Ruch listed the following recordings as possible contenders:
More recently: "At the meta-song level, the prevalence of pre-taped sequences (for shops, pubs, parties, concert intervals, aircraft headsets) emphasizes the importance of flow. The effect on radio pop programme form [is] a stress on continuity achieved through the use of fades, voice-over links, twin-turntable mixing and connecting jingles."
Fade
A fade can be constructed so that the motion of the control (linear or rotary) from its start to end points affects the level of the signal in a different manner at different points in its travel. If there are no overlapping regions on the same track, regular fade (pre-fade / post-fade) should be used. A smooth fade is one that changes according to the logarithmic scale, as faders are logarithmic over much of their working range of 30-40 dB. If the engineer requires one region to gradually fade into another on the same track, a crossfade would be more suitable. If however the two regions are on different tracks, fade-ins and fade-outs will be applied. A fade-out can be accomplished without letting the sound's distance increase, however this is also something it can do. The perceived distance increase can be attributed to a diminishing level of timbral detail, not the result of a decreasing dynamic level. A listener's interest can be withdrawn from a sound that is faded at the lower end since the ear accepts a more prompt rounding off. The fade-in can be used as a device that separates the listener from the scene. An example of a mini fade out, of about a second or two, is a sustained bass note left to die down.
Shapes
The shape of a regular fade and a crossfade can be shaped by an audio engineer. Shape implies that you can change the rate at which the level change occurs over the length of the fade. Different types of preset fades shapes include linear, logarithmic, exponential and S-curve.
Linear
The simplest of fade curves is the linear curve, and it is normally the default fade. Its gain changes by an equal amount per unit of time over the length of the fade. A linear fade-in sounds as though the volume increases sharply at the beginning and more gradually towards the end. The same principle applies to a fade-out, where a gradual drop in volume is perceived at the beginning and the fade becomes more abrupt towards the end. Because of the initial drop in perceived volume, the linear shape is ideal if there is natural ambience or reverb present in the audio: when applied, it shortens the ambience. The linear curve can also be applied if the music requires an accelerating effect. This type of fade is not very natural sounding. In a linear fade, the percentage of the signal allowed to pass equals the position of the control: if the control moves from position 0 to 100, then 25% of the signal passes when the control is 25% of the physical distance from the 0 point to the 100 point. The principle of a linear crossfade is that the perceived volume drops quickly at the beginning of the fade; at the halfway point (the middle of the crossfade) the perceived volume has already dropped below 50%, a very noticeable drop. At the midpoint of a linear crossfade both sounds are therefore below half of their maximum perceived volume, and as a result the sum of the two fades will be below the maximum level of either. This is not applicable when the two sounds are at different levels and the crossfade time is long enough; but if the crossfade is short (for example on a single note), the dip in volume in the middle of the crossfade can be quite noticeable.
The level of the signal as a function of time, $s(t)$, after applying a linear fade-in can be modeled as follows:

$$s(t) = s_0 \cdot \frac{t - t_0}{t_1 - t_0}, \qquad t_0 \le t \le t_1,$$

where:
$s_0$ is the original level of the signal,
$t$ is any time in the fade,
$t_0$ is the start time of the fade,
$t_1$ is the end time of the fade.
Similarly, the level after applying a linear fade-out can be modeled as follows:

$$s(t) = s_0 \cdot \frac{t_1 - t}{t_1 - t_0}, \qquad t_0 \le t \le t_1.$$
Logarithmic
Another type of curve is called the logarithmic ratio (also known as audio taper), or an inverse-logarithmic ratio. This curve more closely matches human hearing, with finer control at lower levels, increasing dramatically past the 50% point. Since the perceived volume of a sound has a logarithmic relationship with its level, the logarithmic fade sounds consistent and smooth over the whole duration of the fade. This makes this curve useful for fading standard pieces of music. It is best used on a long fade-out since the fade has a perceived linear nature. Also, a fade-out sounds very neutral when incorporated to parts of music with natural ambience. In crossfades, this type of curve sounds very natural. When this curve is applied the perceived volume of the fade's midpoint is at about 50% of the maximum – when the two sections are summed the output volume is fairly constant.
Exponential
The exponential curve shape is in many ways the precise opposite of the logarithmic curves. The fade-in works as follows: it increases in volume slowly and then it shoots up very quickly at the end of the fade. The fade-out drops very quickly (from the maximum volume) and then declines slowly again over the duration of the fade. Simply stated, a linear fade could thus be seen as an exaggerated version of an exponential fade in terms of the apparent volume. Thus the impression that would be gathered from an exponential curve's fade would sound as though the sound was rapidly accelerating toward the listener. Natural ambiance can also be repressed by using an exponential fade-out. A crossfade, in the exponential shape, will have a perceivable dip in the middle, which is very undesirable in music and vocals. This depends largely on the length of the crossfade, a long crossfade on ambient sounds can sound perfectly satisfactory (the dip can add a little breath to the music). Exponential crossfades (or a curve with a similar shape) have a smaller drop in the middle of the fade.
S-curve
The S-curve shape has a mixture of qualities from the previously mentioned curves. The level of the sound is 50% at the midpoint, but before and after the midpoint the shape is not linear. There are also two types of S-curves. Traditional S-curve fade-in has attributes of the exponential curve can be seen at the beginning; at the midpoint to the end it is more logarithmic in nature. A traditional S-curve fade-out is logarithmic from the beginning up to the midpoint, then its attributes are based on the exponential curve from the midpoint to the end. This is true for the situation in reverse as well (for both fade-in and fade-out). Crossfading with S-curves diminishes the amount of time that both sounds are playing simultaneously. This ensures that the edits sound like a direct cut when the two edits meet adding an extra smoothness to the edited regions. The second type of S-curve is more applicable to longer crossfades as both signals are audible for as long as possible. There is a short period at the start of each of the crossfades where the outgoing sound drops toward 50% quickly (with the incoming sound rising just as fast to 50%). This acceleration of sound slows and both sounds will appear as if they are at the same level for most of the crossfade before the changeover happens.
Adjustments
DAWs give the engineer the ability to change the shape of logarithmic, exponential, and S-curve fades and crossfades. Changing the shape of a logarithmic fade will change how soon the sound rises above 50%, and then how long it takes for the end of the fade-out to drop below 50% once again. With exponential fades the shape change affects the curve in reverse to the logarithmic fade. In the S-curve's traditional form the shape determines how quickly the change can occur, and in the type 2 curve the shape determines the time it takes for both sounds to reach a nearly equal level.
The level after applying an S-curve fade-in can be modeled as follows:

$$s(t) = s_0 \cdot \frac{1}{2}\left(1 - \cos\left(\pi\,\frac{t - t_0}{t_1 - t_0}\right)\right)$$

Similarly, the level after applying an S-curve fade-out can be modeled as follows:

$$s(t) = s_0 \cdot \frac{1}{2}\left(1 + \cos\left(\pi\,\frac{t - t_0}{t_1 - t_0}\right)\right)$$
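A minimal sketch of the four preset shapes as envelope functions of normalized fade time is given below. It is illustrative only: the exact logarithmic and exponential laws differ between DAWs, and the decibel-ramp forms and function names used here are assumptions chosen to reproduce the qualitative behaviour described above.

```python
import numpy as np

def linear(u):
    # Equal gain change per unit of fade time.
    return u

def s_curve(u):
    # Raised cosine: gentle at both ends, exactly 50% at the midpoint.
    return 0.5 * (1.0 - np.cos(np.pi * u))

def exponential(u, range_db=40.0):
    # Gain creeps up slowly, then shoots up at the end of the fade-in.
    return 10.0 ** (range_db * (u - 1.0) / 20.0)

def logarithmic(u, range_db=40.0):
    # Mirror image of the exponential: quick initial rise, gradual finish.
    return 1.0 - exponential(1.0 - u, range_db)

u = np.linspace(0.0, 1.0, 5)    # normalized fade position, start to end
for shape in (linear, s_curve, exponential, logarithmic):
    print(f"{shape.__name__:12s}", np.round(shape(u), 3))
```

A fade-out is the same envelope traversed in reverse, shape(1 - u), which is consistent with the fade-out models given above.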
It is also possible to apply different fade times to the out and in portions, which a standard crossfade would not allow. An appropriate fade-in time for a linear fade can be around 500 ms; for the fade-out, 500 ms is also effective. This longer fade keeps everything gentle, giving the fade time to blend in and be less abrupt. A fade-in can also be used to clean up plosive sounds in vocals, but it then has to be very short, around 10 ms. The fade time can always be adjusted by the engineer to locate the best value. It is important that the fade does not change the intelligibility or character of the sound too much. When the crossfade is longer than 10 ms, the standard linear fades are not always the best choice for music editing.
Crossfading
A crossfader on a DJ mixer essentially functions like two faders connected side-by-side, but in opposite directions. A crossfader is typically mounted horizontally, so that the DJ can slide the fader from the extreme left (this provides 100% of sound source A) to the extreme right (this provides 100% of sound source B), move the fader to the middle (this is a 50/50 mix of sources A and B), or adjust the fader to any point in between. It allows a DJ to fade one source out while fading another source in at the same time. This is extremely useful when beatmatching two sources of audio (or more, where channels can be mapped to one of the two sides of the crossfader individually) such as phonograph records, compact discs or digital sources.
The technique of crossfading is also used in audio engineering as a mixing technique, particularly with instrumental solos. A mix engineer will often record two or more takes of a vocal or instrumental part and create a final version which is a composite of the best passages of these takes by crossfading between each track. In the perfect case, the crossfade would keep a constant output level, an important quality for a club DJ who is creating a seamless mix of dance tracks for dancers or a radio DJ seeking to avoid "dead air" (silence) between songs, an error that can cause listeners to change channels. However, there is no standard on how this should be achieved.
There are many software applications that feature virtual crossfades, for instance burning software for recording audio CDs. Many DAWs (Pro Tools, Logic, etc.) also have this function. Crossfade is normally found on samplers and is usually based on velocity. The purpose of a crossfade is to achieve a smooth changeover between two cut pieces of audio. Velocity crossfading can be incorporated through a MIDI transformation device, and where more than one note can be assigned to a given pad (note) on the MIDI keyboard, velocity crossfading may be available.
These types of crossfades (those based on note velocity) allow two or more samples to be assigned to one note or range of notes. This requires both a loud and a soft sample; the reason is timbre change. The crossfade is quite subtle, depending on how the received note-velocity value is apportioned between the loud and soft samples.
Crossfading usually involves sounding a combination of two (or more) sounds at the same time. Crossfades can either be applied to a piece of music in real time or be pre-calculated. While crossfading, one does not want the second part of the fade to start playing before the first part is finished; one wants the overlapping parts to be as short as possible. If edit regions are not trimmed to a zero-crossing point, unwelcome pops appear in the middle. A sound at the lowest velocity can fade into a sound at a higher velocity, first one sound and then the other, all without fading out the sounds that are already present. This in turn is a form of layering that can be used in the mix. The same effect created with velocity can be assigned to a controller, which allows continuous monitored control; on some instruments the crossfading function can also be controlled by keyboard position. These sounds on the MIDI keyboard can be programmed.
A crossfade can be used either between two unrelated pieces of music or between two sounds that are similar; in both cases, one would like the result to be one continuous sound without any bumps. When applying a crossfade between two very different pieces of music (in both tone and pitch), one can simply crossfade between the two pieces and make a few minor adjustments, because the two sounds are so different from one another. In the case of a crossfade between two similar sounds, phase cancellation can become an issue. The two crossfaded sounds should be compared with one another: if both waveforms are moving upward they add constructively when summed, which is what one wants. What is undesirable is when the two are moving in different directions, since this can lead to cancellations, leaving no sound in regions where the amplitudes cancel one another out and thus silence in the middle of the crossfade. This occurrence is rare, though, since the parameters have to be the same. Commonly a crossfade will result in a gradual reduction of the sample whose pitch is lower and an increase of the one whose pitch is higher. The longer a crossfade, the more likely a problem will occur. One also does not want the effect of the crossfade to be prominent in the middle of notes: if different notes lie between the edit points, there will be a time when both sounds are heard simultaneously, an overlap not expected from a normal singing voice (overtone singing aside).
While DJ pioneers such as Francis Grasso had used basic faders to transition between two records as far back as the late 1960s, they typically had separate faders for each channel. Grandmaster Flash is often credited with the invention of the first crossfader by sourcing parts from a junkyard in the Bronx. It was initially an on/off toggle switch from an old microphone that he transformed into a left/right switch which allowed him to switch from one turntable to another, thereby avoiding a break in the music. However the earliest commercial documented example was designed by Richard Wadman, one of the founders of the British company Citronic. It was called the model SMP101, made about 1977, and had a crossfader that doubled as a L/R balance control or a crossfade between two inputs.
Crossfade shapes
When crossfading two signals that are being combined (mixed), the two fade curves can employ any of the shapes listed above (see #Shapes), such as linear, exponential, S-curve, etc. When the goal is to have the perceived loudness of the combined mix signal stay fairly constant across the full range of the mix, special shapes must be used, called "equal power" (or "constant power") shapes. Equal power shapes are based on audio power principles, particularly the fact that the power of an audio signal is proportional to the square of the amplitude. Many equal power shapes have the property that the midpoint of the mix provides an amplitude multiplier of 0.707 (square root of one half) for both signals. A variety of equal power shapes are available, and the optimal shape will generally depend on the amount of correlation between the two signals. An example pair of curves that keeps power equal across the mix is $\sqrt{1-m}$ for the outgoing signal and $\sqrt{m}$ for the incoming one (where $m$ is the mix position and ranges from 0 to 1).
Equal power shapes typically have the sum of their curves (in the middle of the mix range) exceeding the nominal maximum amplitude (1.0), which may produce clipping in some contexts. If that is a concern, then "equal gain" (or "constant gain") shapes should be used (which may be linear or curved) that are designed so the two curves always sum to 1.
In the digital signal processing realm, the term "power curve" is often used to designate crossfade shapes, particularly for equal power shapes.
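The two families of crossfade laws can be sketched as follows; a minimal illustration with assumed function names, not a description of any particular mixer's implementation:

```python
import numpy as np

def equal_gain(m):
    # Envelope amplitudes sum to 1 at every mix position: no clipping,
    # but a perceived dip in loudness around the midpoint.
    return 1.0 - m, m

def equal_power(m):
    # Squared amplitudes sum to 1: perceived loudness stays roughly
    # constant, but the amplitude sum peaks at ~1.414 mid-mix.
    return np.sqrt(1.0 - m), np.sqrt(m)

def crossfade(a, b, law):
    # Blend two equal-length signals as the mix sweeps from a to b.
    m = np.linspace(0.0, 1.0, len(a))
    gain_a, gain_b = law(m)
    return gain_a * a + gain_b * b

t = np.linspace(0.0, 1.0, 44100)           # one second at 44.1 kHz
a = np.sin(2 * np.pi * 220 * t)            # outgoing tone
b = np.sin(2 * np.pi * 330 * t)            # incoming tone
mix = crossfade(a, b, equal_power)

gain_a, gain_b = equal_power(0.5)
print(round(gain_a, 3), round(gain_b, 3))  # 0.707 0.707 at the midpoint
```

At the midpoint the equal-power gains sum to about 1.414, matching the clipping caveat above, while the equal-gain envelopes always sum to exactly 1.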
Fader
A fader is any device used for fading, especially a knob or button that slides along a track or slot. It is principally a variable resistor, or potentiometer, also called a 'pot': a contact moves from one end to the other, and as it does so the resistance of the circuit increases or decreases. At one end of the scale the resistance is 0 and at the other it is effectively infinite. A. Nisbett explains the fader law as follows in his book The Sound Studio: "The 'law' of the fader is near-logarithmic over much of its range, which means that a scale of decibels can be made linear (or close to it) over a working range of perhaps 60 dB. If the resistance were to increase according to the same law beyond this, it would be twice as long before reaching a point where the signal is negligible. But the range below -50 dB is of little practical use, so here the rate of fade increases rapidly to the final cut-off".
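A fader law of the kind Nisbett describes can be sketched as a mapping from fader travel to gain: near-logarithmic (equal decibels per unit of travel) over the working range, steepening rapidly toward cut-off below it. The working-range figure, taper point, and function name are illustrative assumptions.

```python
def fader_gain(position, range_db=60.0, taper_point=0.1):
    """Map fader travel (0.0 = bottom, 1.0 = top) to a linear gain factor."""
    if position <= 0.0:
        return 0.0                                # fully faded out: cut
    if position >= taper_point:
        # Near-logarithmic working range: equal travel, equal decibels.
        db = -range_db * (1.0 - position) / (1.0 - taper_point)
    else:
        # Below the working range the fade steepens rapidly toward cut-off.
        db = -range_db - 40.0 * (taper_point - position) / taper_point
    return 10.0 ** (db / 20.0)

for p in (1.0, 0.75, 0.5, 0.25, 0.1, 0.02):
    print(f"{p:4.2f} -> {fader_gain(p):.6f}")
```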
A knob which rotates is usually not considered a fader, although it is electrically and functionally equivalent. Some small mixers use knobs rather than faders, as do a small number of DJ mixers designed for club DJs who are creating seamless mixes of songs. A fader can be either analogue, directly controlling the resistance or impedance to the source (e.g. a potentiometer); or digital, numerically controlling a digital signal processor (DSP). Analogue faders are found on mixing consoles. A fader can also be used as a control for a voltage controlled amplifier, which has the same effect on the sound as any other fader, but the audio signal does not pass through the fader itself.
Digital
Digital faders are also referred to as virtual faders, since they can be viewed on the screen of a digital audio workstation. Modern high-end digital mixers often feature "flying faders", faders with motorized actuators attached; such faders can be multi-use and will jump to the correct position for a selected function or saved setting. Flying faders can be automated, so that when a timecode is presented to the equipment, the fader will move according to a previously performed path. Such a fader is also called an automated fader, as it recalls the movement of the channel faders in time. A full-function automation system will continuously scan the console, many times per second, in order to incorporate new settings. While this scan is in progress, the stored representation of the previous scan will be compared with the fader's current position. If the fader's position has changed, the new position will be identified, resulting in a spurt of data.
The console's computer will update the console's controls on playback. This will be done from memory at the same speed. The advantage of working with mix automation is that a single engineer can perform the job with minimal effort; it can be set up or recorded beforehand to make it even simpler. An example of this is when Ken Hamman installed linear faders that made it possible for him to alter several channels with one hand while mixing, thus assuming an interactive role in the process of recording. This type of fader level adjustment is also called 'riding' the fader.
Types
Many DJ equipment manufacturers offer different mixers for different purposes, with different fader styles, e.g., "scratching", beatmixing, and cut mixing. High-priced mixers often have crossfade curve switches allowing the DJ to select the type of crossfade necessary. Experienced DJs are also able to crossfade between tracks using the channel faders.
Pre-fader, post-fader
On a mixer with auxiliary send mixes, the send mixes are configured pre-fader or post-fader. If a send mix is configured pre-fader, then changes to the main channel strip fader do not affect the send mix. In live sound reinforcement, this is useful for stage monitor mixes where changes in the Front of House channel levels would distract the musicians. In recording and post production, configuring a send to be pre-fader allows the amount of audio sent to the aux bus to remain unaffected by the individual track fader, thus not disturbing the stability of the feed that is being sent to the musicians. If a send mix is configured post-fader, then the level sent to the send mix follows changes to the main channel strip fader. This is useful for reverberation and other signal processor effects. An example of this is when an engineer would like to add some delay to the vocals: the fader can then be used to adjust the amount of delay added.
Pre-fader listen (PFL), after-fader listen (AFL)
Pre-fader listen (PFL) and after-fader listen (AFL) are monitoring functions found on the monitor section of a mixing console.
On an analogue mixing console, the PFL (pre-fader listen) switch routes the incoming signal of a channel to a PFL bus. This bus is sent to the monitor mix and/or the headphones mix, allowing an incoming signal to be monitored before it is sent to the main output. When the mixer is equipped with VU meters, PFL allows an audio source to be monitored visually, without hearing it, and its input gain adjusted.
This pre-fade listen is valuable since it allows one to listen through headphones in order to hear what the pre-faded part sounds like, while the studio loudspeaker is being used to monitor the rest of the program.
Pre-fade listen can also be used for talkback as well as to listen to channels before they have been faded. After-fade listen only gets its information later. The choice of listen or level will depend on the user's interest: either with the quality and/or content of the signal or with the signal's level. PFL takes place just before the fader and has a joint channel and monitoring function. PFL sends the channel's signal path to the pre-fade bus. The bus is picked up in the monitor module and made accessible as a substitute signal that is sent to the mixer output. Automatic PFL has been made available, almost universally, and no longer needs to be selected beforehand.
Pre-fade listen is also a vital tool in radio stations. The function allows the radio presenter to listen to a source before it is faded on air, checking the source's incoming level and making sure it is accurate. Live radio broadcasts can fall apart without it, since presenters would otherwise be unable to monitor the sound. After-fader listen is not as useful in live programs.
See also
Beatmatching
Beatmixing
Gapless playback
Harmonic mixing
References
Audio mixing
Audio engineering
Sound recording
DJing
Articles containing video clips | Fade (audio engineering) | [
"Engineering"
] | 6,520 | [
"Electrical engineering",
"Audio engineering"
] |
1,520,221 | https://en.wikipedia.org/wiki/Journal%20of%20the%20British%20Interplanetary%20Society | The Journal of the British Interplanetary Society (JBIS) is a monthly peer-reviewed scientific journal that was established in 1934. The journal covers research on astronautics and space science and technology, including spacecraft design, nozzle theory, launch vehicle design, mission architecture, space stations, lunar exploration, spacecraft propulsion, robotic and crewed exploration of the solar system, interstellar travel, interstellar communications, extraterrestrial intelligence, philosophy, and cosmology. It is published monthly by the British Interplanetary Society.
History
The journal was established in 1934 when the British Interplanetary Society was founded. The inaugural editorial stated:
The first issue was only a six-page pamphlet, but has the distinction of being the world's oldest surviving astronautical publication.
Notable papers
Notable papers published in the journal include:
The B.I.S Space-Ship, H.E.Ross, JBIS, 5, pp. 4–9, 1939
The Challenge of the Spaceship (Astronautics and its Impact Upon Human Society), Arthur C. Clarke, JBIS, 6, pp. 66–78, 1946
Atomic rocket papers by Les Shepherd, Val Cleaver and others, 1948–1949.
Interstellar Flight, L.R.Shepherd, JBIS, 11, pp. 149–167, 1952
A Programme for Achieving Interplanetary Flight, A.V.Cleaver, JBIS, 13, pp. 1–27, 1954
Special Issue on World Ships, JBIS, 37, 6, June 1984
Project Daedalus - Final Study Reports, Alan Bond & Anthony R Martin et al., Special Supplement JBIS, pp.S1-192, 1978
Editors
Some of the people that have been editor-in-chief of the journal are:
Philip E. Cleator
J. Hardy
Gerald V. Groves
Anthony R. Martin
Mark Hempsell
Chris Toomer
Kelvin Long
Roger Longstaff
See also
Spaceflight (magazine)
References
External links
British Interplanetary Society
Space science journals
Academic journals established in 1934
Planetary engineering
Monthly journals
English-language journals
1934 establishments in the United Kingdom | Journal of the British Interplanetary Society | [
"Engineering"
] | 430 | [
"Planetary engineering"
] |
1,520,238 | https://en.wikipedia.org/wiki/Embodied%20energy | Embodied energy is the sum of all the energy required to produce any goods or services, considered as if that energy were incorporated or 'embodied' in the product itself. The concept can be useful in determining the effectiveness of energy-producing or energy saving devices, or the "real" replacement cost of a building, and, because energy-inputs usually entail greenhouse gas emissions, in deciding whether a product contributes to or mitigates global warming. One fundamental purpose for measuring this quantity is to compare the amount of energy produced or saved by the product in question to the amount of energy consumed in producing it.
Embodied energy is an accounting method which aims to find the sum total of the energy necessary for an entire product lifecycle. Determining what constitutes this lifecycle includes assessing the relevance and extent of the energy that goes into raw material extraction, transport, manufacture, assembly, installation, disassembly, deconstruction and/or decomposition, as well as human and secondary resources.
History
The history of constructing a system of accounts which records the energy flows through an environment can be traced back to the origins of accounting itself. As a distinct method, it is often associated with the Physiocrats' "substance" theory of value, and later the agricultural energetics of Sergei Podolinsky, a Russian physician, and the ecological energetics of Vladimir Stanchinsky.
The main methods of embodied energy accounting as they are used today grew out of Wassily Leontief's input-output model and are called Input-Output Embodied Energy analysis. Leontief's input-output model was in turn an adaptation of the neo-classical theory of general equilibrium with application to "the empirical study of the quantitative interdependence between interrelated economic activities". According to Tennenbaum, Leontief's input-output method was adapted to embodied energy analysis by Hannon to describe ecosystem energy flows. Hannon's adaptation tabulated the total direct and indirect energy requirements (the energy intensity) for each output made by the system. The total amount of energy, direct and indirect, for the entire amount of production was called the embodied energy.
Methodologies
Embodied energy analysis is concerned with what energy goes into supporting a consumer, and so all energy depreciation is assigned to the final demand of the consumer. Different methodologies use different scales of data to calculate energy embodied in products and services of nature and human civilization. International consensus on the appropriateness of data scales and methodologies is pending. This difficulty can give a wide range in embodied energy values for any given material. In the absence of a comprehensive global embodied energy public dynamic database, embodied energy calculations may omit important data on, for example, the rural road/highway construction and maintenance needed to move a product, marketing, advertising, catering services, non-human services and the like. Such omissions can be a source of significant methodological error in embodied energy estimations. Without an estimation and declaration of the embodied energy error, it is difficult to calibrate the sustainability index, and so the value of any given material, process or service to environmental and economic processes.
Standards
The SBTool and the UK Code for Sustainable Homes were, and the USA's LEED still is, methods in which the embodied energy of a product or material is rated, along with other factors, to assess a building's environmental impact. Embodied energy is a concept for which scientists have not yet agreed absolute universal values, because there are many variables to take into account, but most agree that products can be compared to each other to see which has more and which has less embodied energy. Comparative lists (for an example, see the University of Bath Embodied Energy & Carbon Material Inventory) contain average absolute values, and explain the factors which have been taken into account when compiling the lists.
Typical embodied energy units used are MJ/kg (megajoules of energy needed to make a kilogram of product) and tCO2 (tonnes of carbon dioxide created by the energy needed to make a kilogram of product). Converting MJ to tCO2 is not straightforward, because different types of energy (oil, wind, solar, nuclear and so on) emit different amounts of carbon dioxide, so the actual amount of carbon dioxide emitted when a product is made will depend on the type of energy used in the manufacturing process. For example, the Australian Government gives a global average of 0.098 tCO2 = 1 GJ. This is the same as 1 MJ = 0.098 kg CO2 = 98 g CO2, or 1 kg CO2 = 10.204 MJ.
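To make the arithmetic concrete, here is a minimal sketch of that conversion, assuming the global-average emission factor quoted above:

```python
# Global-average factor quoted above: 0.098 tCO2 per GJ = 0.098 kg CO2 per MJ.
# Real factors vary with the energy mix used in manufacturing.
KG_CO2_PER_MJ = 0.098

def embodied_carbon_kg(embodied_energy_mj: float) -> float:
    """Embodied carbon (kg CO2) implied by an embodied energy figure (MJ)."""
    return embodied_energy_mj * KG_CO2_PER_MJ

print(embodied_carbon_kg(1000.0))  # 1 GJ of embodied energy -> ~98 kg CO2
```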
Related methodologies
In the 2000s, drought conditions in Australia generated interest in the application of embodied energy analysis methods to water. This has led to the use of the concept of embodied water.
Data
A range of databases exist for quantifying the embodied energy of goods and services, including materials and products. These are based on a range of different data sources, with variations in geographic and temporal relevance and system boundary completeness. One such database is the Environmental Performance in Construction (EPiC) Database developed at The University of Melbourne, which includes embodied energy data for over 250 materials, mainly construction materials. This database also includes values for embodied water and greenhouse gas emissions.
The main reason for differences in embodied energy data between databases is due to the source of data and methodology used in their compilation. Bottom-up 'process' data is typically sourced from product manufacturers and suppliers. While this data is generally more reliable and specific to particular products, the methodology used to collect process data typically results in much of the embodied energy of a product being excluded, mainly due to the time, costs and complexity of data collection. Top-down environmentally-extended input-output (EEIO) data, based on national statistics can be used to fill these data gaps. While EEIO analysis of products can be useful on its own for initial scoping of embodied energy, it is generally much less reliable than process data and rarely relevant for a specific product or material. Hence, hybrid methods for quantifying embodied energy have been developed, using available process data and filling any data gaps with EEIO data. Databases that rely on this hybrid approach, such as The University of Melbourne's EPiC Database, provide a more comprehensive assessment of the embodied energy of products and materials.
In common materials
Selected data from the Inventory of Carbon and Energy ('ICE') prepared by the University of Bath (UK)
In transportation
Theoretically, embodied energy stands for the energy used to extract materials from mines, to manufacture vehicles, and to assemble, transport, maintain and ultimately recycle them. The energy needed to build and maintain transport networks, whether road or rail, should be taken into account as well. The process is so complex that no authoritative overall figure has been put forward.
According to the Institut du développement durable et des relations internationales, in the field of transportation, "it is striking to note that we consume more embodied energy in our transportation expenditures than direct energy", and "we consume less energy to move around in our personal vehicles than we consume the energy we need to produce, sell and transport the cars, trains or buses we use".
Jean-Marc Jancovici advocates a carbon footprint analysis of any transportation infrastructure project, prior to its construction.
In automobiles
Manufacturing
According to Volkswagen, the embodied energy contents of a Golf A3 with a petrol engine amounts to 18 000 kWh (i.e. 12% of 545 GJ as shown in the report). A Golf A4 (equipped with a turbocharged direct injection) will show an embodied energy amounting to 22 000 kWh (i.e. 15% of 545 GJ as shown in the report). According to the French energy and environment agency ADEME a motor car has an embodied energy contents of 20 800 kWh whereas an electric vehicle shows an embodied energy contents amounting to 34 700 kWh.
An electric car has a higher embodied energy than a combustion engine one, owing to the battery and electronics. According to Science & Vie, the embodied energy of batteries is so high that rechargeable hybrid cars constitute the most appropriate solution, with their batteries smaller than those of an all-electric car.
Fuel
As regards energy itself, the energy returned on energy invested (EROEI) of fuel can be estimated at 8: for every 8 units of useful energy a fuel delivers, roughly 1 unit of energy was invested in producing and delivering it. Relative to the 7 units of net useful energy, that investment amounts to 1/7, so the fuel consumption should be augmented by about 14.3% to account for the fuel's embodied energy.
According to some authors, to produce 6 liters of petrol requires 42 kWh of embodied energy (which corresponds to approximately 4.2 liters of diesel in terms of energy content).
Road construction
Figures here prove still more difficult to obtain. In the case of road construction, the embodied energy has been estimated at about 1/18 of the fuel consumption (i.e. roughly 6%).
Other figures available
Treloar, et al. have estimated the embodied energy in an average automobile in Australia as 0.27 terajoules (i.e. 75 000 kWh) as one component in an overall analysis of the energy involved in road transportation.
In buildings
Although most of the focus for improving energy efficiency in buildings has been on their operational emissions, it is estimated that about 30% of all energy consumed throughout the lifetime of a building can be in its embodied energy (this percentage varies based on factors such as age of building, climate, and materials). In the past, this percentage was much lower, but as much focus has been placed on reducing operational emissions (such as efficiency improvements in heating and cooling systems), the embodied energy contribution has come much more into play. Examples of embodied energy include: the energy used to extract raw resources, process materials, assemble product components, transport between each step, construction, maintenance and repair, deconstruction and disposal. As such, it is important to employ a whole-life carbon accounting framework in analyzing the carbon emissions in buildings. Studies have also shown the need to go beyond the building scale and to take into account the energy associated with mobility of occupants and the embodied energy of infrastructure requirements, in order to avoid shifting energy needs across scales of the built environment.
In the energy field
EROEI
EROEI (Energy Returned On Energy Invested) provides a basis for evaluating the embodied energy due to energy.
Final energy has to be multiplied by $\frac{1}{\mathrm{EROEI}-1}$ in order to get the embodied energy.
Given an EROEI amounting to eight, for example, one seventh of the final energy corresponds to the embodied energy.
Not only that: to obtain the overall embodied energy, the embodied energy due to the construction and maintenance of power plants should be taken into account too. Here, figures are badly needed.
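A minimal sketch of this bookkeeping, assuming the 1/(EROEI − 1) convention above:

```python
def embodied_energy_of_fuel(final_energy_mj: float, eroei: float) -> float:
    """Embodied energy (MJ) attributable to delivering final_energy_mj,
    under the 1/(EROEI - 1) convention described above. Power-plant
    construction and maintenance are NOT included (figures are lacking)."""
    if eroei <= 1.0:
        raise ValueError("EROEI must exceed 1 for any net energy delivery")
    return final_energy_mj / (eroei - 1.0)

print(embodied_energy_of_fuel(700.0, 8.0))  # 100.0 MJ, i.e. one seventh
```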
Electricity
In the BP Statistical Review of World Energy June 2018, toe are converted into kWh "on the basis of thermal equivalence assuming 38% conversion efficiency in a modern thermal power station".
In France, by convention, the ratio between primary energy and final energy in electricity amounts to 2.58, corresponding to an efficiency of 38.8%.
In Germany, on the contrary, because of the swift development of the renewable energies, the ratio between primary energy and final energy in electricity amounts to only 1.8, corresponding to an efficiency of 55.5%.
According to EcoPassenger, overall electricity efficiency would amount to 34% in the UK, 36% in Germany and 29% in France.
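The efficiency figures above follow from taking the reciprocal of the primary-to-final ratio; a minimal sketch:

```python
# Conversion efficiency is the reciprocal of the primary-to-final energy ratio.
def efficiency_from_ratio(primary_to_final: float) -> float:
    return 1.0 / primary_to_final

print(f"{efficiency_from_ratio(2.58):.1%}")  # France, ratio 2.58 -> ~38.8%
print(f"{efficiency_from_ratio(1.8):.1%}")   # Germany, ratio 1.8 -> ~55.6%
```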
Data processing
According to the association négaWatt, embodied energy related to digital services amounted to 3.5 TWh/a for networks and 10.0 TWh/a for data centres (half for the servers per se, i.e. 5 TWh/a, and the other half for the buildings in which they are housed, i.e. 5 TWh/a), figures valid for France in 2015. The organization is optimistic about the evolution of energy consumption in the digital field, underlining the technical progress being made. The Shift Project, chaired by Jean-Marc Jancovici, contradicts the optimistic vision of the association négaWatt, and notes that the digital energy footprint is growing at 9% per year.
See also
Biophysical economics
Crystallized labor
Ecological economics
Embodied emissions
Energy accounting
Energy cannibalism
Energy economics
Environmental accounting
Life cycle assessment
Systems ecology
References
Bibliography
External links
Embodied energy data and research at The University of Melbourne
Research on embodied energy at the University of Sydney, Australia
Australian Greenhouse Office, Department of the Environment and Heritage
University of Bath (UK), Inventory of Carbon & Energy (ICE) Material Inventory
Energy development
Systems ecology
Ecological economics
Russian inventions
Management cybernetics | Embodied energy | [
"Environmental_science"
] | 2,601 | [
"Environmental social science",
"Systems ecology"
] |
1,520,379 | https://en.wikipedia.org/wiki/Nonelementary%20integral | In mathematics, a nonelementary antiderivative of a given elementary function is an antiderivative (or indefinite integral) that is, itself, not an elementary function. A theorem by Liouville in 1835 provided the first proof that nonelementary antiderivatives exist. This theorem also provides a basis for the Risch algorithm for determining (with difficulty) which elementary functions have elementary antiderivatives.
Examples
Examples of functions with nonelementary antiderivatives include:
$\sqrt{1-x^4}$ (elliptic integral)
$\frac{1}{\ln x}$ (logarithmic integral)
$e^{-x^2}$ (error function, Gaussian integral)
$\sin(x^2)$ and $\cos(x^2)$ (Fresnel integral)
$\frac{\sin x}{x}$ (sine integral, Dirichlet integral)
$\frac{e^x}{x}$ (exponential integral)
$e^{e^x}$ (in terms of the exponential integral)
$\ln(\ln x)$ (in terms of the logarithmic integral)
$x^{c-1}e^{-x}$ (incomplete gamma function); for $c = 0$ the antiderivative can be written in terms of the exponential integral; for $c = \tfrac{1}{2}$ in terms of the error function; for $c$ any positive integer, the antiderivative is elementary.
Some common non-elementary antiderivative functions are given names, defining so-called special functions, and formulas involving these new functions can express a larger class of non-elementary antiderivatives. The examples above name the corresponding special functions in parentheses.
Properties
Nonelementary antiderivatives can often be evaluated using Taylor series. Even if a function has no elementary antiderivative, its Taylor series can be integrated term-by-term like a polynomial, giving the antiderivative function as a Taylor series with the same radius of convergence. However, even if the integrand has a convergent Taylor series, its sequence of coefficients often has no elementary formula and must be evaluated term by term, with the same limitation for the integral Taylor series.
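For instance, integrating the Maclaurin series of $e^{-x^2}$ term by term yields a series (with infinite radius of convergence) for its nonelementary antiderivative, which equals $\tfrac{\sqrt{\pi}}{2}\,\mathrm{erf}(x)$ up to a constant:

```latex
e^{-x^2} = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!}
\qquad\Longrightarrow\qquad
\int e^{-x^2}\,dx = C + \sum_{n=0}^{\infty} \frac{(-1)^n\, x^{2n+1}}{n!\,(2n+1)}.
```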
Even if it is not possible to evaluate an indefinite integral (antiderivative) in elementary terms, one can always approximate a corresponding definite integral by numerical integration. There are also cases where there is no elementary antiderivative, but specific definite integrals (often improper integrals over unbounded intervals) can be evaluated in elementary terms: most famously the Gaussian integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$.
The closure under integration of the set of the elementary functions is the set of the Liouvillian functions.
See also
References
Integration of Nonelementary Functions, S.O.S MATHematics.com; accessed 7 Dec 2012.
Further reading
Williams, Dana P., NONELEMENTARY ANTIDERIVATIVES, 1 Dec 1993. Accessed January 24, 2014.
Integral calculus
Integrals | Nonelementary integral | [
"Mathematics"
] | 524 | [
"Integral calculus",
"Calculus"
] |
1,520,573 | https://en.wikipedia.org/wiki/Jon%20Bosak | Jon Bosak led the creation of the XML specification at the W3C. From 1996–2008, he worked for Sun Microsystems.
XML
Tim Bray, who was one of the editors of the XML specification, has this to say in his note on Bosak in his annotated version of the specification:
In a 1999 posting to the xml-dev mailing list, Bray writes:
When he stepped down from the W3C XML Coordination Group in 2000, Jon Bosak was given the unusual recognition of having a formal identifier reserved for him:
In appreciation for his vision and leadership and dedication the W3C XML Plenary on this 10th day of February, 2000 reserves for Jon Bosak in perpetuity the XML name "xml:Father".
The Universal Business Language
In 2001, Bosak organized the OASIS Universal Business Language Technical Committee to create standard formats for basic electronic business documents. He led the UBL TC through the completion of UBL 2.1 in November 2013 and continues to serve on the Committee as Secretary. UBL was approved for use in European public sector procurement by decision of the European Commission dated 31 October 2014 and published as an International Standard, ISO/IEC 19845:2015, on 15 December 2015.
Metrological Studies
Bosak is the author of a book (2010) and an article (latest version 19 April 2014) on metrology.
Bob Bosak
Jon Bosak's father, Robert Bosak (1925–1987), began the family's long involvement in the computer industry in 1947 when he went to work on the first computer on the west coast of the US. He joined RAND in 1948 to work on analysis and programming of scientific problems. In 1951, he joined Lockheed Aircraft Corporation, where he organized and directed the Mathematical Analysis Group. For a short time after his divorce in the 1950s, he shared an apartment with Bob Bemer, "the Father of ASCII."
Bob Bosak returned to RAND in 1956 to become head of programming for the Semi Automatic Ground Environment (SAGE), the automated NORAD system that controlled US air defenses from 1959 to 1983 and strongly influenced the design of modern air traffic control systems. He was one of the designers of JOVIAL and principal author of the seminal paper An Information Algebra.
References
External links
Background information at ibiblio
Interview with JavaWorld
Schema document for namespace 1998
Schema document for namespace 2001
Year of birth missing (living people)
Living people
Sun Microsystems people
XML Guild | Jon Bosak | [
"Technology"
] | 511 | [
"Computing stubs",
"Computer specialist stubs"
] |
1,520,619 | https://en.wikipedia.org/wiki/Proper%20orthogonal%20decomposition | The proper orthogonal decomposition is a numerical method that enables a reduction in the complexity of computer intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulences analysis, it is used to replace the Navier–Stokes equations by simpler models to solve.
It belongs to a class of algorithms called model order reduction (or, in short, model reduction). What it essentially does is train a model based on simulation data. In this respect, it can be associated with the field of machine learning.
POD and PCA
The main use of POD is to decompose a physical field (like pressure or temperature in fluid dynamics, or stress and deformation in structural analysis) according to the different variables that influence its physical behaviour. As its name hints, it operates an orthogonal decomposition along the principal components of the field. As such it is closely related to the principal component analysis of Pearson in the field of statistics, and to the singular value decomposition in linear algebra, because it relies on the eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen and Loève, and their Karhunen–Loève theorem.
Mathematical expression
The first idea behind the Proper Orthogonal Decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulence, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φk(x) modulated by random time coefficients ak(t), so that:

$$u(x,t) \approx \sum_{k=1}^{K} a_k(t)\,\Phi_k(x)$$

The first step is to sample the vector field over a period of time in what we call snapshots (as displayed in the image of the POD snapshots). This snapshot method correlates the n spatial elements with each other across the p time samples, by first gathering the snapshots into a matrix

$$U = \begin{pmatrix} u(x_1,t_1) & \cdots & u(x_n,t_1) \\ \vdots & \ddots & \vdots \\ u(x_1,t_p) & \cdots & u(x_n,t_p) \end{pmatrix}$$

with n spatial elements, and p time samples.
The next step is to compute the covariance matrix C

$$C = \frac{1}{p-1}\,U^{\mathsf{T}} U$$

We then compute the eigenvalues and eigenvectors of C and we order them from the largest eigenvalue to the smallest.
We obtain n eigenvalues λ1, ..., λn and a set of n eigenvectors arranged as columns in an n × n matrix Φ:

$$\Phi = \begin{pmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_n \end{pmatrix}$$
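A minimal numerical sketch of the snapshot procedure above, assuming the p × n snapshot-matrix convention and using NumPy; the travelling-wave test field is made up for illustration:

```python
import numpy as np

def pod_modes(U: np.ndarray):
    """Snapshot POD. U has shape (p, n): p time samples of an n-point field.
    Returns eigenvalues (descending) and the n x n matrix Phi of modes."""
    p, n = U.shape
    C = (U.T @ U) / (p - 1)               # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]     # sort largest -> smallest
    return eigvals[order], eigvecs[:, order]

# Example: a noisy travelling wave sampled on 200 points at 50 instants
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 1, 50)
U = np.sin(x[None, :] - 5 * t[:, None]) + 0.01 * np.random.randn(50, 200)
lam, Phi = pod_modes(U)
a = U @ Phi                    # time coefficients a_k(t)
U_r = a[:, :2] @ Phi[:, :2].T  # rank-2 reconstruction captures the wave
print(lam[:4] / lam.sum())     # the first two modes dominate the energy
```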
References
External links
MIT: http://web.mit.edu/6.242/www/images/lec6_6242_2004.pdf
Stanford University - Charbel Farhat & David Amsallem https://web.stanford.edu/group/frg/course_work/CME345/CA-CME345-Ch4.pdf
Weiss, Julien: A Tutorial on the Proper Orthogonal Decomposition. In: 2019 AIAA Aviation Forum. 17–21 June 2019, Dallas, Texas, United States.
French course from CNRS https://www.math.u-bordeaux.fr/~mbergman/PDF/OuvrageSynthese/OCET06.pdf
Applications of the Proper Orthogonal Decomposition Method http://www.cerfacs.fr/~cfdbib/repository/WN_CFD_07_97.pdf
Continuum mechanics
Numerical differential equations
Partial differential equations
Structural analysis
Computational electromagnetics | Proper orthogonal decomposition | [
"Physics",
"Engineering"
] | 680 | [
"Structural engineering",
"Computational electromagnetics",
"Continuum mechanics",
"Structural analysis",
"Classical mechanics",
"Computational physics",
"Mechanical engineering",
"Aerospace engineering"
] |
1,520,732 | https://en.wikipedia.org/wiki/Simon%20Conway%20Morris | Simon Conway Morris (born 1951) is an English palaeontologist, evolutionary biologist, and astrobiologist known for his study of the fossils of the Burgess Shale and the Cambrian explosion. The results of these discoveries were celebrated in Stephen Jay Gould's 1989 book Wonderful Life. Conway Morris's own book on the subject, The Crucible of Creation (1998), however, is critical of Gould's presentation and interpretation.
Conway Morris, a Christian, holds to theistic views of biological evolution. He has held the Chair of Evolutionary Palaeobiology in the Department of Earth Sciences, University of Cambridge since 1995.
Biography
Early years
Conway Morris was born on 6 November 1951. A native of Carshalton, Surrey, he was brought up in London, England, and went on to study geology at Bristol University, achieving a First Class Honours degree. He then moved to Cambridge University and completed a PhD at St John's College under Harry Blackmore Whittington. He is professor of evolutionary palaeobiology in the Department of Earth Sciences at Cambridge. He is renowned for his insights into early evolution and his studies of palaeobiology. He gave the Royal Institution Christmas Lectures in 1996 on the subject of The History in our Bones. He was elected a Fellow of the Royal Society at age 39, was awarded the Walcott Medal of the National Academy of Sciences in 1987 and the Lyell Medal of the Geological Society of London in 1998.
Work
Conway Morris is based in the Department of Earth Sciences at the University of Cambridge and is best known for his work on the Cambrian explosion, the Burgess Shale fossil fauna and similar deposits in China and Greenland. In addition to working in these countries he has undertaken research in Australia, Canada, Mongolia and the United States. His studies on the Burgess Shale-type faunas, as well as the early evolution of skeletons, has encompassed a wide variety of groups, ranging from ctenophores to the earliest vertebrates. His thinking on the significance of the Burgess Shale has evolved and his current interest in evolutionary convergence and its wider significance – the topic of his 2007 Gifford Lectures – was in part spurred by Stephen Jay Gould's arguments for the importance of contingency in the history of life.
In January 2017, his team announced the discovery of Saccorhytus, initially describing it as an early member of the deuterostomes, the diverse group of animals that includes the vertebrates; subsequent analysis, however, reclassified this taxon as a member of the protostomes, probably within the ecdysozoans.
Burgess Shale
Conway Morris' views on the Burgess Shale are reported in numerous technical papers and more generally in The Crucible of Creation (Oxford University Press, 1998). In recent years he has been investigating the phenomenon of evolutionary convergence, the main thesis of which is put forward in Life's Solution: Inevitable Humans in a Lonely Universe (Cambridge University Press, 2003). He is now involved on a major project to investigate both the scientific ramifications of convergence and also to establish a website (www.mapoflife.org) that aims to provide an easily accessible introduction to the thousands of known examples of convergence. This work is funded by the John Templeton Foundation.
Evolution, science and religion
Conway Morris is active in the public understanding of science and has broadcast extensively on radio and television. The latter includes the Royal Institution Christmas Lectures delivered in 1996. A Christian, he has participated in science and religion debates, including arguments against intelligent design on the one hand and materialism on the other. In 2005 he gave the second Boyle Lecture. He has lectured at the Faraday Institute for Science and Religion on "Evolution and fine-tuning in Biology". He gave the University of Edinburgh Gifford Lectures for 2007 in a series titled "Darwin's Compass: How Evolution Discovers the Song of Creation". In these lectures Conway Morris explained why evolution is compatible with belief in the existence of a God.
He is a critic of materialism and of reductionism:
That satisfactory definitions of life elude us may be one hint that when materialists step forward and declare with a brisk slap of the hands that this is it, we should be deeply skeptical. Whether the "it" be that of Richard Dawkins' reductionist gene-centred worldpicture, the "universal acid" of Daniel Dennett's meaningless Darwinism, or David Sloan Wilson's faith in group selection (not least to explain the role of human religions), we certainly need to acknowledge each provides insights but as total explanations of what we see around us they are, to put it politely, somewhat incomplete.
and of scientists who are militantly against religion:
the scientist who boomingly – and they always boom – declares that those who believe in the Deity are unavoidably crazy, "cracked" as my dear father would have said, although I should add that I have every reason to believe he was – and now hope is – on the side of the angels.
In March 2009 he was the opening speaker at the Biological Evolution: Facts and Theories conference held at the Pontifical Gregorian University in Rome, as well as chairing one of the sessions. The conference was sponsored by the Catholic Church. Conway Morris has contributed articles on evolution and Christian belief to several collections, including The Cambridge Companion to Science and Religion (2010) and The Blackwell Companion to Science and Christianity (2012).
{| class="wikitable"
|+ Simon Conway Morris appointments and accomplishments
|-
! Date || Position
|-
| 1969–1972 || University of Bristol: First Class Honours in Geology (BSc)
|-
| 1975 || Elected Fellow (Title A) of St John's College
|-
| 1976 || University of Cambridge: PhD
|-
| 1976 || Research Fellowship at St John's College, University of Cambridge
|-
| 1979 || Lecturer in Department of Earth Sciences, Open University
|-
| 1983 || Lecturer in Department of Earth Sciences, University of Cambridge
|-
| 1987–1988 || Awarded a One-Year Science Research Fellowship by the Nuffield Foundation
|-
| 1990 || Elected Fellow of the Royal Society
|-
| 1991 || Appointed Reader in Evolutionary Palaeobiology
|-
| 1995 || Elected to an ad hominem Chair in Evolutionary Palaeobiology
|-
| 1997–2002 || Natural Environment Research Council
|}
Awards and honours
The Walcott Medal 1987
PS Charles Schuchert Award 1989
GSL Charles Lyell Medal 1998
Trotter Prize 2007
Bibliography
The Early Evolution of Metazoa and the Significance of Problematic Taxa. (ed., with Alberto M. Simonetta) Cambridge University Press, 1991.
"The Cambrian "Explosion" of Metazoans". in Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology, 2003,
The Deep Structure of Biology. (ed.) Templeton Foundation Press, 2008.
Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning. (ed., with John D. Barrow, Stephen J. Freeland, Charles L. Harper, Jr.) Cambridge University Press, 2008.
Water and Life: The Unique Properties of H2O. (ed., with Ruth M. Lynden-Bell, John D. Barrow, John L. Finney, Charles Harper, Jr.) CRC Press, 2010.
The Runes of Evolution: How the Universe became Self-Aware. Templeton Press, 2015
From Extraterrestrials to Animal Minds: Six Myths of Evolution. Templeton Press, 2022
See also
Extraterrestrial (TV program) in which Conway Morris participates.
References
External links
Simon Conway Morris webpage at the Earth Sciences department, University of Cambridge
Simon Conway Morris resource page at ISCAST
Simon Conway Morris extended film interview with transcripts for the 'Why Are We Here?' documentary series.
1951 births
Living people
People educated at King's College School, London
Alumni of the University of Bristol
Fellows of St John's College, Cambridge
Academics of the Open University
English palaeontologists
Astrobiologists
British evolutionary biologists
Fellows of the Royal Society
English Christians
Charles Doolittle Walcott Medal winners
Lyell Medal winners
Theistic evolutionists
Critics of New Atheism
British critics of atheism
Presidents of the Cambridge Philosophical Society
Earth scientists at the University of Cambridge
Alumni of St John's College, Cambridge | Simon Conway Morris | [
"Biology"
] | 1,703 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
1,520,931 | https://en.wikipedia.org/wiki/Micronet%20800 | Micronet 800 was an information provider (IP) on Prestel, aimed at the 1980s personal computer market. It was an online magazine that gave subscribers computer related news, reviews, general subject articles and downloadable telesoftware.
Users would log onto the Prestel network (which was usually a local call) and then access the Micronet 800 home page by entering *800# (hence the name) on their modem or computer. Most Micronet 800 members would have their default main index page set to page 800 automatically.
History
The name Micronet 800 derives from its home page, 800, on the BT Prestel videotex service.
Micronet 800 derived from the earlier development in 1980 and 1981 of 'Electronic Insight' by Bob Denton. Electronic Insight was a Prestel-based feature-and-price-comparison site listing computers, calculators and other electronic and IT products, whose main page was on page 800 of Prestel. Electronic Insight was acquired by Telemap Group, a part of EMAP, East Midland (note, not Midlands) Allied Press, in 1982 on the recommendation of Richard Hease, a number of whose computer magazines EMAP had just bought. Telemap had been formed in 1981 to explore the opportunities of British Telecom's Prestel videotex service. It had been looking at the horticultural market that EMAP served with a number of magazine titles, notably providing a 'Closed User Group' purchasing network for garden centre businesses, complementing EMAP's printed 'Garden Trade News' magazine. But horticulturalists and IT proved not to be a natural marriage, and the service had insufficient users to make it viable.
Richard Hease, in 1982 Chairman of EMAP's Computer & Business Press which had acquired Electronic Insight, organised a pitch to the Telemap Group by David Babsky of a projected interactive online computer magazine to replace the existing content of Electronic Insight. Babsky showed a 'dummy issue' of the intended online magazine, programmed in Integer BASIC on an Apple II computer. Hease suggested that there be several different 'areas' of the magazine, with titles such as MicroNews, MicroNet (for those interested in networking), etc., and Babsky proposed that the entire project be called 'Micronet 800' to ensure that it could be easily found by anyone using Prestel, as its page number would be part of its name. Hease and Denton negotiated with BT Prestel for a special relationship that would rank it alongside the Nottingham Building Society's plans for its Homelink as the two key thrusts for Prestel.
Hease negotiated with then telecoms minister John Butcher a £25 subsidy for Micronet subscribers to have their homes equipped free with a telephone jack-socket for the relevant modem.
The Telemap editorial staff was first based at 8 Herbal Hill, Clerkenwell, London (after the preliminary discussions and presentation at EMAP's offices in Hatton Garden), and the technical staff in an EMAP building in Peterborough. In 1986 the technical staff moved down to the London building.
Telemap was to be the base for Micronet 800 and the editorial development of the site. Hease's and Denton's "Prism Micro Products", the exclusive distributor of Sinclair Computers in the UK, was charged with developing the required modems for the enterprise, to ensure that Micronet 800's pages could be accessed by such microcomputers as Apple II, ZX81, BBC Micro, Dragon 32/64, IBM PCs, PET, and subsequently the ZX Spectrum, Sinclair QL, Camputers Lynx, VIC-20, Commodore 64, and other 1980s home computers.
Although fast by contemporary standards, Prestel modems were quite slow from today's point of view (1200 baud download, 75 upload) and the display was just 24 lines of 40 characters, with seven colours and very simple block graphics. Yet Micronet 800 had versions of many of the Internet's subsequent features, especially an interactive 'ChatLine' (similar to Internet Relay Chat) developed by Mike Brown, who joined Micronet 800 from the Council for Educational Technology, where he'd devised a standard UK format for downloadable programs which became known as 'telesoftware'.
Micronet 800 was quite similar in scope to, and compatible with, the German Bildschirmtext and French Minitel services, but Minitel achieved volume sales for its terminals by the simple expedient of replacing paper telephone books with them. On the strength of this installed base, Minitel proved resilient against Internet adoption in France.
For Micronet, Denton negotiated that the interested parties would all agree to adopt the CET, Council for Educational Technology, format for telesoftware - one of two then competing formats. Telesoftware allowed users to download software directly from the Prestel site. Micronet then negotiated with hobbyist computer groups to provide applications and utilities that would be listed on, and be downloadable from, the Micronet 800 site. Approximately 50% of software - for Sinclair, Apple, BBC Micro, IBM, etc. - was available at no cost, and the other 50% was paid for by the automatic addition of the cost of the software to the subscriber's telephone bill.
Prism developed a broad range of modems from a simple acoustic coupler to integrated 'network interfaces' for each of the early home and personal computers. Prism models included the VTX5000, the only modem custom designed for the popular ZX Spectrum, and the more general purpose Modem 1000 and Modem 2000. These were ready-to-use out of a box, so that the buyer would get the modem with all relevant leads, cards (if necessary) and software to connect with Micronet.
Some 25,000 subscribers were eventually signed up to Micronet 800 to make it the largest CUG, Closed User Group, on Prestel; its total user base peaked at 90,000. Micronet achieved over 1.1 million page views a week. Its first subscriber, who joined on its opening day, 1 March 1983, was Jeremy Dredge, an estate agent from Thames Ditton in Surrey. Its 10,000th subscriber was Tom Corcoran, a director of BBC television's Top of the Pops.
In 1985 Telemap saw that Prism was preoccupied with its Sinclair computer distribution agency and in developing Prism's own 'luggable' Wren microcomputer, so prospective Micronet subscribers were then sent a list of several other modem suppliers.
Following Prism's collapse in 1985 and the subsequent purchase of their stock by Telemap, and in a bid to increase take-up, Micronet 800 encouraged users by giving away a free modem to new users subscribing for a year.
However, in a move that saw the demise of Micronet, Prestel priced the home user out of the service with a new pricing structure, adding time charges on top of the phone charges for evening access which effectively killed off home usage even though the network was under-utilized during the 6pm to 8am time-slot. Today this remains the peak usage time of the Internet.
Many of the lessons learned with respect to online publishing and interactive services were pioneered by Micronet 800 and became every bit as important with the growth of the Internet.
BT became the majority shareholder in 1987 (after a previous 19% Telemap stake had been sold to Bell Canada) initially managing the company as part of BT Spectrum, its Value Added Services Group, before passing the group to BT Prestel. In 1988 the company passed a milestone by becoming the only Value Added Data service to become profitable. In 1989 BT finally acquired the entire company, moved it into a BT building (Dialcom House) in Apsley, just outside Hemel Hempstead in Hertfordshire, and folded the business into first the Dialcom Group along with the rest of the BT Prestel companies and Telecom Gold and subsequently BT Managed Network Services.
In 1991 along with all its online services, BT closed the service deciding to focus on providing network services and transferred the subscriber base to Compuserve which subsequently became AOL in the UK.
The Micronet service closed 31 October 1991. It had 10,000 members at closure and was "easily the largest online service in the UK specialising in microcomputing". Despite this apparent success, this was less than 10% of the number of users they were predicting shortly after launch.
Micronet/Telemap management:
Richard Hease - chairman and co-founder 1982-1983
Bob Denton - co-founder 1982-1983
Tim Schoonmaker - Managing Director 1983-1986
Ian Rock - Publisher (formerly Marketing Manager) 1983-1986 (Author of 'How To Run The Country Manual')
Tom Baird - BT liaison
John Tomany - Managing Director 1987-1990
Michael Weatherseed - General Manager 1990-1991
Micronet editors:
David Babsky, founding editor
Simon D'Arcy, Editor then Publisher
Sid Smith (author of "Something Like A House", Whitbread award-winning novel), news editor, then editor.
Francis Jago (Now CEO of Fingal, a creative communications agency in London)
Paul Needs, Amstrad & PC staff writer, then editor then managing editor computer and leisure service. Paul is now a professional entertainer and recording artist.
Ian Burley, Micronet's final editor (Now CEO of The Write Technology Ltd, an Internet online publishing business behind Digital Photography Now)
Barbara Conway (died 1991), part-time media editor in the early years of Micronet 800
Other editorial staff
Ken Young - online journo and roving reporter
Adam Denning - original Technical Editor
David Rosenbaum - News Editor and editor of Musicnet
Chris Bourne - Sunday Xtra editor
Paul Vigay - Acorn Editor
Chris Lewis - Sinclair Editor
Ian Burley - Acorn Editor, then News Editor.
Rupert Goodwins - editorial assistant
Afshin Rattansi - music and arts journalist
David Farmbrough - music journalist
Production team:
Robin Wilkinson - publisher in Peterborough, testing, sales and downloading; previously EMAP's Telemap publisher
Val Burgess - previously of Prestel, Micronet 800 telesoftware database manager
Mike Brown - previously of CET, Technical Director
Richard Tyner- Software sales and acquisition,
John Mason - software testing and pricing
John Prout - technical help desk
Denise Shemuel - editorial database manager, London
Colin Morgan
Roger "Woj" Cracknell
Gary "Grism" Smith
Robert O'Donnell
Patrick Reilly
Daemonn Brody
Denise Slater - graphic designer for downloadable software pages, in Peterborough
Anna Smith - editorial graphic designer in London, then Super sub-editor
Sharon Giles
Marketing team:
Ian Rock - Marketing Manager
Peter Probert - PR Manager.
Phil Godsell - Product Manager
Lynne Thomas (the late) - Exhibitions Manager
Claire Walker - Advertising and PR Executive
Lynne Bennett - Marketing Executive
Other contributors:
Steve Gold
Robert Schifreen - previously 'Bug Buster' columnist in Richard Hease's 'Computer & Video Games' magazine.
David Janda
Richard Poynder, Bizznet Editor
Quotes:
"There is no future for online services aimed at domestic computer users" - Michael Collins, the department head of Prestel/Telecom Gold Business Services, stated in a meeting with Paul Needs. [February 1990 - Paul Needs]
"Micronet is to communication in the 80s what the [Gutenberg] Bible was to the Middle Ages" - David Babsky, Micronet Editor, 1984.
“Long term I see being able to program your computer with various names of journalists that you particularly like, various sports that you have a habit of looking at and being able to program your computer at … 5 o’clock in the morning to log on to Prestel/Micronet and download very rapidly information which will then be printed out. So instead of sitting on the train in the morning with your Times/Guardian/Telegraph, or whatever, you will have a printout with all your favourite journalists, your sports pages, cartoons etc. … you can make up your own newspaper.” - Simon D'Arcy, Micronet Publisher, 1986.
Services provided
Micronet 800 pioneered many public online services, such as Multi User Games, long before the Internet was in widespread use.
Chatlines: Users could post messages that other users could see and respond to. Celebrity Chatline was a weekly feature conceived by Publisher Ian Rock and implemented by Sid Smith in which a prominent person was interviewed by Micronet users whose questions appeared onscreen, with Micronet personnel usually typing the answers (if the 'celebrity' couldn't type or format the text themselves). Early 'celebrities' included Sir Clive Sinclair, Feargal Sharkey, Fatima Whitbread, Lord Cardigan, Cynthia Payne and Sir Terry Pratchett.
Downloadable software: Micronet 800 implemented the CET specification that allowed 8-bit files to be transmitted over a 7-bit medium, with some basic error detection and error correction (a generic sketch of such 8-bit-over-7-bit packing appears after this list). Micro Arts (computer art) were invited on by David Babsky in 1985, and published articles and downloadable 'art programs' for the ZX Spectrum and the BBC Micro.
Online games: The longest-running game on the system was StarNet, a play-by-mail game in which players would send in moves to be executed once a day (a sort of very slow game of chess, where the aim was to become the emperor of the galaxy). It was run by Liverpudlian Mike Singleton, who input the moves forwarded to him by email from Micronet into a Commodore PET computer. Micronet 800 also hosted SHADES, one of the first MUDs - a realtime, highly competitive hack-and-slash game that is still running today.
Email: Each Prestel user had a unique number (usually the last nine digits of the subscriber's telephone number), and this could be used to send messages. Micronet users were reported to be particularly enthusiastic about the medium, sending twice as many 'mailbox' messages as regular Prestel users. On 1 July 1984 users could send a pre-formatted 'Happy Birthday' email to Princess Diana via Prince Philip, in whose name the Buckingham Palace press office telephone number had been registered as a Prestel user.
Gallery: Conceived by Publisher Ian Rock, this was an area where users could post their own pages about anything they wished, subject to minor oversight for libel and obscenity.
News and reviews: Micronet was frequently the first organisation worldwide to report on happenings in the UK computer industry.
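The CET telesoftware format itself involved framing and check characters beyond the scope of this article; as a purely illustrative sketch of the underlying idea, the following packs arbitrary 8-bit bytes into 7-bit-safe units and back. The function names and scheme are hypothetical, not the actual CET encoding:

```python
# Illustrative only: the real CET telesoftware format is more involved
# (framing, checksums); this sketch shows the general idea of packing
# arbitrary 8-bit bytes into 7-bit units, as any 7-bit link requires.
def pack_7bit(data: bytes) -> bytes:
    """Expand 8-bit bytes into 7-bit units (values 0..127)."""
    out, acc, nbits = [], 0, 0
    for byte in data:
        acc = (acc << 8) | byte
        nbits += 8
        while nbits >= 7:
            nbits -= 7
            out.append((acc >> nbits) & 0x7F)
    if nbits:                      # flush remaining bits, zero-padded
        out.append((acc << (7 - nbits)) & 0x7F)
    return bytes(out)

def unpack_7bit(units: bytes, length: int) -> bytes:
    """Inverse of pack_7bit; `length` is the original byte count."""
    out, acc, nbits = [], 0, 0
    for u in units:
        acc = (acc << 7) | (u & 0x7F)
        nbits += 7
        if nbits >= 8:
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
    return bytes(out[:length])

program = bytes(range(0, 256, 37))  # sample "telesoftware" bytes
assert unpack_7bit(pack_7bit(program), len(program)) == program
```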
References
External links
The Micronet Story
PrismVTX 5000 modem for Micronet 800
Shades the Game
Celebrating the Viewdata Revolution
Computer Arts Archive: Micro Arts
BT Group
Legacy systems
Pre–World Wide Web online services
Teletext | Micronet 800 | [
"Technology"
] | 3,048 | [
"Legacy systems",
"Computer systems",
"History of computing"
] |