Microsatellite
A microsatellite is a tract of repetitive DNA in which certain DNA motifs (ranging in length from one to six or more base pairs) are repeated, typically 5–50 times. Microsatellites occur at thousands of locations within an organism's genome. They have a higher mutation rate than other areas of DNA, leading to high genetic diversity. Microsatellites are often referred to as short tandem repeats (STRs) by forensic geneticists and in genetic genealogy, or as simple sequence repeats (SSRs) by plant geneticists.
Microsatellites and their longer cousins, the minisatellites, together are classified as VNTR (variable number of tandem repeats) DNA. The name "satellite" DNA refers to the early observation that centrifugation of genomic DNA in a test tube separates a prominent layer of bulk DNA from accompanying "satellite" layers of repetitive DNA.
They are widely used for DNA profiling in cancer diagnosis, in kinship analysis (especially paternity testing) and in forensic identification. They are also used in genetic linkage analysis to locate a gene or a mutation responsible for a given trait or disease. Microsatellites are also used in population genetics to measure levels of relatedness between subspecies, groups and individuals.
History
Although the first microsatellite was characterised in 1984 at the University of Leicester by Weller, Jeffreys and colleagues as a polymorphic GGAT repeat in the human myoglobin gene, the term "microsatellite" was introduced later, in 1989, by Litt and Luty. The increasing availability of DNA amplification by PCR at the beginning of the 1990s triggered a large number of studies using the amplification of microsatellites as genetic markers for forensic medicine, for paternity testing, and for positional cloning to find the gene underlying a trait or disease. Prominent early applications include the identification by microsatellite genotyping of the eight-year-old skeletal remains of a British murder victim (Hagelberg et al. 1991), and of the Auschwitz concentration camp doctor Josef Mengele, who escaped to South America following World War II (Jeffreys et al. 1992).
Structures, locations, and functions
A microsatellite is a tract of tandemly repeated (i.e. adjacent) DNA motifs that range in length from one to six or up to ten nucleotides (the exact definition, and the boundary with the longer minisatellites, vary from author to author), typically repeated 5–50 times. For example, the sequence TATATATATA is a dinucleotide microsatellite, and GTCGTCGTCGTCGTC is a trinucleotide microsatellite (with A being adenine, G guanine, C cytosine, and T thymine). Repeat units of four and five nucleotides are referred to as tetra- and pentanucleotide motifs, respectively. Most eukaryotes have microsatellites, with the notable exception of some yeast species. Microsatellites are distributed throughout the genome. The human genome, for example, contains 50,000–100,000 dinucleotide microsatellites, and smaller numbers of tri-, tetra- and pentanucleotide microsatellites. Many are located in non-coding parts of the human genome and therefore do not produce proteins, but they can also be located in regulatory regions and coding regions.
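To make the definition concrete, the short script below scans a sequence for tandem repeats of 1–6 bp motifs, the motif-size range given above. This is a minimal sketch, not a production repeat finder: it uses a fixed repeat-count threshold and does not reduce motifs to their smallest repeating unit.

```python
import re

def find_microsatellites(seq, min_unit=1, max_unit=6, min_repeats=5):
    """Scan a DNA sequence for tandem repeats of short motifs.

    Reports (start, motif, repeat_count) for every run of a 1-6 bp
    motif repeated at least `min_repeats` times, matching the common
    definition of a microsatellite given above. Note: a motif is not
    reduced to its smallest unit, so e.g. a long A-run is also
    reported as an 'AA' repeat.
    """
    hits = []
    for unit in range(min_unit, max_unit + 1):
        # a motif of `unit` bases, followed by itself min_repeats-1 more times
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            hits.append((m.start(), motif, len(m.group(0)) // unit))
    return hits

# The dinucleotide and trinucleotide examples from the text:
print(find_microsatellites("TATATATATA"))       # [(0, 'TA', 5)]
print(find_microsatellites("GTCGTCGTCGTCGTC"))  # [(0, 'GTC', 5)]
```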
Microsatellites in non-coding regions may not have any specific function, and therefore might not be selected against; this allows them to accumulate mutations unhindered over the generations and gives rise to variability that can be used for DNA fingerprinting and identification purposes. Other microsatellites are located in regulatory flanking or intronic regions of genes, or directly in codons of genes – microsatellite mutations in such cases can lead to phenotypic changes and diseases, notably in triplet expansion diseases such as fragile X syndrome and Huntington's disease.
Telomeres are linear sequences of DNA that sit at the very ends of chromosomes and protect the integrity of genomic material (not unlike an aglet on the end of a shoelace) during successive rounds of cell division, owing to the "end replication problem". In white blood cells, telomere length has been shown to correlate inversely with age in several sample types. Telomeres consist of repetitive DNA, with the hexanucleotide repeat motif TTAGGG in vertebrates. They are thus classified as minisatellites. Similarly, insects have shorter repeat motifs in their telomeres that could arguably be considered microsatellites.
Mutation mechanisms and mutation rates
Unlike point mutations, which affect only a single nucleotide, microsatellite mutations lead to the gain or loss of an entire repeat unit, and sometimes two or more repeats simultaneously. Thus, the mutation rate at microsatellite loci is expected to differ from other mutation rates, such as base substitution rates. The mutation rate at microsatellite loci depends on the repeat motif sequence, the number of repeated motif units and the purity of the canonical repeated sequence. A variety of mechanisms for mutation of microsatellite loci have been reviewed, and their resulting polymorphic nature has been quantified. The actual cause of mutations in microsatellites is debated.
One proposed cause of such length changes is replication slippage, caused by mismatches between DNA strands while being replicated during meiosis. DNA polymerase, the enzyme responsible for reading DNA during replication, can slip while moving along the template strand and continue at the wrong nucleotide. DNA polymerase slippage is more likely to occur when a repetitive sequence (such as CGCGCG) is replicated. Because microsatellites consist of such repetitive sequences, DNA polymerase may make errors at a higher rate in these sequence regions. Several studies have found evidence that slippage is the cause of microsatellite mutations. Typically, slippage in each microsatellite occurs about once per 1,000 generations. Thus, slippage changes in repetitive DNA are three orders of magnitude more common than point mutations in other parts of the genome. Most slippage results in a change of just one repeat unit, and slippage rates vary for different allele lengths and repeat unit sizes, and within different species. If there is a large size difference between individual alleles, then there may be increased instability during recombination at meiosis.
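A crude way to see how slippage shapes allele lengths is to simulate the stepwise mutation model this paragraph implies, using the ballpark rate of one slippage event per locus per 1,000 generations quoted above. The single-step assumption and the fixed rate are simplifications for illustration, not a calibrated model.

```python
import random

def simulate_slippage(repeats=20, generations=100_000, rate=1e-3, seed=1):
    """Stepwise mutation model: each generation the locus gains or loses
    one repeat unit with probability `rate` (~1/1,000 per generation,
    the ballpark figure quoted in the text)."""
    rng = random.Random(seed)
    for _ in range(generations):
        if rng.random() < rate:
            repeats += rng.choice((-1, 1))
            repeats = max(repeats, 1)  # a locus cannot shrink below one unit
    return repeats

# Allele lengths drift apart in independent lineages:
print([simulate_slippage(seed=s) for s in range(5)])
```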
Point mutations, in which only a single nucleotide is copied incorrectly during replication, are another possible cause of microsatellite mutations. A study comparing human and primate genomes found that most changes in repeat number in short microsatellites appear to be due to point mutations rather than slippage.
Microsatellite mutation rates
Direct estimates of microsatellite mutation rates have been made in numerous organisms, from insects to humans. In the desert locust Schistocerca gregaria, the microsatellite mutation rate was estimated at 2.1 × 10⁻⁴ per generation per locus. The microsatellite mutation rate in human male germ lines is five to six times higher than in female germ lines and ranges from 0 to 7 × 10⁻³ per locus per gamete per generation. In the nematode Pristionchus pacificus, the estimated microsatellite mutation rate ranges from 8.9 × 10⁻⁵ to 7.5 × 10⁻⁴ per locus per generation.
Microsatellite mutation rates vary with base position relative to the microsatellite, repeat type, and base identity. Mutation rate rises specifically with repeat number, peaking around six to eight repeats and then decreasing again. Increased heterozygosity in a population will also increase microsatellite mutation rates, especially when there is a large length difference between alleles. This is likely due to homologous chromosomes with arms of unequal lengths causing instability during meiosis.
Biological effects of microsatellite mutations
Many microsatellites are located in non-coding DNA and are biologically silent. Others are located in regulatory or even coding DNA – microsatellite mutations in such cases can lead to phenotypic changes and diseases. A genome-wide study estimates that microsatellite variation contributes 10–15% of heritable gene expression variation in humans.
Effects on proteins
In mammals, 20–40% of proteins contain repeating sequences of amino acids encoded by short sequence repeats. Most of the short sequence repeats within protein-coding portions of the genome have a repeating unit of three nucleotides, since that length will not cause frame-shifts when mutating. Each trinucleotide repeating sequence is transcribed into a repeating series of the same amino acid. In yeasts, the most common repeated amino acids are glutamine, glutamic acid, asparagine, aspartic acid and serine.
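Because the genetic code is read in triplets, an in-frame trinucleotide repeat is translated into a run of a single amino acid, which is why three-base units dominate in coding regions. A small illustration follows; the codon table is deliberately minimal, containing only the few codons needed for this example.

```python
# Minimal codon table for this example only (not the full genetic code).
CODONS = {"CAG": "Q", "GCG": "A", "GAA": "E"}

def translate_repeat(motif, n):
    """Translate an in-frame trinucleotide repeat into a protein sequence."""
    coding = motif * n
    return "".join(CODONS[coding[i:i + 3]] for i in range(0, len(coding), 3))

print(translate_repeat("CAG", 8))  # QQQQQQQQ -- a polyglutamine tract
# A dinucleotide repeat, by contrast, shifts the reading frame whenever
# its length changes, which is why coding repeats are usually triplets.
```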
Mutations in these repeating segments can affect the physical and chemical properties of proteins, with the potential for producing gradual and predictable changes in protein action. For example, length changes in tandemly repeating regions in the Runx2 gene lead to differences in facial length in domesticated dogs (Canis familiaris), with an association between longer sequence lengths and longer faces. This association also applies to a wider range of Carnivora species. Length changes in polyalanine tracts within the HOXA13 gene are linked to hand-foot-genital syndrome, a developmental disorder in humans. Length changes in other triplet repeats are linked to more than 40 neurological diseases in humans, notably trinucleotide repeat disorders such as fragile X syndrome and Huntington's disease. Evolutionary changes from replication slippage also occur in simpler organisms. For example, microsatellite length changes are common within surface membrane proteins in yeast, providing rapid evolution in cell properties. Specifically, length changes in the FLO1 gene control the level of adhesion to substrates. Short sequence repeats also provide rapid evolutionary change to surface proteins in pathogenic bacteria; this may allow them to keep up with immunological changes in their hosts. Length changes in short sequence repeats in a fungus (Neurospora crassa) control the duration of its circadian clock cycles.
Effects on gene regulation
Length changes of microsatellites within promoters and other cis-regulatory regions can change gene expression quickly, between generations. The human genome contains many (>16,000) short sequence repeats in regulatory regions, which provide 'tuning knobs' on the expression of many genes.
Length changes in bacterial SSRs can affect fimbriae formation in Haemophilus influenzae, by altering promoter spacing. Dinucleotide microsatellites are linked to abundant variation in cis-regulatory control regions in the human genome. Microsatellites in control regions of the Vasopressin 1a receptor gene in voles influence their social behavior, and level of monogamy.
In Ewing sarcoma (a type of painful bone cancer in young humans), a point mutation has created an extended GGAA microsatellite that binds a transcription factor, which in turn activates the EGR2 gene that drives the cancer. In addition, other GGAA microsatellites may influence the expression of genes that contribute to the clinical outcome of Ewing sarcoma patients.
Effects within introns
Microsatellites within introns also influence phenotype, through means that are not currently understood. For example, a GAA triplet expansion in the first intron of the X25 gene appears to interfere with transcription, and causes Friedreich's ataxia. Tandem repeats in the first intron of the Asparagine synthetase gene are linked to acute lymphoblastic leukaemia. A repeat polymorphism in the fourth intron of the NOS3 gene is linked to hypertension in a Tunisian population. Reduced repeat lengths in the EGFR gene are linked with osteosarcomas.
An archaic form of splicing preserved in zebrafish is known to use microsatellite sequences within intronic mRNA for the removal of introns in the absence of U2AF2 and other splicing machinery. It is theorized that these sequences form highly stable cloverleaf configurations that bring the 3' and 5' intron splice sites into close proximity, effectively replacing the spliceosome. This method of RNA splicing is believed to have been lost from the lineage leading to humans at the emergence of tetrapods and to represent a remnant of an RNA world.
Effects within transposons
Almost 50% of the human genome is contained in various types of transposable elements (also called transposons, or 'jumping genes'), and many of them contain repetitive DNA. It is probable that short sequence repeats in those locations are also involved in the regulation of gene expression.
Applications
Microsatellites are used for assessing chromosomal DNA deletions in cancer diagnosis. Microsatellites are widely used for DNA profiling, also known as "genetic fingerprinting", of crime stains (in forensics) and of tissues (in transplant patients). They are also widely used in kinship analysis (most commonly in paternity testing). Also, microsatellites are used for mapping locations within the genome, specifically in genetic linkage analysis to locate a gene or a mutation responsible for a given trait or disease. As a special case of mapping, they can be used for studies of gene duplication or deletion. Researchers use microsatellites in population genetics and in species conservation projects. Plant geneticists have proposed the use of microsatellites for marker assisted selection of desirable traits in plant breeding.
Cancer diagnosis
In tumour cells, whose controls on replication are damaged, microsatellites may be gained or lost at an especially high frequency during each round of mitosis. Hence a tumour cell line might show a different genetic fingerprint from that of the host tissue, and, especially in colorectal cancer, might present with loss of heterozygosity. Microsatellites analyzed in primary tissue have therefore been routinely used in cancer diagnosis to assess tumour progression. Genome-wide association studies (GWAS) have been used to identify microsatellite biomarkers as a source of genetic predisposition in a variety of cancers.
Forensic and medical fingerprinting
Microsatellite analysis became popular in the field of forensics in the 1990s. It is used for the genetic fingerprinting of individuals where it permits forensic identification (typically matching a crime stain to a victim or perpetrator). It is also used to follow up bone marrow transplant patients.
The microsatellites in use today for forensic analysis are all tetra- or penta-nucleotide repeats, as these give a high degree of error-free data while being short enough to survive degradation in non-ideal conditions. Even shorter repeat sequences would tend to suffer from artifacts such as PCR stutter and preferential amplification, while longer repeat sequences would suffer more from environmental degradation and would amplify less well by PCR. Another forensic consideration is that the person's medical privacy must be respected, so forensic STRs are chosen which are non-coding, do not influence gene regulation, and are not usually trinucleotide STRs, which could be involved in triplet expansion diseases such as Huntington's disease. Forensic STR profiles are stored in DNA databanks such as the UK National DNA Database (NDNAD), the American CODIS or the Australian NCIDD.
Kinship analysis (paternity testing)
Autosomal microsatellites are widely used for DNA profiling in kinship analysis (most commonly in paternity testing). Paternally inherited Y-STRs (microsatellites on the Y chromosome) are often used in genealogical DNA testing.
Genetic linkage analysis
During the 1990s and the first several years of this millennium, microsatellites were the workhorse genetic markers for genome-wide scans to locate any gene responsible for a given phenotype or disease, using segregation observations across generations of a sampled pedigree. Although the rise of higher-throughput and cost-effective single-nucleotide polymorphism (SNP) platforms led to the era of the SNP for genome scans, microsatellites remain highly informative measures of genomic variation for linkage and association studies. Their continued advantage lies in their greater allelic diversity compared with biallelic SNPs, which lets them differentiate alleles within a SNP-defined linkage disequilibrium block of interest. Microsatellites have thus successfully led to the discovery of genes for type 2 diabetes (TCF7L2) and prostate cancer (the 8q21 region).
Population genetics
Microsatellites were popularized in population genetics during the 1990s: as PCR became ubiquitous in laboratories, researchers could design primers and amplify sets of microsatellites at low cost. Their uses are wide-ranging. Because a microsatellite typically has a neutral evolutionary history, it can be used for measuring or inferring bottlenecks, local adaptation, the allelic fixation index (FST), population size, and gene flow. As next-generation sequencing has become more affordable, the use of microsatellites has decreased; however, they remain a crucial tool in the field.
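Two of the summaries mentioned here, expected heterozygosity and the fixation index (FST), are simple functions of allele frequencies. The sketch below uses Nei's gene diversity and the classic FST = (HT − HS)/HT estimator; the repeat-count alleles are invented for illustration.

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Nei's gene diversity: H = 1 - sum(p_i**2) over allele frequencies."""
    n = len(alleles)
    counts = Counter(alleles)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def fst(subpopulations):
    """Wright's F_ST = (H_T - H_S) / H_T, where H_S is the mean subpopulation
    heterozygosity and H_T the heterozygosity of the pooled sample."""
    h_s = sum(expected_heterozygosity(p) for p in subpopulations) / len(subpopulations)
    pooled = [a for p in subpopulations for a in p]
    h_t = expected_heterozygosity(pooled)
    return (h_t - h_s) / h_t

# Hypothetical repeat-count alleles sampled from two populations:
pop1 = [12, 12, 13, 13, 14, 14, 14, 15]
pop2 = [16, 16, 17, 17, 17, 18, 18, 18]
print(round(fst([pop1, pop2]), 3))  # non-overlapping alleles -> high F_ST (~0.185)
```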
Plant breeding
Marker assisted selection or marker aided selection (MAS) is an indirect selection process where a trait of interest is selected based on a marker (morphological, biochemical or DNA/RNA variation) linked to a trait of interest (e.g. productivity, disease resistance, stress tolerance, and quality), rather than on the trait itself. Microsatellites have been proposed to be used as such markers to assist plant breeding.
Analysis
Repetitive DNA is not easily analysed by next-generation DNA sequencing methods, since some technologies struggle with homopolymeric tracts. A variety of software approaches have been created for the analysis of raw next-generation DNA sequencing reads to determine the genotype and variants at repetitive loci. Microsatellites can be analysed and verified by established PCR amplification and amplicon size determination, sometimes followed by Sanger DNA sequencing.
In forensics, the analysis is performed by extracting nuclear DNA from the cells of a sample of interest, then amplifying specific polymorphic regions of the extracted DNA by means of the polymerase chain reaction. Once these sequences have been amplified, they are resolved either through gel electrophoresis or capillary electrophoresis, which allows the analyst to determine how many repeats of the microsatellite sequence in question are present. If the DNA was resolved by gel electrophoresis, the DNA can be visualized either by silver staining (low sensitivity, safe, inexpensive), by an intercalating dye such as ethidium bromide (fairly sensitive, moderate health risks, inexpensive), or, as most modern forensics labs do, with fluorescent dyes (highly sensitive, safe, expensive). Instruments built to resolve microsatellite fragments by capillary electrophoresis also use fluorescent dyes. Forensic profiles are stored in major databanks. The British database for microsatellite loci identification was originally based on the British SGM+ system using 10 loci and a sex marker. The Americans increased this number to 13 loci. The Australian database is called the NCIDD, and since 2013 it has been using 18 core markers for DNA profiling.
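Sizing on a gel or capillary yields an amplicon length in base pairs, which the analyst converts to a repeat count using the known flanking distance between the primers. A toy calculation follows; the flanking and motif sizes are made up, and real loci also show microvariant alleles (e.g. 9.3 at TH01) that are not whole repeat numbers, which this simplified function rejects.

```python
def repeat_count(amplicon_bp, flanking_bp, unit_bp):
    """Convert a sized PCR fragment to a repeat number.

    amplicon_bp: measured fragment length
    flanking_bp: combined length of the two primer-to-repeat flanks
                 (known from the assay design)
    unit_bp:     repeat unit length (4 for a tetranucleotide STR)
    """
    repeat_bp = amplicon_bp - flanking_bp
    if repeat_bp % unit_bp:
        raise ValueError("off-ladder fragment: not a whole number of repeats")
    return repeat_bp // unit_bp

# Hypothetical tetranucleotide locus with 120 bp of combined flanking sequence:
print(repeat_count(amplicon_bp=160, flanking_bp=120, unit_bp=4))  # 10 repeats
```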
Amplification
Microsatellites can be amplified for identification by the polymerase chain reaction (PCR) process, using the unique sequences of flanking regions as primers. DNA is repeatedly denatured at a high temperature to separate the double strand, then cooled to allow annealing of primers and the extension of nucleotide sequences through the microsatellite. This process results in production of enough DNA to be visible on agarose or polyacrylamide gels; only small amounts of DNA are needed for amplification because in this way thermocycling creates an exponential increase in the replicated segment. With the abundance of PCR technology, primers that flank microsatellite loci are simple and quick to use, but the development of correctly functioning primers is often a tedious and costly process.
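The exponential increase mentioned above is, in the ideal case, simply a doubling per cycle, so the number of cycles needed to reach a visible amount of product can be estimated directly. The starting and target copy numbers below are illustrative.

```python
import math

def cycles_needed(start_copies, target_copies):
    """Ideal PCR doubles the template each cycle: N = N0 * 2**cycles."""
    return math.ceil(math.log2(target_copies / start_copies))

# From ~100 template molecules to ~10 billion (roughly gel-visible):
print(cycles_needed(100, 1e10))  # 27 cycles
print(100 * 2 ** 27)             # 13,421,772,800 copies
```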
Design of microsatellite primers
If searching for microsatellite markers in specific regions of a genome, for example within a particular intron, primers can be designed manually. This involves searching the genomic DNA sequence for microsatellite repeats, which can be done by eye or by using automated tools such as RepeatMasker. Once the potentially useful microsatellites are determined, the flanking sequences can be used to design oligonucleotide primers which will amplify the specific microsatellite repeat in a PCR.
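Manual design, as described, amounts to locating the repeat and taking unique sequence on either side. The sketch below extracts candidate flanking primers naively (fixed length, reverse complement for the reverse primer); the sequence and coordinates are invented, and real design would also check melting temperature, specificity, and secondary structure, typically with a tool such as Primer3.

```python
def flanking_primers(seq, repeat_start, repeat_end, primer_len=20):
    """Pick naive forward/reverse primers flanking a repeat tract.

    The forward primer is the `primer_len` bases immediately upstream;
    the reverse primer is the reverse complement of the bases
    immediately downstream, so both read 5'->3' toward the repeat.
    """
    comp = str.maketrans("ACGT", "TGCA")
    forward = seq[repeat_start - primer_len:repeat_start]
    downstream = seq[repeat_end:repeat_end + primer_len]
    reverse = downstream.translate(comp)[::-1]
    return forward, reverse

# Hypothetical locus: 20 bp flank + (GATA)x7 + 20 bp flank
seq = "ATGGCTACCTGAGTCCATGA" + "GATA" * 7 + "TTCAGGCTAACGTTAGGCAT"
fwd, rev = flanking_primers(seq, repeat_start=20, repeat_end=20 + 28)
print(fwd, rev)
```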
Random microsatellite primers can be developed by cloning random segments of DNA from the focal species. These random segments are inserted into a plasmid or bacteriophage vector, which is in turn introduced into Escherichia coli bacteria. Colonies are then developed, and screened with fluorescently labelled oligonucleotide sequences that will hybridize to a microsatellite repeat, if present on the DNA segment. If positive clones can be obtained from this procedure, the DNA is sequenced and PCR primers are chosen from sequences flanking such regions to determine a specific locus. This process involves significant trial and error on the part of researchers, as microsatellite repeat sequences must be predicted and primers that are randomly isolated may not display significant polymorphism. Microsatellite loci are widely distributed throughout the genome and can be isolated from semi-degraded DNA of older specimens, as all that is needed is a suitable substrate for amplification through PCR.
More recent techniques involve using oligonucleotide sequences consisting of repeats complementary to repeats in the microsatellite to "enrich" the DNA extracted (microsatellite enrichment). The oligonucleotide probe hybridizes with the repeat in the microsatellite, and the probe/microsatellite complex is then pulled out of solution. The enriched DNA is then cloned as normal, but the proportion of successes will now be much higher, drastically reducing the time required to develop the regions for use. However, which probes to use can be a trial and error process in itself.
ISSR-PCR
ISSR (inter-simple sequence repeat) is a general term for a genome region between microsatellite loci. The complementary sequences to two neighboring microsatellites are used as PCR primers; the variable region between them gets amplified. The limited duration of the amplification cycles during PCR prevents excessive replication of overly long contiguous DNA sequences, so the result is a mix of amplified DNA strands which are generally short but vary considerably in length.
Sequences amplified by ISSR-PCR can be used for DNA fingerprinting. Since an ISSR may be a conserved or nonconserved region, this technique is not useful for distinguishing individuals, but rather for phylogeographic analyses or perhaps for delimiting species; sequence diversity is lower than in SSR-PCR, but still higher than in actual gene sequences. In addition, microsatellite sequencing and ISSR sequencing are mutually assisting, as one produces primers for the other.
Limitations
Repetitive DNA is not easily analysed by next generation DNA sequencing methods, which struggle with homopolymeric tracts. Therefore, microsatellites are normally analysed by conventional PCR amplification and amplicon size determination. The use of PCR means that microsatellite length analysis is prone to PCR limitations like any other PCR-amplified DNA locus. A particular concern is the occurrence of 'null alleles':
Occasionally, within a sample of individuals, such as in paternity-testing casework, a mutation in the DNA flanking the microsatellite can prevent the PCR primer from binding and producing an amplicon (creating a "null allele" in a gel assay). In that case only one allele is amplified (from the non-mutated homologous chromosome), and the individual may then falsely appear to be homozygous. This can cause confusion in paternity casework. It may then be necessary to amplify the microsatellite using a different set of primers. Null alleles are caused especially by mutations at the 3' section, where extension commences.
In species or population analysis, for example in conservation work, PCR primers which amplify microsatellites in one individual or species can work in other species. However, the risk of applying PCR primers across different species is that null alleles become likely whenever sequence divergence is too great for the primers to bind. The species may then artificially appear to have reduced diversity. Null alleles in this case can sometimes be indicated by an excessive frequency of homozygotes, causing deviations from Hardy-Weinberg equilibrium expectations.
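The homozygote-excess signature described above can be screened for directly by comparing the observed homozygote fraction with the one expected from allele frequencies under Hardy-Weinberg equilibrium. This is a rough screen (in practice a chi-squared or exact test would be used), and the genotype data below are invented.

```python
from collections import Counter

def homozygote_excess(genotypes):
    """Observed minus Hardy-Weinberg-expected homozygosity.

    `genotypes` is a list of (allele_a, allele_b) pairs. A large
    positive excess is one classic signature of null alleles.
    """
    alleles = [a for g in genotypes for a in g]
    freqs = {a: c / len(alleles) for a, c in Counter(alleles).items()}
    expected = sum(p ** 2 for p in freqs.values())
    observed = sum(a == b for a, b in genotypes) / len(genotypes)
    return observed - expected

# Invented data: apparent homozygotes are over-represented.
genos = [(12, 12), (12, 12), (14, 14), (12, 14), (14, 14), (12, 12)]
print(round(homozygote_excess(genos), 3))  # positive -> possible null allele
```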
External links
All known disease-causing short tandem repeats
MicroSatellite DataBase
Search tools:
FireMuSat2+
IMEx
Imperfect SSR Finder —find perfect or imperfect SSRs in FASTA sequences.
JSTRING—Java Search for Tandem Repeats In Genomes
Microsatellite repeats finder
MISA—MIcroSAtellite identification tool
MREPATT
Mreps
Phobos—a tandem repeat search tool for perfect and imperfect repeats—the maximum pattern size depends only on computational power
Poly
SciRoKo
SSR Finder
STAR
SERF De Novo Genome Analysis and Tandem Repeats Finder
Tandem Repeats Finder
TandemSWAN
TRED
TROLL
Zebrafish Repeats
Non-coding DNA
Non-coding DNA (ncDNA) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules (e.g. transfer RNA, microRNA, piRNA, ribosomal RNA, and regulatory RNAs). Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression; scaffold attachment regions; origins of DNA replication; centromeres; and telomeres. Some non-coding regions appear to be mostly nonfunctional, such as introns, pseudogenes, intergenic DNA, and fragments of transposons and viruses. Regions that are completely nonfunctional are called junk DNA.
Fraction of non-coding genomic DNA
In bacteria, the coding regions typically take up 88% of the genome. The remaining 12% does not encode proteins, but much of it still has biological function through genes where the RNA transcript is functional (non-coding genes) and regulatory sequences, which means that almost all of the bacterial genome has a function. The amount of coding DNA in eukaryotes is usually a much smaller fraction of the genome because eukaryotic genomes contain large amounts of repetitive DNA not found in prokaryotes. The human genome contains somewhere between 1% and 2% coding DNA. The exact number is not known because there are disputes over the number of functional coding exons and over the total size of the human genome. This means that 98–99% of the human genome consists of non-coding DNA, and this includes many functional elements such as non-coding genes and regulatory sequences.
Genome size in eukaryotes can vary over a wide range, even between closely related species. This puzzling observation was originally known as the C-value Paradox where "C" refers to the haploid genome size. The paradox was resolved with the discovery that most of the differences were due to the expansion and contraction of repetitive DNA and not the number of genes. Some researchers speculated that this repetitive DNA was mostly junk DNA. The reasons for the changes in genome size are still being worked out and this problem is called the C-value Enigma.
This led to the observation that the number of genes does not seem to correlate with perceived notions of complexity, because gene number seems to be relatively constant, an issue termed the G-value Paradox. For example, the genome of the unicellular Polychaos dubium (formerly known as Amoeba dubia) has been reported to contain more than 200 times the amount of DNA in humans (i.e. more than 600 billion base pairs vs. a bit more than 3 billion in humans). The pufferfish Takifugu rubripes genome is only about one eighth the size of the human genome, yet seems to have a comparable number of genes. Genes take up about 30% of the pufferfish genome and the coding DNA is about 10%. (Non-coding DNA = 90%.) The reduced size of the pufferfish genome is due to a reduction in the length of introns and less repetitive DNA.
Utricularia gibba, a bladderwort plant, has a very small nuclear genome (100.7 Mb) compared to most plants. It likely evolved from an ancestral genome that was 1,500 Mb in size. The bladderwort genome has roughly the same number of genes as other plants but the total amount of coding DNA comes to about 30% of the genome.
The remainder of the genome (70% non-coding DNA) consists of promoters and regulatory sequences that are shorter than those in other plant species. The genes contain introns but there are fewer of them and they are smaller than the introns in other plant genomes. There are noncoding genes, including many copies of ribosomal RNA genes. The genome also contains telomere sequences and centromeres as expected. Much of the repetitive DNA seen in other eukaryotes has been deleted from the bladderwort genome since that lineage split from those of other plants. About 59% of the bladderwort genome consists of transposon-related sequences but since the genome is so much smaller than other genomes, this represents a considerable reduction in the amount of this DNA. The authors of the original 2013 article note that claims of additional functional elements in the non-coding DNA of animals do not seem to apply to plant genomes.
According to a New York Times article, during the evolution of this species, "... genetic junk that didn't serve a purpose was expunged, and the necessary stuff was kept." According to Victor Albert of the University of Buffalo, the plant is able to expunge its so-called junk DNA and "have a perfectly good multicellular plant with lots of different cells, organs, tissue types and flowers, and you can do it without the junk. Junk is not needed."
Types of non-coding DNA sequences
Noncoding genes
There are two types of genes: protein coding genes and noncoding genes. Noncoding genes are an important part of non-coding DNA and they include genes for transfer RNA and ribosomal RNA. These genes were discovered in the 1960s. Prokaryotic genomes contain genes for a number of other noncoding RNAs but noncoding RNA genes are much more common in eukaryotes.
Typical classes of noncoding genes in eukaryotes include genes for small nuclear RNAs (snRNAs), small nucleolar RNAs (snoRNAs), microRNAs (miRNAs), short interfering RNAs (siRNAs), PIWI-interacting RNAs (piRNAs), and long noncoding RNAs (lncRNAs). In addition, there are a number of unique RNA genes that produce catalytic RNAs.
Noncoding genes account for only a few percent of prokaryotic genomes, but they can represent a vastly higher fraction in eukaryotic genomes. In humans, the noncoding genes take up at least 6% of the genome, largely because there are hundreds of copies of ribosomal RNA genes. Protein-coding genes occupy about 38% of the genome, a fraction that is much higher than the coding region alone because genes contain large introns.
The total number of noncoding genes in the human genome is controversial. Some scientists think that there are only about 5,000 noncoding genes while others believe that there may be more than 100,000 (see the article on Non-coding RNA). The difference is largely due to debate over the number of lncRNA genes.
Promoters and regulatory elements
Promoters are DNA segments near the 5' end of the gene where transcription begins. They are the sites where RNA polymerase binds to initiate RNA synthesis. Every gene has a noncoding promoter.
Regulatory elements are sites that control the transcription of a nearby gene. They are almost always sequences where transcription factors bind to DNA and these transcription factors can either activate transcription (activators) or repress transcription (repressors). Regulatory elements were discovered in the 1960s and their general characteristics were worked out in the 1970s by studying specific transcription factors in bacteria and bacteriophage.
Promoters and regulatory sequences represent an abundant class of noncoding DNA, but they mostly consist of collections of relatively short sequences, so they do not take up a very large fraction of the genome. The exact amount of regulatory DNA in mammalian genomes is unclear because it is difficult to distinguish between spurious transcription factor binding sites and those that are functional. The binding characteristics of typical DNA-binding proteins were characterized in the 1970s, and the biochemical properties of transcription factors predict that in cells with large genomes, the majority of binding sites will not be biologically functional.
Many regulatory sequences occur near promoters, usually upstream of the transcription start site of the gene. Some occur within a gene and a few are located downstream of the transcription termination site. In eukaryotes, there are some regulatory sequences that are located at a considerable distance from the promoter region. These distant regulatory sequences are often called enhancers but there is no rigorous definition of enhancer that distinguishes it from other transcription factor binding sites.
Introns
Introns are the parts of a gene that are transcribed into the precursor RNA sequence, but ultimately removed by RNA splicing during the processing to mature RNA. Introns are found in both types of genes: protein-coding genes and noncoding genes. They are present in prokaryotes but they are much more common in eukaryotic genomes.
Group I and group II introns take up only a small percentage of the genome when they are present. Spliceosomal introns are only found in eukaryotes, and they can represent a substantial proportion of the genome. In humans, for example, introns in protein-coding genes cover 37% of the genome. Combining that with about 1% coding sequences means that protein-coding genes occupy about 38% of the human genome. The calculations for noncoding genes are more complicated because there is considerable dispute over the total number of noncoding genes, but taking only the well-defined examples means that noncoding genes occupy at least 6% of the genome.
Untranslated regions
The standard biochemistry and molecular biology textbooks describe non-coding nucleotides in mRNA located between the 5' end of the gene and the translation initiation codon. These regions are called 5'-untranslated regions (5'-UTRs). Similar regions called 3'-untranslated regions (3'-UTRs) are found at the end of the gene. The 5'-UTRs and 3'-UTRs are very short in bacteria, but they can be several hundred nucleotides in length in eukaryotes. They contain short elements that control the initiation of translation (5'-UTRs) and transcription termination (3'-UTRs), as well as regulatory elements that may control mRNA stability, processing, and targeting to different regions of the cell.
Origins of replication
DNA synthesis begins at specific sites called origins of replication. These are regions of the genome where the DNA replication machinery is assembled and the DNA is unwound to begin DNA synthesis. In most cases, replication proceeds in both directions from the replication origin.
The main features of replication origins are sequences where specific initiation proteins are bound. A typical replication origin covers about 100-200 base pairs of DNA. Prokaryotes have one origin of replication per chromosome or plasmid but there are usually multiple origins in eukaryotic chromosomes. The human genome contains about 100,000 origins of replication representing about 0.3% of the genome.
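The 0.3% figure follows directly from the counts just given; a one-line sanity check, using an approximate haploid human genome size of 3.1 billion bp and the low end of the 100-200 bp range:

```python
origins = 100_000      # approximate number of human replication origins
bp_per_origin = 100    # low end of the 100-200 bp range quoted above
genome_bp = 3.1e9      # approximate haploid human genome size
print(origins * bp_per_origin / genome_bp)  # ~0.0032, i.e. about 0.3%
```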
Centromeres
Centromeres are the sites where spindle fibers attach to newly replicated chromosomes in order to segregate them into daughter cells when the cell divides. Each eukaryotic chromosome has a single functional centromere that is seen as a constricted region in a condensed metaphase chromosome. Centromeric DNA consists of a number of repetitive DNA sequences that often take up a significant fraction of the genome because each centromere can be millions of base pairs in length. In humans, for example, the sequences of all 24 centromeres have been determined and they account for about 6% of the genome. However, it is unlikely that all of this noncoding DNA is essential since there is considerable variation in the total amount of centromeric DNA in different individuals. Centromeres are another example of functional noncoding DNA sequences that have been known for almost half a century and it is likely that they are more abundant than coding DNA.
Telomeres
Telomeres are regions of repetitive DNA at the end of a chromosome which provide protection from chromosomal deterioration during DNA replication. Recent studies have shown that telomeres function to aid in their own stability. Telomeric repeat-containing RNAs (TERRA) are transcripts derived from telomeres. TERRA has been shown to maintain telomerase activity and lengthen the ends of chromosomes.
Scaffold attachment regions
Both prokaryotic and eukaryotic genomes are organized into large loops of protein-bound DNA. In eukaryotes, the bases of the loops are called scaffold attachment regions (SARs), and they consist of stretches of DNA that bind an RNA/protein complex to stabilize the loop. There are about 100,000 loops in the human genome, and each SAR consists of about 100 bp of DNA, so the total amount of DNA devoted to SARs accounts for about 0.3% of the human genome.
Pseudogenes
Pseudogenes are mostly former genes that have become non-functional due to mutation, but the term also refers to inactive DNA sequences that are derived from RNAs produced by functional genes (processed pseudogenes). Pseudogenes are only a small fraction of noncoding DNA in prokaryotic genomes because they are eliminated by negative selection. In some eukaryotes, however, pseudogenes can accumulate because selection is not powerful enough to eliminate them (see Nearly neutral theory of molecular evolution).
The human genome contains about 15,000 pseudogenes derived from protein-coding genes and an unknown number derived from noncoding genes. They may cover a substantial fraction of the genome (~5%) since many of them contain former intron sequences.
Pseudogenes are junk DNA by definition and they evolve at the neutral rate as expected for junk DNA. Some former pseudogenes have secondarily acquired a function and this leads some scientists to speculate that most pseudogenes are not junk because they have a yet-to-be-discovered function.
Repeat sequences, transposons and viral elements
Transposons and retrotransposons are mobile genetic elements. Retrotransposon repeated sequences, which include long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), account for a large proportion of the genomic sequences in many species. Alu sequences, classified as a short interspersed nuclear element, are the most abundant mobile elements in the human genome. Some examples have been found of SINEs exerting transcriptional control of some protein-encoding genes.
Endogenous retrovirus sequences are the product of reverse transcription of retrovirus genomes into the genomes of germ cells. Mutation within these retro-transcribed sequences can inactivate the viral genome.
Over 8% of the human genome is made up of (mostly decayed) endogenous retrovirus sequences, as part of the over 42% fraction that is recognizably derived from retrotransposons, while another 3% can be identified as the remains of DNA transposons. Much of the remaining half of the genome that is currently without an explained origin is expected to have originated in transposable elements that were active so long ago (> 200 million years) that random mutations have rendered them unrecognizable. Genome size variation in at least two kinds of plants is mostly the result of retrotransposon sequences.
Highly repetitive DNA
Highly repetitive DNA consists of short stretches of DNA that are repeated many times in tandem (one after the other). The repeat segments are usually between 2 bp and 10 bp but longer ones are known. Highly repetitive DNA is rare in prokaryotes but common in eukaryotes, especially those with large genomes. It is sometimes called satellite DNA.
Most of the highly repetitive DNA is found in centromeres and telomeres (see above) and most of it is functional although some might be redundant. The other significant fraction resides in short tandem repeats (STRs; also called microsatellites) consisting of short stretches of a simple repeat such as ATC. There are about 350,000 STRs in the human genome and they are scattered throughout the genome with an average length of about 25 repeats.
Variations in the number of STR repeats can cause genetic diseases when they lie within a gene but most of these regions appear to be non-functional junk DNA where the number of repeats can vary considerably from individual to individual. This is why these length differences are used extensively in DNA fingerprinting.
Junk DNA
Junk DNA is DNA that has no biologically relevant function such as pseudogenes and fragments of once active transposons. Bacteria and viral genomes have very little junk DNA but some eukaryotic genomes may have a substantial amount of junk DNA. The exact amount of nonfunctional DNA in humans and other species with large genomes has not been determined and there is considerable controversy in the scientific literature.
The nonfunctional DNA in bacterial genomes is mostly located in the intergenic fraction of non-coding DNA but in eukaryotic genomes it may also be found within introns. There are many examples of functional DNA elements in non-coding DNA, and it is erroneous to equate non-coding DNA with junk DNA.
Genome-wide association studies (GWAS) and non-coding DNA
Genome-wide association studies (GWAS) identify linkages between alleles and observable traits such as phenotypes and diseases. Most of the associations are between single-nucleotide polymorphisms (SNPs) and the trait being examined and most of these SNPs are located in non-functional DNA. The association establishes a linkage that helps map the DNA region responsible for the trait but it does not necessarily identify the mutations causing the disease or phenotypic difference.
SNPs that are tightly linked to traits are the ones most likely to identify a causal mutation. (The association is referred to as tight linkage disequilibrium.) About 12% of these polymorphisms are found in coding regions; about 40% are located in introns; and most of the rest are found in intergenic regions, including regulatory sequences.
See also
Conserved non-coding sequence
Eukaryotic chromosome fine structure
Gene-centered view of evolution
Gene regulatory network
Intergenic region
Intragenomic conflict
Phylogenetic footprinting
Transcriptome
Non-coding RNA
Gene desert
The Onion Test
External links
Plant DNA C-values Database at Royal Botanic Gardens, Kew
Fungal Genome Size Database at Estonian Institute of Zoology and Botany
ENCODE: The human encyclopaedia at Nature ENCODE
DNA profiling
DNA profiling (also called DNA fingerprinting and genetic fingerprinting) is the process of determining an individual's deoxyribonucleic acid (DNA) characteristics. DNA analysis intended to identify a species, rather than an individual, is called DNA barcoding.
DNA profiling is a forensic technique in criminal investigations, comparing criminal suspects' profiles to DNA evidence so as to assess the likelihood of their involvement in the crime. It is also used in paternity testing, to establish immigration eligibility, and in genealogical and medical research. DNA profiling has also been used in the study of animal and plant populations in the fields of zoology, botany, and agriculture.
Background
Starting in the mid 1970s, scientific advances allowed the use of DNA as a material for the identification of an individual. The first patent covering the direct use of DNA variation for forensics (US5593832A) was filed by Jeffrey Glassberg in 1983, based upon work he had done while at Rockefeller University in the United States in 1981.
British geneticist Sir Alec Jeffreys independently developed a process for DNA profiling in 1984 while working in the Department of Genetics at the University of Leicester. Jeffreys discovered that a DNA examiner could establish patterns in unknown DNA. These patterns were a part of inherited traits that could be used to advance the field of relationship analysis. These discoveries led to the first use of DNA profiling in a criminal case.
The process, developed by Jeffreys in conjunction with Peter Gill and Dave Werrett of the Forensic Science Service (FSS), was first used forensically in the solving of the murder of two teenagers who had been raped and murdered in Narborough, Leicestershire in 1983 and 1986. In the murder inquiry, led by Detective David Baker, the DNA contained within blood samples obtained voluntarily from around 5,000 local men who willingly assisted Leicestershire Constabulary with the investigation resulted in the exoneration of Richard Buckland, an initial suspect who had confessed to one of the crimes, and the subsequent conviction of Colin Pitchfork on January 2, 1988. Pitchfork, a local bakery employee, had coerced his coworker Ian Kelly to stand in for him when providing a blood sample—Kelly then used a forged passport to impersonate Pitchfork. Another coworker reported the deception to the police. Pitchfork was arrested, and his blood was sent to Jeffreys' lab for processing and profile development. Pitchfork's profile matched that of DNA left by the murderer, confirming Pitchfork's presence at both crime scenes; he pleaded guilty to both murders. Some years later, the chemical company Imperial Chemical Industries (ICI) introduced the first commercially available DNA profiling kit. Despite being a relatively recent field, DNA profiling has had a significant global influence on both the criminal justice system and society.
Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is different that it is possible to distinguish one individual from another, unless they are monozygotic (identical) twins. DNA profiling uses repetitive sequences that are highly variable, called variable number tandem repeats (VNTRs), in particular short tandem repeats (STRs), also known as microsatellites, and minisatellites. VNTR loci are similar between closely related individuals, but are so variable that unrelated individuals are unlikely to have the same VNTRs.
Before VNTRs and STRs, researchers such as Jeffreys used a process called restriction fragment length polymorphism (RFLP). This process typically used large portions of DNA to analyze the differences between two DNA samples. RFLP was among the first technologies used in DNA profiling and analysis. However, as technology has evolved, newer techniques such as STR analysis have emerged and taken the place of older ones like RFLP.
The admissibility of DNA evidence in courts was disputed in the United States in the 1980s and 1990s, but has since become more universally accepted due to improved techniques.
Profiling processes
DNA extraction
When a sample such as blood or saliva is obtained, the DNA is only a small part of what is present in the sample. Before the DNA can be analyzed, it must be extracted from the cells and purified. There are many ways this can be accomplished, but all methods follow the same basic procedure. The cell and nuclear membranes need to be broken up to allow the DNA to be free in solution. Once the DNA is free, it can be separated from all other cellular components. After the DNA has been separated in solution, the remaining cellular debris can then be removed from the solution and discarded, leaving only DNA. The most common methods of DNA extraction include organic extraction (also called phenol–chloroform extraction), Chelex extraction, and solid-phase extraction. Differential extraction is a modified version of extraction in which DNA from two different types of cells can be separated from each other before being purified from the solution. Each method of extraction works well in the laboratory, but analysts typically select their preferred method based on factors such as the cost, the time involved, the quantity of DNA yielded, and the quality of DNA yielded.
RFLP analysis
RFLP stands for restriction fragment length polymorphism and, in terms of DNA analysis, describes a DNA testing method which utilizes restriction enzymes to "cut" the DNA at short, specific sequences throughout the sample. To start off processing in the laboratory, the sample first goes through an extraction protocol, which may vary depending on the sample type or laboratory SOPs (standard operating procedures). Once the DNA has been extracted from the cells within the sample and separated from extraneous cellular materials and any nucleases that would degrade it, the sample can be digested with the desired restriction enzymes to produce discernible fragments. Following the enzyme digestion, a Southern blot is performed. Southern blotting is a size-based separation method performed on a gel with either radioactive or chemiluminescent probes. RFLP could be conducted with single-locus or multi-locus probes (probes which target either one location or multiple locations on the DNA). Incorporating multi-locus probes gave the analysis higher discriminating power, but completing the process could take several days to a week for one sample because of the time required by each step needed to visualize the probes.
Polymerase chain reaction (PCR) analysis
PCR, or polymerase chain reaction, is a widely used molecular biology technique to amplify a specific DNA sequence. It was developed in 1983 by Kary Mullis and is now a common and important technique used in medical and biological research labs for a variety of applications.
Amplification is achieved by a series of three steps (a code sketch of the cycle follows this list):
1. Denaturation: the DNA is heated to 95 °C to dissociate the hydrogen bonds between the complementary base pairs of the double-stranded DNA.
2. Annealing: the reaction is cooled to 50–65 °C, which enables the primers to attach to specific locations on the single-stranded template DNA by way of hydrogen bonding.
3. Extension: at 72 °C, a thermostable DNA polymerase (commonly Taq polymerase) adds nucleotides in the 5'→3' direction, synthesizing the strand complementary to the DNA template.
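A minimal sketch of the cycle described above, treating the protocol as data and assuming ideal efficiency (the product doubles each completed cycle); the cycle count and starting copy number are illustrative.

```python
# One thermal cycle, with the temperatures described above.
CYCLE_STEPS = [
    ("denaturation", 95),  # separate the double-stranded DNA
    ("annealing", 55),     # primers bind the single-stranded template
    ("extension", 72),     # Taq polymerase synthesizes the new strands
]

def run_pcr(template_copies, cycles=30):
    """Ideal PCR: each completed thermal cycle doubles the amplicon count."""
    for _ in range(cycles):
        for step, temp_c in CYCLE_STEPS:
            pass  # a real thermocycler holds each temperature for a set time
        template_copies *= 2
    return template_copies

print(run_pcr(10))  # 10 * 2**30 = 10,737,418,240 copies
```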
STR analysis
The system of DNA profiling used today is based on the polymerase chain reaction (PCR) and uses short tandem repeats (STRs).
From country to country, different STR-based DNA-profiling systems are in use. In North America, systems that amplify the CODIS 20 core loci are almost universal, whereas in the United Kingdom the DNA-17 loci system is in use, and Australia uses 18 core markers.
The true power of STR analysis is in its statistical power of discrimination. Because the 20 loci currently used for discrimination in CODIS are independently assorted (having a certain number of repeats at one locus does not change the likelihood of having any number of repeats at any other locus), the product rule for probabilities can be applied. This means that, if someone has the DNA type ABC, where the three loci are independent, the probability of that individual having that DNA type is the probability of having type A times the probability of having type B times the probability of having type C. This has resulted in the ability to generate match probabilities of 1 in a quintillion (1 × 10¹⁸) or more. However, DNA database searches have shown false DNA profile matches to be much more frequent than expected.
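The product rule described above is a straightforward multiplication over loci; the per-locus genotype frequencies below are invented to show the scale of the resulting numbers, not real population figures.

```python
from math import prod

def match_probability(genotype_freqs):
    """Product rule: with independent loci, the profile frequency is the
    product of the per-locus genotype frequencies."""
    return prod(genotype_freqs)

# Invented per-locus genotype frequencies for a 20-locus profile:
freqs = [0.1] * 20
print(f"1 in {1 / match_probability(freqs):.2e}")  # 1 in 1.00e+20
```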
Y-chromosome analysis
Because of their paternal inheritance, Y-haplotypes provide information about the genetic ancestry of the male population. To investigate this population history, and to provide estimates for haplotype frequencies in criminal casework, the "Y haplotype reference database (YHRD)" was created in 2000 as an online resource. It currently comprises more than 300,000 minimal (8-locus) haplotypes from world-wide populations.
Mitochondrial analysis
mtDNA can be obtained from material such as hair shafts and old bones and teeth.
Issues with forensic DNA samples
When people think of DNA analysis, they often think about television shows like NCIS or CSI, which portray DNA samples coming into a lab and being instantly analyzed, followed by the pulling up of a picture of the suspect within minutes. However, the reality is quite different, and perfect DNA samples are often not collected from the scene of a crime. Homicide victims are frequently left exposed to harsh conditions before they are found, and objects that are used to commit crimes have often been handled by more than one person. The two most prevalent issues that forensic scientists encounter when analyzing DNA samples are degraded samples and DNA mixtures.
Degraded DNA
Before modern PCR methods existed, it was almost impossible to analyze degraded DNA samples. Methods like restriction fragment length polymorphism (RFLP), which was the first technique used for DNA analysis in forensic science, required high-molecular-weight DNA in the sample in order to get reliable data. High-molecular-weight DNA, however, is lacking in degraded samples, as the DNA is too fragmented to carry out RFLP accurately. It was only when polymerase chain reaction techniques were invented that analysis of degraded DNA samples could be carried out. Multiplex PCR in particular made it possible to isolate and amplify the small fragments of DNA that are still left in degraded samples. When multiplex PCR methods are compared to the older methods like RFLP, a vast difference can be seen. Multiplex PCR can theoretically amplify less than 1 ng of DNA, whereas RFLP required at least 100 ng of DNA in order to carry out an analysis.
Low-Template DNA
Low-template DNA occurs when there is less than about 0.1 ng of DNA in a sample. This can lead to more stochastic effects (random events) such as allelic dropout or allelic drop-in, which can alter the interpretation of a DNA profile. These stochastic effects can lead to unequal amplification of the two alleles that come from a heterozygous individual. It is especially important to take low-template DNA into account when dealing with a DNA mixture, because one (or more) of the contributors is likely to have contributed less than the optimal amount of DNA for the PCR to work properly. Therefore, stochastic thresholds are developed for DNA profile interpretation. The stochastic threshold is the minimum peak height (RFU value) in an electropherogram below which allelic dropout may occur. If the peak height is above this threshold, then it is reasonable to assume that allelic dropout has not occurred. For example, if only one peak is seen for a particular locus in the electropherogram but its peak height is above the stochastic threshold, then one can reasonably assume that the individual is homozygous and is not missing a heterozygous partner allele that would otherwise have dropped out due to low-template DNA. Allelic dropout can occur with low-template DNA because there is so little DNA to start with that, at a given locus, a contributor who is a true heterozygote may have one allele fail to amplify, so that allele is lost. Allelic drop-in can also occur with low-template DNA because the stutter peak can sometimes be amplified. Stutter is an artifact of PCR: during the reaction, DNA polymerase repeatedly binds, dissociates, and rebinds as it extends from the primer, and it sometimes rebinds at the short tandem repeat ahead of where it left off, producing a product that is one repeat shorter than the template. If such a stutter product happens to be further amplified, it appears in the electropherogram as an extra allele, leading to allelic drop-in.
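The interpretation rule described above can be phrased as a simple check on peak heights at a locus. This sketch uses an invented 200 RFU threshold; real laboratories validate their own thresholds on their own instruments.

```python
STOCHASTIC_THRESHOLD_RFU = 200  # illustrative; labs validate their own value

def interpret_locus(peak_heights_rfu):
    """Apply a stochastic threshold to the peaks seen at one locus."""
    if len(peak_heights_rfu) >= 2:
        return "heterozygous (two alleles detected)"
    if peak_heights_rfu[0] >= STOCHASTIC_THRESHOLD_RFU:
        return "homozygous (single peak above stochastic threshold)"
    return "inconclusive: possible allelic dropout of a partner allele"

print(interpret_locus([850]))  # tall single peak -> homozygous
print(interpret_locus([120]))  # short single peak -> dropout possible
```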
MiniSTR analysis
In instances in which DNA samples are degraded, such as after intense fires or when all that remains is bone fragments, standard STR testing can be inadequate. When standard STR testing is performed on highly degraded samples, the larger STR loci often drop out, and only partial DNA profiles are obtained. A partial DNA profile can still be a powerful tool, but the probability of a random match is larger than for a full profile. One method that has been developed to analyse degraded DNA samples is miniSTR technology, in which primers are specially designed to bind closer to the STR region.
In normal STR testing, the primers bind to longer sequences that contain the STR region within the segment. MiniSTR analysis, however, targets only the STR location, which results in a DNA product that is much smaller.
By placing the primers closer to the actual STR regions, there is a higher chance of amplifying those regions successfully, so more complete DNA profiles can be obtained from degraded samples. The observation that smaller PCR products give a higher success rate with highly degraded samples was first reported in 1995, when miniSTR technology was used to identify victims of the Waco fire.
DNA mixtures
Mixtures are another common issue that forensic scientists face when analyzing unknown or questioned DNA samples. A mixture is defined as a DNA sample that contains two or more individual contributors. This often occurs when a DNA sample is swabbed from an item handled by more than one person, or when a sample contains both the victim's and the assailant's DNA. The presence of more than one individual in a DNA sample can make it challenging to detect individual profiles, and interpretation of mixtures should only be performed by highly trained individuals. Mixtures containing two or three contributors can be interpreted, though with difficulty; mixtures containing four or more individuals are generally too convoluted for individual profiles to be obtained. One common scenario in which a mixture is obtained is a sexual assault case, where a sample may contain material from the victim, the victim's consensual sexual partners, and the perpetrator(s).
Mixtures can generally be sorted into three categories: Type A, Type B, and Type C. Type A mixtures have alleles with similar peak-heights all around, so the contributors cannot be distinguished from each other. Type B mixtures can be deconvoluted by comparing peak-height ratios to determine which alleles were donated together. Type C mixtures cannot be safely interpreted with current technology because the samples were affected by DNA degradation or having too small a quantity of DNA present.
When looking at an electropherogram, it is possible to determine the number of contributors in less complex mixtures from the number of peaks at each locus. Whereas a single-source profile has only one or two peaks at each locus, a mixture is indicated by three or more peaks at two or more loci. (Three peaks at only a single locus may instead reflect a single contributor who is tri-allelic at that locus.) Two-person mixtures will have between two and four peaks at each locus, and three-person mixtures between three and six. Mixtures become increasingly difficult to deconvolute as the number of contributors increases.
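The counting rule above implies a simple lower bound on the number of contributors: because each person carries at most two alleles per locus, the locus showing the most distinct alleles sets the minimum. A minimal sketch of that calculation, using hypothetical loci and allele calls and ignoring rare tri-allelic single contributors:

```python
import math

# Hypothetical allele calls per locus for a questioned sample.
alleles_per_locus = {
    "D3S1358": ["14", "15", "16", "17"],  # four distinct alleles
    "vWA": ["16", "18"],
    "TH01": ["6", "7", "9.3"],
}

# Each contributor carries at most two alleles per locus, so the locus
# with the most distinct alleles sets a lower bound on the number of
# contributors (rare tri-allelic single contributors are ignored here).
min_contributors = max(
    math.ceil(len(set(calls)) / 2) for calls in alleles_per_locus.values()
)
print("Minimum number of contributors:", min_contributors)  # prints 2
```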
As detection methods in DNA profiling advance, forensic scientists are seeing more DNA samples that contain mixtures, as even the smallest contributor can now be detected by modern tests. The ease with which forensic scientists can interpret DNA mixtures depends largely on the ratio of DNA present from each individual, the genotype combinations, and the total amount of DNA amplified. The DNA ratio is often the most important factor in determining whether a mixture can be interpreted. For example, if a sample has two contributors, it is easy to interpret individual profiles when the ratio of DNA contributed by one person is much higher than that of the second. When a sample has three or more contributors, determining individual profiles becomes extremely difficult. Advancements in probabilistic genotyping may make that sort of determination possible in the future: probabilistic genotyping uses computer software to run through thousands of mathematical computations and produce statistical likelihoods of the individual genotypes found in a mixture.
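As a rough illustration of the likelihood-ratio reasoning behind probabilistic genotyping, the sketch below evaluates one locus of a two-person mixture under two hypotheses: Hp, that the contributors are the victim and the suspect, and Hd, that they are the victim and an unknown person. The allele frequencies and genotypes are hypothetical, and real software models far more (peak heights, stutter, dropout and drop-in, across many loci).

```python
# Toy single-locus likelihood ratio for a two-person mixture.
# Allele frequencies and genotypes are hypothetical; real probabilistic
# genotyping also models peak heights, stutter, dropout and drop-in
# across many loci simultaneously.

freq = {"14": 0.10, "15": 0.25, "16": 0.20, "17": 0.08}

evidence = {"14", "15", "16", "17"}  # alleles observed in the mixture
victim = {"14", "15"}                # known contributor
suspect = {"16", "17"}               # person of interest

# Hp: the mixture is victim + suspect, which explains the evidence fully.
p_hp = 1.0 if (victim | suspect) == evidence else 0.0

# Hd: the mixture is victim + unknown. The unknown must carry exactly the
# alleles the victim does not explain, i.e. the 16,17 heterozygote.
p_hd = 2 * freq["16"] * freq["17"]  # Hardy-Weinberg heterozygote frequency

print("LR =", p_hp / p_hd)  # about 31, favouring Hp
```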
DNA profiling in plants
Plant DNA profiling (fingerprinting) is a method for identifying cultivars using molecular marker techniques. It has gained attention because of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and the Convention on Biological Diversity (CBD).
Advantages of plant DNA profiling
DNA fingerprinting of medicinal plants makes identification, authentication, specific distinction, detection of adulteration and identification of phytoconstituents possible.
DNA-based markers are critical for these applications and will shape the future of scientific study in pharmacognosy.
It also helps determine whether traits (such as seed size and leaf color) are likely to improve the offspring.
DNA databases
An early application of a DNA database was the compilation of a Mitochondrial DNA Concordance, prepared by Kevin W. P. Miller and John L. Dawson at the University of Cambridge from 1996 to 1999 from data collected as part of Miller's PhD thesis. There are now several DNA databases in existence around the world. Some are private, but most of the largest databases are government-controlled. The United States maintains the largest DNA database, with the Combined DNA Index System (CODIS) holding over 13 million records as of May 2018. The United Kingdom maintains the National DNA Database (NDNAD), which is of similar size, despite the UK's smaller population. The size of this database, and its rate of growth, are giving concern to civil liberties groups in the UK, where police have wide-ranging powers to take samples and retain them even in the event of acquittal. The Conservative–Liberal Democrat coalition partially addressed these concerns with part 1 of the Protection of Freedoms Act 2012, under which DNA samples must be deleted if suspects are acquitted or not charged, except in relation to certain (mostly serious or sexual) offenses. Public discourse around the introduction of advanced forensic techniques (such as genetic genealogy using public genealogy databases and DNA phenotyping approaches) has been limited, disjointed, unfocused, and raises issues of privacy and consent that may warrant the establishment of additional legal protections.
The U.S. Patriot Act of the United States provides a means for the U.S. government to get DNA samples from suspected terrorists. DNA information from crimes is collected and deposited into the CODIS database, which is maintained by the FBI. CODIS enables law enforcement officials to test DNA samples from crimes for matches within the database, providing a means of finding specific biological profiles associated with collected DNA evidence.
When a match is made from a national DNA databank to link a crime scene to an offender who has provided a DNA sample to a database, that link is often referred to as a cold hit. A cold hit is of value in referring the police agency to a specific suspect but is of less evidential value than a DNA match made from outside the DNA databank.
FBI agents cannot legally store DNA of a person not convicted of a crime; DNA collected from a suspect who is not later convicted must be disposed of and not entered into the database. In 1998, a man residing in the UK was arrested on an accusation of burglary. His DNA was taken and tested, and he was later released. Nine months later, this man's DNA was accidentally and illegally entered into the DNA database. New DNA is automatically compared to DNA found at cold-case crime scenes and, in this case, the man was found to match DNA from a rape and assault case one year earlier. The government then prosecuted him for these crimes. During the trial, removal of the DNA match from the evidence was requested because it had been illegally entered into the database; the request was carried out.
The DNA of the perpetrator, collected from victims of rape, can be stored for years until a match is found. In 2014, to address this problem, Congress extended a bill that helps states deal with "a backlog" of evidence.
DNA profiling databases in plants
PIDS
PIDS (Plant International DNA-fingerprinting System) is an open-source, free-software web server for plant DNA fingerprinting.
It manages large amounts of microsatellite DNA fingerprint data, performs genetic analyses, and automates data collection, storage and maintenance, decreasing human error and increasing efficiency.
The system can be tailored to specific laboratory needs, making it a valuable tool for plant breeders, forensic science, and human fingerprint recognition.
It keeps track of experiments, standardizes data and promotes inter-database communication.
It also supports the regulation of variety quality, the preservation of variety rights and the use of molecular markers in breeding by providing location statistics and merging, comparison and genetic analysis functions.
Considerations in evaluating DNA evidence
When using RFLP, the theoretical risk of a coincidental match is 1 in 100 billion (100,000,000,000), although the practical risk is closer to 1 in 1,000 because monozygotic twins make up 0.2% of the human population. Moreover, the rate of laboratory error is almost certainly higher than that, and actual laboratory procedures often do not reflect the theory under which the coincidence probabilities were computed. For example, coincidence probabilities may be calculated based on the probability that markers in two samples have bands in precisely the same location, but a laboratory worker may conclude that similar, but not precisely identical, band patterns result from identical genetic samples with some imperfection in the agarose gel. In that case, the worker increases the coincidence risk by expanding the criteria for declaring a match. Studies conducted in the 2000s reported relatively high error rates, which may be cause for concern. In the early days of genetic fingerprinting, the population data needed to compute a match probability accurately were sometimes unavailable, and between 1992 and 1996 arbitrarily low ceilings were controversially placed on the match probabilities used in RFLP analysis, rather than the higher theoretically computed ones.
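For comparison with the coincidence figures above, a random match probability for a multi-locus profile is conventionally computed with the product rule, multiplying genotype frequencies across loci under Hardy-Weinberg assumptions. The sketch below uses hypothetical round-number frequencies; casework calculations rely on validated population databases and substructure corrections.

```python
# Product-rule random match probability under Hardy-Weinberg assumptions.
# The loci and allele frequencies are hypothetical round numbers; casework
# uses validated population databases with substructure corrections.

profile = [
    # (zygosity, allele frequency p, allele frequency q) for each locus
    ("het", 0.10, 0.25),
    ("hom", 0.20, 0.20),
    ("het", 0.05, 0.15),
]

rmp = 1.0
for zygosity, p, q in profile:
    # Genotype frequency: p^2 for a homozygote, 2pq for a heterozygote.
    rmp *= p * p if zygosity == "hom" else 2 * p * q

print(f"Random match probability: 1 in {1 / rmp:,.0f}")  # 1 in 33,333
```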
Evidence of genetic relationship
It is possible to use DNA profiling as evidence of genetic relationship, although such evidence varies in strength from weak to positive; testing that shows no relationship, however, is conclusive. Further, while almost all individuals have a single and distinct set of genes, ultra-rare individuals known as "chimeras" have at least two different sets of genes. There have been two cases in which DNA profiling falsely suggested that a mother was unrelated to her children.
Fake DNA evidence
In a study conducted by the life science company Nucleix and published in the journal Forensic Science International, scientists found that an in vitro synthesized sample of DNA matching any desired genetic profile can be constructed using standard molecular biology techniques without obtaining any actual tissue from that person.
DNA evidence in criminal trials
Familial DNA searching
Familial DNA searching (sometimes referred to as "familial DNA" or "familial DNA database searching") is the practice of creating new investigative leads in cases where DNA evidence found at the scene of a crime (forensic profile) strongly resembles that of an existing DNA profile (offender profile) in a state DNA database but there is not an exact match. After all other leads have been exhausted, investigators may use specially developed software to compare the forensic profile to all profiles taken from a state's DNA database to generate a list of those offenders already in the database who are most likely to be a very close relative of the individual whose DNA is in the forensic profile.
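A minimal sketch of the general idea follows, ranking database profiles by how many loci share an allele with the forensic profile. Actual familial-search software computes kinship likelihood ratios rather than raw sharing counts, and the names, loci and genotypes below are hypothetical.

```python
# Illustrative ranking of database profiles by allele sharing with a
# forensic profile. Real familial-search software computes kinship
# likelihood ratios; the names, loci and genotypes here are hypothetical.

forensic = {"D8": {"12", "14"}, "TH01": {"6", "9.3"}, "FGA": {"21", "24"}}

database = {
    "offender A": {"D8": {"12", "13"}, "TH01": {"6", "7"}, "FGA": {"21", "22"}},
    "offender B": {"D8": {"10", "11"}, "TH01": {"8", "9"}, "FGA": {"20", "25"}},
}

def shared_loci(profile_a, profile_b):
    """Count loci at which two profiles share at least one allele."""
    return sum(bool(profile_a[locus] & profile_b[locus]) for locus in profile_a)

ranked = sorted(database.items(),
                key=lambda entry: shared_loci(forensic, entry[1]),
                reverse=True)
for name, prof in ranked:
    print(name, shared_loci(forensic, prof))  # offender A: 3, offender B: 0
```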
Familial DNA database searching was first used in an investigation leading to the conviction of Jeffrey Gafoor for the murder of Lynette White in the United Kingdom on 4 July 2003. DNA evidence was matched to Gafoor's nephew, who at 14 years old had not been born at the time of the murder in 1988. It was used again in 2004 to find a man who threw a brick from a motorway bridge and hit a lorry driver, killing him. DNA found on the brick matched that found at the scene of a car theft earlier in the day, but there were no good matches on the national DNA database. A wider search found a partial match to an individual; on being questioned, this man revealed he had a brother, Craig Harman, who lived very close to the original crime scene. Harman voluntarily submitted a DNA sample and confessed when it matched the sample from the brick. As of 2011, familial DNA database searching is not conducted on a national level in the United States, where states determine how and when to conduct familial searches. The first familial DNA search with a subsequent conviction in the United States was conducted in Denver, Colorado, in 2008, using software developed under the leadership of Denver District Attorney Mitch Morrissey and Denver Police Department Crime Lab Director Gregg LaBerge. California was the first state to implement a policy for familial searching under then-Attorney General Jerry Brown, who later became Governor. In his role as consultant to the Familial Search Working Group of the California Department of Justice, former Alameda County prosecutor Rock Harmon is widely considered to have been the catalyst in the adoption of familial search technology in California. The technique was used to catch the Los Angeles serial killer known as the "Grim Sleeper" in 2010. It was not a witness or informant that tipped off law enforcement to the identity of the "Grim Sleeper", who had eluded police for more than two decades, but DNA from the suspect's own son, who had been arrested, convicted on a felony weapons charge, and swabbed for DNA the year before. When his DNA was entered into the database of convicted felons, detectives were alerted to a partial match to evidence found at the "Grim Sleeper" crime scenes. Lonnie David Franklin Jr., also known as the Grim Sleeper, was charged with ten counts of murder and one count of attempted murder. More recently, familial DNA led to the arrest of 21-year-old Elvis Garcia on charges of sexual assault and false imprisonment of a woman in Santa Cruz in 2008. In March 2011 Virginia Governor Bob McDonnell announced that Virginia would begin using familial DNA searches.
At a press conference in Virginia on 7 March 2011, regarding the East Coast Rapist, Prince William County prosecutor Paul Ebert and Fairfax County Police Detective John Kelly said the case would have been solved years ago if Virginia had used familial DNA searching. Aaron Thomas, the suspected East Coast Rapist, was arrested in connection with the rape of 17 women from Virginia to Rhode Island, but familial DNA was not used in the case.
Critics of familial DNA database searches argue that the technique is an invasion of an individual's Fourth Amendment rights. Privacy advocates are petitioning for DNA database restrictions, arguing that the only fair way to search for possible DNA matches to relatives of offenders or arrestees would be to have a population-wide DNA database. Some scholars have pointed out that the privacy concerns surrounding familial searching are similar in some respects to other police search techniques, and most have concluded that the practice is constitutional. The Ninth Circuit Court of Appeals in United States v. Pool (vacated as moot) suggested that this practice is somewhat analogous to a witness looking at a photograph of one person and stating that it looked like the perpetrator, which leads law enforcement to show the witness photos of similar looking individuals, one of whom is identified as the perpetrator.
Critics also state that racial profiling could occur on account of familial DNA testing. In the United States, the conviction rates of racial minorities are much higher than that of the overall population. It is unclear whether this is due to discrimination from police officers and the courts, as opposed to a simple higher rate of offence among minorities. Arrest-based databases, which are found in the majority of the United States, lead to an even greater level of racial discrimination. An arrest, as opposed to conviction, relies much more heavily on police discretion.
In one example of the technique's use, investigators with the Denver District Attorney's Office successfully identified a suspect in a property theft case using a familial DNA search: the suspect's blood left at the scene of the crime strongly resembled that of a current Colorado Department of Corrections prisoner.
Partial matches
Partial DNA matches are the result of moderate stringency CODIS searches that produce a potential match that shares at least one allele at every locus. Partial matching does not involve the use of familial search software, such as those used in the United Kingdom and the United States, or additional Y-STR analysis and therefore often misses sibling relationships. Partial matching has been used to identify suspects in several cases in both countries and has also been used as a tool to exonerate the falsely accused. Darryl Hunt was wrongly convicted in connection with the rape and the murder of a young woman in 1984 in North Carolina.
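The moderate-stringency criterion described above can be stated directly in code; the profiles below are hypothetical, and real CODIS searches apply additional rules.

```python
def is_partial_match(profile_a, profile_b):
    """Moderate-stringency criterion: at least one shared allele per locus."""
    return all(profile_a[locus] & profile_b[locus] for locus in profile_a)

# Hypothetical three-locus profiles, stored as sets of alleles per locus.
forensic = {"D8": {"12", "14"}, "TH01": {"6", "9.3"}, "FGA": {"21", "24"}}
candidate = {"D8": {"12", "13"}, "TH01": {"6", "7"}, "FGA": {"21", "22"}}

print(is_partial_match(forensic, candidate))  # True: one allele shared per locus
```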
Surreptitious DNA collecting
Police forces may collect DNA samples without a suspect's knowledge, and use it as evidence. The legality of the practice has been questioned in Australia.
In the United States, where it has been accepted, courts often rule that there is no expectation of privacy and cite California v. Greenwood (1988), in which the Supreme Court held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside the curtilage of a home. Critics of this practice underline that this analogy ignores that "most people have no idea that they risk surrendering their genetic identity to the police by, for instance, failing to destroy a used coffee cup. Moreover, even if they do realize it, there is no way to avoid abandoning one's DNA in public."
The United States Supreme Court ruled in Maryland v. King (2013) that DNA sampling of prisoners arrested for serious crimes is constitutional.
In the United Kingdom, the Human Tissue Act 2004 prohibits private individuals from covertly collecting biological samples (hair, fingernails, etc.) for DNA analysis but exempts medical and criminal investigations from the prohibition.
England and Wales
Evidence from an expert who has compared DNA samples must be accompanied by evidence as to the sources of the samples and the procedures for obtaining the DNA profiles. The judge must ensure that the jury understands the significance of DNA matches and mismatches in the profiles. The judge must also ensure that the jury does not confuse the match probability (the probability that a person chosen at random has a DNA profile matching the sample from the scene) with the probability that a person with matching DNA committed the crime. These principles were set out in R v Doheny in 1996.
Juries should weigh conflicting and corroborative evidence using their own common sense, not mathematical formulae such as Bayes' theorem, so as to avoid "confusion, misunderstanding and misjudgment".
Presentation and evaluation of evidence of partial or incomplete DNA profiles
In R v Bates, Moore-Bick LJ said:
DNA testing in the United States
There are state laws on DNA profiling in all 50 states of the United States. Detailed information on database laws in each state can be found at the National Conference of State Legislatures website.
Development of artificial DNA
In August 2009, scientists in Israel raised serious doubts concerning the use of DNA by law enforcement as the ultimate method of identification. In a paper published in the journal Forensic Science International: Genetics, the Israeli researchers demonstrated that it is possible to manufacture DNA in a laboratory, thus falsifying DNA evidence. The scientists fabricated saliva and blood samples, which originally contained DNA from a person other than the supposed donor of the blood and saliva.
The researchers also showed that, using a DNA database, it is possible to take information from a profile and manufacture DNA to match it, and that this can be done without access to any actual DNA from the person whose DNA they are duplicating. The synthetic DNA oligos required for the procedure are common in molecular laboratories.
The New York Times quoted the lead author, Daniel Frumkin, saying, "You can just engineer a crime scene ... any biology undergraduate could perform this". Frumkin perfected a test that can differentiate real DNA samples from fake ones. His test detects epigenetic modifications, in particular, DNA methylation. Seventy percent of the DNA in any human genome is methylated, meaning it contains methyl group modifications within a CpG dinucleotide context. Methylation at the promoter region is associated with gene silencing. The synthetic DNA lacks this epigenetic modification, which allows the test to distinguish manufactured DNA from genuine DNA.
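A toy version of such a methylation-based screen might look as follows; the per-site methylation calls and the 10% cut-off are illustrative assumptions, not parameters of Frumkin's actual assay.

```python
# Toy screen built on the idea that in vitro synthesized DNA carries no
# methylation. The per-CpG-site calls (True = methylated) and the 10%
# cut-off are illustrative assumptions, not parameters of the real assay.

def methylated_fraction(cpg_calls):
    return sum(cpg_calls) / len(cpg_calls)

def looks_synthetic(cpg_calls, cutoff=0.10):
    """Flag a sample whose methylation level is implausibly low."""
    return methylated_fraction(cpg_calls) < cutoff

natural = [True, True, False, True, True, False, True, True]
synthetic = [False] * 8

print(looks_synthetic(natural))    # False
print(looks_synthetic(synthetic))  # True
```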
It is unknown how many police departments, if any, currently use the test. No police lab has publicly announced that it is using the new test to verify DNA results.
Researchers at the University of Tokyo were the first to integrate an artificial DNA replication scheme with a reconstituted gene expression system and micro-compartmentalization, using cell-free materials alone. The system was contained in microscale water-in-oil droplets and subjected to multiple cycles of serial dilution.
Chances of making DNA change on purpose
Overall, this study's artificial genomic DNA, which kept copying itself using self-encoded proteins and improved its own sequence, is a good starting point for building more complex artificial cells. By adding the genes needed for transcription and translation to artificial genomic DNA, it may become possible to make artificial cells that can grow on their own when fed small molecules such as amino acids and nucleotides. Producing useful products such as drugs and food in such artificial cells could be more stable and easier to control than using living organisms.
On July 7, 2008, the American Chemical Society reported that Japanese chemists had created the world's first DNA molecule composed almost entirely of synthetic components.
A nanoparticle-based artificial transcription factor for gene regulation
NanoScript is a nanoparticle-based artificial transcription factor designed to replicate the structure and function of transcription factors (TFs). It was created by attaching functional peptides and small molecules, referred to as synthetic transcription factors, that mimic the various TF domains to gold nanoparticles. NanoScript has been shown to localize to the nucleus and to initiate transcription of a reporter plasmid more than 15-fold. Moreover, NanoScript can transcribe targeted genes on endogenous DNA in a nonviral manner.
To create a nanoscale barcode, three different fluorophores (red, green, and blue) were precisely fixed on the surface of a DNA rod to encode spatial information. Epifluorescence and total internal reflection fluorescence microscopy reliably deciphered the spatial relationships between the fluorophores. By varying the positions of the three fluorophores along the DNA rod, this nanoscale barcode produced 216 distinct fluorescence patterns.
Cases
In 1986, Richard Buckland was exonerated, despite having admitted to the rape and murder of a teenager near Leicester, the city where DNA profiling was first developed. This was the first use of DNA fingerprinting in a criminal investigation, and the first to prove a suspect's innocence. The following year Colin Pitchfork was identified as the perpetrator of the same murder, in addition to another, using the same techniques that had cleared Buckland.
In 1987, genetic fingerprinting was used in a US criminal court for the first time in the trial of a man accused of unlawful intercourse with a mentally disabled 14-year-old female who gave birth to a baby.
In 1987, Florida rapist Tommie Lee Andrews was the first person in the United States to be convicted as a result of DNA evidence, for raping a woman during a burglary; he was convicted on 6 November 1987, and sentenced to 22 years in prison.
In 1990, a violent murder of a young student in Brno was the first criminal case in Czechoslovakia solved by DNA evidence, with the murderer sentenced to 23 years in prison.
In 1992, DNA from a palo verde tree was used to convict Mark Alan Bogan of murder. DNA from seed pods of a tree at the crime scene was found to match that of seed pods found in Bogan's truck. This is the first instance of plant DNA admitted in a criminal case.
In 1994, the claim that Anna Anderson was Grand Duchess Anastasia Nikolaevna of Russia was tested after her death using samples of her tissue that had been stored at a Charlottesville hospital following a medical procedure. The tissue was tested using DNA fingerprinting, and showed that she bore no relation to the Romanovs.
In 1994, Earl Washington, Jr., of Virginia had his death sentence commuted to life imprisonment a week before his scheduled execution date based on DNA evidence. He received a full pardon in 2000 based on more advanced testing.
In 1999, Raymond Easton, a disabled man from Swindon, England, was arrested and detained for seven hours in connection with a burglary. He was released due to an inaccurate DNA match. His DNA had been retained on file after an unrelated domestic incident some time previously.
In 2000, Frank Lee Smith was proved innocent by DNA profiling of the murder of an eight-year-old girl after spending 14 years on death row in Florida, USA. However, he had died of cancer just before his innocence was proven. In view of this, the Florida state governor ordered that in future any death row inmate claiming innocence should have DNA testing.
In May 2000 Gordon Graham murdered Paul Gault at his home in Lisburn, Northern Ireland. Graham was convicted of the murder when his DNA was found on a sports bag left in the house as part of an elaborate ploy to suggest the murder occurred after a burglary had gone wrong. Graham was having an affair with the victim's wife at the time of the murder. It was the first time Low Copy Number DNA was used in Northern Ireland.
In 2001, Wayne Butler was convicted for the murder of Celia Douty. It was the first murder in Australia to be solved using DNA profiling.
In 2002, the body of James Hanratty, hanged in 1962 for the "A6 murder", was exhumed and DNA samples from the body and members of his family were analysed. The results convinced Court of Appeal judges that Hanratty's guilt, which had been strenuously disputed by campaigners, was proved "beyond doubt". Paul Foot and some other campaigners continued to believe in Hanratty's innocence and argued that the DNA evidence could have been contaminated, noting that the small DNA samples from items of clothing, kept in a police laboratory for over 40 years "in conditions that do not satisfy modern evidential standards", had had to be subjected to very new amplification techniques in order to yield any genetic profile. However, no DNA other than Hanratty's was found on the evidence tested, contrary to what would have been expected had the evidence indeed been contaminated.
In August 2002, Annalisa Vicentini was shot dead in Tuscany. Bartender Peter Hamkin, 23, was arrested, in Merseyside in March 2003 on an extradition warrant heard at Bow Street Magistrates' Court in London to establish whether he should be taken to Italy to face a murder charge. DNA "proved" he shot her, but he was cleared on other evidence.
In 2003, Welshman Jeffrey Gafoor was convicted of the 1988 murder of Lynette White, when crime scene evidence collected 12 years earlier was re-examined using STR techniques, resulting in a match with his nephew.
In June 2003, because of new DNA evidence, Dennis Halstead, John Kogut and John Restivo won a re-trial on their murder conviction, their convictions were struck down and they were released.
In 2004, DNA testing shed new light into the mysterious 1912 disappearance of Bobby Dunbar, a four-year-old boy who vanished during a fishing trip. He was allegedly found alive eight months later in the custody of William Cantwell Walters, but another woman claimed that the boy was her son, Bruce Anderson, whom she had entrusted in Walters' custody. The courts disbelieved her claim and convicted Walters for the kidnapping. The boy was raised and known as Bobby Dunbar throughout the rest of his life. However, DNA tests on Dunbar's son and nephew revealed the two were not related, thus establishing that the boy found in 1912 was not Bobby Dunbar, whose real fate remains unknown.
In 2005, Gary Leiterman was convicted of the 1969 murder of Jane Mixer, a law student at the University of Michigan, after DNA found on Mixer's pantyhose was matched to Leiterman. DNA in a drop of blood on Mixer's hand was matched to John Ruelas, who was only four years old in 1969 and was never successfully connected to the case in any other way. Leiterman's defense unsuccessfully argued that the unexplained match of the blood spot to Ruelas pointed to cross-contamination and raised doubts about the reliability of the lab's identification of Leiterman.
In November 2008, Anthony Curcio was arrested for masterminding one of the most elaborately planned armored car heists in history. DNA evidence linked Curcio to the crime.
In March 2009, Sean Hodgson (convicted of the 1979 killing of Teresa De Simone, 22, in her car in Southampton) was released after tests proved DNA from the scene was not his. It was later matched to DNA retrieved from the exhumed body of David Lace. Lace had previously confessed to the crime but was not believed by the detectives. He served time in prison for other crimes committed at the same time as the murder and then committed suicide in 1988.
In 2012, a case of babies being switched, many decades earlier, was discovered by accident. After undertaking DNA testing for other purposes, Alice Collins Plebuch was advised that her ancestry appeared to include a significant Ashkenazi Jewish component, despite a belief in her family that they were of predominantly Irish descent. Profiling of Plebuch's genome suggested that it included distinct and unexpected components associated with Ashkenazi, Middle Eastern, and Eastern European populations. This led Plebuch to conduct an extensive investigation, after which she concluded that her father had been switched (possibly accidentally) with another baby soon after birth. Plebuch was also able to identify the biological ancestors of her father.
In 2016 Anthea Ring, abandoned as a baby, was able to use a DNA sample and DNA matching database to discover her deceased mother's identity and roots in County Mayo, Ireland. A recently developed forensic test was subsequently used to capture DNA from saliva left on old stamps and envelopes by her suspected father, uncovered through painstaking genealogy research. The DNA in the first three samples was too degraded to use. However, on the fourth, more than enough DNA was found. The test, which has a degree of accuracy acceptable in UK courts, proved that a man named Patrick Coyne was her biological father.
In 2018, the Buckskin girl (a body found in 1981 in Ohio) was identified as Marcia King from Arkansas using DNA genealogical techniques.
In 2018 Joseph James DeAngelo was arrested as the main suspect for the Golden State Killer using DNA and genealogy techniques.
In 2018, William Earl Talbott II was arrested as a suspect for the 1987 murders of Jay Cook and Tanya Van Cuylenborg with the assistance of genealogical DNA testing. The same genetic genealogist that helped in this case also helped police with 18 other arrests in 2018.
In 2018, with the use of the Next Generation Identification System's enhanced biometric capabilities, the FBI matched the fingerprint of a suspect named Timothy David Nelson and arrested him 20 years after the alleged sexual assault.
DNA evidence as evidence to prove rights of succession to British titles
DNA testing has been used to establish the right of succession to British titles.
Cases
Baron Moynihan
Pringle baronets
See also
Forensic identification
Full genome sequencing
Gene mapping
Harvey v. Horan
Identification (biology)
Innocence Project
Ribotyping
International Society for Forensic Genetics
International Society of Genetic Genealogy
Satellite DNA
External links
Forensic Science, Statistics, and the Law – Blog that tracks scientific and legal developments pertinent to forensic DNA profiling
Create a DNA Fingerprint – PBS.org
In silico simulation of Molecular Biology Techniques – A place to learn typing techniques by simulating them
National DNA Databases in the EU
The Innocence Record, Winston & Strawn LLP/The Innocence Project
Making Sense of DNA Backlogs, 2012: Myths vs. Reality United States Department of Justice
Applied genetics
Biometrics
DNA
Forensic genetics
Forensic statistics
History of genetics
Identity documents
Molecular biology
An invention is a unique or novel device, method, composition, idea or process. An invention may be an improvement upon a machine, product, or process for increasing efficiency or lowering cost. It may also be an entirely new concept. If an idea is unique enough either as a stand-alone invention or as a significant improvement over the work of others, it can be patented. A patent, if granted, gives the inventor a proprietary interest in the patent over a specific period of time, which can be licensed for financial gain.
An inventor creates or discovers an invention. The word inventor comes from the Latin verb invenire, invent-, to find. Although inventing is closely associated with science and engineering, inventors are not necessarily engineers or scientists. Due to advances in artificial intelligence, the term "inventor" no longer exclusively applies to an occupation (see human computers).
Some inventions can be patented. The system of patents was established to encourage inventors by granting a limited-term, limited monopoly on inventions determined to be sufficiently novel, non-obvious, and useful. A patent legally protects the intellectual property rights of the inventor and legally recognizes that a claimed invention is actually an invention. The rules and requirements for patenting an invention vary by country, and the process of obtaining a patent is often expensive.
Another meaning of invention is cultural invention, which is an innovative set of useful social behaviours adopted by people and passed on to others. The Institute for Social Inventions collected many such ideas in magazines and books. Invention is also an important component of artistic and design creativity. Inventions often extend the boundaries of human knowledge, experience or capability.
Types
Inventions are of three kinds: scientific-technological (including medicine), sociopolitical (including economics and law), and humanistic, or cultural.
Scientific-technological inventions include railroads, aviation, vaccination, hybridization, antibiotics, astronautics, holography, the atomic bomb, computing, the Internet, and the smartphone.
Sociopolitical inventions comprise new laws, institutions, and procedures that change modes of social behavior and establish new forms of human interaction and organization. Examples include the British Parliament, the US Constitution, the Manchester (UK) General Union of Trades, the Boy Scouts, the Red Cross, the Olympic Games, the United Nations, the European Union, and the Universal Declaration of Human Rights, as well as movements such as socialism, Zionism, suffragism, feminism, and animal-rights veganism.
Humanistic inventions encompass culture in its entirety and are as transformative and important as any in the sciences, although people tend to take them for granted. In the domain of linguistics, for example, many alphabets have been inventions, as are all neologisms (Shakespeare invented about 1,700 words). Literary inventions include the epic, tragedy, comedy, the novel, the sonnet, the Renaissance, neoclassicism, Romanticism, Symbolism, Aestheticism, Socialist Realism, Surrealism, postmodernism, and (according to Freud) psychoanalysis. Among the inventions of artists and musicians are oil painting, printmaking, photography, cinema, musical tonality, atonality, jazz, rock, opera, and the symphony orchestra. Philosophers have invented logic (several times), dialectics, idealism, materialism, utopia, anarchism, semiotics, phenomenology, behaviorism, positivism, pragmatism, and deconstruction. Religious thinkers are responsible for such inventions as monotheism, pantheism, Methodism, Mormonism, iconoclasm, puritanism, deism, secularism, ecumenism, and the Baháʼí Faith. Some of these disciplines, genres, and trends may seem to have existed eternally or to have emerged spontaneously of their own accord, but most of them have had inventors.
Process
Practical means
Ideas for an invention may be developed on paper or on a computer, by writing or drawing, by trial and error, by making models, by experimenting, by testing and/or by making the invention in its whole form. Brainstorming also can spark new ideas for an invention. Collaborative creative processes are frequently used by engineers, designers, architects and scientists. Co-inventors are frequently named on patents.
In addition, many inventors keep records of their working process – notebooks, photos, etc., including Leonardo da Vinci, Galileo Galilei, Evangelista Torricelli, Thomas Jefferson and Albert Einstein.
In the process of developing an invention, the initial idea may change. The invention may become simpler, more practical, it may expand, or it may even morph into something totally different. Working on one invention can lead to others too.
History shows that turning the concept of an invention into a working device is not always swift or direct. Inventions may also become more useful after time passes and other changes occur. For example, the parachute became more useful once powered flight was a reality.
Conceptual means
Invention is often a creative process. An open and curious mind allows an inventor to see beyond what is known. Seeing a new possibility, connection or relationship can spark an invention. Inventive thinking frequently involves combining concepts or elements from different realms that would not normally be put together. Sometimes inventors disregard the boundaries between distinctly separate territories or fields. Several concepts may be considered when thinking about invention.
Play
Play may lead to invention. Childhood curiosity, experimentation, and imagination can develop one's play instinct. Inventors feel the need to play with things that interest them, and to explore, and this internal drive brings about novel creations.
Sometimes inventions and ideas may seem to arise spontaneously while daydreaming, especially when the mind is free from its usual concerns. For example, both J. K. Rowling (the creator of Harry Potter) and Frank Hornby (the inventor of Meccano) first had their ideas while on train journeys.
In contrast, the successful aerospace engineer Max Munk advocated "aimful thinking".
Re-envisioning
To invent is to see anew. Inventors often envision a new idea, seeing it in their mind's eye. New ideas can arise when the conscious mind turns away from the subject or problem when the inventor's focus is on something else, or while relaxing or sleeping. A novel idea may come in a flash—a Eureka! moment. For example, after years of working to figure out the general theory of relativity, the solution came to Einstein suddenly in a dream "like a giant die making an indelible impress, a huge map of the universe outlined itself in one clear vision". Inventions can also be accidental, such as in the case of polytetrafluoroethylene (Teflon).
Insight
Insight can also be a vital element of invention. Such inventive insights may begin with questions, doubt or a hunch. It may begin by recognizing that something unusual or accidental may be useful or that it could open a new avenue for exploration. For example, the odd metallic color of plastic made by accidentally adding a thousand times too much catalyst led scientists to explore its metal-like properties, inventing electrically conductive plastic and light emitting plastic—an invention that won the Nobel Prize in 2000 and has led to innovative lighting, display screens, wallpaper and much more (see conductive polymer, and organic light-emitting diode or OLED).
Exploration
Invention is often an exploratory process with an uncertain or unknown outcome. There are failures as well as successes. Inspiration can start the process, but no matter how complete the initial idea, inventions typically must be developed.
Improvement
Inventors may, for example, try to improve something by making it more effective, healthier, faster, more efficient, easier to use, serve more purposes, longer lasting, cheaper, more ecologically friendly, or aesthetically different, lighter weight, more ergonomic, structurally different, with new light or color properties, etc.
Implementation
In economic theory, inventions are one of the chief examples of "positive externalities", a beneficial side effect that falls on those outside a transaction or activity. One of the central concepts of economics is that externalities should be internalized—unless some of the benefits of this positive externality can be captured by the parties, the parties are under-rewarded for their inventions, and systematic under-rewarding leads to under-investment in activities that lead to inventions. The patent system captures those positive externalities for the inventor or other patent owner so that the economy as a whole invests an optimum amount of resources in the invention process.
Comparison with innovation
In contrast to invention, innovation is the implementation of a creative idea that specifically leads to greater value or usefulness. That is, while an invention may be useless or have no value yet still be an invention, an innovation must have some sort of value, typically economic.
As defined by patent law
The term invention is also an important legal concept and central to patent law systems worldwide. As is often the case for legal concepts, its legal meaning is slightly different from common usage of the word. Additionally, the legal concept of invention is quite different in American and European patent law.
In Europe, the first test a patent application must pass is, "Is this an invention?" If it is, subsequent questions are whether it is new and sufficiently inventive. The implication—counter-intuitively—is that a legal invention is not inherently novel. Whether a patent application relates to an invention is governed by Article 52 of the European Patent Convention, that excludes, e.g., discoveries as such and software as such. The EPO Boards of Appeal decided that the technical character of an application is decisive for it to represent an invention, following an age-old Italian and German tradition. British courts do not agree with this interpretation. Following a 1959 Australian decision ("NRDC"), they believe that it is not possible to grasp the invention concept in a single rule. A British court once stated that the technical character test implies a "restatement of the problem in more imprecise terminology."
In the United States, all patent applications are considered inventions. The statute explicitly says that the American invention concept includes discoveries (35 USC § 100(a)), contrary to the European invention concept. The European invention concept corresponds to the American "patentable subject matter" concept: the first test a patent application is submitted to. While the statute (35 USC § 101) virtually poses no limits to patenting whatsoever, courts have decided in binding precedents that abstract ideas, natural phenomena and laws of nature are not patentable. Various attempts have been made to substantiate the "abstract idea" test, which suffers from abstractness itself, but none have succeeded. The last attempt so far was the "machine or transformation" test, but the U.S. Supreme Court decided in 2010 that it is merely an indication at best.
In India, an invention means a new product or process that involves an inventive step and is capable of being made or used in an industry. A "new invention" means any invention that has not been anticipated in any prior art, or used in the country or anywhere in the world.
In the arts
Invention has a long and important history in the arts. Inventive thinking has always played a vital role in the creative process. While some inventions in the arts are patentable, others are not because they cannot fulfill the strict requirements governments have established for granting them. (see patent).
Some inventions in art include the:
Collage and construction invented by Picasso
Readymade art invented by Marcel Duchamp
Mobile invented by Alexander Calder
Combine invented by Robert Rauschenberg
Shaped painting invented by Frank Stella
Motion picture, the invention of which is attributed to Eadweard Muybridge
Video art invented by Nam June Paik
Likewise, Jackson Pollock invented an entirely new form of painting and a new kind of abstraction by dripping, pouring, splashing and splattering paint onto un-stretched canvas lying on the floor.
Inventive tools of the artist's trade also produced advances in creativity. Impressionist painting became possible because of newly invented collapsible, resealable metal paint tubes that facilitated spontaneous painting outdoors. Inventions originally created in the form of artwork can also develop other uses, e.g. Alexander Calder's mobile, which is now commonly used over babies' cribs. Funds generated from patents on inventions in art, design and architecture can support the realization of the invention or other creative work. Frédéric Auguste Bartholdi's 1879 design patent on the Statue of Liberty helped fund the famous statue because it covered small replicas, including those sold as souvenirs.
The timeline for invention in the arts lists the most notable artistic inventors.
Gender gap in inventions
Historically, women in many regions (with the exceptions of Russia and France) have gone unrecognised for their inventive contributions, despite being the sole inventor or a co-inventor of inventions, including highly notable ones. Notable examples include Margaret Knight, who faced significant challenges in receiving credit for her inventions; Elizabeth Magie, who was not credited for her invention of the game of Monopoly; and, among others, Chien-Shiung Wu, whose male colleagues alone were awarded the Nobel Prize for their joint contributions to physics. Societal prejudice and institutional, educational and often legal patent barriers have all played a role in the gender invention gap. For example, patent applications made to the US Patent Office have been less likely to succeed when the applicant has a "feminine" name, and women could historically lose their independent legal patent rights to their husbands once married. See also the gender gap in patents.
See also
Bayh–Dole Act
Bold hypothesis
Chindōgu
Creativity techniques
Directive on the legal protection of biotechnological inventions
Discovery (observation)
Edisonian approach
Heroic theory of invention and scientific development
Independent inventor
INPEX (invention show)
International Innovation Index
Inventing the Future: Postcapitalism and a World Without Work
Invention promotion firm
Inventors' Day
Kranzberg's laws of technology
Lemelson-MIT Prize
Lists of inventions or discoveries
List of inventions named after people
List of inventors
List of prolific inventors
Multiple discovery
National Inventors Hall of Fame
Necessity (Invention's mother)
Patent model
Proof of concept
Proposed directive on the patentability of computer-implemented inventions – it was rejected
Scientific priority
Technological revolution
The Illustrated Science and Invention Encyclopedia
Timeline of historic inventions
Science and invention in Birmingham – from the first cotton spinning mill to plastics and steam power.
Further reading
Asimov, Isaac. Asimov's Chronology of Science and Discovery, Harper & Row, 1989.
Fuller, Edmund, Tinkers and Genius: The Story of the Yankee Inventors. New York: Hastings House, 1955.
External links
List of PCT (Patent Cooperation Treaty) Notable Inventions at WIPO
Creativity
Human activities
Inventors
Selfish genetic elements (historically also referred to as selfish genes, ultra-selfish genes, selfish DNA, parasitic DNA and genomic outlaws) are genetic segments that can enhance their own transmission at the expense of other genes in the genome, even if this has no positive or a net negative effect on organismal fitness. Genomes have traditionally been viewed as cohesive units, with genes acting together to improve the fitness of the organism.
Early observations of selfish genetic elements were made almost a century ago, but the topic did not get widespread attention until several decades later. Inspired by the gene-centred views of evolution popularized by George Williams and Richard Dawkins, two papers were published back-to-back in Nature in 1980 – by Leslie Orgel and Francis Crick and by Ford Doolittle and Carmen Sapienza – introducing the concept of selfish genetic elements (at the time called "selfish DNA") to the wider scientific community. Both papers emphasized that genes can spread in a population regardless of their effect on organismal fitness as long as they have a transmission advantage.
Selfish genetic elements have now been described in most groups of organisms, and they demonstrate a remarkable diversity in the ways by which they promote their own transmission. Though long dismissed as genetic curiosities, with little relevance for evolution, they are now recognized to affect a wide swath of biological processes, ranging from genome size and architecture to speciation.
History
Early observations
Observations of what is now referred to as selfish genetic elements go back to the early days in the history of genetics. As early as 1928, the Russian geneticist Sergey Gershenson reported the discovery of a driving X chromosome in Drosophila obscura. Crucially, he noted that the resulting female-biased sex ratio could drive a population extinct (see Species extinction). The earliest clear statement of how chromosomes may spread in a population not because of their positive fitness effects on the individual organism, but because of their own "parasitic" nature, came from the Swedish botanist and cytogeneticist Gunnar Östergren in 1945. Discussing B chromosomes in plants, he wrote:
In many cases these chromosomes have no useful function at all to the species carrying them, but that they often lead an exclusively parasitic existence ... [B chromosomes] need not be useful for the plants. They need only be useful to themselves.
Around the same time, several other examples of selfish genetic elements were reported. For example, the American maize geneticist Marcus Rhoades described how chromosomal knobs led to female meiotic drive in maize. It was also first suggested at this time that an intragenomic conflict between uniparentally inherited mitochondrial genes and biparentally inherited nuclear genes could lead to cytoplasmic male sterility in plants. Then, in the early 1950s, Barbara McClintock published a series of papers describing the existence of transposable elements, which are now recognized to be among the most successful selfish genetic elements. The discovery of transposable elements led to her being awarded the Nobel Prize in Physiology or Medicine in 1983.
Conceptual developments
The empirical study of selfish genetic elements benefited greatly from the emergence of the so-called gene-centred view of evolution in the 1960s and 1970s. In contrast with Darwin's original formulation of the theory of evolution by natural selection, which focused on individual organisms, the gene's-eye view takes the gene to be the central unit of selection in evolution. It conceives evolution by natural selection as a process involving two separate entities: replicators (entities that produce faithful copies of themselves, usually genes) and vehicles (or interactors; entities that interact with the ecological environment, usually organisms).
Since organisms are temporary occurrences, present in one generation and gone in the next, genes (replicators) are the only entity faithfully transmitted from parent to offspring. Viewing evolution as a struggle between competing replicators made it easier to recognize that not all genes in an organism would share the same evolutionary fate.
The gene's-eye view was a synthesis of the population genetic models of the modern synthesis, in particular the work of R. A. Fisher, and the social evolution models of W. D. Hamilton. The view was popularized by George Williams's Adaptation and Natural Selection and Richard Dawkins's bestseller The Selfish Gene. Dawkins summarized a key benefit of the gene's-eye view as follows:
"If we allow ourselves the license of talking about genes as if they had conscious aims, always reassuring ourselves that we could translate our sloppy language back into respectable terms if we wanted to, we can ask the question, what is a single selfish gene trying to do?" — Richard Dawkins, The Selfish Gene
In 1980, two high-profile papers published back-to-back in Nature by Leslie Orgel and Francis Crick, and by Ford Doolittle and Carmen Sapienza, brought the study of selfish genetic elements to the centre of biological debate. The papers took their starting point in the contemporary debate of the so-called C-value paradox, the lack of correlation between genome size and perceived complexity of a species. Both papers attempted to counter the prevailing view of the time that the presence of differential amounts of non-coding DNA and transposable elements is best explained from the perspective of individual fitness, described as the "phenotypic paradigm" by Doolittle and Sapienza. Instead, the authors argued that much of the genetic material in eukaryotic genomes persists, not because of its phenotypic effects, but can be understood from a gene's-eye view, without invoking individual-level explanations. The two papers led to a series of exchanges in Nature.
Current views
If the selfish DNA papers marked the beginning of the serious study of selfish genetic elements, the subsequent decades have seen an explosion in theoretical advances and empirical discoveries. Leda Cosmides and John Tooby wrote a landmark review about the conflict between maternally inherited cytoplasmic genes and biparentally inherited nuclear genes. The paper also provided a comprehensive introduction to the logic of genomic conflicts, foreshadowing many themes that would later be subject of much research. Then in 1988 John H. Werren and colleagues wrote the first major empirical review of the topic. This paper achieved three things. First, it coined the term selfish genetic element, putting an end to a sometimes confusingly diverse terminology (selfish genes, ultra-selfish genes, selfish DNA, parasitic DNA, genomic outlaws). Second, it formally defined the concept of selfish genetic elements. Finally, it was the first paper to bring together all different kinds of selfish genetic elements known at the time (genomic imprinting, for example, was not covered).
In the late 1980s, most molecular biologists considered selfish genetic elements to be the exception, and that genomes were best thought of as highly integrated networks with a coherent effect on organismal fitness. In 2006, when Austin Burt and Robert Trivers published the first book-length treatment of the topic, the tide was changing. While their role in evolution long remained controversial, in a review published a century after their first discovery, William R. Rice concluded that "nothing in genetics makes sense except in the light of genomic conflicts".
Logic
Though selfish genetic elements show a remarkable diversity in the way they promote their own transmission, some generalizations about their biology can be made. In a classic 2001 review, Gregory D. D. Hurst and John H. Werren proposed two "rules" of selfish genetic elements.
Rule 1: Spread requires sex and outbreeding
Sexual reproduction involves the mixing of genes from two individuals. According to Mendel's Law of Segregation, alleles in a sexually reproducing organism have a 50% chance of being passed from parent to offspring. Meiosis is therefore sometimes referred to as "fair".
Highly self-fertilizing or asexual genomes are expected to experience less conflict between selfish genetic elements and the rest of the host genome than outcrossing sexual genomes. There are several reasons for this. First, sex and outcrossing put selfish genetic elements into new genetic lineages. In contrast, in a highly selfing or asexual lineage, any selfish genetic element is essentially stuck in that lineage, which should increase variation in fitness among individuals. The increased variation should result in stronger purifying selection in selfers/asexuals, as a lineage without the selfish genetic element should out-compete a lineage with it. Second, the increased homozygosity in selfers removes the opportunity for competition among homologous alleles. Third, theoretical work has shown that the greater linkage disequilibrium in selfing compared to outcrossing genomes may in some, albeit rather limited, cases cause selection for reduced transposition rates. Overall, this reasoning leads to the prediction that asexuals/selfers should experience a lower load of selfish genetic elements. One caveat to this is that the evolution of selfing is associated with a reduction in the effective population size. A reduction in the effective population size should reduce the efficacy of selection, and therefore lead to the opposite prediction: a higher accumulation of selfish genetic elements in selfers relative to outcrossers.
Empirical evidence for the importance of sex and outcrossing comes from a variety of selfish genetic elements, including transposable elements, self-promoting plasmids, and B chromosomes.
Rule 2: Presence is often revealed in hybrids
The presence of selfish genetic elements can be difficult to detect in natural populations. Instead, their phenotypic consequences often become apparent in hybrids. The first reason for this is that some selfish genetic elements rapidly sweep to fixation, and the phenotypic effects will therefore not be segregating in the population. Hybridization events, however, will produce offspring with and without the selfish genetic elements and so reveal their presence. The second reason is that host genomes have evolved mechanisms to suppress the activity of the selfish genetic elements, for example the small RNA administered silencing of transposable elements. The co-evolution between selfish genetic elements and their suppressors can be rapid, and follow a Red Queen dynamics, which may mask the presence of selfish genetic elements in a population. Hybrid offspring, on the other hand, may inherit a given selfish genetic element, but not the corresponding suppressor and so reveal the phenotypic effect of the selfish genetic element.
Examples
Segregation distorters
Some selfish genetic elements manipulate the genetic transmission process to their own advantage, and so end up being overrepresented in the gametes. Such distortion can occur in various ways, and the umbrella term that encompasses all of them is segregation distortion. Some elements can preferentially be transmitted in egg cells as opposed to polar bodies during meiosis, where only the former will be fertilized and transmitted to the next generation. Any gene that can manipulate the odds of ending up in the egg rather than the polar body will have a transmission advantage, and will increase in frequency in a population.
Segregation distortion can happen in several ways. When this process occurs during meiosis it is referred to as meiotic drive. Many forms of segregation distortion occur in male gamete formation, where there is differential mortality of spermatids during the process of sperm maturation or spermiogenesis. The segregation distorter (SD) in Drosophila melanogaster is the best studied example, and it involves a nuclear envelope protein, Ran-GAP, and the repeat array called Responder (Rsp), where the SD allele of Ran-GAP favors its own transmission only in the presence of an Rsp(sensitive) allele on the homologous chromosome. SD acts to kill Rsp(sensitive) sperm in a post-meiotic process (hence it is not, strictly speaking, meiotic drive). Systems like this can have interesting rock-paper-scissors dynamics, oscillating between the SD Rsp(insensitive), SD+ Rsp(insensitive) and SD+ Rsp(sensitive) haplotypes. The SD Rsp(sensitive) haplotype is not seen because it essentially commits suicide.
When segregation distortion acts on sex chromosomes, it can skew the sex ratio. The SR system in Drosophila pseudoobscura, for example, is on the X chromosome, and X(SR)/Y males produce only daughters, whereas females undergo normal meiosis with Mendelian proportions of gametes. Segregation distortion systems would drive the favored allele to fixation, except that in most of the cases where these systems have been identified, the driven allele is opposed by some other selective force. One example is the lethality of the t-haplotype in mice, another is the effect on male fertility of the Sex Ratio system in D. pseudoobscura.
Homing endonucleases
A phenomenon closely related to segregation distortion is homing endonucleases. These are enzymes that cut DNA in a sequence-specific way, and those cuts, generally double-strand breaks, are then "healed" by the regular DNA repair machinery. Homing endonucleases insert themselves into the genome at the site homologous to the first insertion site, resulting in a conversion of a heterozygote into a homozygote bearing a copy of the homing endonuclease on both homologous chromosomes. This gives homing endonucleases allele frequency dynamics rather similar to those of a segregation distortion system, and generally, unless opposed by strong countervailing selection, they are expected to go to fixation in a population. CRISPR-Cas9 technology allows the artificial construction of homing endonuclease systems. These so-called "gene drive" systems combine great promise for biocontrol with potential risk.
Transposable elements
Transposable elements (TEs) include a wide variety of DNA sequences that all have the ability to move to new locations in the genome of their host. Transposons do this by a direct cut-and-paste mechanism, whereas retrotransposons need to produce an RNA intermediate to move. TEs were first discovered in maize by Barbara McClintock in the 1940s, and their ability to occur in both active and quiescent states in the genome was also first elucidated by McClintock. TEs have been referred to as selfish genetic elements because they have some control over their own propagation in the genome. Most random insertions into the genome appear to be relatively innocuous, but they can disrupt critical gene functions with devastating results. For example, TEs have been linked to a variety of human diseases, ranging from cancer to haemophilia. TEs that avoid disrupting vital functions remain in the genome longer, and hence are more likely to be found in innocuous locations.
Both plant and animal hosts have evolved means for reducing the fitness impact of TEs, both by directly silencing them and by reducing their ability to transpose in the genome. It would appear that hosts in general are fairly tolerant of TEs in their genomes, since a sizable portion (30-80%) of the genome of many animals and plants is TEs. When the host is able to stop their movement, TEs can simply be frozen in place, and it can then take millions of years for them to mutate away. The fitness of a TE is a combination of its ability to expand in numbers within a genome, to evade host defenses, and to avoid eroding host fitness too drastically. The effect of TEs in the genome is not entirely selfish. Because their insertion into the genome can disrupt gene function, sometimes those disruptions can have positive fitness value for the host. Many adaptive changes in Drosophila and dogs, for example, are associated with TE insertions.
B chromosomes
B chromosomes refer to chromosomes that are not required for the viability or fertility of the organism, but exist in addition to the normal (A) set. They persist in the population and accumulate because they have the ability to propagate their own transmission independently of the A chromosomes. They often vary in copy number between individuals of the same species.
B chromosomes were first detected over a century ago. Though typically smaller than normal chromosomes, their gene-poor, heterochromatin-rich structure made them visible to early cytogenetic techniques. B chromosomes have been thoroughly studied and are estimated to occur in 15% of all eukaryotic species. In general, they appear to be particularly common among eudicot plants, rare in mammals, and absent in birds. In 1945, they were the subject of Gunnar Östergren's classic paper "Parasitic nature of extra fragment chromosomes", in which he argued that the variation in abundance of B chromosomes between and within species is due to the parasitic properties of the Bs. This was the first time genetic material was referred to as "parasitic" or "selfish". B chromosome number correlates positively with genome size and has also been linked to a decrease in egg production in the grasshopper Eyprepocnemis plorans.
Selfish mitochondria
Genomic conflicts often arise because not all genes are inherited in the same way. Probably the best example of this is the conflict between uniparentally (usually but not always, maternally) inherited mitochondrial and biparentally inherited nuclear genes. Indeed, one of the earliest clear statements about the possibility of genomic conflict was made by the English botanist Dan Lewis in reference to the conflict between maternally inherited mitochondrial and biparentally inherited nuclear genes over sex allocation in hermaphroditic plants.
A single cell typically contains multiple mitochondria, creating a situation for competition over transmission. Uniparental inheritance has been suggested to be a way to reduce the opportunity for selfish mitochondria to spread, as it ensures all mitochondria share the same genome, thus removing the opportunity for competition. This view remains widely held, but has been challenged. Why inheritance ended up being maternal, rather than paternal, is also much debated, but one key hypothesis is that the mutation rate is lower in female compared to male gametes.
The conflict between mitochondrial and nuclear genes is especially easy to study in flowering plants. Flowering plants are typically hermaphrodites, and the conflict thus occurs within a single individual. Mitochondrial genes are typically only transmitted through female gametes, and therefore from their point of view the production of pollen leads to an evolutionary dead end. Any mitochondrial mutation that can affect the amount of resources the plant invests in the female reproductive functions at the expense of the male reproductive functions improves its own chance of transmission. Cytoplasmic male sterility is the loss of male fertility, typically through loss of functional pollen production, resulting from a mitochondrial mutation. In many species where cytoplasmic male sterility occurs, the nuclear genome has evolved so-called restorer genes, which repress the effects of the cytoplasmic male sterility genes and restore the male function, making the plant a hermaphrodite again.
The co-evolutionary arms race between selfish mitochondrial genes and nuclear compensatory alleles can often be detected by crossing individuals from different species that have different combinations of male sterility genes and nuclear restorers, resulting in hybrids with a mismatch.
Another consequence of the maternal inheritance of the mitochondrial genome is the so-called Mother's Curse. Because genes in the mitochondrial genome are strictly maternally inherited, mutations that are beneficial in females can spread in a population even if they are deleterious in males. Explicit screens in fruit flies have successfully identified such female-neutral but male-harming mtDNA mutations. Furthermore, a 2017 paper showed how a mitochondrial mutation causing Leber's hereditary optic neuropathy, a male-biased eye disease, was brought over by one of the Filles du roi that arrived in Quebec, Canada, in the 17th century and subsequently spread among many descendants.
Genomic imprinting
Another sort of conflict that genomes face is that between the mother and father competing for control of gene expression in the offspring, including the complete silencing of one parental allele. Due to differences in methylation status of gametes, there is an inherent asymmetry to the maternal and paternal genomes that can be used to drive a differential parent-of-origin expression. This results in a violation of Mendel's rules at the level of expression, not transmission, but if the gene expression affects fitness, it can amount to a similar result.
Imprinting seems like a maladaptive phenomenon, since it essentially means giving up diploidy, and heterozygotes for one defective allele are in trouble if the active allele is the one that is silenced. Several human diseases, such as Prader-Willi and Angelman syndromes, are associated with defects in imprinted genes. The asymmetry of maternal and paternal expression suggests that some kind of conflict between these two genomes might be driving the evolution of imprinting. In particular, several genes in placental mammals display expression of paternal genes that maximize offspring growth, and maternal genes that tend to keep that growth in check. Many other conflict-based theories about the evolution of genomic imprinting have been put forward.
At the same time, genomic or sexual conflict is not the only possible mechanism whereby imprinting can evolve. Several molecular mechanisms for genomic imprinting have been described, and all share the feature that maternally and paternally derived alleles are given distinct epigenetic marks, in particular different degrees of methylation of cytosines. An important point to note regarding genomic imprinting is that it is quite heterogeneous, with different mechanisms and different consequences of having single parent-of-origin expression. For example, examining the imprinting status of closely related species allows one to see that a gene that is moved by an inversion into close proximity of imprinted genes may itself acquire an imprinted status, even if there is no particular fitness consequence of the imprinting.
Greenbeards
A greenbeard gene is a gene that has the ability to recognize copies of itself in other individuals and then make its carrier act preferentially toward such individuals. The name comes from a thought experiment first presented by William Hamilton and later developed and given its current name by Richard Dawkins in The Selfish Gene. The point of the thought experiment was to highlight that from a gene's-eye view, it is not genome-wide relatedness that matters (which is usually how kin selection operates, i.e. cooperative behavior is directed towards relatives), but relatedness at the particular locus that underlies the social behavior.
Following Dawkins, a greenbeard is usually defined as a gene, or set of closely linked genes, that has three effects:
It gives carriers of the gene a phenotypic label, such as a greenbeard.
The carrier is able to recognize other individuals with the same label.
The carrier then behaves altruistically towards individuals with the same label.
Greenbeards were long thought to be a fun theoretical idea with limited possibility of actually existing in nature. However, since the idea's conception, several examples have been identified, including in yeast, slime moulds, and fire ants.
There has been some debate over whether greenbeard genes should be considered selfish genetic elements. Conflict between a greenbeard locus and the rest of the genome can arise because, during a given social interaction between two individuals, the relatedness at the greenbeard locus can be higher than at other loci in the genome. As a consequence, it may be in the interest of the greenbeard locus to perform a costly social act, but not in the interest of the rest of the genome.
In conjunction with selfish genetic elements, greenbeard selection has also been used as a theoretical explanation for suicide.
Consequences to the host
Species extinction
Perhaps one of the clearest ways to see that the process of natural selection does not always have organismal fitness as the sole driver is when selfish genetic elements have their way without restriction. In such cases, selfish elements can, in principle, result in species extinction. This possibility was pointed out as early as 1928 by Sergey Gershenson, and in 1967 Bill Hamilton developed a formal population genetic model for a case of segregation distortion of sex chromosomes driving a population to extinction. In particular, if a selfish element should be able to direct the production of sperm, such that males bearing the element on the Y chromosome would produce an excess of Y-bearing sperm, then in the absence of any countervailing force, this would ultimately result in the Y chromosome going to fixation in the population, producing an extremely male-biased sex ratio. In ecologically challenged species, such biased sex ratios imply that the conversion of resources to offspring becomes very inefficient, to the point of risking extinction.
Speciation
Selfish genetic elements have been shown to play a role in speciation. This could happen because the presence of selfish genetic elements can result in changes in morphology and/or life history, but the ways in which the co-evolution between selfish genetic elements and their suppressors can cause reproductive isolation through so-called Bateson–Dobzhansky–Muller incompatibilities have received particular attention.
An early striking example of hybrid dysgenesis induced by a selfish genetic element was the P element in Drosophila. If males carrying the P element were crossed to females lacking it, the resulting offspring suffered from reduced fitness. However, offspring of the reciprocal cross were normal, as would be expected since piRNAs are maternally inherited. The P element is typically present only in wild strains, and not in lab strains of D. melanogaster, as the latter were collected before the P elements were introduced into the species, probably from a closely related Drosophila species. The P element story is also a good example of how the rapid co-evolution between selfish genetic elements and their silencers can lead to incompatibilities on short evolutionary time scales, as little as within a few decades.
Several other examples of selfish genetic elements causing reproductive isolation have since been demonstrated. Crossing different species of Arabidopsis results in both higher activity of transposable elements and disruption in imprinting, both of which have been linked to fitness reduction in the resulting hybrids. Hybrid dysgenesis has also been shown to be caused by centromeric drive in barley, and by mito-nuclear conflict in several species of angiosperms.
Genome-size variation
Attempts to understand the extraordinary variation in genome size (C-value), which spans some 7,000-fold among animals and 2,400-fold among land plants, have a long history in biology. However, this variation is poorly correlated with gene number or any measure of organismal complexity, which led C. A. Thomas to coin the term C-value paradox in 1971. The discovery of non-coding DNA resolved some of the paradox, and most current researchers now use the term "C-value enigma".
Two kinds of selfish genetic elements in particular have been shown to contribute to genome-size variation: B chromosomes and transposable elements. The contribution of transposable elements to the genome is especially well studied in plants. A striking example is how the genome of the model organism Arabidopsis thaliana contains the same number of genes as that of the Norway spruce (Picea abies), around 30,000, but the accumulation of transposons means that the genome of the latter is some 100 times larger. Transposable element abundance has also been shown to cause the unusually large genomes found in salamanders.
The presence of an abundance of transposable elements in many eukaryotic genomes was a central theme of the original selfish DNA papers mentioned above (see Conceptual developments). Most people quickly accepted the central message of those papers: that the existence of transposable elements can be explained by selfish selection at the gene level, and there is no need to invoke individual-level selection. However, the idea that organisms keep transposable elements around as a genetic reservoir to "speed up evolution" or for other regulatory functions persists in some quarters. The debate was reignited in 2012, when the ENCODE Project published a paper claiming that 80% of the human genome could be assigned a function, a claim interpreted by many as the death of the idea of junk DNA.
Applications in agriculture and biotechnology
Cytoplasmic male sterility in plant breeding
A common problem for plant breeders is unwanted self-fertilization. This is particularly a problem when breeders try to cross two different strains to create a new hybrid strain. One way to avoid this is manual emasculation, i.e. physically removing anthers to render the individual male sterile. Cytoplasmic male sterility offers an alternative to this laborious exercise. Breeders cross a strain that carries a cytoplasmic male sterility mutation with a strain that does not, the latter acting as the pollen donor. If the hybrid offspring are to be harvested for their seed (as in maize), and therefore need to be male fertile, the pollen-donor strain needs to be homozygous for the restorer allele. In contrast, in species that are harvested for their vegetative parts, like onions, this is not an issue. This technique has been used in a wide variety of crops, including rice, maize, sunflower, wheat, and cotton.
PiggyBac vectors
While many transposable elements seem to do no good for the host, some transposable elements have been "tamed" by molecular biologists so that the elements can be made to insert and excise at the will of the scientist. Such elements are especially useful for doing genetic manipulations, like inserting foreign DNA into the genomes of a variety of organisms.
One excellent example of this is PiggyBac, a transposable element that can efficiently move between cloning vectors and chromosomes using a "cut and paste" mechanism. The investigator constructs a PiggyBac element with the desired payload spliced in, and a second element (the PiggyBac transposase), located on another plasmid vector, can be co-transfected into the target cell. The PiggyBac transposase cuts at the inverted terminal repeat sequences located on both ends of the PiggyBac vector and efficiently moves the contents from the original sites, integrating them into chromosomal positions where the sequence TTAA is found. The three things that make PiggyBac so useful are the remarkably high efficiency of this cut-and-paste operation, its ability to take payloads up to 200 kb in size, and its ability to excise seamlessly from a genomic site, leaving no sequences or mutations behind.
CRISPR gene drive and homing endonuclease systems
CRISPR allows the construction of artificial homing endonucleases, in which the construct produces guide RNAs that cut the target gene, and homologous flanking sequences then allow insertion of the same construct, harboring the Cas9 gene and the guide RNAs, at the cut site. Such gene drives ought to have the ability to rapidly spread in a population (see Gene-drive systems), and one practical application of such a system that has been proposed is to apply it to a pest population, greatly reducing its numbers or even driving it extinct. This has not yet been attempted in the field, but gene drive constructs have been tested in the lab, and the ability to insert into the wild-type homologous allele in heterozygotes for the gene drive has been demonstrated. The double-strand break introduced by Cas9 can be repaired by homology-directed repair, which makes a perfect copy of the drive, or by non-homologous end joining, which produces "resistant" alleles unable to further propagate the drive. When Cas9 is expressed outside of meiosis, non-homologous end joining appears to predominate, making this the biggest hurdle to the practical application of gene drives.
Mathematical theory
Much of the confusion regarding ideas about selfish genetic elements centers on the use of language and the way the elements and their evolutionary dynamics are described. Mathematical models allow the assumptions and the rules to be given a priori for establishing mathematical statements about the expected dynamics of the elements in populations. The consequences of having such elements in genomes can then be explored objectively. The mathematics can define the different classes of elements very crisply by their precise behavior within a population, sidestepping any distracting verbiage about the inner hopes and desires of greedy selfish genes. There are many good examples of this approach, and this article focuses on segregation distorters, gene drive systems and transposable elements.
Segregation distorters
The mouse t-allele is a classic example of a segregation distorter system that has been modeled in great detail. Heterozygotes for a t-haplotype produce >90% of their gametes bearing the t (see Segregation distorters), and homozygotes for a t-haplotype die as embryos. This can result in a stable polymorphism, with an equilibrium frequency that depends on the drive strength and direct fitness impacts of t-haplotypes. This is a common theme in the mathematics of segregation distorters: virtually every example we know entails a countervailing selective effect, without which the allele with biased transmission would go to fixation and the segregation distortion would no longer be manifested. Whenever sex chromosomes undergo segregation distortion, the population sex ratio is altered, making these systems particularly interesting. Two classic examples of segregation distortion involving sex chromosomes include the "Sex Ratio" X chromosomes of Drosophila pseudoobscura and Y chromosome drive suppressors of Drosophila mediopunctata. A crucial point about the theory of segregation distorters is that just because there are fitness effects acting against the distorter, this does not guarantee that there will be a stable polymorphism. In fact, some sex chromosome drivers can produce frequency dynamics with wild oscillations and cycles.
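The equilibrium logic can be made concrete with a minimal deterministic sketch in Python, assuming drive acts in males only, driver homozygotes die, and mating is random; the drive strength k = 0.9 is an illustrative value, not a measured parameter of the t-complex.

```python
# Minimal deterministic model of a t-haplotype-like distorter:
# heterozygous males transmit the driver with probability k (females
# segregate 50:50), and driver homozygotes die as embryos.

def next_gamete_freq(p, k):
    """Advance the driver's gamete frequency p by one generation."""
    w_bar = 1 - p**2                 # mean fitness: t/t homozygotes die
    het = 2 * p * (1 - p) / w_bar    # frequency of +/t among adults
    return het * (k + 0.5) / 2       # hets average male drive k, female 1/2

p, k = 0.01, 0.9
for _ in range(500):
    p = next_gamete_freq(p, k)
print(f"equilibrium driver frequency: {p:.3f} (analytic value k - 1/2 = {k - 0.5})")
```

Under these assumptions the recursion settles at p = k - 1/2, illustrating the general point made above: the countervailing selection (here, homozygote lethality) holds the distorter at a stable intermediate frequency rather than letting it fix.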
Gene-drive systems
The idea of spreading a gene into a population as a means of population control is actually quite old, and models for the dynamics of introduced compound chromosomes date back to the 1970s. Subsequently, the population genetics theory for homing endonucleases and CRISPR-based gene drives has become much more advanced. An important component of modeling these processes in natural populations is to consider the genetic response in the target population. For one thing, any natural population will harbor standing genetic variation, and that variation might well include polymorphism in the sequences homologous to the guide RNAs, or the homology arms that are meant to direct the repair. In addition, different hosts and different constructs may have quite different rates of non-homologous end joining, the form of repair that results in broken or resistant alleles that no longer spread. Full accommodation of the host factors presents a considerable challenge for getting a gene drive construct to go to fixation, and Unckless and colleagues showed that in fact the current constructs are quite far from being able to attain even moderate frequencies in natural populations. This is another excellent example showing that just because an element appears to have a strong selfish transmission advantage, whether it can successfully spread may depend on subtle configurations of other parameters in the population.
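A similarly minimal sketch of why non-homologous end joining caps the spread of a drive; the homing rate, NHEJ rate, and starting frequencies below are illustrative assumptions, and fitness effects are deliberately ignored.

```python
# Allele-frequency bookkeeping for a homing drive: in drive/wild-type
# heterozygotes, each wild-type allele is converted to a drive allele
# with probability c (homing) or to a resistant allele with probability
# r_nhej (non-homologous end joining). No selection is modeled.

c, r_nhej = 0.8, 0.1
d, w, r = 0.01, 0.99, 0.0            # drive, wild-type, resistant
for gen in range(100):
    exposed = w * d                  # wild-type alleles opposite a drive allele
    d += exposed * c
    r += exposed * r_nhej
    w -= exposed * (c + r_nhej)
print(f"drive {d:.3f}, wild-type {w:.3f}, resistant {r:.3f}")
```

Because the resistant class can no longer be converted, the drive plateaus below fixation even without any fitness cost, which is the behavior identified above as the biggest practical hurdle.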
Transposable elements
To model the dynamics of transposable elements (TEs) within a genome, one has to realize that the elements behave like a population within each genome, and they can jump from one haploid genome to another by horizontal transfer. The mathematics has to describe the rates and dependencies of these transfer events. It was observed early on that the rate of jumping of many TEs varies with copy number, and so the first models simply used an empirical function for the rate of transposition. This had the advantage that it could be measured by experiments in the lab, but it left open the question of why the rate differs among elements and differs with copy number. Stan Sawyer and Daniel L. Hartl fitted models of this sort to a variety of bacterial TEs, and obtained quite good fits between copy number and transmission rate and the population-wide incidence of the TEs. TEs in higher organisms, like Drosophila, have very different dynamics because of sex, and Brian Charlesworth, Deborah Charlesworth, Charles Langley, John Brookfield and others modeled TE copy number evolution in Drosophila and other species. What is impressive about all these modeling efforts is how well they fitted empirical data, given that this was decades before the discovery that the host fly has a powerful defense mechanism in the form of piRNAs. The incorporation of host defense along with TE dynamics into evolutionary models of TE regulation is still in its infancy.
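A toy numerical model in the spirit of this copy-number theory; the self-regulation of the transposition rate, the excision rate, and the strength of selection below are illustrative assumptions rather than fitted values.

```python
# Toy transposable-element copy-number dynamics: the per-copy
# transposition rate u(n) declines with copy number n, copies are
# excised at rate v, and selection removes copies at a per-copy rate
# that grows with n. Equilibrium is reached where gains balance losses.
import math

u0, beta, v, s = 0.05, 0.02, 0.001, 0.0005
n = 1.0
for _ in range(3000):
    u = u0 * math.exp(-beta * n)     # self-regulated transposition
    n += n * (u - v - s * n)         # net per-copy change per generation
print(f"equilibrium mean copy number ≈ {n:.1f}")
```

The stable copy number sits where the declining transposition rate crosses the rising loss rate, the same balance argument that underlies the Charlesworth-style models cited above.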
See also
C-value enigma
Endogenous retrovirus
Gene-centered view of evolution
Genome size
Intragenomic conflict
Introns: introns as mobile genetic elements
Junk DNA
Mobile genetic elements
Mutation
Noncoding DNA
Retrotransposon
Transposable element
References
Further reading
DNA
Selection | Selfish genetic element | ["Biology"] | 7,608 | ["Evolutionary processes", "Selection"] |
44,363 | https://en.wikipedia.org/wiki/Wien%27s%20displacement%20law | In physics, Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness or intensity of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by German physicist Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases.
Formally, the wavelength version of Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength \lambda_\text{peak} given by:

\lambda_\text{peak} = \frac{b}{T}

where T is the absolute temperature and b is a constant of proportionality called Wien's displacement constant, equal to 2.897771955 \times 10^{-3}\ \text{m K}, or equivalently about 2898\ \mu\text{m K}.
This is an inverse relationship between wavelength and temperature. So the higher the temperature, the shorter or smaller the wavelength of the thermal radiation. The lower the temperature, the longer or larger the wavelength of the thermal radiation. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature.
There are other formulations of Wien's displacement law, which are parameterized relative to other quantities. For these alternate formulations, the form of the relationship is similar, but the proportionality constant differs.
Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation.
In "Wien's displacement law", the word displacement refers to how the intensity-wavelength graphs appear shifted (displaced) for different temperatures.
Examples
Wien's displacement law is relevant to some everyday experiences:
A piece of metal heated by a blow torch first becomes "red hot" as the very longest visible wavelengths appear red, then becomes more orange-red as the temperature is increased, and at very high temperatures would be described as "white hot" as shorter and shorter wavelengths come to predominate the black body emission spectrum. Before it had even reached the red hot temperature, the thermal emission was mainly at longer infrared wavelengths, which are not visible; nevertheless, that radiation could be felt as it warms one's nearby skin.
One easily observes changes in the color of an incandescent light bulb (which produces light through thermal radiation) as the temperature of its filament is varied by a light dimmer. As the light is dimmed and the filament temperature decreases, the distribution of color shifts toward longer wavelengths and the light appears redder, as well as dimmer.
A wood fire at 1500 K puts out peak radiation at about 2000 nanometers. 98% of its radiation is at wavelengths longer than 1000 nm, and only a tiny proportion at visible wavelengths (390–700 nanometers). Consequently, a campfire can keep one warm but is a poor source of visible light.
The effective temperature of the Sun is 5778 K. Using Wien's law, one finds a peak emission per nanometer (of wavelength) at a wavelength of about 500 nm, in the green portion of the spectrum near the peak sensitivity of the human eye. On the other hand, in terms of power per unit optical frequency, the Sun's peak emission is at about 340 THz, or a wavelength of 883 nm in the near infrared. In terms of power per percentage bandwidth, the peak is at about 635 nm, a red wavelength. About half of the Sun's radiation is at wavelengths shorter than 710 nm, about the limit of human vision. Of that, about 12% is at wavelengths shorter than 400 nm, ultraviolet wavelengths, which are invisible to an unaided human eye. A large amount of the Sun's radiation falls in the fairly small visible spectrum and passes through the atmosphere.
The preponderance of emission in the visible range, however, is not the case in most stars. The hot supergiant Rigel emits 60% of its light in the ultraviolet, while the cool supergiant Betelgeuse emits 85% of its light at infrared wavelengths. With both stars prominent in the constellation of Orion, one can easily appreciate the color difference between the blue-white Rigel (T = 12100 K) and the red Betelgeuse (T ≈ 3800 K). While few stars are as hot as Rigel, stars cooler than the Sun or even as cool as Betelgeuse are very commonplace.
Mammals with a skin temperature of about 300 K emit peak radiation at around 10 μm in the far infrared. This is therefore the range of infrared wavelengths that pit viper snakes and passive IR cameras must sense.
When comparing the apparent color of lighting sources (including fluorescent lights, LED lighting, computer monitors, and photoflash), it is customary to cite the color temperature. Although the spectra of such lights are not accurately described by the black-body radiation curve, a color temperature (the correlated color temperature) is quoted for which black-body radiation would most closely match the subjective color of that source. For instance, the blue-white fluorescent light sometimes used in an office may have a color temperature of 6500 K, whereas the reddish tint of a dimmed incandescent light may have a color temperature (and an actual filament temperature) of 2000 K. Note that the informal description of the former (bluish) color as "cool" and the latter (reddish) as "warm" is exactly opposite the actual temperature change involved in black-body radiation.
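As a quick arithmetic check, the peak-wavelength form of the law (\lambda_\text{peak} = b/T) reproduces the figures quoted in the examples above:

```python
# lambda_peak = b / T with b ≈ 2.898e-3 m·K reproduces the numbers
# quoted in the examples above.
b = 2.897771955e-3  # Wien's displacement constant, m·K
for label, T in [("wood fire", 1500), ("Sun (effective)", 5778), ("mammal skin", 300)]:
    print(f"{label}: T = {T} K -> lambda_peak ≈ {b / T * 1e9:.0f} nm")
```

This prints roughly 1930 nm for the fire, 500 nm for the Sun, and 9700 nm (about 10 μm) for skin, matching the values above.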
Discovery
The law is named for Wilhelm Wien, who derived it in 1893 based on a thermodynamic argument. Wien considered adiabatic expansion of a cavity containing waves of light in thermal equilibrium. Using Doppler's principle, he showed that, under slow expansion or contraction, the energy of light reflecting off the walls changes in exactly the same way as the frequency. A general principle of thermodynamics is that a thermal equilibrium state, when expanded very slowly, stays in thermal equilibrium.
Wien himself deduced this law theoretically in 1893, following Boltzmann's thermodynamic reasoning. It had previously been observed, at least semi-quantitatively, by an American astronomer, Langley. This upward shift of the peak frequency \nu_\text{max} with temperature T is familiar to everyone: when an iron is heated in a fire, the first visible radiation (at around 900 K) is deep red, the lowest frequency visible light. Further increase in T causes the color to change to orange, then yellow, and finally blue at very high temperatures (10,000 K or more), for which the peak in radiation intensity has moved beyond the visible into the ultraviolet.
The adiabatic principle allowed Wien to conclude that for each mode, the adiabatic invariant energy/frequency (E/\nu) is only a function of the other adiabatic invariant, the ratio frequency/temperature (\nu/T). From this, he derived the "strong version" of Wien's displacement law: the statement that the blackbody spectral radiance is proportional to \nu^3 F(\nu/T) for some function F of a single variable. A modern variant of Wien's derivation can be found in the textbook by Wannier and in a paper by E. Buckingham.
The consequence is that the shape of the black-body radiation function (which was not yet understood) would shift proportionally in frequency (or inversely proportionally in wavelength) with temperature. When Max Planck later formulated the correct black-body radiation function it did not explicitly include Wien's constant b. Rather, the Planck constant h was created and introduced into his new formula. From the Planck constant h and the Boltzmann constant k_B, Wien's constant b can be obtained.
Peak differs according to parameterization
Different parameterizations place the peak of the Planck blackbody spectrum at different points of the distribution, as the sections below detail. Only 25 percent of the energy in the black-body spectrum is associated with wavelengths shorter than the value given by the peak-wavelength version of Wien's law.
Notice that for a given temperature, different parameterizations imply different maximal wavelengths. In particular, the curve of intensity per unit frequency peaks at a different wavelength than the curve of intensity per unit wavelength.
For example, using T = 6000 K and parameterization by wavelength, the wavelength for maximal spectral radiance is \lambda = 482.96\ \text{nm}, with corresponding frequency \nu = 620.7\ \text{THz}. For the same temperature, but parameterizing by frequency, the frequency for maximal spectral radiance is \nu = 352.7\ \text{THz}, with corresponding wavelength \lambda = 850.0\ \text{nm}.
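A short sketch verifying both peaks, using the displacement constants derived later in this article:

```python
# Peak of the Planck spectrum at T = 6000 K under the two common
# parameterizations (constants from the derivation sections below).
T = 6000.0
c = 2.99792458e8                     # speed of light, m/s
lam = 2.897771955e-3 / T             # per-wavelength peak, m
nu = 5.878925757e10 * T              # per-frequency peak, Hz
print(f"per wavelength: {lam * 1e9:.1f} nm (frequency {c / lam / 1e12:.1f} THz)")
print(f"per frequency : {nu / 1e12:.1f} THz (wavelength {c / nu * 1e9:.1f} nm)")
```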
These functions are radiance density functions, which are probability density functions scaled to give units of radiance. The density function has different shapes for different parameterizations, depending on relative stretching or compression of the abscissa, which measures the change in probability density relative to a linear change in a given parameter. Since wavelength and frequency have a reciprocal relation, they represent significantly non-linear shifts in probability density relative to one another.
The total radiance is the integral of the distribution over all positive values, and that is invariant for a given temperature under any parameterization. Additionally, for a given temperature the radiance consisting of all photons between two wavelengths must be the same regardless of which distribution you use. That is to say, integrating the wavelength distribution from \lambda_1 to \lambda_2 will result in the same value as integrating the frequency distribution between the two frequencies that correspond to \lambda_1 and \lambda_2, namely from c/\lambda_2 to c/\lambda_1. However, the distribution shape depends on the parameterization, and for a different parameterization the distribution will typically have a different peak density, as these calculations demonstrate.
The important point of Wien's law, however, is that any such wavelength marker, including the median wavelength (or, alternatively, the wavelength below which any specified percentage of the emission occurs) is proportional to the reciprocal of temperature. That is, the shape of the distribution for a given parameterization scales and translates with temperature, and can be calculated once for a canonical temperature, then appropriately shifted and scaled to obtain the distribution for another temperature. This is a consequence of the strong statement of Wien's law.
Frequency-dependent formulation
For spectral flux considered per unit frequency (in hertz), Wien's displacement law describes a peak emission at the optical frequency \nu_\text{peak} given by:

\nu_\text{peak} = \frac{\alpha}{h} k_\text{B} T \approx (5.879 \times 10^{10}\ \text{Hz/K}) \cdot T

or equivalently

h \nu_\text{peak} = \alpha k_\text{B} T

where \alpha \approx 2.821439 is a constant resulting from the maximization equation, k_B is the Boltzmann constant, h is the Planck constant, and T is the absolute temperature. With the emission now considered per unit frequency, this peak now corresponds to a wavelength about 76% longer than the peak considered per unit wavelength. The relevant math is detailed in the next section.
Derivation from Planck's law
Parameterization by wavelength
Planck's law for the spectrum of black-body radiation predicts the Wien displacement law and may be used to numerically evaluate the constant relating temperature and the peak parameter value for any particular parameterization. Commonly a wavelength parameterization is used, and in that case the black body spectral radiance (power per emitting area per solid angle) is:

u_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5} \cdot \frac{1}{e^{hc/(\lambda k_\text{B} T)} - 1}

Differentiating u_\lambda(\lambda, T) with respect to \lambda and setting the derivative equal to zero gives:

\frac{\partial u_\lambda}{\partial \lambda} = 2 h c^2 \left( \frac{hc}{k_\text{B} T \lambda^7} \cdot \frac{e^{hc/(\lambda k_\text{B} T)}}{\left( e^{hc/(\lambda k_\text{B} T)} - 1 \right)^2} - \frac{5}{\lambda^6} \cdot \frac{1}{e^{hc/(\lambda k_\text{B} T)} - 1} \right) = 0

which can be simplified to give:

\frac{hc}{\lambda k_\text{B} T} \cdot \frac{e^{hc/(\lambda k_\text{B} T)}}{e^{hc/(\lambda k_\text{B} T)} - 1} = 5

By defining:

x \equiv \frac{hc}{\lambda k_\text{B} T}

the equation becomes one in the single variable x:

\frac{x e^x}{e^x - 1} = 5

which is equivalent to:

x = 5 (1 - e^{-x})

This equation is solved by

x = 5 + W_0(-5 e^{-5})

where W_0 is the principal branch of the Lambert W function, and gives x = 4.965114... . Solving for the wavelength \lambda in millimetres, and using kelvins for the temperature, yields:

\lambda_\text{peak} = \frac{hc}{x k_\text{B} T} = \frac{2.897771955\ \text{mm K}}{T}
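This solution is easy to verify numerically, for example with SciPy's Lambert W implementation and CODATA constants:

```python
# x = 5 + W0(-5 e^-5) and the resulting displacement constant
# b = h c / (x k_B).
import numpy as np
from scipy.special import lambertw
from scipy.constants import h, c, k

x = 5 + lambertw(-5 * np.exp(-5)).real
print(f"x = {x:.6f}")                    # ≈ 4.965114
print(f"b = {h * c / (x * k):.6e} m·K")  # ≈ 2.897772e-3 m·K
```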
Parameterization by frequency
Another common parameterization is by frequency. The derivation yielding the peak parameter value is similar, but starts with the form of Planck's law as a function of frequency \nu:

u_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu/(k_\text{B} T)} - 1}

The preceding process using this equation yields:

\frac{x e^x}{e^x - 1} = 3, \qquad x \equiv \frac{h \nu}{k_\text{B} T}

The net result is:

x = 3 (1 - e^{-x})

This is similarly solved with the Lambert W function:

x = 3 + W_0(-3 e^{-3})

giving x = 2.821439... .

Solving for \nu produces:

\nu_\text{peak} = \frac{x k_\text{B} T}{h} \approx (5.879 \times 10^{10}\ \text{Hz/K}) \cdot T
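The analogous numerical check for the frequency version:

```python
# x = 3 + W0(-3 e^-3) and the frequency-version constant nu_peak / T.
import numpy as np
from scipy.special import lambertw
from scipy.constants import h, k

x = 3 + lambertw(-3 * np.exp(-3)).real
print(f"x = {x:.6f}")                         # ≈ 2.821439
print(f"nu_peak / T = {x * k / h:.6e} Hz/K")  # ≈ 5.878926e10 Hz/K
```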
Parameterization by the logarithm of wavelength or frequency
Using the implicit equation x = 4(1 - e^{-x}) yields the peak in the spectral radiance density function expressed in the parameter radiance per proportional bandwidth. (That is, the density of irradiance per frequency bandwidth proportional to the frequency itself, which can be calculated by considering infinitesimal intervals of \ln \nu (or equivalently \ln \lambda) rather than of frequency itself.) This is perhaps a more intuitive way of presenting "wavelength of peak emission". The solution is x = 3.920690..., which yields \lambda_\text{peak} T \approx 3.670 \times 10^{-3}\ \text{m K} (about 635 nm for the Sun, consistent with the figure quoted in the Examples section).
Mean photon energy as an alternate characterization
Another way of characterizing the radiance distribution is via the mean photon energy:

\langle E_\text{photon} \rangle = \frac{\pi^4}{30\, \zeta(3)}\, k_\text{B} T \approx 2.701\, k_\text{B} T

where \zeta is the Riemann zeta function. The wavelength corresponding to the mean photon energy is given by:

\lambda_{\langle E \rangle} \approx \frac{5.327 \times 10^{-3}\ \text{m K}}{T}
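Both the 2.701 factor and the corresponding wavelength constant can be reproduced directly:

```python
# Mean photon energy <E> = [pi^4 / (30 zeta(3))] k_B T ≈ 2.701 k_B T,
# and the wavelength of a photon carrying that mean energy.
from scipy.constants import h, c, k, pi
from scipy.special import zeta

factor = pi**4 / (30 * zeta(3))
print(f"<E> / (k_B T) = {factor:.4f}")                 # ≈ 2.7012
print(f"lambda * T = {h * c / (factor * k):.4e} m·K")  # ≈ 5.3265e-3 m·K
```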
Criticism
Marr and Wilkin (2012) contend that the widespread teaching of Wien's displacement law in introductory courses is undesirable, and it would be better replaced by alternate material. They argue that teaching the law is problematic because:
the Planck curve is too broad for the peak to stand out or be regarded as significant;
the location of the peak depends on the parameterization, and they cite several sources as concurring that "the designation of any peak of the function is not meaningful and should, therefore, be de-emphasized";
the law is not used for determining temperatures in actual practice, direct use of the Planck function being relied upon instead.
They suggest that the average photon energy be presented in place of Wien's displacement law, as being a more physically meaningful indicator of changes that occur with changing temperature. In connection with this, they recommend that the average number of photons per second be discussed in connection with the Stefan–Boltzmann law. They recommend that the Planck spectrum be plotted as a "spectral energy density per fractional bandwidth distribution," using a logarithmic scale for the wavelength or frequency.
See also
Wien approximation
Emissivity
Sakuma–Hattori equation
Stefan–Boltzmann law
Thermometer
Ultraviolet catastrophe
References
Further reading
External links
Eric Weisstein's World of Physics
Eponymous laws of physics
Statistical mechanics
Foundational quantum physics
Light
1893 in science
1893 in Germany | Wien's displacement law | ["Physics"] | 2,831 | ["Physical phenomena", "Spectrum (physical sciences)", "Foundational quantum physics", "Electromagnetic spectrum", "Quantum mechanics", "Waves", "Light", "Statistical mechanics"] |
44,364 | https://en.wikipedia.org/wiki/Black%20body | A black body or blackbody is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. The radiation emitted by a black body in thermal equilibrium with its environment is called black-body radiation. The name "black body" is given because it absorbs all colors of light. In contrast, a white body is one with a "rough surface that reflects all incident rays completely and uniformly in all directions."
A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic black-body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is determined by the temperature alone (see figure at right), not by the body's shape or composition.
An ideal black body in thermal equilibrium has two main properties:
It is an ideal emitter: at every frequency, it emits as much or more thermal radiative energy as any other body at the same temperature.
It is a diffuse emitter: measured per unit area perpendicular to the direction, the energy is radiated isotropically, independent of direction.
Real materials emit energy at a fraction, called the emissivity, of black-body energy levels. By definition, a black body in thermal equilibrium has an emissivity \varepsilon = 1. A source with a lower emissivity, independent of frequency, is often referred to as a gray body.
Constructing black bodies with an emissivity as close to 1 as possible remains a topic of current interest.
In astronomy, the radiation from stars and planets is sometimes characterized in terms of an effective temperature, the temperature of a black body that would emit the same total flux of electromagnetic energy.
Definition
Isaac Newton introduced the notion of a black body in query 6 of his 1704 book Opticks. The idea of a black body was originally formalized by Gustav Kirchhoff in 1860.
A more modern definition drops Kirchhoff's reference to "infinitely small thicknesses".
Idealizations
This section describes some concepts developed in connection with black bodies.
Cavity with a hole
A widely used model of a black surface is a small hole in a cavity with walls that are opaque to radiation. Radiation incident on the hole will pass into the cavity, and is very unlikely to be re-emitted if the cavity is large. The lack of any re-emission means that the hole is behaving like a perfect black surface. The hole is not quite a perfect black surface; in particular, if the wavelength of the incident radiation is greater than the diameter of the hole, part will be reflected. Similarly, even in perfect thermal equilibrium, the radiation inside a finite-sized cavity will not have an ideal Planck spectrum for wavelengths comparable to or larger than the size of the cavity.
Suppose the cavity is held at a fixed temperature T and the radiation trapped inside the enclosure is at thermal equilibrium with the enclosure. The hole in the enclosure will allow some radiation to escape. If the hole is small, radiation passing in and out of the hole has negligible effect upon the equilibrium of the radiation inside the cavity. This escaping radiation will approximate black-body radiation that exhibits a distribution in energy characteristic of the temperature T and does not depend upon the properties of the cavity or the hole, at least for wavelengths smaller than the size of the hole. See the figure in the Introduction for the spectrum as a function of the frequency of the radiation, which is related to the energy of the radiation by the equation E = hf, with E = energy, h = Planck constant, f = frequency.
At any given time the radiation in the cavity may not be in thermal equilibrium, but the second law of thermodynamics states that if left undisturbed it will eventually reach equilibrium, although the time it takes to do so may be very long. Typically, equilibrium is reached by continual absorption and emission of radiation by material in the cavity or its walls. Radiation entering the cavity will be "thermalized" by this mechanism: the energy will be redistributed until the ensemble of photons achieves a Planck distribution. The time taken for thermalization is much faster with condensed matter present than with rarefied matter such as a dilute gas. At temperatures below billions of Kelvin, direct photon–photon interactions are usually negligible compared to interactions with matter. Photons are an example of an interacting boson gas, and as described by the H-theorem, under very general conditions any interacting boson gas will approach thermal equilibrium.
Transmission, absorption, and reflection
A body's behavior with regard to thermal radiation is characterized by its transmission τ, absorption α, and reflection ρ.
The boundary of a body forms an interface with its surroundings, and this interface may be rough or smooth. A nonreflecting interface separating regions with different refractive indices must be rough, because the laws of reflection and refraction governed by the Fresnel equations for a smooth interface require a reflected ray when the refractive indices of the material and its surroundings differ. A few idealized types of behavior are given particular names:
An opaque body is one that transmits none of the radiation that reaches it, although some may be reflected. That is, τ = 0 and α + ρ = 1.
A transparent body is one that transmits all the radiation that reaches it. That is, τ = 1 and α = ρ = 0.
A grey body is one where α, ρ and τ are constant for all wavelengths; this term also is used to mean a body for which α is temperature- and wavelength-independent.
A white body is one for which all incident radiation is reflected uniformly in all directions: τ = 0, α = 0, and ρ = 1.
For a black body, τ = 0, α = 1, and ρ = 0. Planck offers a theoretical model for perfectly black bodies, which he noted do not exist in nature: besides their opaque interior, they have interfaces that are perfectly transmitting and non-reflective.
Kirchhoff's perfect black bodies
Kirchhoff in 1860 introduced the theoretical concept of a perfect black body with a completely absorbing surface layer of infinitely small thickness, but Planck noted some severe restrictions upon this idea. Planck noted three requirements upon a black body: the body must (i) allow radiation to enter but not reflect; (ii) possess a minimum thickness adequate to absorb the incident radiation and prevent its re-emission; (iii) satisfy severe limitations upon scattering to prevent radiation from entering and bouncing back out. As a consequence, Kirchhoff's perfect black bodies that absorb all the radiation that falls on them cannot be realized in an infinitely thin surface layer, and impose conditions upon scattering of the light within the black body that are difficult to satisfy.
Realizations
A realization of a black body refers to a real world, physical embodiment. Here are a few.
Cavity with a hole
In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a hole in the wall of a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides. See also Hohlraum.
Near-black materials
There is interest in blackbody-like materials for camouflage and radar-absorbent materials for radar invisibility. They also have application as solar energy collectors, and infrared thermal detectors. As a perfect emitter of radiation, a hot material with black body behavior would create an efficient infrared heater, particularly in space or in a vacuum where convective heating is unavailable. They are also useful in telescopes and cameras as anti-reflection surfaces to reduce stray light, and to gather information about objects in high-contrast areas (for example, observation of planets in orbit around their stars), where blackbody-like materials absorb light that comes from the wrong sources.
It has long been known that a lamp-black coating will make a body nearly black. An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining average reflectance of 0.045%. In 2009, a team of Japanese scientists created a material called nanoblack which is close to an ideal black body, based on vertically aligned single-walled carbon nanotubes. This absorbs between 98% and 99% of the incoming light in the spectral range from the ultra-violet to the far-infrared regions.
Other examples of nearly perfect black materials are super black, prepared by chemically etching a nickel–phosphorus alloy, vertically aligned carbon nanotube arrays (like Vantablack) and flower carbon nanostructures; all absorb 99.9% of light or more.
Stars and planets
A star or planet often is modeled as a black body, and electromagnetic radiation emitted from these bodies as black-body radiation. The figure shows a highly schematic cross-section to illustrate the idea. The photosphere of the star, where the emitted light is generated, is idealized as a layer within which the photons of light interact with the material in the photosphere and achieve a common temperature T that is maintained over a long period of time. Some photons escape and are emitted into space, but the energy they carry away is replaced by energy from within the star, so that the temperature of the photosphere is nearly steady. Changes in the core lead to changes in the supply of energy to the photosphere, but such changes are slow on the time scale of interest here. Assuming these circumstances can be realized, the outer layer of the star is somewhat analogous to the example of an enclosure with a small hole in it, with the hole replaced by the limited transmission into space at the outside of the photosphere. With all these assumptions in place, the star emits black-body radiation at the temperature of the photosphere.
Using this model the effective temperature of stars is estimated, defined as the temperature of a black body that yields the same surface flux of energy as the star. If a star were a black body, the same effective temperature would result from any region of the spectrum. For example, comparisons in the B (blue) or V (visible) range lead to the so-called B-V color index, which increases the redder the star, with the Sun having an index of +0.648 ± 0.006. Combining the U (ultraviolet) and the B indices leads to the U-B index, which becomes more negative the hotter the star and the more the UV radiation. Assuming the Sun is a type G2 V star, its U-B index is +0.12. The two indices for the two most common types of star sequences are compared in the figure (diagram) with the effective surface temperature of the stars if they were perfect black bodies. There is a rough correlation. For example, for a given B-V index measurement, the curves of both most common sequences of star (the main sequence and the supergiants) lie below the corresponding black-body U-B index that includes the ultraviolet spectrum, showing that both groupings of stars emit less ultraviolet light than a black body with the same B-V index. It is perhaps surprising that they fit a black body curve as well as they do, considering that stars have greatly different temperatures at different depths. For example, the Sun has an effective temperature of 5780 K, which can be compared to the temperature of its photosphere (the region generating the light), which ranges from about 5000 K at its outer boundary with the chromosphere to about 9500 K at its inner boundary with the convection zone, approximately 500 km deep.
Black holes
A black hole is a region of spacetime from which nothing escapes. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, making it almost an ideal black body (radiation with a wavelength equal to or larger than the diameter of the hole may not be absorbed, so black holes are not perfect black bodies). Physicists believe that to an outside observer, black holes have a non-zero temperature and emit black-body radiation, radiation with a nearly perfect black-body spectrum, ultimately evaporating. The mechanism for this emission is related to vacuum fluctuations in which a virtual pair of particles is separated by the gravity of the hole, one member being sucked into the hole, and the other being emitted. The energy distribution of emission is described by Planck's law with a temperature T:

T = \frac{\hbar c^3}{8 \pi G M k_B}

where c is the speed of light, ℏ is the reduced Planck constant, kB is the Boltzmann constant, G is the gravitational constant and M is the mass of the black hole. These predictions have not yet been tested either observationally or experimentally.
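As a rough numerical illustration (a minimal sketch, assuming SI constant values and a one-solar-mass black hole; the mass is an example input, not a measurement), the formula above gives a temperature far below that of the cosmic microwave background:

```python
import math

# Physical constants in SI units
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23      # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    """Black-hole temperature T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

M_sun = 1.989e30  # one solar mass in kg (example input)
print(f"T = {hawking_temperature(M_sun):.1e} K")  # about 6e-8 K
```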
Cosmic microwave background radiation
The Big Bang theory is based upon the cosmological principle, which states that on large scales the Universe is homogeneous and isotropic. According to theory, the Universe approximately a second after its formation was a near-ideal black body in thermal equilibrium at a temperature above 10¹⁰ K. The temperature decreased as the Universe expanded and the matter and radiation in it cooled. The cosmic microwave background radiation observed today is "the most perfect black body ever measured in nature". It has a nearly ideal Planck spectrum at a temperature of about 2.7 K. It departs from the perfect isotropy of true black-body radiation by an observed anisotropy that varies with angle on the sky only to about one part in 100,000.
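As a quick cross-check of that 2.7 K figure, a minimal sketch using Wien's displacement law (λ_max = b/T, with the standard displacement constant; the temperature is the rounded value from the text) recovers the millimetre-wave peak that gives the background its "microwave" name:

```python
# Wien's displacement law: lambda_max = b / T for a black body
b = 2.897771955e-3  # Wien displacement constant, m K
T_cmb = 2.725       # CMB temperature in kelvins (rounded value from the text)

lambda_max = b / T_cmb
print(f"Peak wavelength: {lambda_max * 1e3:.2f} mm")  # about 1.06 mm, in the microwave band
```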
Radiative cooling
The integration of Planck's law over all frequencies provides the total energy per unit of time per unit of surface area radiated by a black body maintained at a temperature T, and is known as the Stefan–Boltzmann law:

P/A = \sigma T^4

where σ is the Stefan–Boltzmann constant, σ ≈ 5.670 × 10⁻⁸ W m⁻² K⁻⁴. To remain in thermal equilibrium at constant temperature T, the black body must absorb or internally generate this amount of power P over the given area A.
The cooling of a body due to thermal radiation is often approximated using the Stefan–Boltzmann law supplemented with a "gray body" emissivity ε ≤ 1. The rate of decrease of the temperature of the emitting body can be estimated from the power radiated and the body's heat capacity. This approach is a simplification that ignores details of the mechanisms behind heat redistribution (which may include changing composition, phase transitions or restructuring of the body) that occur within the body while it cools, and assumes that at each moment in time the body is characterized by a single temperature. It also ignores other possible complications, such as changes in the emissivity with temperature, and the role of other accompanying forms of energy emission, for example, emission of particles like neutrinos.
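As a sketch of this kind of estimate (not a rigorous model: it assumes a uniform-temperature gray sphere, constant heat capacity and emissivity, and illustrative material values), the cooling can be stepped forward numerically from dT/dt = −εσA T⁴ / (m c_p):

```python
import math

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def cool(T0, radius, mass, c_p, emissivity, t_end, dt=1.0):
    """Euler integration of dT/dt = -eps * sigma * A * T^4 / (m * c_p)
    for a uniform-temperature gray sphere radiating into empty space."""
    area = 4 * math.pi * radius**2
    T, t = T0, 0.0
    while t < t_end:
        T -= emissivity * sigma * area * T**4 / (mass * c_p) * dt
        t += dt
    return T

# Hypothetical example: a 0.1 m iron sphere starting at 1000 K, cooling for one hour
print(f"{cool(T0=1000.0, radius=0.1, mass=32.0, c_p=450.0, emissivity=0.8, t_end=3600.0):.0f} K")
```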
If a hot emitting body is assumed to follow the Stefan–Boltzmann law and its power emission P and temperature T are known, this law can be used to estimate the dimensions of the emitting object, because the total emitted power is proportional to the area of the emitting surface. In this way it was found that X-ray bursts observed by astronomers originated in neutron stars with a radius of about 10 km, rather than black holes as originally conjectured. An accurate estimate of size requires some knowledge of the emissivity, particularly its spectral and angular dependence.
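A minimal sketch of that size estimate, treating the burster as a perfect spherical black body; the luminosity and temperature below are assumed order-of-magnitude values for an X-ray burst, not data from any particular observation:

```python
import math

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_radius(luminosity_w: float, temperature_k: float) -> float:
    """Radius of a spherical black body, from L = 4 pi R^2 sigma T^4."""
    return math.sqrt(luminosity_w / (4 * math.pi * sigma * temperature_k**4))

# Assumed burst values: L ~ 1e31 W, T ~ 2e7 K
print(f"R = {blackbody_radius(1e31, 2e7) / 1e3:.1f} km")  # roughly 9 km, neutron-star scale
```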
See also
Kirchhoff's law of thermal radiation
Vantablack, a substance produced in 2014 and among the blackest known
Planckian locus, black body incandescence in a given chromaticity space
Concepts in astrophysics
Electromagnetic radiation
Heat transfer
Infrared | Black body | [
"Physics",
"Chemistry"
] | 3,298 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Concepts in astrophysics",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Electromagnetic spectrum",
"Astrophysics",
"Radiation",
"Thermodynamics",
"Infrared"
] |
44,370 | https://en.wikipedia.org/wiki/Trefoil | A trefoil is a graphic form composed of the outline of three overlapping rings, used in architecture, Pagan and Christian symbolism, among other areas. The term is also applied to other symbols with a threefold shape. A similar shape with four rings is called a quatrefoil.
Architecture
Ornamentation
'Trefoil' is a term in Gothic architecture given to the ornamental foliation or cusping introduced in the heads of window-lights, tracery, and panellings, in which the centre takes the form of a three-lobed leaf (formed from three partially overlapping circles). One of the earliest examples is in the plate tracery at Winchester Cathedral (1222–1235). The fourfold version of an architectural trefoil is a quatrefoil.
A simple trefoil shape in itself can be symbolic of the Trinity, while a trefoil combined with an equilateral triangle was also a moderately common symbol of the Christian Trinity during the late Middle Ages in some parts of Europe, similar to a barbed quatrefoil. Two forms of a trefoil combined with a triangle are shown below:
A dove, which symbolizes the Holy Spirit, is sometimes depicted within the outlined form of the trefoil combined with a triangle.
Architectural layout
In architecture and archaeology, a 'trefoil' describes a layout or floor plan consisting of three apses in clover-leaf shape, as for example in the Megalithic temples of Malta.
Particularly in church architecture, such a layout may be called a "triconchos".
Heraldry
The heraldic 'trefoil' is a stylized clover. It should not be confused with the figure of French heraldry known in English as the "threefoil", which is a stylized flower with three petals and differs from the heraldic trefoil in not being slipped.
Symbols
Symmetrical trefoils are particularly popular as warning and informational symbols. If a box containing hazardous material is moved around and shifted into different positions, it is still easy to recognize the symbol, while the distinctive trefoil design of the recycling symbol makes it easy for a consumer to notice and identify the packaging the symbol has been printed on as recyclable. Easily stenciled symbols are also favored.
While the green trefoil is considered by many to be the symbol of Ireland, the harp has much greater officially recognized status. Therefore, shamrocks generally do not appear on Irish coins or postage stamps.
A trefoil is also part of the logo for Adidas Originals, which also includes three stripes.
See also
Clover or Trefoil, a plant
Fleur-de-Lys
Foil (architecture)
Quatrefoil
Shamrock
Trefoil arch
Trefoil domain
Trefoil knot
Torus knot
Christian symbols
Heraldic charges
Ornaments
Piecewise-circular curves
Symbols
Visual motifs | Trefoil | [
"Mathematics"
] | 579 | [
"Piecewise-circular curves",
"Visual motifs",
"Symbols",
"Euclidean plane geometry",
"Planes (geometry)"
] |
44,401 | https://en.wikipedia.org/wiki/Brown%20dwarf | Brown dwarfs are substellar objects that have more mass than the biggest gas giant planets, but less than the least massive main-sequence stars. Their mass is approximately 13 to 80 times that of Jupiter: not big enough to sustain nuclear fusion of ordinary hydrogen (1H) into helium in their cores, but massive enough to emit some light and heat from the fusion of deuterium (2H). The most massive ones (above roughly 65 Jupiter masses) can fuse lithium (7Li).
Astronomers classify self-luminous objects by spectral type, a distinction intimately tied to the surface temperature, and brown dwarfs occupy types M, L, T, and Y. As brown dwarfs do not undergo stable hydrogen fusion, they cool down over time, progressively passing through later spectral types as they age.
Their name comes not from the color of light they emit but from their falling between white dwarf stars and "dark" planets in size. To the naked eye, brown dwarfs would appear in different colors depending on their temperature. The warmest ones are possibly orange or red, while cooler brown dwarfs would likely appear magenta or black to the human eye. Brown dwarfs may be fully convective, with no layers or chemical differentiation by depth.
Though their existence was initially theorized in the 1960s, it was not until the mid-1990s that the first unambiguous brown dwarfs were discovered. As brown dwarfs have relatively low surface temperatures, they are not very bright at visible wavelengths, emitting most of their light in the infrared. However, with the advent of more capable infrared detecting devices, thousands of brown dwarfs have been identified. The nearest known brown dwarfs are located in the Luhman 16 system, a binary of L- and T-type brown dwarfs about 6.5 light-years from the Sun. Luhman 16 is the third closest system to the Sun after Alpha Centauri and Barnard's Star.
History
Early theorizing
The objects now called "brown dwarfs" were theorized by Shiv S. Kumar in the 1960s to exist and were originally called black dwarfs, a classification for dark substellar objects floating freely in space that were not massive enough to sustain hydrogen fusion. However, (a) the term black dwarf was already in use to refer to a cold white dwarf; (b) red dwarfs fuse hydrogen; and (c) these objects may be luminous at visible wavelengths early in their lives. Because of this, alternative names for these objects were proposed, including planetar and substar. In 1975, Jill Tarter suggested the term "brown dwarf", using "brown" as an approximate color.
The term "black dwarf" still refers to a white dwarf that has cooled to the point that it no longer emits significant amounts of light. However, the time required for even the lowest-mass white dwarf to cool to this temperature is calculated to be longer than the current age of the universe; hence such objects are expected to not yet exist.
Early theories concerning the nature of the lowest-mass stars and the hydrogen-burning limit suggested that a population I object with a mass less than 0.07 solar masses or a population II object less than 0.09 solar masses would never go through normal stellar evolution and would become a completely degenerate star. The resulting brown dwarf is sometimes called a failed star. The first self-consistent calculation of the hydrogen-burning minimum mass confirmed a value between 0.07 and 0.08 solar masses for population I objects.
Deuterium fusion
The discovery of deuterium burning down to 0.013 solar masses (about 13 Jupiter masses) and the impact of dust formation in the cool outer atmospheres of brown dwarfs in the late 1980s brought these theories into question. However, such objects were hard to find because they emit almost no visible light. Their strongest emissions are in the infrared (IR) spectrum, and ground-based IR detectors were too imprecise at that time to readily identify any brown dwarfs.
Since then, numerous searches by various methods have sought these objects. These methods included multi-color imaging surveys around field stars, imaging surveys for faint companions of main-sequence dwarfs and white dwarfs, surveys of young star clusters, and radial velocity monitoring for close companions.
GD 165B and class L
For many years, efforts to discover brown dwarfs were fruitless. In 1988, however, a faint companion to the white dwarf star GD 165 was found in an infrared search of white dwarfs. The spectrum of the companion GD 165B was very red and enigmatic, showing none of the features expected of a low-mass red dwarf. It became clear that GD 165B would need to be classified as a much cooler object than the latest M dwarfs then known. GD 165B remained unique for almost a decade until the advent of the Two Micron All-Sky Survey (2MASS) in 1997, which discovered many objects with similar colors and spectral features.
Today, GD 165B is recognized as the prototype of a class of objects now called "L dwarfs".
Although the discovery of the coolest dwarf was highly significant at the time, it was debated whether GD 165B would be classified as a brown dwarf or simply a very-low-mass star, because observationally it is very difficult to distinguish between the two.
Soon after the discovery of GD 165B, other brown-dwarf candidates were reported. Most failed to live up to their candidacy, however, because the absence of lithium showed them to be stellar objects. True stars burn their lithium within a little over 100 Myr, whereas brown dwarfs (which can, confusingly, have temperatures and luminosities similar to true stars) will not. Hence, the detection of lithium in the atmosphere of an object older than 100 Myr ensures that it is a brown dwarf.
Gliese 229B and class T
The first class "T" brown dwarf was discovered in 1994 by Caltech astronomers Shrinivas Kulkarni, Tadashi Nakajima, Keith Matthews and Rebecca Oppenheimer, and Johns Hopkins scientists Samuel T. Durrance and David Golimowski. It was confirmed in 1995 as a substellar companion to Gliese 229. Gliese 229b and Teide 1, both confirmed in 1995, were the first two instances of clear evidence for brown dwarfs. Teide 1 was identified by the presence of the 670.8 nm lithium line and was found to have a temperature and luminosity well below the stellar range.
Its near-infrared spectrum clearly exhibited a methane absorption band at 2 micrometres, a feature that had previously only been observed in the atmospheres of giant planets and that of Saturn's moon Titan. Methane absorption is not expected at any temperature of a main-sequence star. This discovery helped to establish yet another spectral class even cooler than L dwarfs, known as "T dwarfs", for which Gliese 229B is the prototype.
Teide 1 and class M
The first confirmed class "M" brown dwarf was discovered by Spanish astrophysicists Rafael Rebolo (head of the team), María Rosa Zapatero-Osorio, and Eduardo L. Martín in 1994. This object, found in the Pleiades open cluster, received the name Teide 1. The discovery article was submitted to Nature in May 1995, and published on 14 September 1995. Nature highlighted "Brown dwarfs discovered, official" on the front page of that issue.
Teide 1 was discovered in images collected by the IAC team on 6 January 1994 using the 80 cm telescope (IAC 80) at Teide Observatory, and its spectrum was first recorded in December 1994 using the 4.2 m William Herschel Telescope at Roque de los Muchachos Observatory (La Palma). The distance, chemical composition, and age of Teide 1 could be established because of its membership in the young Pleiades star cluster. Using the most advanced stellar and substellar evolution models at that moment, the team estimated for Teide 1 a mass below the stellar-mass limit. The object became a reference in subsequent young brown dwarf related works.
In theory, a brown dwarf below about 65 Jupiter masses is unable to burn lithium by thermonuclear fusion at any time during its evolution. This fact is one of the lithium test principles used to judge the substellar nature of low-luminosity and low-surface-temperature astronomical bodies.
High-quality spectral data acquired by the Keck 1 telescope in November 1995 showed that Teide 1 still had the initial lithium abundance of the original molecular cloud from which Pleiades stars formed, proving the lack of thermonuclear fusion in its core. These observations fully confirmed that Teide 1 is a brown dwarf, as well as the efficiency of the spectroscopic lithium test.
For some time, Teide 1 was the smallest known object outside the Solar System that had been identified by direct observation. Since then, over 1,800 brown dwarfs have been identified, even some very close to Earth, like Epsilon Indi Ba and Bb, a pair of brown dwarfs gravitationally bound to a Sun-like star 12 light-years from the Sun, and Luhman 16, a binary system of brown dwarfs at 6.5 light-years from the Sun.
Theory
The standard mechanism for star birth is through the gravitational collapse of a cold interstellar cloud of gas and dust. As the cloud contracts, it heats due to the Kelvin–Helmholtz mechanism. Early in the process the contracting gas quickly radiates away much of the energy, allowing the collapse to continue. Eventually, the central region becomes sufficiently dense to trap radiation. Consequently, the central temperature and density of the collapsed cloud increase dramatically with time, slowing the contraction, until the conditions are hot and dense enough for thermonuclear reactions to occur in the core of the protostar. For a typical star, gas and radiation pressure generated by the thermonuclear fusion reactions within its core will support it against any further gravitational contraction. Hydrostatic equilibrium is reached, and the star will spend most of its lifetime fusing hydrogen into helium as a main-sequence star.
If, however, the initial mass of the protostar is less than about 0.08 solar masses, normal hydrogen thermonuclear fusion reactions will not ignite in the core. Gravitational contraction does not heat the small protostar very effectively, and before the temperature in the core can increase enough to trigger fusion, the density reaches the point where electrons become closely packed enough to create quantum electron degeneracy pressure. According to brown dwarf interior models, the typical core conditions of density, temperature and pressure stop short of the values required for hydrogen ignition.
This means that the protostar is not massive or dense enough ever to reach the conditions needed to sustain hydrogen fusion. The infalling matter is prevented, by electron degeneracy pressure, from reaching the densities and pressures needed.
Further gravitational contraction is prevented and the result is a brown dwarf that simply cools off by radiating away its internal thermal energy. Note that, in principle, it is possible for a brown dwarf to slowly accrete mass above the hydrogen burning limit without initiating hydrogen fusion. This could happen via mass transfer in a binary brown dwarf system.
High-mass brown dwarfs versus low-mass stars
Lithium is generally present in brown dwarfs and not in low-mass stars. Stars, which reach the high temperature necessary for fusing hydrogen, rapidly deplete their lithium. Fusion of lithium-7 and a proton occurs, producing two helium-4 nuclei. The temperature necessary for this reaction is just below that necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is eventually depleted. Therefore, the presence of the lithium spectral line in a candidate brown dwarf is a strong indicator that it is indeed a substellar object.
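Written out, the reaction described above is:

^{7}\mathrm{Li} + \mathrm{p} \rightarrow 2\,^{4}\mathrm{He}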
Lithium test
The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the lithium test, and was pioneered by Rafael Rebolo, Eduardo Martín and Antonio Magazzu. However, lithium is also seen in very young stars, which have not yet had enough time to burn it all.
Heavier stars, like the Sun, can also retain lithium in their outer layers, which never get hot enough to fuse lithium, and whose convective layer does not mix with the core where the lithium would be rapidly depleted. Those larger stars are easily distinguishable from brown dwarfs by their size and luminosity.
Conversely, brown dwarfs at the high end of their mass range can be hot enough to deplete their lithium when they are young. Dwarfs of mass greater than about 65 Jupiter masses can burn their lithium by the time they are half a billion years old; thus the lithium test is not perfect.
Atmospheric methane
Unlike stars, older brown dwarfs are sometimes cool enough that, over very long periods of time, their atmospheres can gather observable quantities of methane, which cannot form in hotter objects. Dwarfs confirmed in this fashion include Gliese 229B.
Iron, silicate and sulfide clouds
Main-sequence stars cool, but eventually reach a minimum bolometric luminosity that they can sustain through steady fusion. This luminosity varies from star to star, but is generally at least 0.01% that of the Sun. Brown dwarfs cool and darken steadily over their lifetimes; sufficiently old brown dwarfs will be too faint to be detectable.
Clouds are used to explain the weakening of the iron hydride (FeH) spectral line in late L-dwarfs. Iron clouds deplete FeH in the upper atmosphere, and the cloud layer blocks the view to lower layers still containing FeH. The later strengthening of this chemical compound at the cooler temperatures of mid- to late T-dwarfs is explained by disturbed clouds that allow a telescope to look into the deeper layers of the atmosphere that still contain FeH. Young L/T-dwarfs (L2-T4) show high variability, which could be explained with clouds, hot spots, magnetically driven aurorae or thermochemical instabilities. The clouds of these brown dwarfs are explained as either iron clouds with varying thickness or a lower thick iron cloud layer and an upper silicate cloud layer. This upper silicate cloud layer can consist of quartz, enstatite, corundum and/or forsterite. It is, however, not clear whether silicate clouds are always necessary for young objects. Silicate absorption can be directly observed in the mid-infrared at 8 to 12 μm. Observations with Spitzer IRS have shown that silicate absorption is common, but not ubiquitous, for L2-L8 dwarfs. Additionally, MIRI has observed silicate absorption in the planetary-mass companion VHS 1256b.
Iron rain as part of atmospheric convection processes is possible only in brown dwarfs, and not in small stars. Spectroscopic research into iron rain is still ongoing, and not all brown dwarfs will always have this atmospheric anomaly. In 2013, a heterogeneous iron-containing atmosphere was imaged around the B component in the nearby Luhman 16 system.
For late T-type brown dwarfs, only a few variability searches have been carried out. Thin cloud layers are predicted to form in late T-dwarfs from chromium and potassium chloride, as well as several sulfides. These sulfides are manganese sulfide, sodium sulfide and zinc sulfide. The variable T7 dwarf 2M0050–3322 is modeled as having a top layer of potassium chloride clouds, a middle layer of sodium sulfide clouds and a lower layer of manganese sulfide clouds. Patchy clouds in the top two cloud layers could explain why the methane and water vapor bands are variable.
On the Y-dwarf WISE 0855−0714, at the lowest known brown dwarf temperatures, patchy layers of sulfide and water-ice clouds could cover 50% of the surface.
Low-mass brown dwarfs versus high-mass planets
Like stars, brown dwarfs form independently, but, unlike stars, they lack sufficient mass to "ignite" hydrogen fusion. Like all stars, they can occur singly or in close proximity to other stars. Some orbit stars and can, like planets, have eccentric orbits.
Size and fuel-burning ambiguities
Brown dwarfs are all roughly the same radius as Jupiter. At the high end of their mass range (60–90 Jupiter masses), the volume of a brown dwarf is governed primarily by electron-degeneracy pressure, as it is in white dwarfs; at the low end of the range (roughly 10 Jupiter masses), their volume is governed primarily by Coulomb pressure, as it is in planets. The net result is that the radii of brown dwarfs vary by only 10–15% over the range of possible masses. Moreover, the mass–radius relationship shows no change from about one Saturn mass to the onset of hydrogen burning (about 80 Jupiter masses), suggesting that from this perspective brown dwarfs are simply high-mass Jovian planets. This can make distinguishing them from planets difficult.
In addition, many brown dwarfs undergo no fusion; even those at the high end of the mass range (over 60 Jupiter masses) cool quickly enough that after 10 million years they no longer undergo fusion.
Heat spectrum
X-ray and infrared spectra are telltale signs of brown dwarfs. Some emit X-rays; and all "warm" dwarfs continue to glow tellingly in the red and infrared spectra until they cool to planet-like temperatures (under about 1,000 K).
Gas giants have some of the characteristics of brown dwarfs. Like the Sun, Jupiter and Saturn are both made primarily of hydrogen and helium. Saturn is nearly as large as Jupiter, despite having only 30% the mass. Three of the giant planets in the Solar System (Jupiter, Saturn, and Neptune) emit much more heat (up to about twice as much) than they receive from the Sun. All four giant planets have their own "planetary" systems, in the form of extensive moon systems.
Current IAU standard
Currently, the International Astronomical Union considers an object above 13 Jupiter masses (the limiting mass for thermonuclear fusion of deuterium) to be a brown dwarf, whereas an object under that mass (and orbiting a star or stellar remnant) is considered a planet. The minimum mass required to trigger sustained hydrogen burning (about 80 Jupiter masses) forms the upper limit of the definition.
It is also debated whether brown dwarfs would be better defined by their formation process rather than by theoretical mass limits based on nuclear fusion reactions. Under this interpretation brown dwarfs are those objects that represent the lowest-mass products of the star formation process, while planets are objects formed in an accretion disk surrounding a star. The coolest free-floating objects discovered, such as WISE 0855, as well as the lowest-mass young objects known, like PSO J318.5−22, are thought to have masses below 13 Jupiter masses, and as a result are sometimes referred to as planetary-mass objects due to the ambiguity of whether they should be regarded as rogue planets or brown dwarfs. There are planetary-mass objects known to orbit brown dwarfs, such as 2M1207b, 2MASS J044144b and Oph 98 B.
The 13-Jupiter-mass cutoff is a rule of thumb rather than a quantity with precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13-Jupiter-mass value is somewhere in between. The amount of deuterium burnt also depends to some extent on the composition of the object, specifically on the amount of helium and deuterium present and on the fraction of heavier elements, which determines the atmospheric opacity and thus the radiative cooling rate.
As of 2011 the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around 13 MJup in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016, this limit was increased to 60 Jupiter masses, based on a study of mass–density relationships.
The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses.
Sub-brown dwarf
Objects below 13 Jupiter masses, called sub-brown dwarfs or planetary-mass brown dwarfs, form in the same manner as stars and brown dwarfs (i.e. through the collapse of a gas cloud) but have a mass below the limiting mass for thermonuclear fusion of deuterium.
Some researchers call them free-floating planets, whereas others call them planetary-mass brown dwarfs.
Role of other physical properties in the mass estimate
While spectroscopic features can help to distinguish between low-mass stars and brown dwarfs, it is often necessary to estimate the mass to come to a conclusion. The theory behind the mass estimate is that brown dwarfs with a similar mass form in a similar way and are hot when they form. Some have spectral types that are similar to low-mass stars, such as 2M1101AB. As they cool down the brown dwarfs should retain a range of luminosities depending on the mass. Without the age and luminosity, a mass estimate is difficult; for example, an L-type brown dwarf could be an old brown dwarf with a high mass (possibly a low-mass star) or a young brown dwarf with a very low mass. For Y dwarfs this is less of a problem, as they remain low-mass objects near the sub-brown dwarf limit, even for relatively high age estimates. For L and T dwarfs it is still useful to have an accurate age estimate. The luminosity is here the less concerning property, as this can be estimated from the spectral energy distribution. The age estimate can be done in two ways. Either the brown dwarf is young and still has spectral features that are associated with youth, or the brown dwarf co-moves with a star or stellar group (star cluster or association), where age estimates are easier to obtain. A very young brown dwarf that was further studied with this method is 2M1207 and the companion 2M1207b. Based on the location, proper motion and spectral signature, this object was determined to belong to the ~8-million-year-old TW Hydrae association, and the mass of the secondary was determined to be 8 ± 2 Jupiter masses, below the deuterium-burning limit. An example of a very old age obtained by the co-movement method is the brown dwarf + white dwarf binary COCONUTS-1, with the white dwarf estimated to be about 7 billion years old. In this case the mass was not estimated with the derived age, but the co-movement provided an accurate distance estimate, using Gaia parallax. Using this measurement the authors estimated the radius, which was then used to estimate the mass of the brown dwarf.
Observations
Classification of brown dwarfs
Spectral class M
These are brown dwarfs with a spectral class of M5.5 or later; they are also called late-M dwarfs. Some scientists regard them as red dwarfs. All brown dwarfs with spectral type M are young objects, such as Teide 1, which is the first M-type brown dwarf discovered, and LP 944-20, the closest M-type brown dwarf.
Spectral class L
The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is an optical spectrum dominated by absorption bands of titanium(II) oxide (TiO) and vanadium(II) oxide (VO) molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO features of M dwarfs. The subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs). Over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS). This spectral class also contains the coolest main-sequence stars (> 80 MJ), which have spectral classes L2 to L6.
Spectral class T
As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfs. T dwarfs are pinkish-magenta. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of H2O and carbon monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH4), a feature which in the Solar System is found only in the giant planets and Titan. CH4, H2O, and molecular hydrogen (H2) collision-induced absorption (CIA) give Gliese 229B blue near-infrared colors. Its steeply sloped red optical spectrum also lacks the FeH and CrH bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali metals Na and K. These differences led J. Davy Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-band CH4 absorption. At least 355 T dwarfs are known. NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs. Because of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magenta. Early observations limited how distant T-dwarfs could be observed. T-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun. Observations with JWST have detected T-dwarfs such as UNCOVER-BD-1 up to 4,500 parsecs from the Sun.
Spectral class Y
In 2009, the coolest-known brown dwarfs had estimated effective temperatures between about 500 and 600 K, and were assigned the spectral class T9. Three examples are the brown dwarfs CFBDS J005910.90–011401.3, ULAS J133553.45+113005.2 and ULAS J003402.77−005206.7. The spectra of these objects have absorption peaks around 1.55 micrometres. Delorme et al. have suggested that this feature is due to absorption from ammonia and that this should be taken as indicating the T–Y transition, making these objects of type Y0. However, the feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The first JWST spectral energy distribution of a Y-dwarf was able to observe several bands of molecules in the atmosphere of the Y0-dwarf WISE 0359−5401. The observations covered spectroscopy from 1 to 12 μm and photometry at 15, 18 and 21 μm. The molecules water (H2O), methane (CH4), carbon monoxide (CO), carbon dioxide (CO2) and ammonia (NH3) were detected in WISE 0359−5401. Many of these features have been observed before in this Y-dwarf and warmer T-dwarfs by other observatories, but JWST was able to observe them in a single spectrum. Methane is the main reservoir of carbon in the atmosphere of WISE 0359−5401, but there is still enough carbon left to form detectable carbon monoxide (at 4.5–5.0 μm) and carbon dioxide (at 4.2–4.35 μm) in the Y-dwarf. Ammonia was difficult to detect before JWST, as it blends in with the absorption feature of water in the near-infrared, as well at 5.5–7.1 μm. At longer wavelengths of 8.5–12 μm the spectrum of WISE 0359−5401 is dominated by the absorption of ammonia. At 3 μm there is an additional newly detected ammonia feature.
Role of vertical mixing
In the hydrogen-dominated atmosphere of brown dwarfs a chemical equilibrium between carbon monoxide and methane exists. Carbon monoxide reacts with hydrogen molecules and forms methane and hydroxyl in this reaction. The hydroxyl radical might later react with hydrogen and form water molecules. In the other direction of the reaction, methane reacts with hydroxyl and forms carbon monoxide and hydrogen. The chemical reaction is tilted towards carbon monoxide at higher temperatures (L-dwarfs) and lower pressure. At lower temperatures (T-dwarfs) and higher pressure the reaction is tilted towards methane, and methane predominates at the T/Y-boundary. However, vertical mixing of the atmosphere can cause methane to sink into lower layers of the atmosphere and carbon monoxide to rise from these lower and hotter layers. The carbon monoxide is slow to react back into methane because of an energy barrier that prevents the breakdown of the C-O bonds. This forces the observable atmosphere of a brown dwarf to be in chemical disequilibrium. The L/T transition is mainly defined by the transition from a carbon-monoxide-dominated atmosphere in L-dwarfs to a methane-dominated atmosphere in T-dwarfs. The amount of vertical mixing can therefore push the L/T-transition to lower or higher temperatures. This becomes important for objects with modest surface gravity and extended atmospheres, such as giant exoplanets. This pushes the L/T transition to lower temperatures for giant exoplanets. For brown dwarfs this transition occurs at around 1200 K. The exoplanet HR 8799c, on the other hand, does not show any methane, while having a temperature of 1100 K.
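The equilibrium described above is commonly summarized by the net reaction

\mathrm{CO} + 3\,\mathrm{H}_2 \rightleftharpoons \mathrm{CH}_4 + \mathrm{H}_2\mathrm{O}

which runs to the right (methane and water) at the lower temperatures and higher pressures of T-dwarfs, and to the left (carbon monoxide) at the higher temperatures of L-dwarfs.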
The transition between T- and Y-dwarfs is often defined as 500 K because of the lack of spectral observations of these cold and faint objects. Future observations with JWST and the ELTs might improve the sample of Y-dwarfs with observed spectra. Y-dwarfs are dominated by deep spectral features of methane, water vapor and possibly absorption features of ammonia and water ice. Vertical mixing, clouds, metallicity, photochemistry, lightning, impact shocks and metallic catalysts might influence the temperature at which the L/T and T/Y transition occurs.
Secondary features
Young brown dwarfs have low surface gravities because they have larger radii and lower masses than the field stars of similar spectral type. These sources are noted by a letter beta (β) for intermediate surface gravity or gamma (γ) for low surface gravity. Indicators of low surface gravity include weak CaH, K I and Na I lines, as well as a strong VO line. Alpha (α) denotes normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for "peculiar"; this suffix is still used for other features that are unusual, and summarizes different properties, indicating low surface gravity, subdwarfs and unresolved binaries. The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects. The red suffix describes objects with red color, but an older age. This is not interpreted as low surface gravity, but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries; others, such as 2MASS J11263991−5003550, are not binaries and are explained by thin and/or large-grained clouds.
Spectral and atmospheric properties of brown dwarfs
The majority of flux emitted by L and T dwarfs is in the 1- to 2.5-micrometre near-infrared range. Low and decreasing temperatures through the late-M, -L, and -T dwarf sequence result in a rich near-infrared spectrum containing a wide variety of features, from relatively narrow lines of neutral atomic species to broad molecular bands, all of which have different dependencies on temperature, gravity, and metallicity. Furthermore, these low temperature conditions favor condensation out of the gas state and the formation of grains.
Typical atmospheres of known brown dwarfs range in temperature from 2200 K down to about 750 K. Compared to stars, which warm themselves with steady internal fusion, brown dwarfs cool quickly over time; more massive dwarfs cool more slowly than less massive ones. There is some evidence that the cooling of brown dwarfs slows down at the transition between spectral classes L and T (about 1000 K).
Observations of known brown dwarf candidates have revealed a pattern of brightening and dimming of infrared emissions that suggests relatively cool, opaque cloud patterns obscuring a hot interior that is stirred by extreme winds. The weather on such bodies is thought to be extremely strong, comparable to but far exceeding Jupiter's famous storms.
On January 8, 2013, astronomers using NASA's Hubble and Spitzer space telescopes probed the stormy atmosphere of a brown dwarf named 2MASS J22282889–4310262, creating the most detailed "weather map" of a brown dwarf thus far. It shows wind-driven, planet-sized clouds. The new research is a stepping stone toward a better understanding not only of brown dwarfs, but also of the atmospheres of planets beyond the Solar System.
In April 2020 scientists reported measuring wind speeds of approximately 2,300 km/h (up to 1,450 miles per hour) on the nearby brown dwarf 2MASS J10475385+2124234. To calculate the measurements, scientists compared the rotational movement of atmospheric features, as ascertained by brightness changes, against the electromagnetic rotation generated by the brown dwarf's interior. The results confirmed previous predictions that brown dwarfs would have high winds. Scientists are hopeful that this comparison method can be used to explore the atmospheric dynamics of other brown dwarfs and extrasolar planets.
Observational techniques
Coronagraphs have recently been used to detect faint objects orbiting bright visible stars, including Gliese 229B.
Sensitive telescopes equipped with charge-coupled devices (CCDs) have been used to search distant star clusters for faint objects, including Teide 1.
Wide-field searches have identified individual faint objects, such as Kelu-1 (30 light-years away).
Brown dwarfs are often discovered in surveys searching for exoplanets. Methods of detecting exoplanets work for brown dwarfs as well, although brown dwarfs are much easier to detect.
Brown dwarfs can be powerful emitters of radio emission due to their strong magnetic fields. Observing programs at the Arecibo Observatory and the Very Large Array have detected over a dozen such objects, which are also called ultracool dwarfs because they share common magnetic properties with other objects in this class. The detection of radio emission from brown dwarfs permits their magnetic field strengths to be measured directly.
Milestones
1995: First brown dwarf verified. Teide 1, an M8 object in the Pleiades cluster, is picked out with a CCD in the Spanish Observatory of Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
First methane brown dwarf verified. Gliese 229B is discovered orbiting red dwarf Gliese 229A (20 ly away) using an adaptive optics coronagraph to sharpen images from the reflecting telescope at Palomar Observatory on Southern California's Mount Palomar; follow-up infrared spectroscopy made with their Hale Telescope shows an abundance of methane.
1998: First X-ray-emitting brown dwarf found. Cha Halpha 1, an M8 object in the Chamaeleon I dark cloud, is determined to be an X-ray source, similar to convective late-type stars.
15 December 1999: First X-ray flare detected from a brown dwarf. A team at the University of California monitoring LP 944-20 (16 ly away) via the Chandra X-ray Observatory catches a 2-hour flare.
27 July 2000: First radio emission (in flare and quiescence) detected from a brown dwarf. A team of students at the Very Large Array detected emission from LP 944–20.
30 April 2004: First detection of a candidate exoplanet around a brown dwarf: 2M1207b discovered with the VLT and the first directly imaged exoplanet.
20 March 2013: Discovery of the closest brown dwarf system: Luhman 16.
25 April 2014: Coldest-known brown dwarf discovered. WISE 0855−0714 is 7.2 light-years away (seventh-closest system to the Sun) and has a temperature between −48 and −13 °C.
Brown dwarfs as X-ray sources
X-ray flares detected from brown dwarfs since 1999 suggest changing magnetic fields within them, similar to those in very-low-mass stars. Although they do not fuse hydrogen into helium in their cores like stars, energy from the fusion of deuterium and gravitational contraction keep their interiors warm and generate strong magnetic fields. The interior of a brown dwarf is in a rapidly boiling, or convective state. When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the development of a strong, tangled magnetic field near the surface. The magnetic field that generated the flare observed by Chandra from LP 944-20 has its origin in the turbulent magnetized plasma beneath the brown dwarf's "surface".
Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low-mass brown dwarf in a multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A) has been resolved in X-rays. "Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius", said Yohko Tsuboi of Chuo University in Tokyo. "This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun", said Tsuboi. "This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!"
Brown dwarfs as radio sources
The first brown dwarf that was discovered to emit radio signals was LP 944-20, which was observed since it is also a source of X-ray emission, and both types of emission are signatures of coronae. Approximately 5–10% of brown dwarfs appear to have strong magnetic fields and emit radio waves, and there may be as many as 40 magnetic brown dwarfs within 25 pc of the Sun based on Monte Carlo modeling and their average spatial density. The power of the radio emissions of brown dwarfs is roughly constant despite variations in their temperatures. Brown dwarfs may maintain magnetic fields of up to 6 kG in strength. Astronomers have estimated brown dwarf magnetospheres to span an altitude of approximately 10⁷ m given properties of their radio emissions. It is unknown whether the radio emissions from brown dwarfs more closely resemble those from planets or stars. Some brown dwarfs emit regular radio pulses, which are sometimes interpreted as radio emission beamed from the poles but may also be beamed from active regions. The regular, periodic reversal of radio wave orientation may indicate that brown dwarf magnetic fields periodically reverse polarity. These reversals may be the result of a brown dwarf magnetic activity cycle, similar to the solar cycle.
The first brown dwarf of spectral class M found to emit radio waves was LP 944-20, detected in 2001. The first brown dwarf of spectral class L found to emit radio waves was 2MASS J0036159+182110, detected in 2008. The first brown dwarf of spectral class T found to emit radio waves was 2MASS J10475385+2124234. This last discovery was significant since it revealed that brown dwarfs with temperatures similar to exoplanets could host strong >1.7 kG magnetic fields. Although a sensitive search for radio emission from Y dwarfs was conducted at the Arecibo Observatory in 2010, no emission was detected.
Recent developments
Estimates of brown dwarf populations in the solar neighbourhood suggest that there may be as many as six stars for every brown dwarf. A more recent estimate from 2017 using the young massive star cluster RCW 38 concluded that the Milky Way galaxy contains between 25 and 100 billion brown dwarfs. (Compare these numbers to the estimates of the number of stars in the Milky Way; 100 to 400 billion.)
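As a quick consistency check (a sketch only; the six-to-one ratio and the star counts are the rounded figures quoted above), dividing the Milky Way star-count range by six lands in the same ballpark as the RCW 38-based estimate:

```python
# Rough consistency check between the two population estimates quoted above
stars_low, stars_high = 100e9, 400e9  # Milky Way star-count range
ratio = 6.0                           # assumed stars per brown dwarf

print(f"{stars_low / ratio:.0e} to {stars_high / ratio:.0e} brown dwarfs")
# ~2e+10 to ~7e+10, overlapping the 25-100 billion estimate
```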
In a study published in August 2017, NASA's Spitzer Space Telescope monitored infrared brightness variations in brown dwarfs caused by cloud cover of variable thickness. The observations revealed large-scale waves propagating in the atmospheres of brown dwarfs (similarly to the atmosphere of Neptune and other Solar System giant planets). These atmospheric waves modulate the thickness of the clouds and propagate with different velocities (probably due to differential rotation).
In August 2020, astronomers discovered 95 brown dwarfs near the Sun through the project Backyard Worlds: Planet 9.
In 2024 the James Webb Space Telescope provided the most detailed weather report yet on two brown dwarfs, revealing "stormy" conditions. These brown dwarfs, part of a binary system named Luhman 16 discovered in 2013, are only 6.5 light-years away from Earth and are the closest brown dwarfs to our sun. Researchers discovered that they have hot, turbulent clouds, likely made of silicate grains, indicating that hot sand is being blown by winds on the brown dwarfs. Additionally, absorption signatures of carbon monoxide, methane, and water vapor were detected.
Binary brown dwarfs
Brown dwarf–brown dwarf binaries
Brown dwarf binaries of type M, L, and T become less common as the mass of the primary decreases; L dwarfs have a markedly higher binary fraction than late-T and early-Y dwarfs (T5-Y0).
Brown dwarf binaries have a higher companion-to-host ratio for lower mass binaries. Binaries with a M-type star as a primary have for example a broad distribution of q with a preference of q ≥ 0.4. Brown dwarfs on the other hand show a strong preference for q ≥ 0.7. The separation is decreasing with mass: M-type stars have a separation peaking at 3–30 astronomical units (au), M-L-type brown dwarfs have a projected separation peaking at 5–8 au and T5–Y0 objects have a projected separation that follows a lognormal distribution with a peak separation of about 2.9 au.
An example is the closest brown dwarf binary Luhman 16 AB with a primary L7.5 dwarf and a separation of 3.5 au and q = 0.85. The separation is on the lower end of the expected separation for M-L-type brown dwarfs, but the mass ratio is typical.
It is not known if the same trend continues with Y-dwarfs, because their sample size is so small. The Y+Y dwarf binaries should have a high mass ratio q and a low separation, reaching scales of less than one au. In 2023, the Y+Y dwarf WISE J0336-0143 was confirmed as a binary with JWST, with a mass ratio of q=0.62±0.05 and a separation of 0.97 astronomical units. The researchers point out that the sample size of low-mass binary brown dwarfs is too small to determine if WISE J0336-0143 is a typical representative of low-mass binaries or a peculiar system.
Observations of the orbit of binary systems containing brown dwarfs can be used to measure the mass of the brown dwarf. In the case of 2MASSW J0746425+2000321, the secondary weighs 6% of the solar mass. This measurement is called a dynamical mass. The brown dwarf system closest to the Solar System is the binary Luhman 16. A search for planets around this system with a similar method found none.
Unusual brown dwarf binaries
The wide binary system 2M1101AB was the first brown dwarf binary discovered with a wide separation. The discovery of the system gave definitive insights into the formation of brown dwarfs. It was previously thought that wide binary brown dwarfs are not formed or at least are disrupted at ages of 1–10 Myr. The existence of this system is also inconsistent with the ejection hypothesis, a proposed scenario in which brown dwarfs form in a multiple system but are ejected before they gain enough mass to burn hydrogen.
More recently the wide binary W2150AB was discovered. It has a similar mass ratio and binding energy as 2M1101AB, but a greater age, and is located in a different region of the galaxy. While 2M1101AB is in a closely crowded region, the binary W2150AB is in a sparsely populated field. It must have survived any dynamical interactions in its natal star cluster. The binary also belongs to the few L+T binaries that can be easily resolved by ground-based observatories. The other two are SDSS J1416+13AB and Luhman 16.
There are other interesting binary systems such as the eclipsing binary brown dwarf system 2MASS J05352184–0546085. Photometric studies of this system have revealed that the less massive brown dwarf in the system is hotter than its higher-mass companion.
Brown dwarfs around stars
Brown dwarfs and massive planets in a close orbit (less than 5 au) around stars are rare and this is sometimes described as the brown dwarf desert. Less than 1% of stars with the mass of the sun have a brown dwarf within 3–5 au.
An example for a star–brown dwarf binary is the first discovered T-dwarf Gliese 229 B, which orbits around the main-sequence star Gliese 229 A, a red dwarf. Brown dwarfs orbiting subgiants are also known, such as TOI-1994b which orbits its star every 4.03 days.
There is also disagreement about whether some low-mass brown dwarfs should be considered planets. The NASA Exoplanet Archive includes brown dwarfs with a minimum mass less than or equal to 30 Jupiter masses as planets, as long as other criteria are fulfilled (e.g. orbiting a star). The Working Group on Extrasolar Planets (WGESP) of the IAU, on the other hand, only considers planets with a mass below 13 Jupiter masses.
White dwarf–brown dwarf binaries
Brown dwarfs around white dwarfs are quite rare. GD 165 B, the prototype of the L dwarfs, is one such system. Such systems can be useful in determining the age of the system and the mass of the brown dwarf. Other white dwarf–brown dwarf binaries are COCONUTS-1 AB (7 billion years old), LSPM J0055+5948 AB (10 billion years old), SDSS J22255+0016 AB (2 billion years old) and WD 0806−661 AB (1.5–2.7 billion years old).
Systems with close, tidally locked brown dwarfs orbiting around white dwarfs belong to the post common envelope binaries, or PCEBs. Only eight confirmed PCEBs containing a white dwarf with a brown dwarf companion are known, including WD 0137-349 AB. In the history of these close white dwarf–brown dwarf binaries, the brown dwarf was engulfed by the star in its red giant phase. Brown dwarfs with a mass lower than 20 Jupiter masses would evaporate during the engulfment. The dearth of brown dwarfs orbiting close to white dwarfs can be compared with similar observations of brown dwarfs around main-sequence stars, described as the brown-dwarf desert. The PCEB might evolve into a cataclysmic variable star (CV*) with the brown dwarf as the donor. Simulations have shown that highly evolved CV* are mostly associated with substellar donors (up to 80%). One type of CV*, the WZ Sge-type dwarf novae, often shows donors with a mass near the borderline of low-mass stars and brown dwarfs. The binary BW Sculptoris is such a dwarf nova with a brown dwarf donor. This brown dwarf likely formed when a donor star lost enough mass to become a brown dwarf. The mass loss comes with a shortening of the orbital period until it reaches a minimum of 70–80 minutes, at which point the period increases again. This gives this evolutionary stage the name period bouncer. There could also exist brown dwarfs that merged with white dwarfs. The nova CK Vulpeculae might be a result of such a white dwarf–brown dwarf merger.
Formation and evolution
The earliest stage of brown dwarf formation is called proto- or pre-brown dwarf. Proto-brown dwarfs are low-mass equivalents of protostars (class 0/I objects). Additionally, Very Low Luminosity Objects (VeLLOs), which have internal luminosities Lint ≤ 0.1–0.2 L☉, are often proto-brown dwarfs. They are found in nearby star-forming clouds. Around 67 promising proto-brown dwarfs and 26 pre-brown dwarfs are known as of 2024. As of 2017 there is only one known proto-brown dwarf that is connected with a large Herbig–Haro object. This is the brown dwarf Mayrit 1701117, which is surrounded by a pseudo-disk and a Keplerian disk. Mayrit 1701117 launches the 0.7-light-year-long jet HH 1165, mostly seen in ionized sulfur.
Brown dwarfs form similarly to stars and are surrounded by protoplanetary disks, such as Cha 110913−773444. Disks around brown dwarfs have been found to have many of the same features as disks around stars; therefore, it is expected that there will be accretion-formed planets around brown dwarfs. Given the small mass of brown dwarf disks, most planets will be terrestrial planets rather than gas giants. If a giant planet orbits a brown dwarf across our line of sight, then, because they have approximately the same diameter, this would give a large signal for detection by transit. The accretion zone for planets around a brown dwarf is very close to the brown dwarf itself, so tidal forces would have a strong effect.
In 2020, the closest brown dwarf with an associated primordial disk (class II disk), WISEA J120037.79-784508.3 (W1200-7845), was discovered by the Disk Detective project when classification volunteers noted its infrared excess. It was vetted and analyzed by the science team, who found that W1200-7845 had a 99.8% probability of being a member of the ε Chamaeleontis (ε Cha) young moving group association. Its parallax (using Gaia DR2 data) puts it at a distance of 102 parsecs (or 333 light-years) from Earth, which is within the local Solar neighborhood.
A paper from 2021 studied circumstellar discs around brown dwarfs in stellar associations that are a few million years old and 140 to 200 parsecs away. The researchers found that these disks are not massive enough to form planets in the future. There is evidence in these disks that might indicate that planet formation begins at earlier stages and that planets are already present in these disks. The evidence for disk evolution includes a decreasing disk mass over time, dust grain growth and dust settling. Two brown dwarf disks were also found in absorption, and at least 4 disks are photoevaporating from external UV radiation in the Orion Nebula. Such objects are also called proplyds. Proplyd 181−247, which is a brown dwarf or low-mass star, is surrounded by a disk with a radius of 30 astronomical units and a mass of 6.2 ± 1.0 Earth masses. Disks around brown dwarfs usually have a radius smaller than 40 astronomical units, but three disks in the more distant Taurus molecular cloud have a radius larger than 70 au and were resolved with ALMA. These larger disks are able to form rocky planets with a mass >1 Earth mass. There are also brown dwarfs with disks in associations older than a few million years, which might be evidence that disks around brown dwarfs need more time to dissipate. Especially old disks (>20 Myr) are sometimes called Peter Pan disks. Currently 2MASS J02265658-5327032 is the only known brown dwarf that has a Peter Pan disk.
The brown dwarf Cha 110913−773444, located 500 light-years away in the constellation Chamaeleon, may be in the process of forming a miniature planetary system. Astronomers from Pennsylvania State University have detected what they believe to be a disk of gas and dust similar to the one hypothesized to have formed the Solar System. Cha 110913−773444 is the smallest brown dwarf found to date, and if it formed a planetary system, it would be the smallest known object to have one.
Planets around brown dwarfs
According to the IAU working definition (from August 2018), an exoplanet can orbit a brown dwarf. It requires a mass below 13 Jupiter masses (MJup) and a mass ratio of M/Mcentral < 2/(25+√621) ≈ 0.040. This means that an object with a mass up to 3.2 MJup around a brown dwarf with a mass of 80 MJup is considered a planet, and that an object with a mass up to 0.52 MJup around a brown dwarf with a mass of 13 MJup is considered a planet.
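As a quick numerical check, the mass-ratio bound can be evaluated directly. The short Python sketch below is a minimal illustration (the function name and the use of Jupiter masses as units are choices made here, not part of the IAU text); it reproduces the two limits quoted above.

```python
import math

# Mass-ratio bound from the IAU working definition: M/M_central < 2/(25 + sqrt(621)).
RATIO_LIMIT = 2 / (25 + math.sqrt(621))  # approximately 0.0401

def is_planet(m_companion_mjup, m_central_mjup):
    """Apply both conditions quoted in the text: the 13 MJup mass cap
    and the mass-ratio bound relative to the central object."""
    return (m_companion_mjup < 13
            and m_companion_mjup / m_central_mjup < RATIO_LIMIT)

# Reproduce the two quoted limits:
for m_central in (80, 13):
    print(f"M_central = {m_central} MJup -> planet limit ~ {m_central * RATIO_LIMIT:.2f} MJup")
# M_central = 80 MJup -> planet limit ~ 3.21 MJup
# M_central = 13 MJup -> planet limit ~ 0.52 MJup
```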
The super-Jupiter planetary-mass objects 2M1207b, 2MASS J044144 and Oph 98 B, which orbit brown dwarfs at large orbital distances, may have formed by cloud collapse rather than accretion and so may be sub-brown dwarfs rather than planets; this interpretation is inferred from their relatively large masses and orbits. The first discovery of a low-mass companion orbiting a brown dwarf (ChaHα8) at a small orbital distance using the radial velocity technique paved the way for the detection of planets around brown dwarfs on orbits of a few AU or smaller. However, with a mass ratio between the companion and primary of about 0.3, the ChaHα8 system more closely resembles a binary star. Then, in 2008, the first planetary-mass companion in a relatively small orbit (MOA-2007-BLG-192Lb) was discovered orbiting a brown dwarf.
Planets around brown dwarfs are likely to be carbon planets depleted of water.
A 2017 study based upon observations with Spitzer estimates that 175 brown dwarfs need to be monitored in order to guarantee (at 95% confidence) at least one detection of a planet smaller than Earth via the transit method. JWST could potentially detect smaller planets. The orbits of planets and moons in the Solar System often align with the orientation of the host star or planet they orbit. Assuming the orbit of a planet is aligned with the rotational axis of a brown dwarf or planetary-mass object, the geometric transit probability of an object similar to Io can be calculated with the formula cos(79.5°)/cos(inclination). The inclination has been estimated for several brown dwarfs and planetary-mass objects; SIMP 0136, for example, has an estimated inclination of 80°±12°. Assuming the lower bound of i ≥ 68° for SIMP 0136, this results in a transit probability of ≥48.6% for close-in planets. It is not known, however, how common close-in planets are around brown dwarfs; they might be more common around lower-mass objects, as disk sizes seem to decrease with mass.
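The quoted probability follows directly from the formula. A minimal sketch (the function name is illustrative; the 79.5° figure is taken from the text as the grazing-transit limit for an Io-like separation):

```python
import math

def geometric_transit_probability(inclination_deg):
    """Transit probability for a close-in, Io-like orbit assumed to be
    aligned with the host's rotational axis: cos(79.5 deg) / cos(i)."""
    return math.cos(math.radians(79.5)) / math.cos(math.radians(inclination_deg))

# Lower bound quoted for SIMP 0136 (i >= 68 degrees):
print(f"{geometric_transit_probability(68):.1%}")  # prints 48.6%
```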
Habitability
Habitability for hypothetical planets orbiting brown dwarfs has been studied. Computer models suggest that the conditions for these bodies to host habitable planets are very stringent: the habitable zone is narrow and close (0.005 au for a T dwarf) and moves inward over time as the brown dwarf cools (brown dwarfs fuse for at most about 10 million years). The orbits there would have to be of extremely low eccentricity (on the order of 10⁻⁶) to avoid strong tidal forces that would trigger a runaway greenhouse effect on the planets, rendering them uninhabitable. Such planets would also have no moons.
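To convey how tight a 0.005 au habitable zone is, Kepler's third law gives the orbital period at that distance. The sketch below assumes an illustrative T-dwarf mass of 0.03 solar masses (roughly 30 Jupiter masses); that mass is an assumption made for the example, not a figure from the studies cited.

```python
import math

def orbital_period_hours(a_au, m_central_msun):
    """Kepler's third law in solar units: P[yr]^2 = a[au]^3 / M[Msun]."""
    return math.sqrt(a_au**3 / m_central_msun) * 365.25 * 24

# 0.005 au around an assumed 0.03-solar-mass (~30 MJup) T dwarf:
print(f"{orbital_period_hours(0.005, 0.03):.0f} hours")  # ~18 hours
```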
Superlative brown dwarfs
In 1984, it was postulated by some astronomers that the Sun may be orbited by an undetected brown dwarf (sometimes referred to as Nemesis) that could interact with the Oort cloud just as passing stars can. However, this hypothesis has fallen out of favor.
Table of firsts
Table of extremes
See also
Fusor (astronomy)
Stellification
WD 0032-317 b
List of brown dwarfs
List of Y-dwarfs
Footnotes
References
External links
HubbleSite newscenter – Weather patterns on a brown dwarf
History
Kumar, Shiv S.; Low-Luminosity Stars. Gordon and Breach, London, 1969—an early overview paper on brown dwarfs
The Columbia Encyclopedia: "Brown Dwarfs"
Details
A current list of L and T dwarfs
A geological definition of brown dwarfs, contrasted with stars and planets (via Berkeley)
I. Neill Reid's pages at the Space Telescope Science Institute:
On spectral analysis of M dwarfs, L dwarfs, and T dwarfs
Temperature and mass characteristics of low-temperature dwarfs
First X-ray from brown dwarf observed, Spaceref.com, 2000
Montes, David; "Brown Dwarfs and ultracool dwarfs (late-M, L, T)", UCM
Wild Weather: Iron Rain on Failed Stars—scientists are investigating astonishing weather patterns on brown dwarfs, Space.com, 2006
NASA Brown dwarf detectives —Detailed information in a simplified sense
Brown Dwarfs—Website with general information about brown dwarfs (has many detailed and colorful artist's impressions)
Stars
Cha Halpha 1 stats and history
"A census of observed brown dwarfs" (not all confirmed), 1998
Michaud, Peter; Heyer, Inge; Leggett, Sandy K.; and Adamson, Andy; "Discovery Narrows the Gap Between Planets and Brown Dwarfs", Gemini and Joint Astronomy Centre, 2007
Definition of planet
Star types
Stellar phenomena
Substellar objects
Types of planet | Brown dwarf | [
"Physics",
"Astronomy"
] | 12,088 | [
"Definition of planet",
"Physical phenomena",
"Astronomical controversies",
"Astronomical classification systems",
"Substellar objects",
"Stellar phenomena",
"Astronomical objects",
"Star types"
] |
44,408 | https://en.wikipedia.org/wiki/Microphone%20array | A microphone array is any number of microphones operating in tandem. There are many applications:
Systems for extracting voice input from ambient noise (notably telephones, speech recognition systems, hearing aids)
Surround sound and related technologies
Binaural recording
Locating objects by sound: acoustic source localization, e.g., military use to locate the source(s) of artillery fire. Aircraft location and tracking.
High fidelity original recordings
Environmental noise monitoring
Robotic navigation (acoustic SLAM)
Typically, an array is made up of omnidirectional microphones, directional microphones, or a mix of the two, distributed about the perimeter of a space and linked to a computer that records and interprets the results into a coherent form. Arrays may also be formed from a number of very closely spaced microphones. Given a fixed physical relationship in space between the different individual microphone transducer array elements, simultaneous DSP (digital signal processor) processing of the signals from each of the individual microphone array elements can create one or more "virtual" microphones. Different algorithms permit the creation of virtual microphones with extremely complex virtual polar patterns, and even the possibility to steer the individual lobes of the virtual microphone patterns so as to home in on, or to reject, particular sources of sound. The application of these algorithms can produce varying levels of accuracy when calculating source level and location, so care should be taken when deciding how the individual lobes of the virtual microphones are derived.
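As a concrete illustration of the virtual-microphone idea, the sketch below implements the simplest beamforming algorithm, delay-and-sum: each channel is time-shifted so that sound arriving from a chosen direction adds coherently, steering a single virtual lobe toward that direction. The uniform linear array geometry, far-field plane-wave assumption and whole-sample delays are simplifications made for this example; practical systems typically use fractional delays or frequency-domain weighting.

```python
import numpy as np

def delay_and_sum(signals, mic_positions_m, steer_deg, fs_hz, c=343.0):
    """Form one 'virtual microphone' steered toward steer_deg.

    signals: (n_mics, n_samples) array of time-aligned recordings.
    mic_positions_m: positions of a uniform linear array along one axis.
    Far-field plane waves are assumed, and delays are rounded to whole
    samples -- deliberate simplifications for illustration.
    """
    # Plane-wave arrival-time differences across the array.
    delays_s = mic_positions_m * np.sin(np.radians(steer_deg)) / c
    delays_n = np.round((delays_s - delays_s.min()) * fs_hz).astype(int)
    virtual = np.zeros(signals.shape[1])
    for channel, d in zip(signals, delays_n):
        virtual += np.roll(channel, -d)  # advance each channel into alignment
    return virtual / len(signals)

# Example: four microphones 5 cm apart, steering 30 degrees off broadside.
fs = 16_000
positions = np.arange(4) * 0.05
recordings = np.random.default_rng(0).normal(size=(4, fs))
output = delay_and_sum(recordings, positions, steer_deg=30, fs_hz=fs)
```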
If the array consists of omnidirectional microphones, each microphone accepts sound from all directions, so the electrical signals of the microphones contain information about sounds arriving from every direction. Joint processing of these signals allows the sound arriving from a given direction to be isolated.
An array of 1020 microphones, the largest in the world until August 21, 2014, was built by researchers at the MIT Computer Science and Artificial Intelligence Laboratory.
Currently, the largest microphone array in the world is one constructed by DLR, the German Aerospace Center, in 2024. The array consists of 7200 microphones with an aperture of 8 m × 6 m.
Soundfield microphone
The soundfield microphone system is a well-established example of the use of a microphone array in professional sound recording.
See also
Acoustic camera
Acoustic source localization
Ambisonics
Decca tree
Microphone
SOSUS
Stereophonic sound
Surround sound
Notes
External links
Fukada's tree, in an AES paper about Multichannel Music Recording.
Hamasaki's square, in an AES paper about Multichannel Recording Techniques.
Literature on source localization with microphone arrays.
An introduction to Acoustic Holography
A collection of pages providing a simple introduction to microphone array beamforming
Microphone practices | Microphone array | [
"Engineering"
] | 553 | [
"Audio engineering",
"Microphone practices"
] |
44,412 | https://en.wikipedia.org/wiki/Sedimentary%20rock | Sedimentary rocks are types of rock that are formed by the accumulation or deposition of sediments, ie. mineral or organic particles, at Earth's surface, followed by cementation. Sedimentation is the collective name for processes that cause these particles to settle in place. The particles that form a sedimentary rock are called sediment, and may be composed of geological detritus (minerals) or biological detritus (organic matter). The geological detritus originated from weathering and erosion of existing rocks, or from the solidification of molten lava blobs erupted by volcanoes. The geological detritus is transported to the place of deposition by water, wind, ice or mass movement, which are called agents of denudation. Biological detritus was formed by bodies and parts (mainly shells) of dead aquatic organisms, as well as their fecal mass, suspended in water and slowly piling up on the floor of water bodies (marine snow). Sedimentation may also occur as dissolved minerals precipitate from water solution.
The sedimentary rock cover of the continents of the Earth's crust is extensive (73% of the Earth's current land surface), but sedimentary rock is estimated to be only 8% of the volume of the crust. Sedimentary rocks are only a thin veneer over a crust consisting mainly of igneous and metamorphic rocks. Sedimentary rocks are deposited in layers as strata, forming a structure called bedding. Sedimentary rocks are often deposited in large structures called sedimentary basins. Sedimentary rocks have also been found on Mars.
The study of sedimentary rocks and rock strata provides information about the subsurface that is useful for civil engineering, for example in the construction of roads, houses, tunnels, canals or other structures. Sedimentary rocks are also important sources of natural resources including coal, fossil fuels, drinking water and ores.
The study of the sequence of sedimentary rock strata is the main source for an understanding of the Earth's history, including palaeogeography, paleoclimatology and the history of life. The scientific discipline that studies the properties and origin of sedimentary rocks is called sedimentology. Sedimentology is part of both geology and physical geography and overlaps partly with other disciplines in the Earth sciences, such as pedology, geomorphology, geochemistry and structural geology.
Classification based on origin
Sedimentary rocks can be subdivided into four groups based on the processes responsible for their formation: clastic sedimentary rocks, biochemical (biogenic) sedimentary rocks, chemical sedimentary rocks, and a fourth category for "other" sedimentary rocks formed by impacts, volcanism, and other minor processes.
Clastic sedimentary rocks
Clastic sedimentary rocks are composed of rock fragments (clasts) that have been cemented together. The clasts are commonly individual grains of quartz, feldspar, clay minerals, or mica. However, any type of mineral may be present. Clasts may also be lithic fragments composed of more than one mineral.
Clastic sedimentary rocks are subdivided according to the dominant particle size. Most geologists use the Udden-Wentworth grain size scale and divide unconsolidated sediment into three fractions: gravel (>2 mm diameter), sand (1/16 to 2 mm diameter), and mud (<1/16 mm diameter). Mud is further divided into silt (1/16 to 1/256 mm diameter) and clay (<1/256 mm diameter). The classification of clastic sedimentary rocks parallels this scheme; conglomerates and breccias are made mostly of gravel, sandstones are made mostly of sand, and mudrocks are made mostly of mud. This tripartite subdivision is mirrored by the broad categories of rudites, arenites, and lutites, respectively, in older literature.
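The grain-size fractions above amount to a simple threshold lookup. A minimal sketch (the function name is illustrative, and the handling of the exact 2 mm boundary follows the ">2 mm" reading of the text):

```python
def wentworth_fraction(diameter_mm):
    """Classify an unconsolidated grain into the three broad fractions
    (with the silt/clay split of mud) quoted in the text."""
    if diameter_mm > 2:
        return "gravel"
    if diameter_mm >= 1 / 16:
        return "sand"
    if diameter_mm >= 1 / 256:
        return "mud (silt)"
    return "mud (clay)"

# Corresponding rock names: gravel -> conglomerate/breccia,
# sand -> sandstone, mud -> mudrock.
print(wentworth_fraction(0.5))    # sand
print(wentworth_fraction(0.001))  # mud (clay)
```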
The subdivision of these three broad categories is based on differences in clast shape (conglomerates and breccias), composition (sandstones), or grain size or texture (mudrocks).
Conglomerates and breccias
Breccias are dominantly composed of angular gravel in a groundmass (matrix), while conglomerates are dominantly composed of rounded gravel.
Sandstones
Sandstone classification schemes vary widely, but most geologists have adopted the Dott scheme, which uses the relative abundance of quartz, feldspar, and lithic framework grains and the abundance of a muddy matrix between the larger grains.
Composition of framework grains
The relative abundance of sand-sized framework grains determines the first word in a sandstone name. Naming depends on which of the three most abundant components dominates: quartz, feldspar, or the lithic fragments that originated from other rocks. All other minerals are considered accessories and are not used in the naming of the rock, regardless of abundance.
Quartz sandstones have >90% quartz grains
Feldspathic sandstones have <90% quartz grains and more feldspar grains than lithic grains
Lithic sandstones have <90% quartz grains and more lithic grains than feldspar grains
Abundance of muddy matrix material between sand grains
When sand-sized particles are deposited, the space between the grains either remains open or is filled with mud (silt- and/or clay-sized particles).
"Clean" sandstones with open pore space (that may later be filled with matrix material) are called arenites.
Muddy sandstones with abundant (>10%) muddy matrix are called wackes.
Six sandstone names are possible using the descriptors for grain composition (quartz-, feldspathic-, and lithic-) and the amount of matrix (wacke or arenite). For example, a quartz arenite would be composed of mostly (>90%) quartz grains and have little or no clayey matrix between the grains, a lithic wacke would have abundant lithic grains and abundant muddy matrix, etc.
Although the Dott classification scheme is widely used by sedimentologists, common names like greywacke, arkose, and quartz sandstone are still widely used by non-specialists and in popular literature.
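Because the Dott scheme as described above is rule-based, it translates directly into code. The sketch below is a simplified reading of those rules; in particular, the tie-break when feldspar and lithic grains are equally abundant is an assumption, since the text does not specify one.

```python
def dott_sandstone_name(quartz_pct, feldspar_pct, lithic_pct, matrix_pct):
    """Name a sandstone under the Dott scheme as described in the text:
    a composition prefix from the framework grains, plus 'arenite' for
    clean sands or 'wacke' when muddy matrix exceeds 10%."""
    if quartz_pct > 90:
        prefix = "quartz"
    elif feldspar_pct >= lithic_pct:  # tie-break assumed, not specified
        prefix = "feldspathic"
    else:
        prefix = "lithic"
    suffix = "wacke" if matrix_pct > 10 else "arenite"
    return f"{prefix} {suffix}"

print(dott_sandstone_name(95, 3, 2, matrix_pct=2))     # quartz arenite
print(dott_sandstone_name(60, 10, 30, matrix_pct=25))  # lithic wacke
```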
Mudrocks
Mudrocks are sedimentary rocks composed of at least 50% silt- and clay-sized particles. These relatively fine-grained particles are commonly transported by turbulent flow in water or air, and deposited as the flow calms and the particles settle out of suspension.
Most authors presently use the term "mudrock" to refer to all rocks composed dominantly of mud. Mudrocks can be divided into siltstones, composed dominantly of silt-sized particles; mudstones with subequal mixture of silt- and clay-sized particles; and claystones, composed mostly of clay-sized particles. Most authors use "shale" as a term for a fissile mudrock (regardless of grain size) although some older literature uses the term "shale" as a synonym for mudrock.
Biochemical sedimentary rocks
Biochemical sedimentary rocks are created when organisms use materials dissolved in air or water to build their tissue. Examples include:
Most types of limestone are formed from the calcareous skeletons of organisms such as corals, mollusks, and foraminifera.
Coal, which forms from vegetation that removed carbon from the atmosphere and combined it with other elements to build its tissue; this vegetation is compressed by overlying sediments and undergoes chemical transformation.
Deposits of chert formed from the accumulation of siliceous skeletons of microscopic organisms such as radiolaria and diatoms.
Chemical sedimentary rocks
Chemical sedimentary rock forms when mineral constituents in solution become supersaturated and inorganically precipitate. Common chemical sedimentary rocks include oolitic limestone and rocks composed of evaporite minerals, such as halite (rock salt), sylvite, baryte and gypsum.
Other sedimentary rocks
This fourth miscellaneous category includes volcanic tuff and volcanic breccias formed by deposition and later cementation of lava fragments erupted by volcanoes, and impact breccias formed after impact events.
Classification based on composition
Alternatively, sedimentary rocks can be subdivided into compositional groups based on their mineralogy:
Siliciclastic sedimentary rocks are dominantly composed of silicate minerals. The sediment that makes up these rocks was transported as bed load, suspended load, or by sediment gravity flows. Siliciclastic sedimentary rocks are subdivided into conglomerates and breccias, sandstone, and mudrocks.
Carbonate sedimentary rocks are composed of calcite (rhombohedral CaCO3), aragonite (orthorhombic CaCO3), dolomite (CaMg(CO3)2), and other minerals based on the carbonate ion. Common examples include limestone and the rock dolomite.
Evaporite sedimentary rocks are composed of minerals formed from the evaporation of water. The most common evaporite minerals are carbonates (calcite and others based on the carbonate ion), chlorides (halite and others built on the chloride ion), and sulfates (gypsum and others built on the sulfate ion). Evaporite rocks commonly include abundant halite (rock salt), gypsum, and anhydrite.
Organic-rich sedimentary rocks have significant amounts of organic material, generally in excess of 3% total organic carbon. Common examples include coal, oil shale as well as source rocks for oil and natural gas.
Siliceous sedimentary rocks are almost entirely composed of silica (SiO2), typically as chert, opal, chalcedony or other microcrystalline forms.
Iron-rich sedimentary rocks are composed of >15% iron; the most common forms are banded iron formations and ironstones.
Phosphatic sedimentary rocks are composed of phosphate minerals and contain more than 6.5% phosphorus; examples include deposits of phosphate nodules, bone beds, and phosphatic mudrocks.
Deposition and transformation
Sediment transport and deposition
Sedimentary rocks are formed when sediment is deposited out of air, ice, wind, gravity, or water flows carrying the particles in suspension. This sediment is often formed when weathering and erosion break down a rock into loose material in a source area. The material is then transported from the source area to the deposition area. The type of sediment transported depends on the geology of the hinterland (the source area of the sediment). However, some sedimentary rocks, such as evaporites, are composed of material that form at the place of deposition. The nature of a sedimentary rock, therefore, not only depends on the sediment supply, but also on the sedimentary depositional environment in which it formed.
Transformation (Diagenesis)
As sediments accumulate in a depositional environment, older sediments are buried by younger sediments, and they undergo diagenesis. Diagenesis includes all the chemical, physical, and biological changes, exclusive of surface weathering, undergone by a sediment after its initial deposition. This includes compaction and lithification of the sediments. Early stages of diagenesis, described as eogenesis, take place at shallow depths (a few tens of meters) and are characterized by bioturbation and mineralogical changes in the sediments, with only slight compaction. The red hematite that gives red bed sandstones their color is likely formed during eogenesis. Some biochemical processes, like the activity of bacteria, can affect minerals in a rock and are therefore seen as part of diagenesis.
Deeper burial is accompanied by mesogenesis, during which most of the compaction and lithification takes place. Compaction takes place as the sediments come under increasing overburden (lithostatic) pressure from overlying sediments. Sediment grains move into more compact arrangements, grains of ductile minerals (such as mica) are deformed, and pore space is reduced. Sediments are typically saturated with groundwater or seawater when originally deposited, and as pore space is reduced, much of these connate fluids are expelled. In addition to this physical compaction, chemical compaction may take place via pressure solution. Points of contact between grains are under the greatest strain, and the strained mineral is more soluble than the rest of the grain. As a result, the contact points are dissolved away, allowing the grains to come into closer contact. The increased pressure and temperature stimulate further chemical reactions, such as the reactions by which organic material becomes lignite or coal.
Lithification follows closely on compaction, as increased temperatures at depth hasten the precipitation of cement that binds the grains together. Pressure solution contributes to this process of cementation, as the mineral dissolved from strained contact points is redeposited in the unstrained pore spaces. This further reduces porosity and makes the rock more compact and competent.
Unroofing of buried sedimentary rock is accompanied by telogenesis, the third and final stage of diagenesis. As erosion reduces the depth of burial, renewed exposure to meteoric water produces additional changes to the sedimentary rock, such as leaching of some of the cement to produce secondary porosity.
At sufficiently high temperature and pressure, the realm of diagenesis makes way for metamorphism, the process that forms metamorphic rock.
Properties
Color
The color of a sedimentary rock is often mostly determined by iron, an element with two major oxides: iron(II) oxide and iron(III) oxide. Iron(II) oxide (FeO) only forms under low oxygen (anoxic) circumstances and gives the rock a grey or greenish colour. Iron(III) oxide (Fe2O3) in a richer oxygen environment is often found in the form of the mineral hematite and gives the rock a reddish to brownish colour. In arid continental climates rocks are in direct contact with the atmosphere, and oxidation is an important process, giving the rock a red or orange colour. Thick sequences of red sedimentary rocks formed in arid climates are called red beds. However, a red colour does not necessarily mean the rock formed in a continental environment or arid climate.
The presence of organic material can colour a rock black or grey. Organic material is formed from dead organisms, mostly plants. Normally, such material eventually decays by oxidation or bacterial activity. Under anoxic circumstances, however, organic material cannot decay and leaves a dark sediment, rich in organic material. This can, for example, occur at the bottom of deep seas and lakes. There is little water mixing in such environments; as a result, oxygen from surface water is not brought down, and the deposited sediment is normally a fine dark clay. Dark rocks, rich in organic material, are therefore often shales.
Texture
The size, form and orientation of clasts (the original pieces of rock) in a sediment is called its texture. The texture is a small-scale property of a rock, but determines many of its large-scale properties, such as the density, porosity or permeability.
The 3D orientation of the clasts is called the fabric of the rock. The size and form of clasts can be used to determine the velocity and direction of current in the sedimentary environment that moved the clasts from their origin; fine, calcareous mud only settles in quiet water while gravel and larger clasts are moved only by rapidly moving water. The grain size of a rock is usually expressed with the Wentworth scale, though alternative scales are sometimes used. The grain size can be expressed as a diameter or a volume, and is always an average value, since a rock is composed of clasts with different sizes. The statistical distribution of grain sizes is different for different rock types and is described in a property called the sorting of the rock. When all clasts are more or less of the same size, the rock is called 'well-sorted', and when there is a large spread in grain size, the rock is called 'poorly sorted'.
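Grain diameters on the Wentworth scale are conventionally reported in logarithmic phi (φ) units, φ = −log₂(d / 1 mm), which places the class boundaries on integer values; a minimal sketch:

```python
import math

def phi(diameter_mm):
    """Krumbein phi scale: phi = -log2(d / 1 mm), so the Wentworth
    class boundaries fall on integers (2 mm -> -1, 1/16 mm -> 4,
    1/256 mm -> 8)."""
    return -math.log2(diameter_mm)

for d in (2, 1 / 16, 1 / 256):
    print(f"{d} mm -> phi = {phi(d):.0f}")
```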
The form of the clasts can reflect the origin of the rock. For example, coquina, a rock composed of clasts of broken shells, can only form in energetic water. The form of a clast can be described by using four parameters:
Surface texture describes the amount of small-scale relief of the surface of a grain that is too small to influence the general shape. For example, frosted grains, which are covered with small-scale fractures, are characteristic of eolian sandstones.
Rounding describes the general smoothness of the shape of a grain.
Sphericity describes the degree to which the grain approaches a sphere.
Grain form describes the three-dimensional shape of the grain.
Chemical sedimentary rocks have a non-clastic texture, consisting entirely of crystals. To describe such a texture, only the average size of the crystals and the fabric are necessary.
Mineralogy
Most sedimentary rocks contain either quartz (siliciclastic rocks) or calcite (carbonate rocks). In contrast to igneous and metamorphic rocks, a sedimentary rock usually contains very few different major minerals. However, the origin of the minerals in a sedimentary rock is often more complex than in an igneous rock. Minerals in a sedimentary rock may have been present in the original sediments or may have formed by precipitation during diagenesis. In the second case, a mineral precipitate may have grown over an older generation of cement. A complex diagenetic history can be established by optical mineralogy, using a petrographic microscope.
Carbonate rocks predominantly consist of carbonate minerals such as calcite, aragonite or dolomite. Both the cement and the clasts (including fossils and ooids) of a carbonate sedimentary rock usually consist of carbonate minerals. The mineralogy of a clastic rock is determined by the material supplied by the source area, the manner of its transport to the place of deposition and the stability of that particular mineral.
The resistance of rock-forming minerals to weathering is expressed by the Goldich dissolution series. In this series, quartz is the most stable, followed by feldspar, micas, and finally other less stable minerals that are only present when little weathering has occurred. The amount of weathering depends mainly on the distance to the source area, the local climate and the time it took for the sediment to be transported to the point where it is deposited. In most sedimentary rocks, mica, feldspar and less stable minerals have been weathered to clay minerals like kaolinite, illite or smectite.
Fossils
Among the three major types of rock, fossils are most commonly found in sedimentary rock. Unlike most igneous and metamorphic rocks, sedimentary rocks form at temperatures and pressures that do not destroy fossil remnants. Often these fossils may only be visible under magnification.
Dead organisms in nature are usually quickly removed by scavengers, bacteria, rotting and erosion, but under exceptional circumstances, these natural processes are unable to take place, leading to fossilisation. The chance of fossilisation is higher when the sedimentation rate is high (so that a carcass is quickly buried), in anoxic environments (where little bacterial activity occurs) or when the organism had a particularly hard skeleton. Larger, well-preserved fossils are relatively rare.
Fossils can be both the direct remains or imprints of organisms and their skeletons. Most commonly preserved are the harder parts of organisms such as bones, shells, and the woody tissue of plants. Soft tissue has a much smaller chance of being fossilized, and the preservation of soft tissue of animals older than 40 million years is very rare. Imprints of organisms made while they were still alive are called trace fossils, examples of which are burrows, footprints, etc.
As a part of a sedimentary rock, fossils undergo the same diagenetic processes as does the host rock. For example, a shell consisting of calcite can dissolve while a cement of silica then fills the cavity. In the same way, precipitating minerals can fill cavities formerly occupied by blood vessels, vascular tissue or other soft tissues. This preserves the form of the organism but changes the chemical composition, a process called permineralization. The most common minerals involved in permineralization are various forms of amorphous silica (chalcedony, flint, chert), carbonates (especially calcite), and pyrite.
At high pressure and temperature, the organic material of a dead organism undergoes chemical reactions in which volatiles such as water and carbon dioxide are expulsed. The fossil, in the end, consists of a thin layer of pure carbon or its mineralized form, graphite. This form of fossilisation is called carbonisation. It is particularly important for plant fossils. The same process is responsible for the formation of fossil fuels like lignite or coal.
Primary sedimentary structures
Structures in sedimentary rocks can be divided into primary structures (formed during deposition) and secondary structures (formed after deposition). Unlike textures, structures are always large-scale features that can easily be studied in the field. Sedimentary structures can indicate something about the sedimentary environment or can serve to tell which side originally faced up where tectonics have tilted or overturned sedimentary layers.
Sedimentary rocks are laid down in layers called beds or strata. A bed is defined as a layer of rock that has a uniform lithology and texture. Beds form by the deposition of layers of sediment on top of each other. The sequence of beds that characterizes sedimentary rocks is called bedding. Single beds can be a couple of centimetres to several meters thick. Finer, less pronounced layers are called laminae, and the structure a lamina forms in a rock is called lamination. Laminae are usually less than a few centimetres thick. Though bedding and lamination are often originally horizontal in nature, this is not always the case. In some environments, beds are deposited at a (usually small) angle. Sometimes multiple sets of layers with different orientations exist in the same rock, a structure called cross-bedding. Cross-bedding is characteristic of deposition by a flowing medium (wind or water).
The opposite of cross-bedding is parallel lamination, where all sedimentary layering is parallel. Differences in laminations are generally caused by cyclic changes in the sediment supply, caused, for example, by seasonal changes in rainfall, temperature or biochemical activity. Laminae that represent seasonal changes (similar to tree rings) are called varves. Any sedimentary rock composed of millimeter or finer scale layers can be named with the general term laminite. When sedimentary rocks have no lamination at all, their structural character is called massive bedding.
Graded bedding is a structure where beds with a smaller grain size occur on top of beds with larger grains. This structure forms when fast flowing water stops flowing. Larger, heavier clasts in suspension settle first, then smaller clasts. Although graded bedding can form in many different environments, it is a characteristic of turbidity currents.
The surface of a particular bed, called the bedform, can also be indicative of a particular sedimentary environment. Examples of bed forms include dunes and ripple marks. Sole markings, such as tool marks and flute casts, are grooves eroded on a surface that are preserved by renewed sedimentation. These are often elongated structures and can be used to establish the direction of the flow during deposition.
Ripple marks also form in flowing water. They can be symmetric or asymmetric. Asymmetric ripples form in environments where the current is in one direction, such as rivers. The longer flank of such ripples is on the upstream side of the current. Symmetric wave ripples occur in environments where currents reverse direction, such as tidal flats.
Mudcracks are a bed form caused by the dehydration of sediment that occasionally comes above the water surface. Such structures are commonly found at tidal flats or point bars along rivers.
Secondary sedimentary structures
Secondary sedimentary structures are those which formed after deposition. Such structures form by chemical, physical and biological processes within the sediment. They can be indicators of circumstances after deposition. Some can be used as way up criteria.
Organic materials in a sediment can leave more traces than just fossils. Preserved tracks and burrows are examples of trace fossils (also called ichnofossils). Such traces are relatively rare. Most trace fossils are burrows of molluscs or arthropods. This burrowing is called bioturbation by sedimentologists. It can be a valuable indicator of the biological and ecological environment that existed after the sediment was deposited. On the other hand, the burrowing activity of organisms can destroy other (primary) structures in the sediment, making a reconstruction more difficult.
Secondary structures can also form by diagenesis or the formation of a soil (pedogenesis) when a sediment is exposed above the water level. An example of a diagenetic structure common in carbonate rocks is a stylolite. Stylolites are irregular planes where material was dissolved into the pore fluids in the rock. This can result in the precipitation of a certain chemical species producing colouring and staining of the rock, or the formation of concretions. Concretions are roughly concentric bodies with a different composition from the host rock. Their formation can be the result of localized precipitation due to small differences in composition or porosity of the host rock, such as around fossils, inside burrows or around plant roots. In carbonate rocks such as limestone or chalk, chert or flint concretions are common, while terrestrial sandstones sometimes contain iron concretions. Calcite concretions in clay containing angular cavities or cracks are called septarian concretions.
After deposition, physical processes can deform the sediment, producing a third class of secondary structures. Density contrasts between different sedimentary layers, such as between sand and clay, can result in flame structures or load casts, formed by inverted diapirism. While the clastic bed is still fluid, diapirism can cause a denser upper layer to sink into a lower layer. Sometimes, density contrasts occur or are enhanced when one of the lithologies dehydrates. Clay can be easily compressed as a result of dehydration, while sand retains the same volume and becomes relatively less dense. On the other hand, when the pore fluid pressure in a sand layer surpasses a critical point, the sand can break through overlying clay layers and flow through, forming discordant bodies of sedimentary rock called sedimentary dykes. The same process can form mud volcanoes on the surface where they broke through upper layers.
Sedimentary dykes can also be formed in a cold climate where the soil is permanently frozen during a large part of the year. Frost weathering can form cracks in the soil that fill with rubble from above. Such structures can be used as climate indicators as well as way up structures.
Density contrasts can also cause small-scale faulting, even while sedimentation progresses (synchronous-sedimentary faulting). Such faulting can also occur when large masses of non-lithified sediment are deposited on a slope, such as at the front side of a delta or the continental slope. Instabilities in such sediments can cause the deposited material to slump, producing fissures and folding. The resulting structures in the rock are syn-sedimentary folds and faults, which can be difficult to distinguish from folds and faults formed by tectonic forces acting on lithified rocks.
Depositional environments
The setting in which a sedimentary rock forms is called the depositional environment. Every environment has a characteristic combination of geologic processes, and circumstances. The type of sediment that is deposited is not only dependent on the sediment that is transported to a place (provenance), but also on the environment itself.
A marine environment means that the rock was formed in a sea or ocean. Often, a distinction is made between deep and shallow marine environments. Deep marine usually refers to environments more than 200 m below the water surface (including the abyssal plain). Shallow marine environments exist adjacent to coastlines and can extend to the boundaries of the continental shelf. The water movements in such environments have a generally higher energy than that in deep environments, as wave activity diminishes with depth. This means that coarser sediment particles can be transported and the deposited sediment can be coarser than in deeper environments. When the sediment is transported from the continent, an alternation of sand, clay and silt is deposited. When the continent is far away, the amount of such sediment deposited may be small, and biochemical processes dominate the type of rock that forms. Especially in warm climates, shallow marine environments far offshore mainly see deposition of carbonate rocks. The shallow, warm water is an ideal habitat for many small organisms that build carbonate skeletons. When these organisms die, their skeletons sink to the bottom, forming a thick layer of calcareous mud that may lithify into limestone. Warm shallow marine environments also are ideal environments for coral reefs, where the sediment consists mainly of the calcareous skeletons of larger organisms.
In deep marine environments, the water current working the sea bottom is small. Only fine particles can be transported to such places. Typically sediments depositing on the ocean floor are fine clay or small skeletons of micro-organisms. At 4 km depth, the solubility of carbonates increases dramatically (the depth zone where this happens is called the lysocline). Calcareous sediment that sinks below the lysocline dissolves; as a result, no limestone can be formed below this depth. Skeletons of micro-organisms formed of silica (such as radiolarians) are not as soluble and are still deposited. An example of a rock formed of silica skeletons is radiolarite. When the bottom of the sea has a small inclination, for example, at the continental slopes, the sedimentary cover can become unstable, causing turbidity currents. Turbidity currents are sudden disturbances of the normally quiet deep marine environment and can cause the near-instantaneous deposition of large amounts of sediment, such as sand and silt. The rock sequence formed by a turbidity current is called a turbidite.
The coast is an environment dominated by wave action. At a beach, dominantly denser sediment such as sand or gravel, often mingled with shell fragments, is deposited, while the silt and clay sized material is kept in mechanical suspension. Tidal flats and shoals are places that sometimes dry because of the tide. They are often cross-cut by gullies, where the current is strong and the grain size of the deposited sediment is larger. Where rivers enter the body of water, either on a sea or lake coast, deltas can form. These are large accumulations of sediment transported from the continent to places in front of the mouth of the river. Deltas are dominantly composed of clastic (rather than chemical) sediment.
A continental sedimentary environment is an environment in the interior of a continent. Examples of continental environments are lagoons, lakes, swamps, floodplains and alluvial fans. In the quiet water of swamps, lakes and lagoons, fine sediment is deposited, mingled with organic material from dead plants and animals. In rivers, the energy of the water is much greater and can transport heavier clastic material. Besides transport by water, sediment can be transported by wind or glaciers. Sediment transported by wind is called aeolian and is almost always very well sorted, while sediment transported by a glacier is called glacial till and is characterized by very poor sorting.
Aeolian deposits can be quite striking. The depositional environment of the Touchet Formation, located in the Northwestern United States, had intervening periods of aridity which resulted in a series of rhythmite layers. Erosional cracks were later infilled with layers of soil material, especially from aeolian processes. The infilled sections formed vertical inclusions in the horizontally deposited layers, and thus provided evidence of the sequence of events during deposition of the forty-one layers of the formation.
Sedimentary facies
The kind of rock formed in a particular depositional environment is called its sedimentary facies. Sedimentary environments usually exist alongside each other in certain natural successions. A beach, where sand and gravel is deposited, is usually bounded by a deeper marine environment a little offshore, where finer sediments are deposited at the same time. Behind the beach, there can be dunes (where the dominant deposition is well sorted sand) or a lagoon (where fine clay and organic material is deposited). Every sedimentary environment has its own characteristic deposits. When sedimentary strata accumulate through time, the environment can shift, forming a change in facies in the subsurface at one location. On the other hand, when a rock layer with a certain age is followed laterally, the lithology (the type of rock) and facies eventually change.
Facies can be distinguished in a number of ways: the most common are by the lithology (for example: limestone, siltstone or sandstone) or by fossil content. Coral, for example, only lives in warm and shallow marine environments and fossils of coral are thus typical for shallow marine facies. Facies determined by lithology are called lithofacies; facies determined by fossils are biofacies.
Sedimentary environments can shift their geographical positions through time. Coastlines can shift in the direction of the sea when the sea level drops (regression), when the surface rises (transgression) due to tectonic forces in the Earth's crust or when a river forms a large delta. In the subsurface, such geographic shifts of sedimentary environments of the past are recorded in shifts in sedimentary facies. This means that sedimentary facies can change either parallel or perpendicular to an imaginary layer of rock with a fixed age, a phenomenon described by Walther's Law.
The situation in which coastlines move in the direction of the continent is called transgression. In the case of transgression, deeper marine facies are deposited over shallower facies, a succession called onlap. Regression is the situation in which a coastline moves in the direction of the sea. With regression, shallower facies are deposited on top of deeper facies, a situation called offlap.
The facies of all rocks of a certain age can be plotted on a map to give an overview of the palaeogeography. A sequence of maps for different ages can give an insight in the development of the regional geography.
Gallery of sedimentary facies
Sedimentary basins
Places where large-scale sedimentation takes place are called sedimentary basins. The amount of sediment that can be deposited in a basin depends on the depth of the basin, the so-called accommodation space. The depth, shape and size of a basin depend on tectonics, movements within the Earth's lithosphere. Where the lithosphere moves upward (tectonic uplift), land eventually rises above sea level and the area becomes a source for new sediment as erosion removes material. Where the lithosphere moves downward (tectonic subsidence), a basin forms and sediments are deposited.
A type of basin formed by the moving apart of two pieces of a continent is called a rift basin. Rift basins are elongated, narrow and deep basins. Due to divergent movement, the lithosphere is stretched and thinned, so that the hot asthenosphere rises and heats the overlying rift basin. Apart from continental sediments, rift basins normally also have part of their infill consisting of volcanic deposits. When the basin grows due to continued stretching of the lithosphere, the rift grows and the sea can enter, forming marine deposits.
When a piece of lithosphere that was heated and stretched cools again, its density rises, causing isostatic subsidence. If this subsidence continues long enough, the basin is called a sag basin. Examples of sag basins are the regions along passive continental margins, but sag basins can also be found in the interior of continents. In sag basins, the extra weight of the newly deposited sediments is enough to keep the subsidence going in a vicious circle. The total thickness of the sedimentary infill in a sag basin can thus exceed 10 km.
A third type of basin exists along convergent plate boundaries – places where one tectonic plate moves under another into the asthenosphere. The subducting plate bends and forms a fore-arc basin in front of the overriding plate – an elongated, deep asymmetric basin. Fore-arc basins are filled with deep marine deposits and thick sequences of turbidites. Such infill is called flysch. When the convergent movement of the two plates results in continental collision, the basin becomes shallower and develops into a foreland basin. At the same time, tectonic uplift forms a mountain belt in the overriding plate, from which large amounts of material are eroded and transported to the basin. Such erosional material of a growing mountain chain is called molasse and has either a shallow marine or a continental facies.
At the same time, the growing weight of the mountain belt can cause isostatic subsidence in the area of the overriding plate on the other side to the mountain belt. The basin type resulting from this subsidence is called a back-arc basin and is usually filled by shallow marine deposits and molasse.
Influence of astronomical cycles
In many cases, facies changes and other lithological features in sequences of sedimentary rock have a cyclic nature, caused by cyclic changes in sediment supply and the sedimentary environment. Most of these cyclic changes are caused by astronomic cycles. Short astronomic cycles include the tides or the spring tide every two weeks. On a larger time-scale, cyclic changes in climate and sea level are caused by Milankovitch cycles: cyclic changes in the orientation and/or position of the Earth's rotational axis and orbit around the Sun. There are a number of known Milankovitch cycles, lasting between 10,000 and 200,000 years.
Relatively small changes in the orientation of the Earth's axis or length of the seasons can be a major influence on the Earth's climate. An example are the ice ages of the past 2.6 million years (the Quaternary period), which are assumed to have been caused by astronomic cycles. Climate change can influence the global sea level (and thus the amount of accommodation space in sedimentary basins) and sediment supply from a certain region. Eventually, small changes in astronomic parameters can cause large changes in sedimentary environment and sedimentation.
Sedimentation rates
The rate at which sediment is deposited differs depending on the location. A channel in a tidal flat can see the deposition of a few metres of sediment in one day, while on the deep ocean floor each year only a few millimetres of sediment accumulate. A distinction can be made between normal sedimentation and sedimentation caused by catastrophic processes. The latter category includes all kinds of sudden exceptional processes like mass movements, rock slides or flooding. Catastrophic processes can see the sudden deposition of a large amount of sediment at once. In some sedimentary environments, most of the total column of sedimentary rock was formed by catastrophic processes, even though the environment is usually a quiet place. Other sedimentary environments are dominated by normal, ongoing sedimentation.
In many cases, sedimentation occurs slowly. In a desert, for example, the wind deposits siliciclastic material (sand or silt) in some spots, or catastrophic flooding of a wadi may cause sudden deposits of large quantities of detrital material, but in most places eolian erosion dominates. The amount of sedimentary rock that forms is not only dependent on the amount of supplied material, but also on how well the material consolidates. Erosion removes most deposited sediment shortly after deposition.
Stratigraphy
Sedimentary rocks are laid down in layers called beds or strata; each layer is laid down horizontally over older ones, so that newer layers lie above older layers, as stated in the principle of superposition. There are usually some gaps in the sequence called unconformities, which represent periods where no new sediments were laid down, or where earlier sedimentary layers were raised above sea level and eroded away.
Unconformities can be classified based on the orientation of the strata on either side of the unconformity:
Angular unconformity, when the earlier layers were tilted and eroded before the later layers were laid down horizontally.
Nonconformity, when the earlier layers have no bedding, i.e. they are igneous or metamorphic rocks, in contrast to the later layers.
Disconformity, when the early beds and the later beds are parallel to each other.
Sedimentary rocks contain important information about the history of the Earth. They contain fossils, the preserved remains of ancient plants and animals. Coal is considered a type of sedimentary rock. The composition of sediments provides us with clues as to the original rock. Differences between successive layers indicate changes to the environment over time. Sedimentary rocks can contain fossils because, unlike most igneous and metamorphic rocks, they form at temperatures and pressures that do not destroy fossil remains.
Provenance
Provenance is the reconstruction of the origin of sediments. All rock exposed at Earth's surface is subjected to physical or chemical weathering and broken down into finer grained sediment. All three types of rocks (igneous, sedimentary and metamorphic rocks) can be the source of sedimentary detritus. The purpose of sedimentary provenance studies is to reconstruct and interpret the history of sediment from the initial parent rocks at a source area to final detritus at a burial place.
See also
References
Citations
General and cited references
External links
Basic Sedimentary Rock Classification, by Lynn S. Fichter, James Madison University, Harrisonburg, VA
Sedimentary Rocks Tour, introduction to sedimentary rocks, by Bruce Perry, Department of Geological Sciences, California State University at Long Beach.
Petrology
Rocks
| Sedimentary rock | [
"Physics"
] | 8,506 | [
"Rocks",
"Physical objects",
"Matter"
] |
44,428 | https://en.wikipedia.org/wiki/Chondrus%20crispus | Chondrus crispus—commonly called Irish moss or carrageenan moss (Irish carraigín, "little rock")—is a species of red algae which grows abundantly along the rocky parts of the Atlantic coasts of Europe and North America. In its fresh condition it is soft and cartilaginous, varying in color from a greenish-yellow, through red, to a dark purple or purplish-brown. The principal constituent is a mucilaginous body, made of the polysaccharide carrageenan, which constitutes 55% of its dry weight. The organism also consists of nearly 10% dry weight protein and about 15% dry weight mineral matter, and is rich in iodine and sulfur. When softened in water it has a sea-like odour. Because of the abundant cell wall polysaccharides, it will form a jelly when boiled, containing from 20 to 100 times its weight of water.
Description
Chondrus crispus is a relatively small sea alga, reaching up to a little more than in length. It grows from a discoid holdfast and branches four or five times in a dichotomous, fan-like manner. The morphology is highly variable, especially the broadness of the thalli. The branches are 2–15 mm broad and firm in texture, and the color ranges from bright green towards the surface of the water, to deep red at greater depths. The gametophytes (see below) often show a blue iridescence at the tip of the fronds and fertile sporophytes show a spotty pattern. Mastocarpus stellatus (Stackhouse) Guiry is a similar species which can be readily distinguished by its strongly channelled and often somewhat twisted thalli.
Irish moss undergoes an alternation of generation lifecycle common in many species of algae. The two distinct stages are the sexual haploid gametophyte stage and the asexual diploid sporophyte stage. In addition, a third stage – the carposporophyte – is formed on the female gametophyte after fertilization. The male and female gametophytes produce gametes which fuse to form a diploid carposporophyte, which forms carpospores, which develops into the sporophyte. The sporophyte then undergoes meiosis to produce haploid tetraspores (which can be male or female) that develop into gametophytes. The three stages (male, female, and sporophyte) are difficult to distinguish when they are not fertile; however, the gametophytes often show a blue iridescence.
Distribution
Chondrus crispus is common all around the shores of Ireland and can also be found along the coasts of Europe, from Iceland and the Faroe Islands through the western Baltic Sea to southern Spain. It is found on the Atlantic coasts of Canada and recorded from California in the United States to Japan. However, any distribution outside the Northern Atlantic needs to be verified.
There are also other species of the same genus in the Pacific Ocean, for example, C. ocellatus Holmes, C. nipponicus Yendo, C. yendoi Yamada et Mikami, C. pinnulatus (Harvey) Okamura and C. armatus (Harvey) Yamada et Mikami.
Ecology
Chondrus crispus is found growing on rock from the middle intertidal zone into the subtidal zone, all the way to the ocean floor. It is able to survive with minimal sunlight.
C. crispus is susceptible to infection from the oomycete Pythium porphyrae.
Uses
C. crispus is an industrial source of carrageenan commonly used as a thickener and stabilizer in milk products, such as ice cream and processed foods. In Europe, it is indicated as E407 or E407a. It may also be used as a thickener in calico printing and paper marbling, and for fining beer. Irish moss is frequently used with Mastocarpus stellatus (Gigartina mamillosa), Chondracanthus acicularis (G. acicularis), and other seaweeds, which are all commonly found growing together. Carrageenan may be extracted from tropical seaweeds of the genera Kappaphycus and Eucheuma.
Scientific interest
C. crispus, compared to most other seaweeds, is well investigated scientifically. It has been used as a model species to study photosynthesis, carrageenan biosynthesis, and stress responses. The nuclear genome was sequenced in 2013. The genome size is 105 Mbp, encoding 9,606 genes. It is characterised by relatively few genes with very few introns. The genes are clustered together, with normally short distances between genes and large distances between groups of genes.
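As a rough arithmetic illustration of how compact this genome is, the quoted figures imply an average of roughly one gene per 11 kbp (the comparison itself is not from the sequencing paper):

```python
genes, genome_kbp = 9_606, 105_000  # 105 Mbp
print(f"~{genome_kbp / genes:.1f} kbp per gene on average")  # ~10.9 kbp per gene
```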
See also
Gelidium amansii
References
External links
AlgaeBase: Chondrus crispus
Chondrus crispus Stackhouse Chondrus crispus.
Marine Life Information Network
Irish Moss industry on Prince Edward Island
Sea Moss Market Report
Edible algae
Gigartinaceae
Flora of Jamaica
Demulcents
Taxa named by John Stackhouse
Flora without expected TNC conservation status
Lithophytes | Chondrus crispus | [
"Biology"
] | 1,092 | [
"Edible algae",
"Algae",
"Lithophytes",
"Plants"
] |
44,439 | https://en.wikipedia.org/wiki/Erotic%20spanking | Erotic spanking is the act of spanking another person for the sexual arousal or gratification of either or both parties. The intensity of the act can vary in both its duration and severity, and may include the use of one or more spanking implements (such as the wooden spoon or cane). Activities range from a spontaneous smack on bare buttocks during sexual activity to sexual roleplaying, such as ageplay or domestic discipline. Erotic spanking is often found within and associated with BDSM, but the activity is not exclusive to it. The term spankee is commonly used within erotic spanking to refer to the individual receiving a spanking.
History
Pre 19th century
One of the earliest depictions of erotic spanking is found in the Etruscan Tomb of the Whipping from the fifth century BC.
Early sex manuals such as the Indian Kama Sutra (circa 400 BC), Indian Koka Shastra (ca. 1150 AD) and Arabic The Perfumed Garden (ca. 1400 AD) have among their recommendations the use of spanking to enhance sexual arousal.
19th century
An increase in interest in erotic spanking can be observed during the 19th century in the form of spanking literature and spanking photography, particularly in France and the United Kingdom. Numerous limited edition spanking novels were published, many of which are now classified as novellas.
Early 20th century
This interest in spanking (in regard to both literature and photography) carried over into the next century, with the early 20th century considered the "Golden Age" of spanking literature. This period of spanking literature is marked by three notable characteristics. First, greater audiences were reached through less expensive editions and larger print runs. Second, many of the spanking novels contained numerous illustrations (many of which have since fallen into the public domain and are easily available online). Third, this period saw a gradual increase in the output and publication of spanking literature, growing through the 1920s and peaking in the 1930s. Much of the output of spanking literature during this period came from French publishers, writers and illustrators. For example, Jules Malteste, known as Louis Malteste (1862–1928), was a French writer, painter, engraver, lithographer, draughtsman, and illustrator, commonly known for his depictions of spanking. Similarly, within the context of spanking photography, France was also home to the creation of much content, the most notable studios being the Biederer Studio and the related Ostra Studio. This "Golden Age" of spanking literature (and French spanking photography) came to an end as a result of the Second World War, more specifically the German occupation of France between 1940 and 1944 and the later enforcement of censorship laws. A somewhat notable exception to the decline of spanking literature during this period was John Willie's bondage magazine Bizarre (published between 1946 and 1959).
Few of the many French works from the "Golden Age" were translated at the time into the other languages in which spanking literature was popular, namely English and German. Beginning in the mid-1960s, however, a number of these French works were translated into English and published, alongside republications of the French originals and of older British works. This was facilitated by the availability of mass-produced paperbacks and by changes in censorship laws.
Late 20th century
During the latter half of the 20th century changes in both censorship laws and improvements within technology enabled greater accessibility for those interested in obtaining content related to erotic spanking.
As an extension of the earlier developments within spanking literature and photography, during this period various fetish magazines devoted to stories and imagery of erotic spanking became legally available.
This period also saw an important development in both the production and consumer accessibility of erotic spanking films. Whilst recordings of erotic spanking had been produced as early as the 1920s, until the 1980s technology limited the quality of their recording and the ability for consumers to easily watch them. In addition to changes in censorship laws, the introduction of the videocassette recorder enabled creators to produce and distribute erotic spanking films that were far easier for consumers to both obtain and watch.
21st century
The proliferation of the internet has made it easier than ever for individuals, whatever their level of pre-existing interest or knowledge, to explore and consume content relating to erotic spanking.
Partially developing from the earlier producers of erotic spanking magazines and videos, numerous pornographic websites (primarily American- or British-based) in the 21st century are devoted to producing spanking films of various lengths and about various scenarios.
The internet has also resulted in the creation of various blogs which discuss the topic of erotic spanking and non-profit websites which publish erotic spanking stories.
Practice
Implements
Whilst a spanking may be given simply with the palm of the hand, the use of spanking implements is common within erotic spanking. The implements commonly used in erotic spanking reflect the traditional implements that were, and are, used in corporal punishment more broadly. Common and traditional spanking implements include those which have been specifically manufactured for the purpose (such as the cane, paddle, strap, tawse and martinet) and those which have been adapted or improvised from available items (such as the slipper, wooden spoon, hairbrush, bath brush, carpet beater, riding crop, switch and birch). Other less common and atypical spanking implements include the (handle of a) feather duster (common, however, in China), the fly swatter and stinging nettles.
Some spanking implements can be characterised as either 'stingy' or 'thuddy'. Stingy implements (such as a cane) produce a sharp, quick burning sensation which is mostly felt on the skin. Thuddy implements (such as a paddle), in contrast, do not produce a stinging sensation but penetrate deeper into the tissue of the buttocks. As a general rule, the heavier a spanking implement is, the greater the thud it will produce. A person receiving an erotic spanking may have a preference for either a stingy or a thuddy sensation.
Safety
For safety, during a spanking, particularly with spanking implements, care should be taken to avoid striking the tailbone and hipbones.
Apparatuses
Erotic spanking can include the use of apparatuses, both those which are adapted/improvised and those which are specifically created for such use.
The use of adapted/improvised apparatuses within erotic spanking derives from how such apparatuses were used in non-erotic spanking. The gymnastic vaulting-buck (which sees the receiver of the spanking bent over it) was often employed during non-erotic school slipperings and canings; consequently erotic spanking, particularly that which involves school roleplay, may incorporate the use of a vaulting-buck.
A spanking bench or spanking horse is a piece of erotic furniture explicitly created for erotic spanking. It is used to position a spankee on, with or without restraints. They come in many sizes and styles, the most popular design of which is similar to a sawhorse (used in woodworking) with a padded top and rings for restraints. The 19th-century British dominatrix Mrs. Theresa Berkley became famous for her invention of the Berkley Horse, a similar form of BDSM apparatus.
Fetish wear
Often erotic spanking will be combined with sexual roleplay, which may see one or more parties dress up in certain clothing, for example a female spankee wearing a schoolgirl uniform in a teacher–student scenario.
A spank skirt or spanking skirt is a skirt that has an additional opening in the back designed to expose the buttocks. While the name spank skirt suggests the intention that the wearer be spanked "bare bottom" without removing or repositioning the skirt, this item may be worn for reasons other than spanking (for instance, exhibitionism). Considered fetish wear, these kinds of skirts are typically tight-fitting and made of fetishistic materials (such as leather, PVC or latex). Regardless of the gender of the wearer, spank skirts are usually considered female attire. The male gender role equivalent might be motorcycle chaps (a.k.a. "assless chaps").
Self-spanking
Self-spanking is the practice in which an individual spanks themselves, making the spankee and the spanker one and the same. This can occur for a number of reasons: the individual is experimenting with spanking; the individual may lack someone (who is willing, and with whom they are comfortable) to give them a spanking; the individual spanks themselves during masturbation; or the individual is a submissive in a BDSM relationship and is self-spanking on the orders of the dominant partner.
Psychology and prevalence
A number of explanations have been put forward to account for why an individual may enjoy erotic spanking, although there is almost certainly no universal explanation. Interest and gratification in spanking vary with the individual: an individual may gain gratification from spanking another yet none from being spanked; may gain gratification from being spanked yet none from spanking another; or may gain gratification both from spanking another and from being spanked.
A scientific survey of 152 people who claimed to practice sadomasochism — of which spanking is a subset — categorised the origin of such interest as either extrinsic (i.e. the interest originating from a source external to the person) or intrinsic (i.e. emerging naturally). 22% of those surveyed reported origins that were categorised by researchers as extrinsic, such as parental discipline or being introduced to sadomasochism in adulthood by another. In contrast, 78% reported intrinsically having such an interest. Most commonly an intrinsic interest was reported to have emerged in either childhood or adolescence (though at such an age the interest was not necessarily sexualised); a small sub-group reported that only in adulthood did they accept or acknowledge what they now recognise to be an intrinsic interest. The majority of those that reported an intrinsic interest were unable to explain the origin of such interest.
From the same scientific survey as above, a third of testimonies reported enjoying receiving pain as to why they practiced sadomasochism; researchers noted that practitioners commonly stressed the difference between "bad" and "good" pain. Interpersonal power (either through giving or exchanging power) was also a commonly reported reason as to why people practice sadomasochism, and researchers noted that this taking place with a trusted partner was a recurring specified need.
Journalist Jillian Keenan has argued that spanking fetishism is a form of sexual orientation, which should not be considered a mental illness. Whilst there has been an increasing ability to talk openly about erotic spanking within mainstream society, individuals may still find it difficult to express that they have an interest in erotic spanking.
See also
Algolagnia
BDSM
Christian domestic discipline
Dominatrix
Impact play
Male dominance (BDSM)
Sadism and masochism in fiction
Sadomasochism
References
Notes
Further reading
Koetzle, Michael. 1000 Nudes: A History of Erotic Photography from 1839–1939. Taschen, 2005.
Lady Green, The Compleat Spanker. Greenery Press, 2000. .
Marcus, Steven. The Other Victorians. Basic Books, 1966.
Swinburne, Charles Algernon. The Works of Charles Algernon Swinburne. Hertfordshire: Wordsworth Editions, 1995.
External links
The Spanking Art Wiki – relaunched
Spanking Artworks
Biblio Curiosa Wiki
Basics of Erotic Spanking
BDSM
Paraphilias
Sexual acts
Spanking | Erotic spanking | [
"Biology"
] | 2,421 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
44,469 | https://en.wikipedia.org/wiki/Pluto | Pluto (minor-planet designation: 134340 Pluto) is a dwarf planet in the Kuiper belt, a ring of bodies beyond the orbit of Neptune. It is the ninth-largest and tenth-most-massive known object to directly orbit the Sun. It is the largest known trans-Neptunian object by volume, by a small margin, but is less massive than Eris. Like other Kuiper belt objects, Pluto is made primarily of ice and rock and is much smaller than the inner planets. Pluto has roughly one-sixth the mass of the Moon, and one-third its volume.
Pluto has a moderately eccentric and inclined orbit, ranging from about 30 to 49 AU from the Sun. Light from the Sun takes about 5.5 hours to reach Pluto at its average distance of roughly 39.5 AU. Pluto's eccentric orbit periodically brings it closer to the Sun than Neptune, but a stable orbital resonance prevents them from colliding.
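The 5.5-hour figure is just distance divided by the speed of light. A minimal sketch using the roughly 39.5 AU average distance above (the physical constants are standard values):

```python
# Light travel time from the Sun to Pluto at ~39.5 AU.
AU_KM = 149_597_870.7        # kilometres per astronomical unit
C_KM_S = 299_792.458         # speed of light, km/s

seconds = 39.5 * AU_KM / C_KM_S
print(f"{seconds / 3600:.1f} h")   # ~5.5 h, matching the figure in the text
```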
Pluto has five known moons: Charon, the largest, whose diameter is just over half that of Pluto; Styx; Nix; Kerberos; and Hydra. Pluto and Charon are sometimes considered a binary system because the barycenter of their orbits does not lie within either body, and they are tidally locked. New Horizons was the first spacecraft to visit Pluto and its moons, making a flyby on July 14, 2015, and taking detailed measurements and observations.
Pluto was discovered in 1930 by Clyde W. Tombaugh, making it the first known object in the Kuiper belt, found more than six decades before any other. It was immediately hailed as the ninth planet. However, its planetary status was questioned when it was found to be much smaller than expected. These doubts increased following the discovery of additional objects in the Kuiper belt starting in the 1990s, and particularly the more massive scattered disk object Eris in 2005. In 2006, the International Astronomical Union (IAU) formally redefined the term planet to exclude dwarf planets such as Pluto. Many planetary astronomers, however, continue to consider Pluto and other dwarf planets to be planets.
History
Discovery
In the 1840s, Urbain Le Verrier used Newtonian mechanics to predict the position of the then-undiscovered planet Neptune after analyzing perturbations in the orbit of Uranus. Subsequent observations of Neptune in the late 19th century led astronomers to speculate that Uranus's orbit was being disturbed by another planet besides Neptune.
In 1906, Percival Lowell—a wealthy Bostonian who had founded Lowell Observatory in Flagstaff, Arizona, in 1894—started an extensive project in search of a possible ninth planet, which he termed "Planet X". By 1909, Lowell and William H. Pickering had suggested several possible celestial coordinates for such a planet. Lowell and his observatory conducted the search, using mathematical calculations made by Elizabeth Williams, until his death in 1916, but to no avail. Unknown to Lowell, his surveys had captured two faint images of Pluto on March 19 and April 7, 1915, but they were not recognized for what they were. There are fourteen other known precovery observations, with the earliest made by the Yerkes Observatory on August 20, 1909.
Percival's widow, Constance Lowell, entered into a ten-year legal battle with the Lowell Observatory over her husband's legacy, and the search for Planet X did not resume until 1929. Vesto Melvin Slipher, the observatory director, gave the job of locating Planet X to 23-year-old Clyde Tombaugh, who had just arrived at the observatory after Slipher had been impressed by a sample of his astronomical drawings.
Tombaugh's task was to systematically image the night sky in pairs of photographs, then examine each pair and determine whether any objects had shifted position. Using a blink comparator, he rapidly shifted back and forth between views of each of the plates to create the illusion of movement of any objects that had changed position or appearance between photographs. On February 18, 1930, after nearly a year of searching, Tombaugh discovered a possible moving object on photographic plates taken on January 23 and 29. A lesser-quality photograph taken on January 21 helped confirm the movement. After the observatory obtained further confirmatory photographs, news of the discovery was telegraphed to the Harvard College Observatory on March 13, 1930.
One Plutonian year corresponds to 247.94 Earth years; thus, in 2178, Pluto will complete its first orbit since its discovery.
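The 2178 date follows from a single addition; a one-line check (treating the February 1930 discovery as the fractional year 1930.1, an assumption made only for illustration):

```python
# Pluto completes its first post-discovery orbit one orbital period
# (247.94 Earth years) after its discovery in February 1930.
print(round(1930.1 + 247.94))   # -> 2178
```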
Name and symbol
The name Pluto came from the Roman god of the underworld; and it is also an epithet for Hades (the Greek equivalent of Pluto).
Upon the announcement of the discovery, Lowell Observatory received over a thousand suggestions for names. Three names topped the list: Minerva, Pluto and Cronus. 'Minerva' was the Lowell staff's first choice but was rejected because it had already been used for an asteroid; Cronus was disfavored because it was promoted by an unpopular and egocentric astronomer, Thomas Jefferson Jackson See. A vote was then taken and 'Pluto' was the unanimous choice. To make sure the name stuck, and that the planet would not suffer changes in its name as Uranus had, Lowell Observatory proposed the name to the American Astronomical Society and the Royal Astronomical Society; both approved it unanimously. The name was published on May 1, 1930.
The name Pluto had received some 150 nominations among the letters and telegrams sent to Lowell. The first had been from Venetia Burney (1918–2009), an eleven-year-old schoolgirl in Oxford, England, who was interested in classical mythology. She had suggested it to her grandfather Falconer Madan when he read the news of Pluto's discovery to his family over breakfast; Madan passed the suggestion to astronomy professor Herbert Hall Turner, who cabled it to colleagues at Lowell on March 16, three days after the announcement.
The name 'Pluto' was mythologically appropriate: the god Pluto was one of six surviving children of Saturn, and the others had already all been chosen as names of major or minor planets (his brothers Jupiter and Neptune, and his sisters Ceres, Juno and Vesta). Both the god and the planet inhabited "gloomy" regions, and the god was able to make himself invisible, as the planet had been for so long.
The choice was further helped by the fact that the first two letters of Pluto were the initials of Percival Lowell; indeed, 'Percival' had been one of the more popular suggestions for a name for the new planet.
Pluto's planetary symbol was then created as a monogram of the letters "PL". This symbol is rarely used in astronomy anymore, though it is still common in astrology. However, the most common astrological symbol for Pluto, occasionally used in astronomy as well, is an orb (possibly representing Pluto's invisibility cap) over Pluto's bident, which dates to the early 1930s.
The name 'Pluto' was soon embraced by wider culture. In 1930, Walt Disney was apparently inspired by it when he introduced for Mickey Mouse a canine companion named Pluto, although Disney animator Ben Sharpsteen could not confirm why the name was given. In 1941, Glenn T. Seaborg named the newly created element plutonium after Pluto, in keeping with the tradition of naming elements after newly discovered planets, following uranium, which was named after Uranus, and neptunium, which was named after Neptune.
Most languages use the name "Pluto" in various transliterations. In Japanese, Houei Nojiri suggested a calque, and this was borrowed into Chinese and Korean. Some languages of India use the name Pluto, but others, such as Hindi, use the name of Yama, the God of Death in Hinduism. Polynesian languages also tend to use the indigenous god of the underworld, as in Māori Whiro.
Vietnamese might be expected to follow Chinese, but does not because the Sino-Vietnamese word 冥 minh "dark" is homophonous with 明 minh "bright". Vietnamese instead uses Yama, which is also a Buddhist deity, in the form of Sao Diêm Vương 星閻王 "Yama's Star", derived from Chinese 閻王 Yán Wáng / Yìhm Wòhng "King Yama".
Planet X disproved
Once Pluto was found, its faintness and lack of a viewable disc cast doubt on the idea that it was Lowell's Planet X. Estimates of Pluto's mass were revised downward throughout the 20th century.
Astronomers initially calculated its mass based on its presumed effect on Neptune and Uranus. In 1931, Pluto was calculated to be roughly the mass of Earth, with further calculations in 1948 bringing the mass down to roughly that of Mars. In 1976, Dale Cruikshank, Carl Pilcher and David Morrison of the University of Hawaiʻi calculated Pluto's albedo for the first time, finding that it matched that for methane ice; this meant Pluto had to be exceptionally luminous for its size and therefore could not be more than 1 percent the mass of Earth. (Pluto's albedo is times that of Earth.)
In 1978, the discovery of Pluto's moon Charon allowed the measurement of Pluto's mass for the first time: roughly 0.2% that of Earth, and far too small to account for the discrepancies in the orbit of Uranus. Subsequent searches for an alternative Planet X, notably by Robert Sutton Harrington, failed. In 1992, Myles Standish used data from Voyager 2's flyby of Neptune in 1989, which had revised the estimates of Neptune's mass downward by 0.5%—an amount comparable to the mass of Mars—to recalculate its gravitational effect on Uranus. With the new figures added in, the discrepancies, and with them the need for a Planet X, vanished. The majority of scientists agree that Planet X, as Lowell defined it, does not exist. Lowell had made a prediction of Planet X's orbit and position in 1915 that was fairly close to Pluto's actual orbit and its position at that time; Ernest W. Brown concluded soon after Pluto's discovery that this was a coincidence.
Classification
From 1992 onward, many bodies were discovered orbiting in the same volume as Pluto, showing that Pluto is part of a population of objects called the Kuiper belt. This made its official status as a planet controversial, with many questioning whether Pluto should be considered together with or separately from its surrounding population. Museum and planetarium directors occasionally created controversy by omitting Pluto from planetary models of the Solar System. In February 2000 the Hayden Planetarium in New York City displayed a Solar System model of only eight planets, which made headlines almost a year later.
Ceres, Pallas, Juno and Vesta lost their planet status among most astronomers after the discovery of many other asteroids in the 1840s. On the other hand, planetary geologists often regarded Ceres, and less often Pallas and Vesta, as being different from smaller asteroids because they were large enough to have undergone geological evolution. Although the first Kuiper belt objects discovered were quite small, objects increasingly closer in size to Pluto were soon discovered, some large enough (like Pluto itself) to satisfy geological but not dynamical ideas of planethood. On July 29, 2005, the debate became unavoidable when astronomers at Caltech announced the discovery of a new trans-Neptunian object, Eris, which was substantially more massive than Pluto and the most massive object discovered in the Solar System since Triton in 1846. Its discoverers and the press initially called it the tenth planet, although there was no official consensus at the time on whether to call it a planet. Others in the astronomical community considered the discovery the strongest argument for reclassifying Pluto as a minor planet.
IAU classification
The debate came to a head in August 2006, with an IAU resolution that created an official definition for the term "planet". According to this resolution, there are three conditions for an object in the Solar System to be considered a planet:
The object must be in orbit around the Sun.
The object must be massive enough to be rounded by its own gravity. More specifically, its own gravity should pull it into a shape defined by hydrostatic equilibrium.
It must have cleared the neighborhood around its orbit.
Pluto fails to meet the third condition. Its mass is substantially less than the combined mass of the other objects in its orbit: 0.07 times, in contrast to Earth, which is 1.7 million times the remaining mass in its orbit (excluding the moon). The IAU further decided that bodies that, like Pluto, meet criteria 1 and 2, but do not meet criterion 3 would be called dwarf planets. In September 2006, the IAU included Pluto, and Eris and its moon Dysnomia, in their Minor Planet Catalogue, giving them the official minor-planet designations "(134340) Pluto", "(136199) Eris", and "(136199) Eris I Dysnomia". Had Pluto been included upon its discovery in 1930, it would have likely been designated 1164, following 1163 Saga, which was discovered a month earlier.
There has been some resistance within the astronomical community toward the reclassification, and in particular planetary scientists often continue to reject it, considering Pluto, Charon, and Eris to be planets for the same reason they do so for Ceres. In effect, this amounts to accepting only the second clause of the IAU definition. Alan Stern, principal investigator with NASA's New Horizons mission to Pluto, derided the IAU resolution. He also stated that because less than five percent of astronomers voted for it, the decision was not representative of the entire astronomical community. Marc W. Buie, then at the Lowell Observatory, petitioned against the definition. Others have supported the IAU, for example Mike Brown, the astronomer who discovered Eris.
Public reception to the IAU decision was mixed. A resolution introduced in the California State Assembly facetiously called the IAU decision a "scientific heresy". The New Mexico House of Representatives passed a resolution in honor of Clyde Tombaugh, the discoverer of Pluto and a longtime resident of that state, that declared that Pluto will always be considered a planet while in New Mexican skies and that March 13, 2007, was Pluto Planet Day. The Illinois Senate passed a similar resolution in 2009 on the basis that Tombaugh was born in Illinois. The resolution asserted that Pluto was "unfairly downgraded to a 'dwarf' planet" by the IAU. Some members of the public have also rejected the change, citing the disagreement within the scientific community on the issue, or for sentimental reasons, maintaining that they have always known Pluto as a planet and will continue to do so regardless of the IAU decision. In 2006, in its 17th annual words-of-the-year vote, the American Dialect Society voted plutoed as the word of the year. To "pluto" is to "demote or devalue someone or something". In April 2024, Arizona (where Pluto was first discovered in 1930) passed a law naming Pluto as the official state planet.
Researchers on both sides of the debate gathered in August 2008, at the Johns Hopkins University Applied Physics Laboratory for a conference that included back-to-back talks on the IAU definition of a planet. Entitled "The Great Planet Debate", the conference published a post-conference press release indicating that scientists could not come to a consensus about the definition of planet. In June 2008, the IAU had announced in a press release that the term "plutoid" would henceforth be used to refer to Pluto and other planetary-mass objects that have an orbital semi-major axis greater than that of Neptune, though the term has not seen significant use.
Orbit
Pluto's orbital period is about 248 years. Its orbital characteristics are substantially different from those of the planets, which follow nearly circular orbits around the Sun close to a flat reference plane called the ecliptic. In contrast, Pluto's orbit is moderately inclined relative to the ecliptic (over 17°) and moderately eccentric (elliptical). This eccentricity means a small region of Pluto's orbit lies closer to the Sun than Neptune's. The Pluto–Charon barycenter came to perihelion on September 5, 1989, and was last closer to the Sun than Neptune between February 7, 1979, and February 11, 1999.
Although the 3:2 resonance with Neptune (see below) is maintained, Pluto's inclination and eccentricity behave in a chaotic manner. Computer simulations can be used to predict its position for several million years (both forward and backward in time), but after intervals much longer than the Lyapunov time of 10–20 million years, calculations become unreliable: Pluto is sensitive to immeasurably small details of the Solar System, hard-to-predict factors that will gradually change Pluto's position in its orbit.
The semi-major axis of Pluto's orbit varies between about 39.3 and 39.6 AU with a period of about 19,951 years, corresponding to an orbital period varying between 246 and 249 years. The semi-major axis and period are presently getting longer.
Relationship with Neptune
Despite Pluto's orbit appearing to cross that of Neptune when viewed from north or south of the Solar System, the two objects' orbits do not intersect. When Pluto is closest to the Sun, and close to Neptune's orbit as viewed from such a position, it is also the farthest north of Neptune's path. Pluto's orbit passes about 8 AU north of that of Neptune, preventing a collision.
This alone is not enough to protect Pluto; perturbations from the planets (especially Neptune) could alter Pluto's orbit (such as its orbital precession) over millions of years so that a collision could happen. However, Pluto is also protected by its 2:3 orbital resonance with Neptune: for every two orbits that Pluto makes around the Sun, Neptune makes three, in a frame of reference that rotates at the rate that Pluto's perihelion precesses (about degrees per year). Each cycle lasts about 495 years. (There are many other objects in this same resonance, called plutinos.) At present, in each 495-year cycle, the first time Pluto is at perihelion (such as in 1989), Neptune is 57° ahead of Pluto. By Pluto's second passage through perihelion, Neptune will have completed a further one and a half of its own orbits, and will be 123° behind Pluto. Pluto and Neptune's minimum separation is over 17 AU, which is greater than Pluto's minimum separation from Uranus (11 AU). The minimum separation between Pluto and Neptune actually occurs near the time of Pluto's aphelion.
The 2:3 resonance between the two bodies is highly stable and has been preserved over millions of years. This prevents their orbits from changing relative to one another, so the two bodies can never pass near each other. Even if Pluto's orbit were not inclined, the two bodies could never collide. When Pluto's period is slightly different from 3/2 of Neptune's, the pattern of its distance from Neptune will drift. Near perihelion Pluto moves interior to Neptune's orbit and is therefore moving faster, so during the first of two orbits in the 495-year cycle, it is approaching Neptune from behind. At present it remains between 50° and 65° behind Neptune for 100 years (e.g. 1937–2036). The gravitational pull between the two causes angular momentum to be transferred to Pluto. This situation moves Pluto into a slightly larger orbit, where it has a slightly longer period, according to Kepler's third law. After several such repetitions, Pluto is sufficiently delayed that at the second perihelion of each cycle it will not be far ahead of Neptune coming behind it, and Neptune will start to decrease Pluto's period again. The whole cycle takes about 20,000 years to complete.
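The timing claims in this section reduce to two multiplications. A minimal sketch (Neptune's 164.8-year orbital period is an assumed input; it is not stated in the surrounding text):

```python
# Two Pluto orbits vs. three Neptune orbits: the 2:3 mean-motion resonance.
pluto_period = 247.94      # Earth years (from this article)
neptune_period = 164.8     # Earth years (assumed, commonly quoted value)

print(f"{2 * pluto_period:.1f} yr")    # ~495.9 yr: two Pluto orbits, the ~495-year cycle
print(f"{3 * neptune_period:.1f} yr")  # ~494.4 yr: three Neptune orbits
# The small mismatch is taken up by the slow precession of Pluto's
# perihelion in the rotating frame described above.
```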
Other factors
Numerical studies have shown that over millions of years, the general nature of the alignment between the orbits of Pluto and Neptune does not change. There are several other resonances and interactions that enhance Pluto's stability. These arise principally from two additional mechanisms (besides the 2:3 mean-motion resonance).
First, Pluto's argument of perihelion, the angle between the point where it crosses the ecliptic (or the invariant plane) and the point where it is closest to the Sun, librates around 90°. This means that when Pluto is closest to the Sun, it is at its farthest north of the plane of the Solar System, preventing encounters with Neptune. This is a consequence of the Kozai mechanism, which relates the eccentricity of an orbit to its inclination to a larger perturbing body—in this case, Neptune. Relative to Neptune, the amplitude of libration is 38°, and so the angular separation of Pluto's perihelion to the orbit of Neptune is always greater than 52°. The closest such angular separation occurs every 10,000 years.
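For orientation, the Kozai mechanism mentioned here is usually summarized by a conserved quantity; in the standard textbook idealization (stated here for context, not taken from this article), the component of the orbit's angular momentum perpendicular to the perturber's plane is preserved:

$$\sqrt{1 - e^{2}}\,\cos i \approx \text{constant},$$

so that any increase in the eccentricity $e$ must be offset by a decrease in the inclination $i$, and vice versa. This is the sense in which the mechanism "relates the eccentricity of an orbit to its inclination".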
Second, the longitudes of ascending nodes of the two bodies—the points where they cross the invariant plane—are in near-resonance with the above libration. When the two longitudes are the same—that is, when one could draw a straight line through both nodes and the Sun—Pluto's perihelion lies exactly at 90°, and hence it comes closest to the Sun when it is furthest north of Neptune's orbit. This is known as the 1:1 superresonance. All the Jovian planets (Jupiter, Saturn, Uranus, and Neptune) play a role in the creation of the superresonance.
Orcus
The second-largest known plutino, Orcus, has a diameter around 900 km and is in a very similar orbit to that of Pluto. However, the orbits of Pluto and Orcus are out of phase, so that the two never approach each other. It has been termed the "anti-Pluto", and is named for the Etruscan counterpart to the god Pluto.
Rotation
Pluto's rotation period, its day, is equal to 6.387 Earth days. Like Uranus and 2 Pallas, Pluto rotates on its "side" in its orbital plane, with an axial tilt of 120°, and so its seasonal variation is extreme; at its solstices, one-fourth of its surface is in continuous daylight, whereas another fourth is in continuous darkness. The reason for this unusual orientation has been debated. Research from the University of Arizona has suggested that it may be due to the way that a body's spin will always adjust to minimize energy. This could mean a body reorienting itself to put extraneous mass near the equator and regions lacking mass towards the poles; this is called polar wander. According to a paper released by the University of Arizona, this reorientation could be caused by masses of frozen nitrogen building up in shadowed areas of the dwarf planet. These masses would cause the body to reorient itself, leading to its unusual axial tilt of 120°. The buildup of nitrogen is due to Pluto's vast distance from the Sun. At the equator, temperatures can drop low enough for nitrogen to freeze as water would freeze on Earth. The same polar wandering effect seen on Pluto would be observed on Earth were the Antarctic ice sheet several times larger.
Geology
Surface
The plains on Pluto's surface are composed of more than 98 percent nitrogen ice, with traces of methane and carbon monoxide. Nitrogen and carbon monoxide are most abundant on the anti-Charon face of Pluto (around 180° longitude, where Tombaugh Regio's western lobe, Sputnik Planitia, is located), whereas methane is most abundant near 300° east. The mountains are made of water ice. Pluto's surface is quite varied, with large differences in both brightness and color. Pluto is one of the most contrastive bodies in the Solar System, with as much contrast as Saturn's moon Iapetus. The color varies from charcoal black, to dark orange and white. Pluto's color is more similar to that of Io with slightly more orange and significantly less red than Mars. Notable geographical features include Tombaugh Regio, or the "Heart" (a large bright area on the side opposite Charon), Belton Regio, or the "Whale" (a large dark area on the trailing hemisphere), and the "Brass Knuckles" (a series of equatorial dark areas on the leading hemisphere).
Sputnik Planitia, the western lobe of the "Heart", is a 1,000 km-wide basin of frozen nitrogen and carbon monoxide ices, divided into polygonal cells, which are interpreted as convection cells that carry floating blocks of water ice crust and sublimation pits towards their margins; there are obvious signs of glacial flows both into and out of the basin. It has no craters that were visible to New Horizons, indicating that its surface is less than 10 million years old; more recent studies have further refined this estimate of the surface's age.
The New Horizons science team summarized initial findings as "Pluto displays a surprisingly wide variety of geological landforms, including those resulting from glaciological and surface–atmosphere interactions as well as impact, tectonic, possible cryovolcanic, and mass-wasting processes."
In the western parts of Sputnik Planitia there are fields of transverse dunes formed by winds blowing from the center of Sputnik Planitia towards the surrounding mountains. The dune wavelengths are in the range of 0.4–1 km, and the dunes likely consist of methane particles 200–300 μm in size.
Internal structure
Pluto's density is . Because the decay of radioactive elements would eventually heat the ices enough for the rock to separate from them, scientists expect that Pluto's internal structure is differentiated, with the rocky material having settled into a dense core surrounded by a mantle of water ice. The pre–New Horizons estimate for the diameter of the core is , 70% of Pluto's diameter.
It is possible that such heating continues, creating a subsurface ocean of liquid water at the core–mantle boundary. In September 2016, scientists at Brown University simulated the impact thought to have formed Sputnik Planitia, and showed that it might have been the result of liquid water upwelling from below after the collision, implying the existence of a subsurface ocean at least 100 km deep. In June 2020, astronomers reported evidence that Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed. In March 2022, a team of researchers proposed that the mountains Wright Mons and Piccard Mons are actually a merger of many smaller cryovolcanic domes, suggesting a source of heat on the body at levels previously thought not possible.
Mass and size
Pluto's diameter is 2,376.6 km and its mass is 1.303×10²² kg, 17.7% that of the Moon (0.22% that of Earth). Its surface area is about 17.7 million km², just slightly bigger than Russia or Antarctica (particularly including the Antarctic sea ice during winter). Its surface gravity is 0.063 g (compared to 1 g for Earth and 0.17 g for the Moon). This gives Pluto an escape velocity of 4,363.2 km per hour (2,711.167 miles per hour), as compared to Earth's 40,270 km per hour (25,020 miles per hour). Pluto is more than twice the diameter and a dozen times the mass of Ceres, the largest object in the asteroid belt. It is less massive than the dwarf planet Eris, a trans-Neptunian object discovered in 2005, though Pluto has a larger diameter of 2,376.6 km compared to Eris's approximate diameter of 2,326 km.
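As a quick consistency check, the escape velocity follows from the mass and radius above via v = √(2GM/r). A minimal sketch (G is the standard gravitational constant; the mass and diameter are the figures quoted in this section):

```python
import math

# Escape velocity from Pluto's surface: v = sqrt(2 * G * M / r).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1.303e22           # Pluto's mass, kg (quoted above)
r = 2.3766e6 / 2       # Pluto's radius in m (half the 2,376.6 km diameter)

v = math.sqrt(2 * G * M / r)     # ~1,210 m/s
print(f"{v * 3.6:,.0f} km/h")    # ~4,355 km/h; close to the quoted 4,363 km/h,
                                 # with the small gap due to rounded inputs
```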
With less than 0.2 lunar masses, Pluto is much less massive than the terrestrial planets, and also less massive than seven moons: Ganymede, Titan, Callisto, Io, the Moon, Europa, and Triton. The mass is much less than thought before Charon was discovered.
The discovery of Pluto's satellite Charon in 1978 enabled a determination of the mass of the Pluto–Charon system by application of Newton's formulation of Kepler's third law. Observations of Pluto in occultation with Charon allowed scientists to establish Pluto's diameter more accurately, whereas the invention of adaptive optics allowed them to determine its shape more accurately.
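The Kepler's-third-law determination mentioned here reduces to M = 4π²a³/(GT²). A minimal sketch, assuming Charon's commonly quoted orbital elements (semi-major axis ≈ 19,591 km, period ≈ 6.387 days; these inputs are not given in the text above):

```python
import math

# Total Pluto-Charon system mass from Kepler's third law:
#   M = 4 * pi^2 * a^3 / (G * T^2)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 1.9591e7           # semi-major axis of Charon's orbit, m (assumed)
T = 6.387 * 86_400     # orbital period in seconds (assumed)

M_system = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M_system:.2e} kg")   # ~1.46e22 kg: Pluto plus Charon combined
```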
Determinations of Pluto's size have been complicated by its atmosphere and hydrocarbon haze. In March 2014, Lellouch, de Bergh et al. published findings regarding methane mixing ratios in Pluto's atmosphere consistent with a Plutonian diameter greater than 2,360 km, with a "best guess" of 2,368 km. On July 13, 2015, images from NASA's New Horizons mission Long Range Reconnaissance Imager (LORRI), along with data from the other instruments, determined Pluto's diameter to be , which was later revised to be on July 24, and later to . Using radio occultation data from the New Horizons Radio Science Experiment (REX), the diameter was found to be .
Atmosphere
Pluto has a tenuous atmosphere consisting of nitrogen (N2), methane (CH4), and carbon monoxide (CO), which are in equilibrium with their ices on Pluto's surface. According to the measurements by New Horizons, the surface pressure is about 1 Pa (10 μbar), roughly 100,000 to one million times less than Earth's atmospheric pressure. It was initially thought that, as Pluto moves away from the Sun, its atmosphere should gradually freeze onto the surface; studies of New Horizons data and ground-based occultations show that Pluto's atmospheric density increases, and that it likely remains gaseous throughout Pluto's orbit. New Horizons observations showed that atmospheric escape of nitrogen is 10,000 times less than expected. Alan Stern has contended that even a small increase in Pluto's surface temperature can lead to exponential increases in Pluto's atmospheric density; from 18 hPa to as much as 280 hPa (three times that of Mars to a quarter that of the Earth). At such densities, nitrogen could flow across the surface as liquid. Just as sweat cools the body as it evaporates from the skin, the sublimation of Pluto's atmosphere cools its surface. Pluto has no or almost no troposphere; observations by New Horizons suggest only a thin tropospheric boundary layer. Its thickness at the place of measurement was 4 km, and the temperature was 37±3 K. The layer is not continuous.
In July 2019, an occultation by Pluto showed that its atmospheric pressure, against expectations, had fallen by 20% since 2016. In 2021, astronomers at the Southwest Research Institute confirmed the result using data from an occultation in 2018, which showed that light was appearing less gradually from behind Pluto's disc, indicating a thinning atmosphere.
The presence of methane, a powerful greenhouse gas, in Pluto's atmosphere creates a temperature inversion, with the average temperature of its atmosphere tens of degrees warmer than its surface, though observations by New Horizons have revealed Pluto's upper atmosphere to be far colder than expected (70 K, as opposed to about 100 K). Pluto's atmosphere is divided into roughly 20 regularly spaced haze layers up to 150 km high, thought to be the result of pressure waves created by airflow across Pluto's mountains.
Natural satellites
Pluto has five known natural satellites. The largest and closest to Pluto is Charon. First identified in 1978 by astronomer James Christy, Charon is the only moon of Pluto that may be in hydrostatic equilibrium. Charon's mass is sufficient to cause the barycenter of the Pluto–Charon system to be outside Pluto. Beyond Charon there are four much smaller circumbinary moons. In order of distance from Pluto they are Styx, Nix, Kerberos, and Hydra. Nix and Hydra were both discovered in 2005, Kerberos was discovered in 2011, and Styx was discovered in 2012. The satellites' orbits are circular (eccentricity < 0.006) and coplanar with Pluto's equator (inclination < 1°), and therefore tilted approximately 120° relative to Pluto's orbit. The Plutonian system is highly compact: the five known satellites orbit within the inner 3% of the region where prograde orbits would be stable.
The orbital periods of all Pluto's moons are linked in a system of orbital resonances and near-resonances. When precession is accounted for, the orbital periods of Styx, Nix, and Hydra are in an exact 18:22:33 ratio. There is a sequence of approximate ratios, 3:4:5:6, between the periods of Styx, Nix, Kerberos, and Hydra with that of Charon; the ratios become closer to being exact the further out the moons are.
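The near-resonant ratios quoted here are easy to reproduce from the moons' orbital periods. A minimal sketch, using commonly quoted periods in days (assumed inputs; only Charon's 6.387-day period appears elsewhere in this article):

```python
# Orbital-period ratios of Pluto's moons relative to Charon.
periods = {"Charon": 6.387, "Styx": 20.16, "Nix": 24.85,
           "Kerberos": 32.17, "Hydra": 38.20}   # days (assumed values)

for name, p in periods.items():
    print(f"{name}: {p / periods['Charon']:.2f}")
# Styx ~3.16, Nix ~3.89, Kerberos ~5.04, Hydra ~5.98: close to the
# 3:4:5:6 sequence, and closest to exact for the outermost moon, Hydra.
```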
The Pluto–Charon system is one of the few in the Solar System whose barycenter lies outside the primary body; the Patroclus–Menoetius system is a smaller example, and the Sun–Jupiter system is the only larger one. The similarity in size of Charon and Pluto has prompted some astronomers to call it a double dwarf planet. The system is also unusual among planetary systems in that each is tidally locked to the other, which means that Pluto and Charon always have the same hemisphere facing each other — a property shared by only one other known system, Eris and Dysnomia. From any position on either body, the other is always at the same position in the sky, or always obscured. This also means that the rotation period of each is equal to the time it takes the entire system to rotate around its barycenter.
Pluto's moons are hypothesized to have been formed by a collision between Pluto and a similar-sized body, early in the history of the Solar System. The collision released material that consolidated into the moons around Pluto.
Quasi-satellite
In 2012, it was calculated that 15810 Arawn could be a quasi-satellite of Pluto, a specific type of co-orbital configuration. According to the calculations, the object would be a quasi-satellite of Pluto for about 350,000 years out of every two-million-year period. Measurements made by the New Horizons spacecraft in 2015 made it possible to calculate the orbit of Arawn more accurately, and confirmed the earlier ones. However, it is not agreed upon among astronomers whether Arawn should be classified as a quasi-satellite of Pluto based on its orbital dynamics, since its orbit is primarily controlled by Neptune with only occasional perturbations by Pluto.
Origin
Pluto's origin and identity had long puzzled astronomers. One early hypothesis was that Pluto was an escaped moon of Neptune knocked out of orbit by Neptune's largest moon, Triton. This idea was eventually rejected after dynamical studies showed it to be impossible because Pluto never approaches Neptune in its orbit.
Pluto's true place in the Solar System began to reveal itself only in 1992, when astronomers began to find small icy objects beyond Neptune that were similar to Pluto not only in orbit but also in size and composition. This trans-Neptunian population is thought to be the source of many short-period comets. Pluto is the largest member of the Kuiper belt, a stable belt of objects located between 30 and 50 AU from the Sun. As of 2011, surveys of the Kuiper belt to magnitude 21 were nearly complete and any remaining Pluto-sized objects are expected to be beyond 100 AU from the Sun. Like other Kuiper-belt objects (KBOs), Pluto shares features with comets; for example, the solar wind is gradually blowing Pluto's surface into space. It has been claimed that if Pluto were placed as near to the Sun as Earth, it would develop a tail, as comets do. This claim has been disputed with the argument that Pluto's escape velocity is too high for this to happen. It has been proposed that Pluto may have formed as a result of the agglomeration of numerous comets and Kuiper-belt objects.
Though Pluto is the largest Kuiper belt object discovered, Neptune's moon Triton, which is larger than Pluto, is similar to it both geologically and atmospherically, and is thought to be a captured Kuiper belt object. Eris (see above) is about the same size as Pluto (though more massive) but is not strictly considered a member of the Kuiper belt population. Rather, it is considered a member of a linked population called the scattered disc.
Like other members of the Kuiper belt, Pluto is thought to be a residual planetesimal; a component of the original protoplanetary disc around the Sun that failed to fully coalesce into a full-fledged planet. Most astronomers agree that Pluto owes its position to a sudden migration undergone by Neptune early in the Solar System's formation. As Neptune migrated outward, it approached the objects in the proto-Kuiper belt, setting one in orbit around itself (Triton), locking others into resonances, and knocking others into chaotic orbits. The objects in the scattered disc, a dynamically unstable region overlapping the Kuiper belt, are thought to have been placed in their positions by interactions with Neptune's migrating resonances. A computer model created in 2004 by Alessandro Morbidelli of the Observatoire de la Côte d'Azur in Nice suggested that the migration of Neptune into the Kuiper belt may have been triggered by the formation of a 1:2 resonance between Jupiter and Saturn, which created a gravitational push that propelled both Uranus and Neptune into higher orbits and caused them to switch places, ultimately doubling Neptune's distance from the Sun. The resultant expulsion of objects from the proto-Kuiper belt could also explain the Late Heavy Bombardment 600 million years after the Solar System's formation and the origin of the Jupiter trojans. It is possible that Pluto had a near-circular orbit about 33 AU from the Sun before Neptune's migration perturbed it into a resonant capture. The Nice model requires that there were about a thousand Pluto-sized bodies in the original planetesimal disk, which included Triton and Eris.
Observation and exploration
Observation
Pluto's distance from Earth makes its in-depth study and exploration difficult. Pluto's visual apparent magnitude averages 15.1, brightening to 13.65 at perihelion. To see it, a telescope is required, with around 30 cm (12 in) of aperture being desirable. It looks star-like, without a visible disk, even in large telescopes, because its angular diameter is at most 0.11″.
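The 0.11″ maximum angular diameter follows from the small-angle formula θ ≈ d/D. A minimal sketch (the minimum Earth–Pluto distance of roughly 28.7 AU is an assumed input; the diameter is the value quoted earlier in this article):

```python
import math

# Maximum angular diameter of Pluto from Earth: theta = d / D (small angle).
AU_KM = 149_597_870.7
d = 2376.6                 # Pluto's diameter, km (from this article)
D = 28.7 * AU_KM           # approximate minimum Earth-Pluto distance, km (assumed)

theta_rad = d / D
print(f'{math.degrees(theta_rad) * 3600:.2f}"')   # ~0.11 arcseconds
```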
The earliest maps of Pluto, made in the late 1980s, were brightness maps created from close observations of eclipses by its largest moon, Charon. Observations were made of the change in the total average brightness of the Pluto–Charon system during the eclipses. For example, eclipsing a bright spot on Pluto makes a bigger total brightness change than eclipsing a dark spot. Computer processing of many such observations can be used to create a brightness map. This method can also track changes in brightness over time.
Better maps were produced from images taken by the Hubble Space Telescope (HST), which offered higher resolution, and showed considerably more detail, resolving variations several hundred kilometers across, including polar regions and large bright spots. These maps were produced by complex computer processing, which finds the best-fit projected maps for the few pixels of the Hubble images. These remained the most detailed maps of Pluto until the flyby of New Horizons in July 2015, because the two cameras on the HST used for these maps were no longer in service.
Exploration
The New Horizons spacecraft, which flew by Pluto in July 2015, is the first and so far only attempt to explore Pluto directly. Launched in 2006, it captured its first (distant) images of Pluto in late September 2006 during a test of the Long Range Reconnaissance Imager. The images, taken from a distance of approximately 4.2 billion kilometers, confirmed the spacecraft's ability to track distant targets, critical for maneuvering toward Pluto and other Kuiper belt objects. In early 2007 the craft made use of a gravity assist from Jupiter. New Horizons made its closest approach to Pluto on July 14, 2015, after a 3,462-day journey across the Solar System. Scientific observations of Pluto began five months before the closest approach and continued for at least a month after the encounter. Observations were conducted using a remote sensing package that included imaging instruments and a radio science investigation tool, as well as spectroscopic and other experiments. The scientific goals of New Horizons were to characterize the global geology and morphology of Pluto and its moon Charon, map their surface composition, and analyze Pluto's neutral atmosphere and its escape rate. On October 25, 2016, at 05:48 pm ET, the last bit of data (of a total of 50 billion bits, or 6.25 gigabytes) was received from New Horizons from its close encounter with Pluto.
Since the New Horizons flyby, scientists have advocated for an orbiter mission that would return to Pluto to fulfill new science objectives. They include mapping the surface at much higher resolution, observations of Pluto's smaller satellites, observations of how Pluto changes as it rotates on its axis, investigations of a possible subsurface ocean, and topographic mapping of Pluto's regions that are covered in long-term darkness due to its axial tilt. The last objective could be accomplished using laser pulses to generate a complete topographic map of Pluto. New Horizons principal investigator Alan Stern has advocated for a Cassini-style orbiter that would launch around 2030 (the 100th anniversary of Pluto's discovery) and use Charon's gravity to adjust its orbit as needed to fulfill science objectives after arriving at the Pluto system. The orbiter could then use Charon's gravity to leave the Pluto system and study more KBOs after all Pluto science objectives are completed. A conceptual study funded by the NASA Innovative Advanced Concepts (NIAC) program describes a fusion-enabled Pluto orbiter and lander based on the Princeton field-reversed configuration reactor (Fusion-Enabled Pluto Orbiter and Lander – Phase I Final Report (PDF), Stephanie Thomas, Princeton Satellite Systems, 2017).

New Horizons imaged all of Pluto's northern hemisphere, and the equatorial regions down to about 30° South. Higher southern latitudes have only been observed, at very low resolution, from Earth. Images from the Hubble Space Telescope in 1996 cover 85% of Pluto and show large albedo features down to about 75° South. This is enough to show the extent of the temperate-zone maculae. Later images had slightly better resolution, due to minor improvements in Hubble instrumentation. The equatorial region of the sub-Charon hemisphere of Pluto has only been imaged at low resolution, as New Horizons made its closest approach to the anti-Charon hemisphere.
Some albedo variations in the higher southern latitudes could be detected by New Horizons using Charon-shine (light reflected off Charon). The south polar region seems to be darker than the north polar region, but there is a high-albedo region in the southern hemisphere that may be a regional nitrogen or methane ice deposit.
See also
How I Killed Pluto and Why It Had It Coming
List of geological features on Pluto
Pluto in astrology
Pluto in fiction
Stats of planets in the Solar System
Notes
References
Further reading
External links
New Horizons homepage
Pluto Profile at NASA's Solar System Exploration site
NASA Pluto factsheet
Website of the observatory that discovered Pluto
Earth telescope image of Pluto system
Keck infrared with AO of Pluto system
Video – Pluto – viewed through the years (GIF) (NASA; animation; July 15, 2015).
Video – Pluto – "FlyThrough" (00:22; MP4) (YouTube) (NASA; animation; August 31, 2015).
"A Day on Pluto Video made from July 2015 New Horizon Images" Scientific American
NASA CGI video of Pluto flyover (July 14, 2017)
CGI video simulation of rotating Pluto by Seán Doran (see album for more)
Google Pluto 3D , interactive map of the dwarf planet
Articles containing video clips
19300218
Discoveries by Clyde Tombaugh
134340
Dwarf planets
Kozai mechanism
Minor planets visited by spacecraft
Pluto
134340
Plutinos
134340
Solar System | Pluto | [
"Astronomy"
] | 9,160 | [
"Outer space",
"Solar System"
] |
44,474 | https://en.wikipedia.org/wiki/Saturn | Saturn is the sixth planet from the Sun and the second largest in the Solar System, after Jupiter. It is a gas giant, with an average radius of about nine times that of Earth. It has one-eighth the average density of Earth, but is over 95 times more massive. Even though Saturn is almost as big as Jupiter, Saturn has less than a third of its mass. Saturn orbits the Sun at a distance of about 9.6 AU, with an orbital period of 29.45 years.
Saturn's interior is thought to be composed of a rocky core, surrounded by a deep layer of metallic hydrogen, an intermediate layer of liquid hydrogen and liquid helium, and an outer layer of gas. Saturn has a pale yellow hue, due to ammonia crystals in its upper atmosphere. An electrical current in the metallic hydrogen layer is thought to give rise to Saturn's planetary magnetic field, which is weaker than Earth's, but has a magnetic moment 580 times that of Earth because of Saturn's greater size. Saturn's magnetic field strength is about a twentieth that of Jupiter. The outer atmosphere is generally bland and lacking in contrast, although long-lived features can appear. Wind speeds on Saturn can reach .
The planet has a bright and extensive system of rings, composed mainly of ice particles, with a smaller amount of rocky debris and dust. At least 146 moons orbit the planet, of which 63 are officially named; these do not include the hundreds of moonlets in the rings. Titan, Saturn's largest moon and the second largest in the Solar System, is larger (and less massive) than the planet Mercury and is the only moon in the Solar System that has a substantial atmosphere.
Name and symbol
Saturn is named after the Roman god of wealth and agriculture, who was the father of the god Jupiter. Its astronomical symbol has been traced back to the Greek Oxyrhynchus Papyri, where it can be seen to be a Greek kappa-rho ligature with a horizontal stroke, as an abbreviation for Κρονος (Cronus), the Greek name for the planet. It later came to look like a lower-case Greek eta, with the cross added at the top in the 16th century to Christianize this pagan symbol.
The Romans named the seventh day of the week Saturday, Sāturni diēs, "Saturn's Day", for the planet Saturn.
Physical characteristics
Saturn is a gas giant, composed predominantly of hydrogen and helium. It lacks a definite surface, though it is likely to have a solid core. The planet's rotation makes it an oblate spheroid—a ball flattened at the poles and bulging at the equator. Its equatorial radius is more than 10% larger than the polar radius: 60,268 km versus 54,364 km (37,449 mi versus 33,780 mi). Jupiter, Uranus, and Neptune, the other giant planets in the Solar System, are less oblate. The combination of the bulge and the rotation rate means that the effective surface gravity along the equator is 74% of what it is at the poles and is lower than the surface gravity of Earth. However, the equatorial escape velocity, nearly 36 km/s, is much higher than that of Earth.
Saturn is the only planet of the Solar System that is less dense than water—about 30% less. Although Saturn's core is considerably denser than water, the average specific density of the planet is 0.69 g/cm3, because of the atmosphere. Jupiter has 318 times Earth's mass, and Saturn is 95 times Earth's mass. Together, Jupiter and Saturn hold 92% of the total planetary mass in the Solar System.
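That figure follows from Saturn's bulk properties. A quick sketch, assuming standard reference values for the mass and volumetric mean radius (neither is quoted in this article):

```python
import math

# Assumed reference values, not taken from the article text:
mass_kg = 5.683e26         # Saturn's mass
mean_radius_m = 5.8232e7   # volumetric mean radius (~58,232 km)

volume_m3 = (4 / 3) * math.pi * mean_radius_m**3
density_kg_m3 = mass_kg / volume_m3

print(f"mean density ≈ {density_kg_m3:.0f} kg/m^3")      # ~687 kg/m^3
print(f"relative to water: {density_kg_m3 / 1000:.2f}")  # ~0.69, i.e. about 30% less dense
```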
Internal structure
Despite consisting mostly of hydrogen and helium, most of Saturn's mass is not in the gas phase, because hydrogen becomes a non-ideal liquid when the density is above 0.01 g/cm3, a value reached at a radius containing 99.9% of Saturn's mass. The temperature, pressure, and density inside Saturn all rise steadily toward the core, which causes hydrogen to be a metal in the deeper layers.
Standard planetary models suggest that the interior of Saturn is similar to that of Jupiter, having a small rocky core surrounded by hydrogen and helium, with trace amounts of various volatiles. Analysis of the distortion shows that Saturn is substantially more centrally condensed than Jupiter and therefore contains much more material denser than hydrogen near its center. Saturn's central regions are about 50% hydrogen by mass, and Jupiter's are about 67% hydrogen.
This core is similar in composition to Earth, but is more dense. The examination of Saturn's gravitational moment, in combination with physical models of the interior, has allowed constraints to be placed on the mass of Saturn's core. In 2004, scientists estimated that the core must be 9–22 times the mass of Earth, which corresponds to a diameter of about 25,000 km. However, measurements of Saturn's rings suggest a much more diffuse core, with a mass equal to about 17 Earths and a radius equal to about 60% of Saturn's entire radius. This is surrounded by a thicker, liquid metallic hydrogen layer, followed by a liquid layer of helium-saturated molecular hydrogen, which gradually transitions to a gas as altitude increases. The outermost layer spans about 1,000 km and consists of gas.
Saturn has a hot interior, reaching 11,700 °C at its core, and radiates 2.5 times more energy into space than it receives from the Sun. Jupiter's thermal energy is generated by the Kelvin–Helmholtz mechanism of slow gravitational compression; but such a process alone may not be sufficient to explain heat production for Saturn, because it is less massive. An alternative or additional mechanism may be the generation of heat through the "raining out" of droplets of helium deep in Saturn's interior. As the droplets descend through the lower-density hydrogen, the process releases heat by friction and leaves Saturn's outer layers depleted of helium. These descending droplets may have accumulated into a helium shell surrounding the core. Rainfalls of diamonds have been suggested to occur within Saturn, as well as in Jupiter and the ice giants Uranus and Neptune.
Atmosphere
The outer atmosphere of Saturn contains 96.3% molecular hydrogen and 3.25% helium by volume. The proportion of helium is significantly deficient compared to the abundance of this element in the Sun. The quantity of elements heavier than helium (metallicity) is not known precisely, but the proportions are assumed to match the primordial abundances from the formation of the Solar System. The total mass of these heavier elements is estimated to be 19–31 times the mass of Earth, with a significant fraction located in Saturn's core region.
Trace amounts of ammonia, acetylene, ethane, propane, phosphine, and methane have been detected in Saturn's atmosphere. The upper clouds are composed of ammonia crystals, while the lower level clouds appear to consist of either ammonium hydrosulfide (NH4SH) or water. Ultraviolet radiation from the Sun causes methane photolysis in the upper atmosphere, leading to a series of hydrocarbon chemical reactions with the resulting products being carried downward by eddies and diffusion. This photochemical cycle is modulated by Saturn's annual seasonal cycle. Cassini observed a series of cloud features found in northern latitudes, nicknamed the "String of Pearls". These features are cloud clearings that reside in deeper cloud layers.
Cloud layers
Saturn's atmosphere exhibits a banded pattern similar to Jupiter's, but Saturn's bands are much fainter and are much wider near the equator. The nomenclature used to describe these bands is the same as on Jupiter. Saturn's finer cloud patterns were not observed until the flybys of the Voyager spacecraft during the 1980s. Since then, Earth-based telescopy has improved to the point where regular observations can be made.
The composition of the clouds varies with depth and increasing pressure. In the upper cloud layers, with temperatures in the range of 100–160 K and pressures extending between 0.5–2 bar, the clouds consist of ammonia ice. Water ice clouds begin at a level where the pressure is about 2.5 bar and extend down to 9.5 bar, where temperatures range from 185 to 270 K. Intermixed in this layer is a band of ammonium hydrosulfide ice, lying in the pressure range 3–6 bar with temperatures of 190–235 K. Finally, the lower layers, where pressures are between 10 and 20 bar and temperatures are 270–330 K, contains a region of water droplets with ammonia in aqueous solution.
Saturn's usually bland atmosphere occasionally exhibits long-lived ovals and other features common on Jupiter. In 1990, the Hubble Space Telescope imaged an enormous white cloud near Saturn's equator that was not present during the Voyager encounters, and in 1994 another smaller storm was observed. The 1990 storm was an example of a Great White Spot, a short-lived phenomenon that occurs once every Saturnian year, roughly every 30 Earth years, around the time of the northern hemisphere's summer solstice. Previous Great White Spots were observed in 1876, 1903, 1933, and 1960, with the 1933 storm being the best observed. The latest giant storm was observed in 2010. In 2015, researchers used the Very Large Array to study Saturn's atmosphere, and reported that they found "long-lasting signatures of all mid-latitude giant storms, a mixture of equatorial storms up to hundreds of years old, and potentially an unreported older storm at 70°N".
The winds on Saturn are the second fastest among the Solar System's planets, after Neptune's. Voyager data indicate peak easterly winds of 500 m/s (1,800 km/h). In images from the Cassini spacecraft during 2007, Saturn's northern hemisphere displayed a bright blue hue, similar to Uranus. The color was most likely caused by Rayleigh scattering. Thermography has shown that Saturn's south pole has a warm polar vortex, the only known example of such a phenomenon in the Solar System. Whereas temperatures on Saturn are normally −185 °C, temperatures on the vortex often reach as high as −122 °C, suspected to be the warmest spot on Saturn.
Hexagonal cloud patterns
A persisting hexagonal wave pattern around the north polar vortex in the atmosphere at about 78°N was first noted in the Voyager images. The sides of the hexagon are each about 14,500 km (9,000 mi) long, which is longer than the diameter of the Earth. The entire structure rotates with a period of 10 h 39 min 24 s (the same period as that of the planet's radio emissions), which is assumed to be equal to the period of rotation of Saturn's interior. The hexagonal feature does not shift in longitude like the other clouds in the visible atmosphere. The pattern's origin is a matter of much speculation. Most scientists think it is a standing wave pattern in the atmosphere. Polygonal shapes have been replicated in the laboratory through differential rotation of fluids.
HST imaging of the south polar region indicates the presence of a jet stream, but no strong polar vortex nor any hexagonal standing wave. NASA reported in November 2006 that Cassini had observed a "hurricane-like" storm locked to the south pole that had a clearly defined eyewall. Eyewall clouds had not previously been seen on any planet other than Earth. For example, images from the Galileo spacecraft did not show an eyewall in the Great Red Spot of Jupiter.
The south pole storm may have been present for billions of years. This vortex is comparable to the size of Earth, and it has winds of 550 km/h.
Magnetosphere
Saturn has an intrinsic magnetic field that has a simple, symmetric shape—a magnetic dipole. Its strength at the equator—0.2 gauss (20 μT)—is approximately one twentieth of that of the field around Jupiter and slightly weaker than Earth's magnetic field. As a result, Saturn's magnetosphere is much smaller than Jupiter's.
When Voyager 2 entered the magnetosphere, the solar wind pressure was high and the magnetosphere extended only 19 Saturn radii, or 1.1 million km (684,000 mi), although it enlarged within several hours, and remained so for about three days. Most probably, the magnetic field is generated similarly to that of Jupiter—by currents in the liquid metallic-hydrogen layer called a metallic-hydrogen dynamo. This magnetosphere is efficient at deflecting the solar wind particles from the Sun. The moon Titan orbits within the outer part of Saturn's magnetosphere and contributes plasma from the ionized particles in Titan's outer atmosphere. Saturn's magnetosphere, like Earth's, produces aurorae.
Orbit and rotation
The average distance between Saturn and the Sun is over 1.4 billion kilometers (9 AU). With an average orbital speed of 9.68 km/s, it takes Saturn 10,759 Earth days (or about 29½ years) to finish one revolution around the Sun. As a consequence, it forms a near 5:2 mean-motion resonance with Jupiter. The elliptical orbit of Saturn is inclined 2.48° relative to the orbital plane of the Earth. The perihelion and aphelion distances are, respectively, 9.195 and 9.957 AU, on average. The visible features on Saturn rotate at different rates depending on latitude, and multiple rotation periods have been assigned to various regions (as in Jupiter's case).
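The near 5:2 resonance can be verified from the two orbital periods. A small sketch; Jupiter's period (11.86 years) is a standard reference value not stated in this article:

```python
# Near 5:2 Jupiter-Saturn mean-motion resonance.
saturn_period_yr = 29.45
jupiter_period_yr = 11.86   # assumed reference value, not from the article text

ratio = saturn_period_yr / jupiter_period_yr
print(f"period ratio = {ratio:.3f} (an exact 5:2 resonance would be 2.500)")  # ~2.483
```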
Astronomers use three different systems for specifying the rotation rate of Saturn. System I has a period of 10 h 14 min 00 s (844.3°/d) and encompasses the Equatorial Zone, the South Equatorial Belt, and the North Equatorial Belt. The polar regions are considered to have rotation rates similar to System I. All other Saturnian latitudes, excluding the north and south polar regions, are indicated as System II and have been assigned a rotation period of 10 h 38 min 25.4 s (810.76°/d). System III refers to Saturn's internal rotation rate. Based on radio emissions from the planet detected by Voyager 1 and Voyager 2, System III has a rotation period of 10 h 39 min 22.4 s (810.8°/d). System III has largely superseded System II.
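The angular rates and periods are related by period = 360°/rate. A quick converter, applied to two of the rates quoted above:

```python
def rate_to_period(deg_per_day: float):
    """Convert a rotation rate in degrees per day to (hours, minutes, seconds)."""
    period_days = 360.0 / deg_per_day
    total_seconds = period_days * 86_400
    hours, rem = divmod(total_seconds, 3_600)
    minutes, seconds = divmod(rem, 60)
    return int(hours), int(minutes), seconds

for name, rate in [("System I", 844.3), ("System III", 810.8)]:
    h, m, s = rate_to_period(rate)
    print(f"{name}: {h} h {m} min {s:.0f} s")  # ~10h14m00s and ~10h39m22s
```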
A precise value for the rotation period of the interior remains elusive. While approaching Saturn in 2004, Cassini found that the radio rotation period of Saturn had increased appreciably, to approximately 10 h 45 min 45 s (± 36 s). An estimate of Saturn's rotation (as an indicated rotation rate for Saturn as a whole) based on a compilation of various measurements from the Cassini, Voyager, and Pioneer probes is 10 h 32 min 35 s. Studies of the planet's C Ring yield a rotation period of 10 h 33 min 38 s.
In March 2007, it was found that the variation in radio emissions from the planet did not match Saturn's rotation rate. This variance may be caused by geyser activity on Saturn's moon Enceladus. The water vapor emitted into Saturn's orbit by this activity becomes charged and creates a drag upon Saturn's magnetic field, slowing its rotation slightly relative to the rotation of the planet.
An apparent oddity for Saturn is that it does not have any known trojan asteroids. These are minor planets that orbit the Sun at the stable Lagrangian points, designated L4 and L5, located at 60° angles to the planet along its orbit. Trojan asteroids have been discovered for Mars, Jupiter, Uranus, and Neptune. Orbital resonance mechanisms, including secular resonance, are believed to be the cause of the missing Saturnian trojans.
Natural satellites
Saturn has 146 known moons, 63 of which have formal names. It is estimated that there are another 100 ± 30 outer irregular moons larger than 3 km (2 mi) in diameter. In addition, there is evidence of dozens to hundreds of moonlets with diameters of 40–500 meters in Saturn's rings, which are not considered to be true moons. Titan, the largest moon, comprises more than 90% of the mass in orbit around Saturn, including the rings. Saturn's second-largest moon, Rhea, may have a tenuous ring system of its own, along with a tenuous atmosphere.
Many of the other moons are small: 131 are less than 50 km in diameter. Traditionally, most of Saturn's moons have been named after Titans of Greek mythology. Titan is the only satellite in the Solar System with a major atmosphere, in which a complex organic chemistry occurs. It is the only satellite with hydrocarbon lakes.
On 6 June 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan, a possible precursor for life. On 23 June 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times.
Saturn's moon Enceladus, which seems similar in chemical makeup to comets, has often been regarded as a potential habitat for microbial life. Evidence of this possibility includes the satellite's salt-rich particles having an "ocean-like" composition that indicates most of Enceladus's expelled ice comes from the evaporation of liquid salt water. A 2015 flyby by Cassini through a plume on Enceladus found most of the ingredients to sustain life forms that live by methanogenesis.
In April 2014, NASA scientists reported the possible beginning of a new moon within the A Ring, which was imaged by Cassini on 15 April 2013.
Planetary rings
Saturn is probably best known for the system of planetary rings that makes it visually unique. The rings extend from 6,630 km to 120,700 km outward from Saturn's equator and average approximately 20 meters in thickness. They are composed predominantly of water ice, with trace amounts of tholin impurities and a peppered coating of approximately 7% amorphous carbon. The particles that make up the rings range in size from specks of dust up to 10 m. While the other gas giants also have ring systems, Saturn's is the largest and most visible.
There is a debate about the age of the rings. One side holds that they are ancient, created simultaneously with Saturn from the original nebular material (around 4.6 billion years ago) or shortly after the Late Heavy Bombardment (around 4.1 to 3.8 billion years ago). The other side holds that they are much younger, created around 100 million years ago. An MIT research team, supporting the latter theory, proposed that the rings are the remnants of a destroyed moon of Saturn, named "Chrysalis".
Beyond the main rings, at a distance of 12 million km (7.5 million mi) from the planet is the sparse Phoebe ring. It is tilted at an angle of 27° to the other rings and, like Phoebe, orbits in retrograde fashion.
Some of the moons of Saturn, including Pandora and Prometheus, act as shepherd moons to confine the rings and prevent them from spreading out. Pan and Atlas cause weak, linear density waves in Saturn's rings that have yielded more reliable calculations of their masses.
History of observation and exploration
The observation and exploration of Saturn can be divided into three phases: (1) pre-modern observations with the naked eye, (2) telescopic observations from Earth beginning in the 17th century, and (3) visitation by space probes, in orbit or on flyby. In the 21st century, telescopic observations continue from Earth (including Earth-orbiting observatories like the Hubble Space Telescope) and, until its 2017 retirement, from the Cassini orbiter around Saturn.
Pre-telescopic observation
Saturn has been known since prehistoric times, and in early recorded history it was a major character in various mythologies. Babylonian astronomers systematically observed and recorded the movements of Saturn. In ancient Greek, the planet was known as Phainon, and in Roman times it was known as the "star of Saturn" or the "star of the Sun" (i.e. Helios). In ancient Roman mythology, the planet was sacred to the agricultural god Saturn, from whom it takes its modern name. The Romans considered the god Saturnus the equivalent of the Greek god Cronus; in modern Greek, the planet retains the name Cronus (Kronos).
The Greek scientist Ptolemy based his calculations of Saturn's orbit on observations he made while it was in opposition. In Hindu astrology, there are nine astrological objects, known as Navagrahas. Saturn is known as "Shani" and judges everyone based on the good and bad deeds performed in life. Ancient Chinese and Japanese culture designated the planet Saturn as the "earth star" (). This was based on Five Elements which were traditionally used to classify natural elements.
In Hebrew, Saturn is called Shabbathai. Its angel is Cassiel. Its intelligence or beneficial spirit is 'Agȋȇl, and its darker spirit (demon) is Zȃzȇl. Zazel has been described as a great angel, invoked in Solomonic magic, who is "effective in love conjurations". In Ottoman Turkish, Urdu, and Malay, the name of Zazel is 'Zuhal', derived from the Arabic language.
Telescopic pre-spaceflight observations
Saturn's rings require at least a 15-mm-diameter telescope to resolve and thus were not known to exist until Christiaan Huygens saw them in 1655 and published his observations in 1659. Galileo, with his primitive telescope in 1610, incorrectly interpreted Saturn's not-quite-round appearance as two moons flanking the planet. It was not until Huygens used greater telescopic magnification that this notion was refuted, and the rings were truly seen for the first time. Huygens also discovered Saturn's moon Titan; Giovanni Domenico Cassini later discovered four other moons: Iapetus, Rhea, Tethys, and Dione. In 1675, Cassini discovered the gap now known as the Cassini Division.
No further discoveries of significance were made until 1789 when William Herschel discovered two further moons, Mimas and Enceladus. The irregularly shaped satellite Hyperion, which has a resonance with Titan, was discovered in 1848 by a British team.
In 1899, William Henry Pickering discovered Phoebe, a highly irregular satellite that does not rotate synchronously with Saturn as the larger moons do. Phoebe was the first such satellite found; it takes more than a year to orbit Saturn, in a retrograde orbit. During the early 20th century, research on Titan led to the confirmation in 1944 that it had a thick atmosphere—a feature unique among the Solar System's moons.
Spaceflight missions
Pioneer 11 flyby
Pioneer 11 made the first flyby of Saturn in September 1979, when it passed within 20,000 km (12,000 mi) of the planet's cloud tops. Images were taken of the planet and a few of its moons, although their resolution was too low to discern surface detail. The spacecraft also studied Saturn's rings, revealing the thin F-ring and the fact that dark gaps in the rings are bright when viewed at a high phase angle (towards the Sun), meaning that they contain fine light-scattering material. In addition, Pioneer 11 measured the temperature of Titan.
Voyager flybys
In November 1980, the Voyager 1 probe visited the Saturn system. It sent back the first high-resolution images of the planet, its rings and satellites. Surface features of various moons were seen for the first time. Voyager 1 performed a close flyby of Titan, increasing knowledge of the atmosphere of the moon. It proved that Titan's atmosphere is impenetrable at visible wavelengths; therefore no surface details were seen. The flyby changed the spacecraft's trajectory out of the plane of the Solar System.
Almost a year later, in August 1981, Voyager 2 continued the study of the Saturn system. More close-up images of Saturn's moons were acquired, as well as evidence of changes in the atmosphere and the rings. During the flyby, the probe's turnable camera platform stuck for a couple of days and some planned imaging was lost. Saturn's gravity was used to direct the spacecraft's trajectory towards Uranus.
The probes discovered and confirmed several new satellites orbiting near or within the planet's rings, as well as the small Maxwell Gap (a gap within the C Ring) and Keeler gap (a 42 km-wide gap in the A Ring).
Cassini–Huygens spacecraft
The Cassini–Huygens space probe entered orbit around Saturn on 1 July 2004. In June 2004, it conducted a close flyby of Phoebe, sending back high-resolution images and data. Cassini's flybys of Saturn's largest moon, Titan, captured radar images of large lakes and their coastlines, with numerous islands and mountains. The orbiter completed two Titan flybys before releasing the Huygens probe on 25 December 2004. Huygens descended onto the surface of Titan on 14 January 2005.
Starting in early 2005, scientists used Cassini to track lightning on Saturn. The power of the lightning is approximately 1,000 times that of lightning on Earth.
In 2006, NASA reported that Cassini had found evidence of liquid water reservoirs no more than tens of meters below the surface that erupt in geysers on Saturn's moon Enceladus. These jets of icy particles are emitted into orbit around Saturn from vents in the moon's south polar region. Over 100 geysers have been identified on Enceladus. In May 2011, NASA scientists reported that Enceladus "is emerging as the most habitable spot beyond Earth in the Solar System for life as we know it".
Cassini photographs have revealed a previously undiscovered planetary ring, outside the brighter main rings of Saturn and inside the G and E rings. The source of this ring is hypothesized to be the crashing of a meteoroid off Janus and Epimetheus. In July 2006, images were returned of hydrocarbon lakes near Titan's north pole, the presence of which was confirmed in January 2007. In March 2007, hydrocarbon seas were found near the north pole, the largest of which is almost the size of the Caspian Sea. In October 2006, the probe detected a cyclone-like storm, roughly 8,000 km in diameter, with an eyewall at Saturn's south pole.
From 2004 to 2 November 2009, the probe discovered and confirmed eight new satellites. In April 2013, Cassini sent back images of a hurricane at the planet's north pole 20 times larger than those found on Earth, with winds faster than 530 km/h (330 mph). On 15 September 2017, the Cassini–Huygens spacecraft performed the "Grand Finale" of its mission: a number of passes through gaps between Saturn and Saturn's inner rings. The atmospheric entry of Cassini ended the mission.
Possible future missions
The continued exploration of Saturn is still considered to be a viable option for NASA as part of their ongoing New Frontiers program of missions. NASA previously requested plans for a mission to Saturn that included the Saturn Atmospheric Entry Probe, as well as possible investigations into the habitability, and possible discovery of life, on Saturn's moons Titan and Enceladus; the Dragonfly mission is slated to explore Titan.
Observation
Saturn is the most distant of the five planets easily visible to the naked eye from Earth, the other four being Mercury, Venus, Mars, and Jupiter. (Uranus, and occasionally 4 Vesta, are visible to the naked eye in dark skies.) Saturn appears to the naked eye in the night sky as a bright, yellowish point of light. The mean apparent magnitude of Saturn is 0.46 with a standard deviation of 0.34. Most of the magnitude variation is due to the inclination of the ring system relative to the Sun and Earth. The brightest magnitude, −0.55, occurs near the time when the plane of the rings is inclined most highly, and the faintest magnitude, 1.17, occurs around the time when they are least inclined. It takes approximately 29.4 years for the planet to complete an entire circuit of the ecliptic against the background constellations of the zodiac. Most people will require an optical aid (very large binoculars or a small telescope) that magnifies at least 30 times to achieve an image of Saturn's rings that is clearly resolved. When Earth passes through the ring plane, which occurs twice every Saturnian year (roughly every 15 Earth years), the rings briefly disappear from view because they are so thin. Such a "disappearance" will next occur in 2025, but Saturn will be too close to the Sun for observations.
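The span between those two extreme magnitudes corresponds to a sizeable change in received flux. A short sketch using the standard magnitude-to-flux relation:

```python
# Brightness ratio implied by Saturn's extreme apparent magnitudes above.
brightest, faintest = -0.55, 1.17

# Standard relation: a difference of dm magnitudes is a flux ratio of 10**(0.4 * dm).
ratio = 10 ** (0.4 * (faintest - brightest))
print(f"Saturn is about {ratio:.1f}x brighter at its best than at its faintest")  # ~4.9x
```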
Saturn and its rings are best seen when the planet is at, or near, opposition, the configuration of a planet when it is at an elongation of 180°, and thus appears opposite the Sun in the sky. A Saturnian opposition occurs every year—approximately every 378 days—and results in the planet appearing at its brightest. Both the Earth and Saturn orbit the Sun on eccentric orbits, which means their distances from the Sun vary over time, and therefore so do their distances from each other, hence varying the brightness of Saturn from one opposition to the next. Saturn also appears brighter when the rings are angled such that they are more visible. For example, during the opposition of 17 December 2002, Saturn appeared at its brightest due to the favorable orientation of its rings relative to the Earth, even though Saturn was closer to the Earth and Sun in late 2003.
From time to time, Saturn is occulted by the Moon (that is, the Moon covers up Saturn in the sky). As with all the planets in the Solar System, occultations of Saturn occur in "seasons". Saturnian occultations will take place monthly for about a 12-month period, followed by about a five-year period in which no such activity is registered. The Moon's orbit is inclined by several degrees relative to Saturn's, so occultations will only occur when Saturn is near one of the points in the sky where the two planes intersect (both the length of Saturn's year and the 18.6-Earth-year nodal precession period of the Moon's orbit influence the periodicity).
In science fiction
In Christopher Nolan's 2014 science fiction epic Interstellar, in proximity to Saturn is a wormhole leading to a planetary system in another galaxy, whose central object is a black hole known as Gargantua. The Endurance team enters the wormhole in the hopes of finding a habitable planet for humanity to settle as conditions on Earth deteriorate. At the end of the film, Cooper Station, named for the main character, is shown in orbit around Saturn.
See also
Statistics of planets in the Solar System
Outline of Saturn
Notes
References
Further reading
External links
Saturn overview by NASA's Science Mission Directorate
Saturn fact sheet at the NASA Space Science Data Coordinated Archive
Saturnian System terminology by the IAU Gazetteer of Planetary Nomenclature
Cassini-Huygens legacy website by the Jet Propulsion Laboratory
Interactive 3D gravity simulation of the Cronian system
Astronomical objects known since antiquity
Gas giants
Outer planets
Solar System | Saturn | [
"Astronomy"
] | 6,311 | [
"Outer space",
"Solar System"
] |
44,475 | https://en.wikipedia.org/wiki/Uranus | Uranus is the seventh planet from the Sun. It is a gaseous cyan-coloured ice giant. Most of the planet is made of water, ammonia, and methane in a supercritical phase of matter, which astronomy calls "ice" or volatiles. The planet's atmosphere has a complex layered cloud structure and has the lowest minimum temperature (49 K; −224 °C) of all the Solar System's planets. It has a marked axial tilt of 82.23° with a retrograde rotation period of 17 hours and 14 minutes. This means that in an 84-Earth-year orbital period around the Sun, its poles get around 42 years of continuous sunlight, followed by 42 years of continuous darkness.
Uranus has the third-largest diameter and fourth-largest mass among the Solar System's planets. Based on current models, inside its volatile mantle layer is a rocky core, and surrounding it is a thick hydrogen and helium atmosphere. Trace amounts of hydrocarbons (thought to be produced via photolysis) and carbon monoxide along with carbon dioxide (thought to have originated from comets) have been detected in the upper atmosphere. There are many unexplained climate phenomena in Uranus's atmosphere, such as its peak wind speed of 900 km/h (560 mph), variations in its polar cap, and its erratic cloud formation. The planet also has very low internal heat compared to other giant planets, the cause of which remains unclear.
Like the other giant planets, Uranus has a ring system, a magnetosphere, and many natural satellites. The extremely dark ring system reflects only about 2% of the incoming light. Uranus's 28 natural satellites include 18 known regular moons, of which 13 are small inner moons. Further out are the larger five major moons of the planet: Miranda, Ariel, Umbriel, Titania, and Oberon. Orbiting at a much greater distance from Uranus are the ten known irregular moons. The planet's magnetosphere is highly asymmetric and has many charged particles, which may be the cause of the darkening of its rings and moons.
Uranus is visible to the naked eye, but it is very dim and was not classified as a planet until 1781, when it was first observed by William Herschel. About seven decades after its discovery, consensus was reached that the planet be named after the Greek god Uranus (Ouranos), one of the Greek primordial deities. As of 2024, it had been visited up close only once when in 1986 the Voyager 2 probe flew by the planet. Though nowadays it can be resolved and observed by telescopes, there is much desire to revisit the planet, as shown by Planetary Science Decadal Survey's decision to make the proposed Uranus Orbiter and Probe mission a top priority in the 2023–2032 survey, and the CNSA's proposal to fly by the planet with a subprobe of Tianwen-4.
History
Like the classical planets, Uranus is visible to the naked eye, but it was never recognised as a planet by ancient observers because of its dimness and slow orbit. William Herschel first observed Uranus on 13 March 1781, leading to its discovery as a planet, expanding the known boundaries of the Solar System for the first time in history and making Uranus the first planet classified as such with the aid of a telescope. The discovery of Uranus also effectively doubled the size of the known Solar System because Uranus is around twice the distance from the Sun as the planet Saturn.
Discovery
Before its recognition as a planet, Uranus had been observed on numerous occasions, albeit generally misidentified as a star. The earliest possible known observation was by Hipparchus, who in 128 BC might have recorded it as a star for his star catalogue that was later incorporated into Ptolemy's Almagest. The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as 34 Tauri. The French astronomer Pierre Charles Le Monnier observed Uranus at least twelve times between 1750 and 1769, including on four consecutive nights.
William Herschel observed Uranus on 13 March 1781 from the garden of his house at 19 New King Street in Bath, Somerset, England (now the Herschel Museum of Astronomy), and initially reported it (on 26 April 1781) as a comet. With a homemade 6.2-inch reflecting telescope, Herschel "engaged in a series of observations on the parallax of the fixed stars."
Herschel recorded in his journal: "In the quartile near ζ Tauri ... either [a] Nebulous star or perhaps a comet." On 17 March he noted: "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place." When he presented his discovery to the Royal Society, he continued to assert that he had found a comet, but also implicitly compared it to a planet:
Herschel notified the Astronomer Royal Nevil Maskelyne of his discovery and received this flummoxed reply from him on 23 April 1781: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it."
Although Herschel continued to describe his new object as a comet, other astronomers had already begun to suspect otherwise. Finnish-Swedish astronomer Anders Johan Lexell, working in Russia, was the first to compute the orbit of the new object. Its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn". Bode concluded that its near-circular orbit was more like a planet's than a comet's.
The object was soon universally accepted as a new planet. By 1783, Herschel acknowledged this to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System." In recognition of his achievement, King George III gave Herschel an annual stipend of £200 on condition that he moved to Windsor so that the Royal Family could look through his telescopes.
Name
The name Uranus references the ancient Greek deity of the sky Uranus, known as Caelus in Roman mythology, the father of Cronus (Saturn), grandfather of Zeus (Jupiter) and the great-grandfather of Ares (Mars); the Greek name was rendered as Ūranus in Latin. It is the only one of the eight planets whose English name derives from a figure of Greek mythology. The pronunciation preferred among astronomers places the stress on the first syllable with a long "u", as in Latin Ūranus, in contrast to the everyday pronunciation with stress on the second syllable and a long a; both are considered acceptable.
Consensus on the name was not reached until almost 70 years after the planet's discovery. During the original discussions following discovery, Maskelyne asked Herschel to "do the astronomical world the favour to give a name to your planet, which is entirely your own, [and] which we are so much obliged to you for the discovery of". In response to Maskelyne's request, Herschel decided to name the object Georgium Sidus (George's Star), or the "Georgian Planet", in honour of his new patron, King George III. He explained this decision in a letter to Joseph Banks:
Herschel's proposed name was not popular outside Britain and Hanover, and alternatives were soon proposed. Astronomer Jérôme Lalande proposed that it be named Herschel in honour of its discoverer. Swedish astronomer Erik Prosperin proposed the names Astraea, Cybele (now the names of asteroids), and Neptune, which would become the name of the next planet to be discovered. Georg Lichtenberg from Göttingen also supported Astraea (as Austräa), but she is traditionally associated with Virgo instead of Taurus. Neptune was supported by other astronomers who liked the idea of commemorating the victories of the British Royal Naval fleet in the course of the American Revolutionary War by calling the new planet either Neptune George III or Neptune Great Britain, a compromise Lexell suggested as well. Daniel Bernoulli suggested Hypercronius and Transaturnis. Minerva was also proposed.
In a March 1782 treatise, Johann Elert Bode proposed Uranus, the Latinised version of the Greek god of the sky, Ouranos. Bode argued that the name should follow the mythology so as not to stand out as different from the other planets, and that Uranus was an appropriate name as the father of the first generation of the Titans. He also noted the elegance of the name in that just as Saturn was the father of Jupiter, the new planet should be named after the father of Saturn. However, he was apparently unaware that Uranus was only the Latinised form of the deity's name, and the Roman equivalent was Caelus. In 1789, Bode's Royal Academy colleague Martin Klaproth named his newly discovered element uranium in support of Bode's choice. Ultimately, Bode's suggestion became the most widely used, and became universal in 1850 when HM Nautical Almanac Office, the final holdout, switched from using Georgium Sidus to Uranus.
Uranus has two astronomical symbols. The first, ⛢, was proposed by Johann Gottfried Köhler at Bode's request in 1782. Köhler suggested that the new planet be given the symbol for platinum, which had been described scientifically only 30 years before. As there was no alchemical symbol for platinum, he suggested a combination of the planetary-metal symbols ☉ (gold) and ♂ (iron), as platinum (or 'white gold') is found mixed with iron. Bode thought that an upright orientation, ⛢, fit better with the symbols for the other planets while remaining distinct. This symbol predominates in modern astronomical use in the rare cases that symbols are used at all. The second symbol, ♅, was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "a globe surmounted by the first letter of your surname". The second symbol is nearly universal in astrology.
In English-language popular culture, humour is often derived from the common pronunciation of Uranus's name, which resembles that of the phrase "your anus".
Uranus is called by a variety of names in other languages. Its name is literally translated as the "Heavenly King star" in Chinese (天王星), Japanese (天王星), Korean (천왕성), and Vietnamese (sao Thiên Vương). In Thai, its official name is a transliteration of "Uranus", as in English; its other Thai name means "Star of Mṛtyu", after the Sanskrit word for 'death'. In Mongolian, its name translates as 'King of the Sky', reflecting its namesake god's role as the ruler of the heavens. In Hawaiian, its name is a Hawaiian rendering of the name 'Herschel'.
Formation
It is argued that the differences between the ice giants and the gas giants arise from their formation history. The Solar System is hypothesised to have formed from a rotating disk of gas and dust known as the presolar nebula. Much of the nebula's gas, primarily hydrogen and helium, formed the Sun, and the dust grains collected together to form the first protoplanets. As the planets grew, some of them eventually accreted enough matter for their gravity to hold on to the nebula's leftover gas. The more gas they held onto, the larger they became; the larger they became, the more gas they held onto until a critical point was reached, and their size began to increase exponentially. The ice giants, with only a few Earth masses of nebular gas, never reached that critical point. Recent simulations of planetary migration have suggested that both ice giants formed closer to the Sun than their present positions, and moved outwards after formation (the Nice model).
Orbit and rotation
Uranus orbits the Sun once every 84 years. As viewed against the background of stars, since being discovered in 1781, the planet has returned to the point of its discovery northeast of the binary star Zeta Tauri twice—in March 1865 and March 1949—and will return to this location again in April 2033.
Its average distance from the Sun is roughly 20 AU (3 billion km). The difference between its minimum and maximum distance from the Sun is 1.8 AU, larger than that of any other planet, though not as large as that of dwarf planet Pluto. The intensity of sunlight varies inversely with the square of the distance—on Uranus (at about 20 times the distance from the Sun compared to Earth), it is about 1/400 the intensity of light on Earth.
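The 1/400 figure is simply the inverse-square law applied to the relative distance. A one-line check:

```python
# Inverse-square falloff of sunlight: intensity scales as 1 / distance**2.
distance_au = 20  # Uranus is roughly 20 times farther from the Sun than Earth
relative_intensity = 1 / distance_au**2
print(f"sunlight at Uranus ≈ 1/{distance_au**2} of that at Earth")  # 1/400
```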
The orbital elements of Uranus were first calculated in 1783 by Pierre-Simon Laplace. With time, discrepancies began to appear between predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus's orbit. On 23 September 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.
The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all giant planets, its upper atmosphere experiences strong winds in the direction of rotation. At some latitudes, such as about 60 degrees south, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.
Axial tilt
The Uranian axis of rotation is approximately parallel to the plane of the Solar System, with an axial tilt of 82.23°. Depending on which pole is considered north, the tilt can be described either as 82.23° or as 97.8°. The former follows the International Astronomical Union definition that the north pole is the pole lying on the same side of the invariable plane of the Solar System as Earth's north pole; Uranus has retrograde rotation when defined this way. Alternatively, under the convention in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation, Uranus's axial tilt may be given instead as 97.8°, which reverses which pole is considered north and which is considered south, and gives the planet prograde rotation. This gives it seasonal changes completely unlike those of the other planets. Pluto and asteroid 2 Pallas also have extreme axial tilts. Near the solstice, one pole faces the Sun continuously and the other faces away, with only a narrow strip around the equator experiencing a rapid day–night cycle, with the Sun low over the horizon. On the other side of Uranus's orbit, the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness. Near the time of the equinoxes, the Sun faces the equator of Uranus, giving a period of day–night cycles similar to those seen on most of the other planets.
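The two tilt figures describe the same axis with opposite choices of "north", so they are supplementary angles. A trivial check:

```python
# The two tilt conventions sum to 180 degrees.
iau_tilt = 82.23                  # IAU convention (rotation counts as retrograde)
right_hand_tilt = 180 - iau_tilt  # right-hand-rule convention (rotation is prograde)
print(f"right-hand-rule tilt = {right_hand_tilt:.2f} degrees")  # ~97.8
```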
One result of this axis orientation is that, averaged over the Uranian year, the near-polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism that causes this is unknown. The reason for Uranus's unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation. Research by Jacob Kegerreis of Durham University suggests that the tilt resulted from a rock larger than Earth crashing into the planet 3 to 4 billion years ago. Uranus's south pole was pointed almost directly at the Sun at the time of the Voyager 2 flyby in 1986.
Visibility from Earth
The mean apparent magnitude of Uranus is 5.68 with a standard deviation of 0.17, while the extremes are 5.38 and 6.03. This range of brightness is near the limit of naked-eye visibility. Much of the variability depends upon which planetary latitudes are illuminated by the Sun and viewed from the Earth. Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter. At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars. On larger amateur telescopes with an objective diameter of between 15 and 23 cm, Uranus appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.
Internal structure
Uranus's mass is roughly 14.5 times that of Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times that of Earth. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn. This value indicates that it is made primarily of various ices, such as water, ammonia, and methane. The total mass of ice in Uranus's interior is not precisely known, because different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses. Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses. The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.
The standard model of Uranus's structure is that it consists of three layers: a rocky (silicate/iron–nickel) core in the centre, an icy mantle in the middle, and an outer gaseous hydrogen/helium envelope. The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of the planet; the mantle comprises its bulk, with around 13.4 Earth masses, and the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus's radius. Uranus's core density is around 9 g/cm3, with a pressure in the centre of 8 million bars (800 GPa) and a temperature of about 5000 K. The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles. This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean.
The extreme pressure and temperature deep within Uranus may break up the methane molecules, with the carbon atoms condensing into crystals of diamond that rain down through the mantle like hailstones. This phenomenon is similar to diamond rains that are theorised by scientists to exist on Jupiter, Saturn, and Neptune. Very-high-pressure experiments at the Lawrence Livermore National Laboratory suggest that an ocean of metallic liquid carbon, perhaps with floating solid 'diamond-bergs', may comprise the base of the mantle.
The bulk compositions of Uranus and Neptune are different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.
Although the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow a scientific determination of which model is correct. The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers. For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25,559 ± 4 km and 24,973 ± 20 km, respectively. This surface is used throughout this article as a zero point for altitudes.
Internal heat
Uranus's internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux. Why Uranus's internal temperature is so low is still not understood. Neptune, which is Uranus's near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun, but Uranus radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06 ± 0.08 times the solar energy absorbed in its atmosphere. Uranus's heat flux is only 0.042 ± 0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2. The lowest temperature recorded in Uranus's tropopause is 49 K (−224 °C), making Uranus the coldest planet in the Solar System.
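The contrast between the two ice giants' heat budgets can be read directly off these ratios. A small sketch using the figures quoted above (the Uranus ratio of 1.06 is the restored reference value from the preceding paragraph):

```python
# Radiated-to-absorbed energy ratios for the two ice giants (quoted above).
ratios = {"Neptune": 2.61, "Uranus": 1.06}

for planet, r in ratios.items():
    excess_pct = (r - 1) * 100
    print(f"{planet} radiates {excess_pct:.0f}% more energy than it absorbs from the Sun")
```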
One of the hypotheses for this discrepancy suggests the Earth-sized impactor theorised to be behind Uranus's axial tilt left the planet with a depleted core temperature, as the impact caused Uranus to expel most of its primordial heat. Another hypothesis is that some form of barrier exists in Uranus's upper layers that prevents the core's heat from reaching the surface. For example, convection may take place in a set of compositionally different layers, which may inhibit upward heat transport; perhaps double diffusive convection is a limiting factor.
In a 2021 study, the ice giants' interior conditions were mimicked by compressing water that contained minerals such as olivine and ferropericlase, thus showing that large amounts of magnesium could be dissolved in the liquid interiors of Uranus and Neptune. If Uranus has more of this magnesium than Neptune, it could form a thermal insulation layer, thus potentially explaining the planet's low temperature.
Atmosphere
Although there is no well-defined solid surface within Uranus's interior, the outermost part of Uranus's gaseous envelope that is accessible to remote sensing is called its atmosphere. Remote-sensing capability extends down to roughly 300 km below the 1 bar level, with a corresponding pressure around 100 bar and temperature of 320 K. The tenuous thermosphere extends over two planetary radii from the nominal surface, which is defined to lie at a pressure of 1 bar. The Uranian atmosphere can be divided into three layers: the troposphere, between altitudes of −300 and 50 km and pressures from 100 to 0.1 bar (10 MPa to 10 kPa); the stratosphere, spanning altitudes between 50 and 4,000 km and pressures of between 0.1 and 10^−10 bar (10 kPa to 10 μPa); and the thermosphere extending from 4,000 km to as high as 50,000 km from the surface. There is no mesosphere.
Composition
The composition of Uranus's atmosphere is different from its bulk, consisting mainly of molecular hydrogen and helium. The helium molar fraction, i.e. the number of helium atoms per molecule of gas, is 0.15 ± 0.03 in the upper troposphere, which corresponds to a mass fraction of 0.26 ± 0.05. This value is close to the protosolar helium mass fraction of 0.275 ± 0.01, indicating that helium has not settled in its centre as it has in the gas giants. The third-most-abundant component of Uranus's atmosphere is methane (CH4). Methane has prominent absorption bands in the visible and near-infrared (IR), making Uranus aquamarine or cyan in colour. Methane molecules account for 2.3% of the atmosphere by molar fraction below the methane cloud deck at the pressure level of 1.3 bar; this represents about 20 to 30 times the carbon abundance found in the Sun.
The mixing ratio is much lower in the upper atmosphere due to its extremely low temperature, which lowers the saturation level and causes excess methane to freeze out. The abundances of less volatile compounds such as ammonia, water, and hydrogen sulfide in the deep atmosphere are poorly known. They are probably also higher than solar values. Along with methane, trace amounts of various hydrocarbons are found in the stratosphere of Uranus, which are thought to be produced from methane by photolysis induced by the solar ultraviolet (UV) radiation. They include ethane (C2H6), acetylene (C2H2), methylacetylene (CH3C2H), and diacetylene (C4H2). Spectroscopy has also uncovered traces of water vapour, carbon monoxide, and carbon dioxide in the upper atmosphere, which can only originate from an external source such as infalling dust and comets.
Troposphere
The troposphere is the lowest and densest part of the atmosphere and is characterised by a decrease in temperature with altitude. The temperature falls from about 320 K at the base of the nominal troposphere at −300 km to 53 K at 50 km. The temperatures in the coldest upper region of the troposphere (the tropopause) actually vary in the range between 49 and 57 K depending on planetary latitude. The tropopause region is responsible for the vast majority of Uranus's thermal far infrared emissions, thus determining its effective temperature of 59.1 ± 0.3 K.
The troposphere is thought to have a highly complex cloud structure; water clouds are hypothesised to lie in the pressure range of 50 to 100 bar, ammonium hydrosulfide clouds in the range of 20 to 40 bar, ammonia or hydrogen sulfide clouds at between 3 and 10 bar and finally directly detected thin methane clouds at 1 to 2 bar. The troposphere is a dynamic part of the atmosphere, exhibiting strong winds, bright clouds, and seasonal changes.
Upper atmosphere
The middle layer of the Uranian atmosphere is the stratosphere, where temperature generally increases with altitude from 53 K in the tropopause to between 800 and 850 K at the base of the thermosphere. The heating of the stratosphere is caused by absorption of solar UV and IR radiation by methane and other hydrocarbons, which form in this part of the atmosphere as a result of methane photolysis. Heat is also conducted from the hot thermosphere. The hydrocarbons occupy a relatively narrow layer at altitudes of between 100 and 300 km corresponding to a pressure range of 1,000 to 10 Pa and temperatures of between 75 and 170 K.
The most abundant hydrocarbons are methane, acetylene, and ethane, with mixing ratios of around 10^−7 relative to hydrogen. The mixing ratio of carbon monoxide is similar at these altitudes. Heavier hydrocarbons and carbon dioxide have mixing ratios three orders of magnitude lower. The abundance ratio of water is around 7×10^−9. Ethane and acetylene tend to condense in the colder lower part of the stratosphere and tropopause (below the 10 mbar level) forming haze layers, which may be partly responsible for the bland appearance of Uranus. The concentration of hydrocarbons in the Uranian stratosphere above the haze is significantly lower than in the stratospheres of the other giant planets.
The outermost layer of the Uranian atmosphere is the thermosphere and corona, which has a uniform temperature of around 800 to 850 K. The heat sources necessary to sustain such a high level are not understood, as neither the solar UV nor the auroral activity can provide the necessary energy to maintain these temperatures. The weak cooling efficiency due to the lack of hydrocarbons in the stratosphere above the 0.1 mbar pressure level may contribute too. In addition to molecular hydrogen, the thermosphere-corona contains many free hydrogen atoms. Their small mass and high temperatures explain why the corona extends as far as 50,000 km, or two Uranian radii, from its surface.
This extended corona is a unique feature of Uranus. Its effects include a drag on small particles orbiting Uranus, causing a general depletion of dust in the Uranian rings. The Uranian thermosphere, together with the upper part of the stratosphere, corresponds to the ionosphere of Uranus. Observations show that the ionosphere occupies altitudes from 2,000 to 10,000 km. The Uranian ionosphere is denser than that of either Saturn or Neptune, which may arise from the low concentration of hydrocarbons in the stratosphere. The ionosphere is mainly sustained by solar UV radiation and its density depends on the solar activity. Auroral activity is insignificant as compared to Jupiter and Saturn.
Climate
At ultraviolet and visible wavelengths, Uranus's atmosphere is bland in comparison to the other giant planets, even to Neptune, which it otherwise closely resembles. When Voyager 2 flew by Uranus in 1986, it observed a total of 10 cloud features across the entire planet. One proposed explanation for this dearth of features is that Uranus's internal heat is markedly lower than that of the other giant planets, being the coldest planet in the Solar System.
Banded structure, winds and clouds
In 1986, Voyager 2 found that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands. Their boundary is located at about −45° of latitude. A narrow band straddling the latitudinal range from −45 to −50° is the brightest large feature on its visible surface. It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar. Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north from the collar. In all other respects, Uranus looked like a dynamically dead planet in 1986.
Voyager 2 arrived during the height of Uranus's southern summer and could not observe the northern hemisphere. At the beginning of the 21st century, when the northern polar region came into view, the Hubble Space Telescope (HST) and Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere. So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar. In 2007, when Uranus passed its equinox, the southern collar almost disappeared, and a faint northern collar emerged near 45° of latitude. In 2023, a team employing the Very Large Array observed a dark collar at 80° latitude, and a bright spot at the north pole, indicating the presence of a polar vortex.
In the 1990s, the number of the observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available. Most were found in the northern hemisphere as it started to become visible. An early explanation—that bright clouds are easier to identify in its dark part, whereas in the southern hemisphere the bright collar masks them—was shown to be incorrect. Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter. They appear to lie at a higher altitude. The lifetime of clouds spans several orders of magnitude. Some small clouds live for hours; at least one southern cloud may have persisted since the Voyager 2 flyby. Recent observation also discovered that cloud features on Uranus have a lot in common with those on Neptune. For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature dubbed Uranus Dark Spot was imaged. The speculation is that Uranus is becoming more Neptune-like during its equinoctial season.
The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus. At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from . Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located. Closer to the poles, the winds shift to a prograde direction, flowing with Uranus's rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles. Wind speeds at −40° latitude range from . Because the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure. In contrast, in the northern hemisphere maximum speeds as high as are observed near +50° latitude.
In 1986, the Voyager 2 Planetary Radio Astronomy (PRA) experiment observed 140 lightning flashes, or Uranian electrostatic discharges (UEDs), at frequencies of 0.9–40 MHz. The UEDs were detected from 600,000 km of Uranus over 24 hours, most of which were not visible. Microphysical modelling suggests that Uranian lightning occurs in convective storms in deep tropospheric water clouds; if this is the case, the lightning will not be visible due to the thick cloud layers above. Uranian lightning has a power of around 10^8 W, emits 1×10^7–2×10^7 J of energy, and lasts an average of 120 ms. There is a possibility that the power of Uranian lightning varies greatly with the seasons, caused by changes in convection rates in the clouds. Uranian lightning is much more powerful than lightning on Earth and comparable to Jovian lightning. During the ice giant flybys, Voyager 2 detected lightning more clearly on Uranus than on Neptune due to the planet's lower gravity and possibly warmer deep atmosphere.
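As a rough consistency check (not from the source, simply arithmetic on the figures quoted above), the average flash duration and power imply an energy per flash of

$$E = P\,t \approx 10^{8}\ \mathrm{W} \times 0.12\ \mathrm{s} \approx 1.2\times10^{7}\ \mathrm{J},$$

which falls within the quoted range of 1×10^7–2×10^7 J.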
Seasonal variation
For a short period from March to May 2004, large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance. Observations included record-breaking wind speeds of and a persistent thunderstorm referred to as "Fourth of July fireworks". On 23 August 2006, researchers at the Space Science Institute (Boulder, Colorado) and the University of Wisconsin observed a dark spot on Uranus's surface, giving scientists more insight into Uranus's atmospheric activity. Why this sudden upsurge in activity occurred is not fully known, but it appears that Uranus's extreme axial tilt results in extreme seasonal variations in its weather. Determining the nature of this seasonal variation is difficult because good data on Uranus's atmosphere has existed for less than 84 years, or one full Uranian year. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in the brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes. A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s. Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice. The majority of this variability is thought to occur owing to changes in viewing geometry.
There are some indications that physical seasonal changes are happening on Uranus. Although Uranus is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of the seasonal change outlined above. During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim. This information implies that the visible pole brightens some time before the solstice and darkens after the equinox. Detailed analysis of the visible and microwave data revealed that the periodical changes in brightness are not completely symmetrical around the solstices, which also indicates a change in the meridional albedo patterns.
In the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright), whereas the northern hemisphere demonstrated increasing activity, such as cloud formations and stronger winds, bolstering expectations that it should brighten soon. This indeed happened in 2007 when it passed an equinox: a faint northern polar collar arose, and the southern collar became nearly invisible, although the zonal wind profile remained slightly asymmetric, with northern winds being somewhat slower than southern.
The mechanism of these physical changes is still not clear. Near the summer and winter solstices, Uranus's hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere. The bright collar at −45° latitude is also connected with methane clouds. Other changes in the southern polar region can be explained by changes in the lower cloud layers. The variation of the microwave emission from Uranus is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection. Now that the spring and autumn equinoxes are arriving on Uranus, the dynamics are changing and convection can occur again.
Magnetosphere
Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, scientists had expected the magnetic field of Uranus to be in line with the solar wind, because it would then align with Uranus's poles that lie in the ecliptic.
Voyager 2's observations revealed that Uranus's magnetic field is peculiar, both because it does not originate from its geometric centre, and because it is tilted at 59° from the axis of rotation. In fact, the magnetic dipole is shifted from Uranus's centre towards the south rotational pole by as much as one-third of the planetary radius. This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 μT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 μT). The average field at the surface is 0.23 gauss (23 μT).
Studies of Voyager 2 data in 2017 suggest that this asymmetry causes Uranus's magnetosphere to connect with the solar wind once a Uranian day, opening the planet to the Sun's particles. In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator. The dipole moment of Uranus is 50 times that of Earth. Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants. One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giants, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean. Another possible explanation for the magnetosphere's alignment is that there are oceans of liquid diamond in Uranus's interior that would deter the magnetic field.
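To illustrate how an off-centre, tilted dipole of this kind yields such a lopsided surface field, the following minimal Python sketch samples the field magnitude of a point dipole displaced one-third of a radius along the spin axis and tilted 59°. The dipole strength is simply rescaled so the average matches the quoted 0.23 gauss; the point-dipole approximation and single-meridian sampling are simplifying assumptions, not a fitted model of Uranus.

```python
import numpy as np

R = 1.0                                   # planetary radius (normalised)
tilt = np.radians(59)                     # dipole tilt from the spin axis
centre = np.array([0.0, 0.0, -R / 3.0])   # dipole shifted toward the south pole (-z)
m_hat = np.array([np.sin(tilt), 0.0, np.cos(tilt)])  # dipole axis (unit vector)

def dipole_field(r_vec, m_vec):
    """Point-dipole field, B ~ (3(m.r_hat)r_hat - m)/r^3; constants absorbed later."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (3.0 * np.dot(m_vec, r_hat) * r_hat - m_vec) / r**3

# Sample |B| on the surface along one meridian, pole to pole.
lats = np.radians(np.linspace(-90.0, 90.0, 181))
B_mag = np.array([
    np.linalg.norm(dipole_field(R * np.array([np.cos(lat), 0.0, np.sin(lat)]) - centre, m_hat))
    for lat in lats
])

B_mag *= 0.23 / B_mag.mean()   # rescale so the meridian average is 0.23 gauss
print(f"min |B| = {B_mag.min():.2f} G, max |B| = {B_mag.max():.2f} G")
# The offset alone produces roughly an order-of-magnitude pole-to-pole
# spread, the kind of surface-field asymmetry Voyager 2 measured.
```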
It is, however, unclear whether the observed asymmetry of Uranus's magnetic field represents the typical state of the magnetosphere, or a coincidence of observing it during unusual space weather conditions. A post-analysis of Voyager data from 2024 suggests that the strongly asymmetric shape of the magnetosphere observed during the fly-by represents an anomalous state, as the measured values of solar wind density at the time were unusually high, which could have compressed Uranus's magnetosphere. The interaction with the solar wind event could also explain the apparent paradox of the presence of strong electron radiation belts despite the otherwise low magnetospheric plasma density measured. Such conditions are estimated to occur less than 5% of the time.
Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail, and radiation belts. Overall, the structure of Uranus's magnetosphere is different from Jupiter's and more similar to Saturn's. Uranus's magnetotail trails behind it into space for millions of kilometres and is twisted by its sideways rotation into a long corkscrew.
Uranus's magnetosphere contains charged particles: mainly protons and electrons, with a small amount of H2+ ions. Many of these particles probably derive from the thermosphere. The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively. The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3. The particle population is strongly affected by the Uranian moons, which sweep through the magnetosphere, leaving noticeable gaps. The particle flux is high enough to cause darkening or space weathering of their surfaces on an astronomically rapid timescale of 100,000 years. This may be the cause of the uniformly dark colouration of the Uranian satellites and rings.
Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles. Unlike Jupiter's, Uranus's aurorae seem to be insignificant for the energy balance of the planetary thermosphere. The aurorae, or rather the infrared spectral emissions of their trihydrogen cations, have been studied in depth as of late 2023.
In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded by the Voyager 2 space probe during a flyby of the planet in 1986.
Moons
Uranus has 28 known natural satellites. The names of these satellites are chosen from characters in the works of Shakespeare and Alexander Pope. The five main satellites are Miranda, Ariel, Umbriel, Titania, and Oberon. The Uranian satellite system is the least massive among those of the giant planets; the combined mass of the five major satellites would be less than half that of Triton (largest moon of Neptune) alone. The largest of Uranus's satellites, Titania, has a radius of only , or less than half that of the Moon, but slightly more than Rhea, the second-largest satellite of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus's satellites have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light). They are ice–rock conglomerates composed of roughly 50% ice and 50% rock. The ice may include ammonia and carbon dioxide.
Among the Uranian satellites, Ariel appears to have the youngest surface, with the fewest impact craters, and Umbriel the oldest. Miranda has deep fault canyons, terraced layers, and a chaotic variation in surface ages and features. Miranda's past geologic activity is thought to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a former 3:1 orbital resonance with Umbriel. Extensional processes associated with upwelling diapirs are the likely origin of Miranda's 'racetrack'-like coronae. Ariel is thought to have once been held in a 4:1 resonance with Titania.
Uranus has at least one horseshoe orbiter occupying the Sun–Uranus L3 Lagrangian point—a gravitationally unstable region at 180° in its orbit—83982 Crantor. Crantor moves inside Uranus's co-orbital region on a complex, temporary horseshoe orbit. At least one further object has been proposed as a promising Uranus horseshoe librator candidate.
Rings
The Uranian rings are composed of extremely dark particles, which vary in size from micrometres to a fraction of a metre. Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow—they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in stable zones corresponding to the locations of the present rings.
William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, because the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to Earth, its red colour, and its apparent changes as Uranus travelled around the Sun. The ring system was definitively discovered on 10 March 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 (also known as HD 128598) by Uranus to study its atmosphere. When their observations were analysed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind Uranus. They concluded that there must be a ring system around Uranus. Later, they detected four additional rings. The rings were directly imaged when Voyager 2 passed Uranus in 1986. Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.
In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located twice as far from Uranus as the previously known rings. These new rings are so far from Uranus that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13. In April 2006, images of the new rings from the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other one red. One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light. In contrast, Uranus's inner rings appear grey.
Although the Uranian rings are very difficult to directly observe from Earth, advances in digital imaging have allowed several amateur astronomers to successfully photograph the rings with red or infrared filters; telescopes with apertures as small as may be able to detect the rings with proper imaging equipment.
Exploration
Launched in 1977, Voyager 2 made its closest approach to Uranus on 24 January 1986, coming within of the cloudtops, before continuing its journey to Neptune. The spacecraft studied the structure and chemical composition of Uranus's atmosphere, including its unique weather, caused by its extreme axial tilt. It made the first detailed investigations of its five largest moons and discovered 10 new ones. Voyager 2 examined all nine of the system's known rings and discovered two more. It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus's sideways orientation.
No other spacecraft has flown by Uranus since then, though there have been many proposed missions to revisit the Uranus system. The possibility of sending the Cassini spacecraft from Saturn to Uranus was evaluated during a mission extension planning phase in 2009, but was ultimately rejected in favour of destroying it in the Saturnian atmosphere, as it would have taken about twenty years to get to the Uranian system after departing Saturn. A Uranus entry probe could use Pioneer Venus Multiprobe heritage and descend to 1–5 atmospheres. A Uranus orbiter and probe was recommended by the 2013–2022 Planetary Science Decadal Survey published in 2011; the proposal envisaged launch during 2020–2023 and a 13-year cruise to Uranus. The committee's opinion was reaffirmed in 2022, when a Uranus probe/orbiter mission was placed at the highest priority, due to the lack of knowledge about ice giants. Most recently, the CNSA's Tianwen-4 Jupiter orbiter, launching in 2029, is planned to have a subprobe that will detach and get a gravity assist instead of entering orbit, flying by Uranus in March 2045 before heading to interstellar space. China has also proposed a potential Tianwen-5 mission that may orbit either Uranus or Neptune, though these plans have yet to come to fruition.
In culture
In modern astrology, the planet Uranus (symbol ) is the ruling planet of Aquarius; prior to the discovery of Uranus, the ruling planet of Aquarius was Saturn. Because Uranus is cyan in colour and is associated with electricity, the colour electric blue, which is close to cyan, is associated with the sign Aquarius.
The chemical element uranium, discovered in 1789 by the German chemist Martin Heinrich Klaproth, was named after the then-newly discovered Uranus.
Lydia Sigourney included her poem in her 1827 collection of poetry.
"Uranus, the Magician" is a movement in Gustav Holst's orchestral suite The Planets, written between 1914 and 1916.
Operation Uranus was the successful military operation in World War II by the Red Army to take back Stalingrad and marked the turning point in the land war against the Wehrmacht.
The lines "Then felt I like some watcher of the skies/When a new planet swims into his ken", from John Keats's "On First Looking into Chapman's Homer", are a reference to Herschel's discovery of Uranus.
See also
and , the only two known Uranus trojans
Colonisation of Uranus
Extraterrestrial diamonds (thought to be abundant in Uranus)
Outline of Uranus
Statistics of planets in the Solar System
Uranus in astrology
Uranus in fiction
Notes
References
Further reading
External links
Uranus at European Space Agency
Uranus at NASA's Solar System Exploration site
Uranus at Jet Propulsion Laboratory's planetary photojournal (photos)
Voyager at Uranus (photos)
Uranian system montage (photo)
Interactive 3D gravity simulation of the Uranian system
Discoveries by William Herschel
Gas giants
Ice giants
Objects observed by stellar occultation
Outer planets
Solar System | Uranus | [
"Astronomy"
] | 10,485 | [
"Outer space",
"Solar System"
] |
44,495 | https://en.wikipedia.org/wiki/Linear%20motor | A linear motor is an electric motor that has had its stator and rotor "unrolled", thus, instead of producing a torque (rotation), it produces a linear force along its length. However, linear motors are not necessarily straight. Characteristically, a linear motor's active section has ends, whereas more conventional motors are arranged as a continuous loop.
A typical mode of operation is as a Lorentz-type actuator, in which the applied force is linearly proportional to the current and the magnetic field: F = BIl for a straight conductor of length l carrying current I perpendicular to a field B.
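A minimal sketch of this proportionality follows; the field strength, current and coil geometry are illustrative assumptions, not data for any particular motor.

```python
B = 0.8    # magnetic flux density in tesla (assumed)
I = 5.0    # coil current in amperes (assumed)
L = 0.12   # active conductor length per turn in metres (assumed)
N = 100    # number of turns (assumed)

F = N * B * I * L   # force in newtons: linear in both I and B
print(f"force: {F:.1f} N")   # 48.0 N; doubling the current doubles the force
```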
Linear motors are most commonly found in high accuracy engineering applications.
Many designs have been put forward for linear motors, falling into two major categories, low-acceleration and high-acceleration linear motors. Low-acceleration linear motors are suitable for maglev trains and other ground-based transportation applications. High-acceleration linear motors are normally rather short, and are designed to accelerate an object to a very high speed; for example, see the coilgun.
High-acceleration linear motors are typically used in studies of hypervelocity collisions, as weapons, or as mass drivers for spacecraft propulsion. They are usually of the AC linear induction motor (LIM) design with an active three-phase winding on one side of the air-gap and a passive conductor plate on the other side. However, the direct current homopolar linear motor railgun is another high acceleration linear motor design. The low-acceleration, high speed and high power motors are usually of the linear synchronous motor (LSM) design, with an active winding on one side of the air-gap and an array of alternate-pole magnets on the other side. These magnets can be permanent magnets or electromagnets. The motor for the Shanghai maglev train, for instance, is an LSM.
Types
Brushless
Brushless linear motors are members of the synchronous motor family. They are typically used in standard linear stages or integrated into custom, high-performance positioning systems. They were invented in the late 1980s by Anwar Chitayat at Anorad Corporation, now Rockwell Automation, and helped improve the throughput and quality of industrial manufacturing processes.
Brush
Brushed linear motors were used in industrial automation applications prior to the invention of brushless linear motors. Compared with three-phase brushless motors, which are typically used today, brush motors operate on a single phase. Brush linear motors have a lower cost since they do not need moving cables or three-phase servo drives. However, they require higher maintenance since their brushes wear out.
Synchronous
In this design the rate of movement of the magnetic field is controlled, usually electronically, to track the motion of the rotor. For cost reasons synchronous linear motors rarely use commutators, so the rotor often contains permanent magnets, or soft iron. Examples include coilguns and the motors used on some maglev systems, as well as many other linear motors. In high precision industrial automation linear motors are typically configured with a magnet stator and a moving coil. A Hall effect sensor is attached to the rotor to track the magnetic flux of the stator. The electric current is typically provided from a stationary servo drive to the moving coil by a moving cable inside a cable carrier.
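The sketch below shows the idea of electronically tracking the moving part: given a position reading x (as a Hall sensor or encoder would supply) and an assumed magnet pole pitch, sinusoidal phase currents are generated so that the stator field stays locked to the moving coil. Names and values are illustrative assumptions, not a servo-drive API.

```python
import numpy as np

tau = 0.030      # magnet pole pitch in metres (assumed)
I_peak = 4.0     # commanded current amplitude in amperes (assumed)

def phase_currents(x, I=I_peak, pitch=tau):
    """Return (ia, ib, ic) that keep the stator field locked to position x."""
    theta = np.pi * x / pitch                 # electrical angle from position
    ia = I * np.sin(theta)
    ib = I * np.sin(theta - 2.0 * np.pi / 3.0)
    ic = I * np.sin(theta + 2.0 * np.pi / 3.0)
    return ia, ib, ic

for x in (0.000, 0.010, 0.020, 0.030):        # positions along the track, metres
    ia, ib, ic = phase_currents(x)
    print(f"x = {x * 1000:4.0f} mm: ia={ia:+.2f} A, ib={ib:+.2f} A, ic={ic:+.2f} A")
# The three currents always sum to zero and advance one full electrical
# cycle per pole pair (2*tau) of travel.
```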
Induction
In this design, the force is produced by a moving linear magnetic field acting on conductors in the field. Any conductor, be it a loop, a coil or simply a piece of plate metal, that is placed in this field will have eddy currents induced in it thus creating an opposing magnetic field, in accordance with Lenz's law. The two opposing fields will repel each other, thus creating motion as the magnetic field sweeps through the metal.
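For a linear induction motor the travelling field advances one pole pair (twice the pole pitch τ) per supply cycle, so its synchronous speed is v_s = 2fτ, and thrust requires slip between the field and the plate. A short sketch with assumed values:

```python
f = 50.0      # supply frequency in hertz (assumed)
tau = 0.10    # pole pitch in metres (assumed)

v_s = 2.0 * f * tau   # the field sweeps one pole pair (2*tau) per cycle
print(f"synchronous speed: {v_s:.1f} m/s")

for v in (0.0, 5.0, 9.0, v_s):            # speeds of the conductor plate
    slip = (v_s - v) / v_s
    print(f"plate at {v:4.1f} m/s -> slip = {slip:.2f}")
# At slip = 0 the plate moves with the field: no relative field motion, no
# induced eddy currents, and therefore no thrust (Lenz's law, as above).
```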
Homopolar
In this design a large current is passed through a metal sabot across sliding contacts that are fed by two rails. The magnetic field this generates causes the metal to be projected along the rails.
Tubular
Tubular linear motors offer an efficient and compact design applicable to the replacement of pneumatic cylinders.
Piezoelectric
Piezoelectric drive is often used to drive small linear motors.
History
Low acceleration
The history of linear electric motors can be traced back at least as far as the 1840s, to the work of Charles Wheatstone at King's College London, but Wheatstone's model was too inefficient to be practical. A feasible linear induction motor, for driving trains or lifts, is described in a 1905 patent by the inventor Alfred Zehden of Frankfurt-am-Main. The German engineer Hermann Kemper built a working model in 1935. In the late 1940s, Dr. Eric Laithwaite of Manchester University, later Professor of Heavy Electrical Engineering at Imperial College in London, developed the first full-size working model.
In a single-sided version, the magnetic repulsion forces the conductor away from the stator, levitating it, and carrying it along in the direction of the moving magnetic field. He called later versions of it the magnetic river. The technology was later applied in 1984 to the Air-Rail Link shuttle between Birmingham's airport and an adjacent train station.
Because of these properties, linear motors are often used in maglev propulsion, as in the Japanese Linimo magnetic levitation train line near Nagoya. However, linear motors have been used independently of magnetic levitation, as in the Bombardier Innovia Metro systems worldwide and a number of modern Japanese subways, including Tokyo's Toei Ōedo Line.
Similar technology is also used in some roller coasters with modifications but, at present, is still impractical on street running trams, although this, in theory, could be done by burying it in a slotted conduit.
Outside of public transportation, vertical linear motors have been proposed as lifting mechanisms in deep mines, and the use of linear motors is growing in motion control applications. They are also often used on sliding doors, such as those of low-floor trams like the Alstom Citadis and the Socimi Eurotram. Dual-axis linear motors also exist. These specialized devices have been used to provide direct X-Y motion for precision laser cutting of cloth and sheet metal, automated drafting, and cable forming. Most linear motors in use are LIM (linear induction motor) or LSM (linear synchronous motor). Linear DC motors are not used due to their higher cost, and linear SRM suffers from poor thrust. So for long runs in traction LIM is mostly preferred, and for short runs LSM is mostly preferred.
High acceleration
High-acceleration linear motors have been suggested for a number of uses.
They have been considered for use as weapons, since current armour-piercing ammunition tends to consist of small rounds with very high kinetic energy, for which just such motors are suitable. Many amusement park launched roller coasters now use linear induction motors to propel the train at a high speed, as an alternative to using a lift hill.
The United States Navy is also using linear induction motors in the Electromagnetic Aircraft Launch System that will replace traditional steam catapults on future aircraft carriers. They have also been suggested for use in spacecraft propulsion. In this context they are usually called mass drivers. The simplest way to use mass drivers for spacecraft propulsion would be to build a large mass driver that can accelerate cargo up to escape velocity, though RLV launch assist like StarTram to low Earth orbit has also been investigated.
High-acceleration linear motors are difficult to design for a number of reasons. They require large amounts of energy in very short periods of time. One rocket launcher design calls for 300 GJ for each launch in the space of less than a second. Normal electrical generators are not designed for this kind of load, but short-term electrical energy storage methods can be used. Capacitors are bulky and expensive but can supply large amounts of energy quickly. Homopolar generators can be used to convert the kinetic energy of a flywheel into electric energy very rapidly. High-acceleration linear motors also require very strong magnetic fields; in fact, the magnetic fields are often too strong to permit the use of superconductors. However, with careful design, this need not be a major problem.
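A back-of-envelope check of these figures: the 300 GJ per launch and sub-second window come from the text above, while the capacitor working voltage is an assumed illustrative value.

```python
E = 300e9   # energy per launch in joules (figure quoted above)
t = 1.0     # launch window in seconds (upper bound quoted above)
print(f"mean electrical power required: {E / t / 1e9:.0f} GW")

# Sizing a hypothetical capacitor bank for the same energy, from E = C*V**2/2:
V = 10e3            # assumed working voltage in volts
C = 2.0 * E / V**2  # required capacitance in farads
print(f"capacitance needed at {V / 1e3:.0f} kV: {C:.0f} F")
# Thousands of farads at 10 kV: feasible but, as noted, bulky and expensive.
```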
Two different basic designs have been invented for high-acceleration linear motors: railguns and coilguns.
Usage
Linear motors are commonly used for actuating high performance industrial automation equipment. Their advantage, unlike any other commonly used actuator, such as a ball screw, timing belt, or rack and pinion, is that they provide any combination of high precision, high velocity, high force and long travel.
Linear motors are widely used. One of the major uses of linear motors is for propelling the shuttle in looms.
A linear motor has been used for sliding doors and various similar actuators. They have been used for baggage handling and even large-scale bulk materials transport.
Linear motors are sometimes used to create rotary motion. For example, they have been used at observatories to deal with the large radius of curvature.
Linear motors may also be used as an alternative to conventional chain-run lift hills for roller coasters. The coaster Maverick at Cedar Point uses one such linear motor in place of a chain lift.
A linear motor has been used to accelerate cars for crash tests.
Industrial automation
The combination of high precision, high velocity, high force, and long travel makes brushless linear motors attractive for driving industrial automation equipment. They serve industries and applications such as steppers in semiconductor lithography, surface-mount technology placement in electronics, Cartesian coordinate robots in automotive manufacturing, chemical milling in aerospace, electron microscopes in optics, laboratory automation in healthcare, and pick-and-place machines in food and beverage handling.
Machine tools
Synchronous linear motor actuators, used in machine tools, provide high force, high velocity, high precision and high dynamic stiffness, resulting in high smoothness of motion and low settling time. They may reach velocities of 2 m/s and micron-level accuracies, with short cycle times and a smooth surface finish.
Train propulsion
Conventional rails
All of the following applications are in rapid transit and have the active part of the motor in the cars.
Bombardier Innovia Metro
Originally developed in the late 1970s by UTDC in Canada as the Intermediate Capacity Transit System (ICTS). A test track was constructed in Millhaven, Ontario, for extensive testing of prototype cars, after which three lines were constructed:
Line 3 Scarborough in Toronto (opened 1985; closed 2023)
Expo Line of the Vancouver SkyTrain (opened 1985 and extended in 1994)
Detroit People Mover in Detroit (opened 1987)
ICTS was sold to Bombardier Transportation in 1991 and later known as Advanced Rapid Transit (ART) before adopting its current branding in 2011. Since then, several more installations have been made:
Kelana Jaya Line in Kuala Lumpur (opened 1998 and extended in 2016)
Millennium Line of the Vancouver SkyTrain (opened 2002 and extended in 2016)
AirTrain JFK in New York (opened 2003)
Airport Express (Beijing Subway) (opened 2008)
Everline in Yongin, South Korea (opened 2013)
All Innovia Metro systems use third rail electrification.
Japanese Linear Metro
One of the biggest challenges faced by Japanese railway engineers in the 1970s to the 1980s was the ever-increasing construction cost of subways. In response, the Japan Subway Association began studying the feasibility of the "mini-metro" for meeting urban traffic demand in 1979. In 1981, the Japan Railway Engineering Association studied the use of linear induction motors for such small-profile subways, and by 1984 it was investigating the practical applications of linear motors for urban rail with the Japanese Ministry of Land, Infrastructure, Transport and Tourism. In 1988, a successful demonstration was made with the Limtrain at Saitama and influenced the eventual adoption of the linear motor for the Nagahori Tsurumi-ryokuchi Line in Osaka and Toei Line 12 (present-day Toei Oedo Line) in Tokyo.
To date, the following subway lines in Japan use linear motors and use overhead lines for power collection:
Two Osaka Metro lines in Osaka:
Nagahori Tsurumi-ryokuchi Line (opened 1990)
Imazatosuji Line (opened 2006)
Toei Ōedo Line in Tokyo (opened 2000)
Kaigan Line of the Kobe Municipal Subway (opened 2001)
Nanakuma Line of the Fukuoka City Subway (opened 2005)
Yokohama Municipal Subway Green Line (opened 2008)
Sendai Subway Tōzai Line (opened 2015)
In addition, Kawasaki Heavy Industries has also exported the Linear Metro to the Guangzhou Metro in China; all of the Linear Metro lines in Guangzhou use third rail electrification:
Line 4 (opened 2005)
Line 5 (opened 2009).
Line 6 (opened 2013)
Monorail
There is at least one known monorail system which is not magnetically levitated, but nonetheless uses linear motors. This is the Moscow Monorail. Originally, traditional motors and wheels were to be used. However, it was discovered during test runs that the proposed motors and wheels would fail to provide adequate traction under some conditions, for example, when ice appeared on the rail. Hence, wheels are still used, but the trains use linear motors to accelerate and slow down. This is possibly the only use of such a combination, due to the lack of such requirements for other train systems.
The TELMAGV is a prototype of a monorail system that is also not magnetically levitated but uses linear motors.
Magnetic levitation
High-speed trains:
Transrapid: first commercial use in Shanghai (opened in 2004)
SCMaglev, under construction in Japan (fastest train in the world, planned to open by 2027)
Rapid transit:
Birmingham Airport, UK (opened 1984, closed 1995)
M-Bahn in Berlin, Germany (opened in 1989, closed in 1991)
Daejeon EXPO, Korea (ran only 1993)
HSST: Linimo line in Aichi Prefecture, Japan (opened 2005)
Incheon Airport Maglev (opened July 2014)
Changsha Maglev Express (opened 2016)
S1 line of Beijing Subway (opened 2017)
Amusement rides
There are many roller coasters throughout the world that use LIMs to accelerate the ride vehicles, the first being Flight of Fear at Kings Island and Kings Dominion, both opening in 1996. Battlestar Galactica: Human VS Cylon and Revenge of the Mummy at Universal Studios Singapore opened in 2010; both use LIMs to accelerate the trains at certain points in the rides.
Revenge of the Mummy is also located at Universal Studios Hollywood and Universal Studios Florida. The Incredible Hulk Coaster and VelociCoaster at Universal Islands of Adventure also use linear motors. At Walt Disney World, Rock 'n' Roller Coaster Starring Aerosmith at Disney's Hollywood Studios and Guardians of the Galaxy: Cosmic Rewind at Epcot both use LSMs to launch their ride vehicles into their indoor ride enclosures.
In 2023, the hydraulically launched roller coaster Top Thrill Dragster at Cedar Point in Ohio, USA, was renovated and its hydraulic launch replaced with a weaker LSM-based multi-launch system, which creates less g-force.
Aircraft launching
Electromagnetic Aircraft Launch System
Proposed and research
Launch loop – A proposed system for launching vehicles into space using a linear motor powered loop
StarTram – Concept for a linear motor on extreme scale
Tether cable catapult system
Aérotrain S44 – A suburban commuter hovertrain prototype
Research Test Vehicle 31 – A hovercraft-type vehicle guided by a track
Hyperloop – a conceptual high-speed transportation system put forward by entrepreneur Elon Musk
Elevator
Lift
Magway - a UK freight delivery system under research and development that aims to deliver goods in pods via 90 cm diameter pipework under and over ground.
See also
Linear actuator
Linear induction motor
Linear motion
Maglev
Online Electric Vehicle
Reciprocating electric motor
Sawyer motor
Tubular linear motor
References
External links
Design equations, spreadsheet, and drawings
Motor torque calculation
Overview of Electromagnetic Guns
Electric motors
English inventions
Linear motion | Linear motor | [
"Physics",
"Technology",
"Engineering"
] | 3,216 | [
"Physical phenomena",
"Engines",
"Electric motors",
"Motion (physics)",
"Electrical engineering",
"Linear motion"
] |
44,547 | https://en.wikipedia.org/wiki/Tie%20rod | A tie rod or tie bar (also known as a hanger rod if vertical) is a slender structural unit used as a tie and (in most applications) capable of carrying tensile loads only.
It is any rod or bar-shaped structural member designed to prevent the separation of two parts, as in a vehicle.
Subtypes and examples of applications
In airplane structures, tie rods are sometimes used in the fuselage or wings.
Tie rods are often used in steel structures, such as bridges, industrial buildings, tanks, towers, and cranes.
Sometimes tie rods are retrofitted to bowing or subsiding masonry walls (brick, block, stone, etc.) to keep them from succumbing to lateral forces. The ends of the rods are secured by anchor plates which may be visible from the outside.
The rebar used in reinforced concrete is not referred to as a "tie rod", but it essentially performs some of the same tension-force-counteracting purposes that tie rods perform.
In automobiles, the tie rods are part of the steering mechanism. They differ from the archetypal tie rod by both pushing and pulling (operating in both tension and compression). In the UK, these items are generally referred to as track rods.
In steam locomotives, a tie rod is a rod that connects several driving wheels to transmit the power from the connecting rod.
Tie rods known as sag rods are sometimes used in connection with purlins to take the component of the loads which is parallel to the roof.
The spokes of bicycle wheels are tie rods.
In ships, tie rods are bolts which keep the whole engine structure under compression. They provide for fatigue strength. They also provide for proper running gear alignment which prevents fretting. They help to reduce the bending stress being transmitted to the transverse girder.
Physics and engineering principles
In general, because the ratio of the typical tie rod's length to its cross section is usually very large, it would buckle under the action of compressive forces. The working strength of a tie rod is the product of the allowable working stress and the rod's minimum cross-sectional area.
If threads are cut into a cylindrical rod, that minimum area occurs at the root of the thread. Often rods are upset (made thicker at the ends) so that the tie rod does not become weaker when threads are cut into it.
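A worked sketch of this rule, using assumed (not code-book) numbers for the allowable stress and thread-root diameter, shows why the threaded end governs and why upset ends help.

```python
import math

sigma_allow = 150e6   # allowable working stress in Pa (assumed)
d_shank = 0.020       # nominal rod diameter in metres (assumed)
d_root = 0.017        # diameter at the thread root in metres (assumed)

A_root = math.pi / 4.0 * d_root**2    # minimum section, at the thread root
A_shank = math.pi / 4.0 * d_shank**2  # full section of the plain shank

print(f"working strength, threaded end: {sigma_allow * A_root / 1e3:.0f} kN")
print(f"capacity of the plain shank:    {sigma_allow * A_shank / 1e3:.0f} kN")
# The threaded end governs the rod's strength; upsetting (thickening) the
# ends before threading restores the full shank capacity.
```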
Tie rods may be connected at the ends in various ways, but it is desirable that the strength of the connection should be at least equal to the strength of the rod. The ends may be threaded and passed through drilled holes or shackles and retained by nuts screwed on the ends. If the ends are threaded right- and left-handed, the length between points of loading may be altered by turning the rod in the nuts; this also provides a means of pre-tensioning the rod at will, since turning the rod changes its working length. A turnbuckle will accomplish the same purpose. The ends may also be swaged to receive a fitting which is connected to the supports. Another way of making end connections is to forge an eye or hook on the rod.
An infamous structural failure involving tie rods is the Hyatt Regency walkway collapse in Kansas City, Missouri, on July 17, 1981. The hotel had a large atrium with three walkways crossing it suspended from tie rods. Construction errors led to several of the walkways collapsing, killing 114 people and injuring over 200.
Geometry
Osgood and Graustein used the rectangular hyperbola, its conjugate hyperbola, and conjugate diameters to rationalize tie rods at 15 degree radial spacing, to a square of girders, from its center. The tie-rods to the corners (45°) correspond to the asymptotes, while the pair at 15° and 75° are conjugate, as are the pair at 30° and 60°. According to this model in linear elasticity, the application of a load compressing the square results in a deformation where the tie rods maintain their conjugate relations.
See also
Guy-wire
Tie (engineering)
References
Building materials | Tie rod | [
"Physics",
"Engineering"
] | 837 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
44,568 | https://en.wikipedia.org/wiki/Herbivore | A herbivore is an animal anatomically and physiologically evolved to feed on plants, especially upon vascular tissues such as foliage, fruits or seeds, as the main component of its diet. The term more broadly also encompasses animals that eat non-vascular autotrophs such as mosses, algae and lichens, but does not include those feeding on decomposed plant matter (i.e. detritivores) or macrofungi (i.e. fungivores).
As a result of their plant-based diet, herbivorous animals typically have mouth structures (jaws or mouthparts) well adapted to mechanically break down plant materials, and their digestive systems have special enzymes (e.g. amylase and cellulase) to digest polysaccharides. Grazing herbivores such as horses and cattle have wide flat-crowned teeth that are better adapted for grinding grass, tree bark and other tougher lignin-containing materials, and many of them evolved rumination or cecotropic behaviors to better extract nutrients from plants. A large percentage of herbivores also have mutualistic gut flora made up of bacteria and protozoans that help to degrade the cellulose in plants, whose heavily cross-linked polymer structure makes it far more difficult to digest than the protein- and fat-rich animal tissues that carnivores eat.
Etymology
Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from Latin herba 'small plant, herb' and vora, from vorare 'to eat, devour'.
Definition and related terms
Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers.
Herbivory is usually limited to animals that eat plants. Insect herbivory can cause a variety of physical and metabolic alterations in the way the host plant interacts with itself and other surrounding biotic factors. Fungi, bacteria, and protists that feed on living plants are usually termed plant pathogens (plant diseases), while fungi and microbes that feed on dead plants are described as saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme.
Evolution of herbivory
The understanding of herbivory in geological time comes from three sources: fossilized plants, which may preserve evidence of defence (such as spines), or herbivory-related damage; the observation of plant debris in fossilised animal faeces; and the construction of herbivore mouthparts.
Although herbivory was long thought to be a Mesozoic phenomenon, fossils have shown that plants were being consumed by arthropods within less than 20 million years after the first land plants evolved. Insects fed on the spores of early Devonian plants, and the Rhynie chert also provides evidence that organisms fed on plants using a "pierce and suck" technique.
During the next 75 million years, plants evolved a range of more complex organs, such as roots and seeds. There is no evidence of any organism being fed upon until the middle-late Mississippian, . There was a gap of 50 to 100 million years between the time each organ evolved and the time organisms evolved to feed upon them; this may be due to the low levels of oxygen during this period, which may have suppressed evolution. Further than their arthropod status, the identity of these early herbivores is uncertain. Hole feeding and skeletonization are recorded in the early Permian, with surface fluid feeding evolving by the end of that period.
Herbivory among four-limbed terrestrial vertebrates, the tetrapods, developed in the Late Carboniferous (307–299 million years ago), the oldest known example being Desmatodon hesperis. Early tetrapods were large amphibious piscivores. While amphibians continued to feed on fish and insects, some reptiles began exploring two new food types, tetrapods (carnivory) and plants (herbivory). The entire dinosaur order Ornithischia was composed of herbivorous dinosaurs. Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation. In contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials.
Arthropods evolved herbivory in four phases, changing their approach to it in response to changing plant communities. Tetrapod herbivores made their first appearance in the fossil record of their jaws near the Permo-Carboniferous boundary, approximately 300 million years ago. The earliest evidence of their herbivory has been attributed to dental occlusion, the process in which teeth from the upper jaw come into contact with teeth in the lower jaw. The evolution of dental occlusion led to a drastic increase in plant food processing and provides evidence about feeding strategies based on tooth wear patterns. Examination of phylogenetic frameworks of tooth and jaw morphologies has revealed that dental occlusion developed independently in several lineages of tetrapod herbivores. This suggests that evolution and spread occurred simultaneously within various lineages.
Food chain
Herbivores form an important link in the food chain because they consume plants to digest the carbohydrates photosynthetically produced by a plant. Carnivores in turn consume herbivores for the same reason, while omnivores can obtain their nutrients from either plants or animals. Due to a herbivore's ability to survive solely on tough and fibrous plant matter, they are termed the primary consumers in the food cycle (chain). Herbivory, carnivory, and omnivory can be regarded as special cases of consumer–resource interactions.
Feeding strategies
Two herbivore feeding strategies are grazing (e.g. cows) and browsing (e.g. moose). For a terrestrial mammal to be called a grazer, at least 90% of the forage has to be grass, and for a browser at least 90% tree leaves and twigs. An intermediate feeding strategy is called "mixed-feeding". In their daily need to take up energy from forage, herbivores of different body mass may be selective in choosing their food. "Selective" means that herbivores may choose their forage source depending on, e.g., season or food availability, but also that they may choose high quality (and consequently highly nutritious) forage before lower quality. The latter especially is determined by the body mass of the herbivore, with small herbivores selecting for high-quality forage, and with increasing body mass animals are less selective. Several theories attempt to explain and quantify the relationship between animals and their food, such as Kleiber's law, Holling's disk equation and the marginal value theorem (see below).
Kleiber's law describes the relationship between an animal's size and its feeding strategy, saying that larger animals need to eat less food per unit weight than smaller animals. Kleiber's law states that the metabolic rate (q0) of an animal is the mass of the animal (M) raised to the 3/4 power: q0 = M^(3/4)
Therefore, the mass of the animal increases at a faster rate than the metabolic rate.
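A short numerical illustration of this scaling (the masses are in arbitrary units):

```python
# Whole-body metabolic rate scales as M**(3/4), so the mass-specific
# rate q0/M falls as M**(-1/4): big animals eat less per unit weight.
for M in (0.01, 1.0, 100.0, 10_000.0):   # body masses, arbitrary units
    q0 = M ** 0.75                        # metabolic rate, Kleiber's law
    print(f"M = {M:>8}  q0 = {q0:10.2f}  q0/M = {q0 / M:.3f}")
# A 100x heavier animal needs ~31.6x more food in total,
# but ~3.16x less food per unit of body weight.
```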
Herbivores employ numerous types of feeding strategies. Many herbivores do not fall into one specific feeding strategy, but employ several strategies and eat a variety of plant parts.
Optimal foraging theory is a model for predicting animal behavior while looking for food or other resources, such as shelter or water. This model assesses both individual movement, such as animal behavior while looking for food, and distribution within a habitat, such as dynamics at the population and community level. For example, the model would be used to look at the browsing behavior of a deer while looking for food, as well as that deer's specific location and movement within the forested habitat and its interaction with other deer while in that habitat.
This model has been criticized as circular and untestable. Critics have pointed out that its proponents use examples that fit the theory, but do not use the model when it does not fit the reality. Other critics point out that animals do not have the ability to assess and maximize their potential gains, therefore the optimal foraging theory is irrelevant and derived to explain trends that do not exist in nature.
Holling's disk equation models the efficiency at which predators consume prey. The model predicts that as the number of prey increases, the amount of time predators spend handling prey also increases, and therefore the efficiency of the predator decreases. In 1959, C. S. Holling proposed an equation to model the rate of return for an optimal diet: R = Ef / (Ts + Th), the energy gained in foraging divided by the sum of the time spent searching and the time spent handling,
where Ef is the energy gained in foraging, Ts is the time spent searching for food, and Th is the time spent handling (capturing and eating) it.
In effect, this would indicate that a herbivore in a dense forest would spend more time handling (eating) the vegetation because there was so much vegetation around than a herbivore in a sparse forest, who could easily browse through the forest vegetation. According to the Holling's disk equation, a herbivore in the sparse forest would be more efficient at eating than the herbivore in the dense forest.
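The following sketch evaluates the rate-of-return form above under the simple assumption that the expected search time per item shrinks as food density rises; the search-efficiency constant and energy values are illustrative, not empirical.

```python
e_item = 10.0   # energy gained per food item (assumed units)
h = 2.0         # handling time per item (assumed)
a = 1.0         # search efficiency (assumed)

def rate_of_return(density):
    """R = Ef / (Ts + Th) per item: search time falls as density rises."""
    Ts = 1.0 / (a * density)   # sparse vegetation -> long search per item
    Th = h                     # handling time is paid on every item
    return e_item / (Ts + Th)

for d in (0.05, 0.5, 5.0):     # vegetation densities, low to high
    print(f"density {d:4.2f}: R = {rate_of_return(d):.2f}")
# As density rises, searching vanishes and R saturates at e_item/h:
# in a dense forest, handling time dominates the forager's time budget.
```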
The marginal value theorem describes the balance between eating all the food in a patch for immediate energy, or moving to a new patch and leaving the plants in the first patch to regenerate for future use. The theory predicts that absent complicating factors, an animal should leave a resource patch when the rate of payoff (amount of food) falls below the average rate of payoff for the entire area. According to this theory, an animal should move to a new patch of food when the patch they are currently feeding on requires more energy to obtain food than an average patch. Within this theory, two subsequent parameters emerge, the Giving Up Density (GUD) and the Giving Up Time (GUT). The Giving Up Density (GUD) quantifies the amount of food that remains in a patch when a forager moves to a new patch. The Giving Up Time (GUT) is used when an animal continuously assesses the patch quality.
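A numerical sketch of this patch-leaving rule, assuming a saturating gain curve and a fixed travel time between patches (both illustrative modelling choices, not parameters from the literature):

```python
import numpy as np

G_max, r = 100.0, 0.5   # asymptotic patch gain and depletion rate (assumed)
travel = 3.0            # travel time between patches (assumed)

t = np.linspace(0.01, 20.0, 2000)        # candidate residence times
gain = G_max * (1.0 - np.exp(-r * t))    # diminishing returns within a patch
avg_rate = gain / (travel + t)           # long-run rate, travel included
t_opt = t[np.argmax(avg_rate)]
print(f"optimal residence time: {t_opt:.2f} (rate {avg_rate.max():.2f})")
# At t_opt the marginal gain G'(t) equals the average rate; shorter travel
# times (richer habitats) predict earlier departure and a higher GUD.
```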
Plant-herbivore interactions
Interactions between plants and herbivores can play a prevalent role in ecosystem dynamics such as community structure and functional processes. Plant diversity and distribution is often driven by herbivory, and it is likely that trade-offs between plant competitiveness and defensiveness, and between colonization and mortality, allow for coexistence between species in the presence of herbivores. However, the effects of herbivory on plant diversity and richness are variable. For example, increased abundance of herbivores such as deer decreases plant diversity and species richness, while other large mammalian herbivores like bison control dominant species, which allows other species to flourish. Plant-herbivore interactions can also operate so that plant communities mediate herbivore communities. Plant communities that are more diverse typically sustain greater herbivore richness by providing a greater and more diverse set of resources.
Coevolution and phylogenetic correlation between herbivores and plants are important aspects of the influence of herbivore and plant interactions on communities and ecosystem functioning, especially in regard to herbivorous insects. This is apparent in the adaptations plants develop to tolerate and/or defend from insect herbivory and the responses of herbivores to overcome these adaptations. The evolution of antagonistic and mutualistic plant-herbivore interactions are not mutually exclusive and may co-occur. Plant phylogeny has been found to facilitate the colonization and community assembly of herbivores, and there is evidence of phylogenetic linkage between plant beta diversity and phylogenetic beta diversity of insect clades such as butterflies. These types of eco-evolutionary feedbacks between plants and herbivores are likely the main driving force behind plant and herbivore diversity.
Abiotic factors such as climate and biogeographical features also impact plant-herbivore communities and interactions. For example, in temperate freshwater wetlands herbivorous waterfowl communities change according to season, with species that eat above-ground vegetation being abundant during summer, and species that forage below-ground being present in winter months. These seasonal herbivore communities differ in both their assemblage and functions within the wetland ecosystem. Such differences in herbivore modalities can potentially lead to trade-offs that influence species traits and may lead to additive effects on community composition and ecosystem functioning. Seasonal changes and environmental gradients such as elevation and latitude often affect the palatability of plants, which in turn influences herbivore community assemblages and vice versa. Examples include a decrease in abundance of leaf-chewing larvae in the fall, when hardwood leaf palatability decreases due to increased tannin levels, which results in a decline of arthropod species richness, and increased palatability of plant communities at higher elevations, where grasshopper abundances are lower. Climatic stressors such as ocean acidification can lead to responses in plant-herbivore interactions in relation to palatability as well.
Herbivore offense
The myriad defenses displayed by plants means that their herbivores need a variety of skills to overcome these defenses and obtain food. These allow herbivores to increase their feeding and use of a host plant. Herbivores have three primary strategies for dealing with plant defenses: choice, herbivore modification, and plant modification.
Feeding choice involves which plants a herbivore chooses to consume. It has been suggested that many herbivores feed on a variety of plants to balance their nutrient uptake and to avoid consuming too much of any one type of defensive chemical. This involves a tradeoff however, between foraging on many plant species to avoid toxins or specializing on one type of plant that can be detoxified.
Herbivore modification is when various adaptations to body or digestive systems of the herbivore allow them to overcome plant defenses. This might include detoxifying secondary metabolites, sequestering toxins unaltered, or avoiding toxins, such as through the production of large amounts of saliva to reduce effectiveness of defenses. Herbivores may also utilize symbionts to evade plant defenses. For example, some aphids use bacteria in their gut to provide essential amino acids lacking in their sap diet.
Plant modification occurs when herbivores manipulate their plant prey to increase feeding. For example, some caterpillars roll leaves to reduce the effectiveness of plant defenses activated by sunlight.
Plant defense
A plant defense is a trait that increases plant fitness when faced with herbivory. This is measured relative to another plant that lacks the defensive trait. Plant defenses increase survival and/or reproduction (fitness) of plants under pressure of predation from herbivores.
Defense can be divided into two main categories, tolerance and resistance. Tolerance is the ability of a plant to withstand damage without a reduction in fitness. This can occur by diverting herbivory to non-essential plant parts, resource allocation, compensatory growth, or by rapid regrowth and recovery from herbivory. Resistance refers to the ability of a plant to reduce the amount of damage it receives from herbivores. This can occur via avoidance in space or time, physical defenses, or chemical defenses. Defenses can either be constitutive, always present in the plant, or induced, produced or translocated by the plant following damage or stress.
Physical, or mechanical, defenses are barriers or structures designed to deter herbivores or reduce intake rates, lowering overall herbivory. Thorns such as those found on roses or acacia trees are one example, as are the spines on a cactus. Smaller hairs known as trichomes may cover leaves or stems and are especially effective against invertebrate herbivores. In addition, some plants have waxes or resins that alter their texture, making them difficult to eat. Also the incorporation of silica into cell walls is analogous to that of the role of lignin in that it is a compression-resistant structural component of cell walls; so that plants with their cell walls impregnated with silica are thereby afforded a measure of protection against herbivory.
Chemical defenses are secondary metabolites produced by the plant that deter herbivory. There are a wide variety of these in nature and a single plant can have hundreds of different chemical defenses. Chemical defenses can be divided into two main groups, carbon-based defenses and nitrogen-based defenses.
Carbon-based defenses include terpenes and phenolics. Terpenes are derived from 5-carbon isoprene units and comprise essential oils, carotenoids, resins, and latex. They can have several functions that disrupt herbivores such as inhibiting adenosine triphosphate (ATP) formation, molting hormones, or the nervous system. Phenolics combine an aromatic carbon ring with a hydroxyl group. There are several different phenolics such as lignins, which are found in cell walls and are very indigestible except for specialized microorganisms; tannins, which have a bitter taste and bind to proteins making them indigestible; and furanocoumarins, which produce free radicals disrupting DNA, protein, and lipids, and can cause skin irritation.
Nitrogen-based defenses are synthesized from amino acids and primarily come in the form of alkaloids and cyanogens. Alkaloids include commonly recognized substances such as caffeine, nicotine, and morphine. These compounds are often bitter and can inhibit DNA or RNA synthesis or block nervous system signal transmission. Cyanogens get their name from the cyanide stored within their tissues. This is released when the plant is damaged and inhibits cellular respiration and electron transport.
Plants have also changed features that enhance the probability of attracting natural enemies to herbivores. Some emit semiochemicals, odors that attract natural enemies, while others provide food and housing to maintain the natural enemies' presence, e.g. ants that reduce herbivory. A given plant species often has many types of defensive mechanisms, mechanical or chemical, constitutive or induced, which allow it to escape from herbivores.
Predator–prey theory
According to the theory of predator–prey interactions, the relationship between herbivores and plants is cyclic. When prey (plants) are numerous their predators (herbivores) increase in numbers, reducing the prey population, which in turn causes predator number to decline. The prey population eventually recovers, starting a new cycle. This suggests that the population of the herbivore fluctuates around the carrying capacity of the food source, in this case, the plant.
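These cyclic dynamics are formalized by the classic Lotka–Volterra predator–prey equations. The sketch below integrates them numerically with simple Euler steps; all parameter values and starting populations are illustrative assumptions, not measurements of any real plant–herbivore system.

```python
# Minimal Lotka-Volterra sketch: plants as prey, herbivores as predators.
# Parameter values and initial populations are illustrative assumptions.

def lotka_volterra(plants, herbivores, steps=5000, dt=0.01,
                   growth=1.0, predation=0.1, efficiency=0.05, death=0.4):
    """Euler-integrate dP/dt = growth*P - predation*P*H and
    dH/dt = efficiency*P*H - death*H, returning the trajectory."""
    history = []
    for _ in range(steps):
        dP = (growth * plants - predation * plants * herbivores) * dt
        dH = (efficiency * plants * herbivores - death * herbivores) * dt
        plants += dP
        herbivores += dH
        history.append((plants, herbivores))
    return history

history = lotka_volterra(plants=10.0, herbivores=8.0)
# The two populations rise and fall out of phase, tracing the cycle described
# above: abundant plants -> more herbivores -> fewer plants -> fewer herbivores.
print(history[0], history[len(history) // 2], history[-1])
```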
Several factors play into these fluctuating populations and help stabilize predator-prey dynamics. For example, spatial heterogeneity is maintained, which means there will always be pockets of plants not found by herbivores. This stabilizing dynamic plays an especially important role for specialist herbivores that feed on one species of plant and prevents these specialists from wiping out their food source. Prey defenses also help stabilize predator-prey dynamics, and for more information on these relationships see the section on Plant Defenses. Eating a second prey type helps herbivores' populations stabilize. Alternating between two or more plant types provides population stability for the herbivore, while the populations of the plants oscillate. This plays an important role for generalist herbivores that eat a variety of plants. Keystone herbivores keep vegetation populations in check and allow for a greater diversity of both herbivores and plants. When an invasive herbivore or plant enters the system, the balance is thrown off and the diversity can collapse to a monotaxon system.
The back and forth relationship of plant defense and herbivore offense drives coevolution between plants and herbivores, resulting in a "coevolutionary arms race". The escape and radiation mechanism of coevolution presents the idea that adaptations in herbivores and their host plants have been the driving force behind speciation.
Mutualism
While much of the interaction of herbivory and plant defense is negative, with one individual reducing the fitness of the other, some is beneficial. This beneficial herbivory takes the form of mutualisms in which both partners benefit in some way from the interaction. Seed dispersal by herbivores and pollination are two forms of mutualistic herbivory in which the herbivore receives a food resource and the plant is aided in reproduction. Plants can also be indirectly affected by herbivores through nutrient recycling, with plants benefiting from herbivores when nutrients are recycled very efficiently. Another form of plant-herbivore mutualism is physical changes to the environment and/or plant community structure by herbivores which serve as ecosystem engineers, such as wallowing by bison. Swans form a mutual relationship with the plant species that they forage by digging and disturbing the sediment which removes competing plants and subsequently allows colonization of other plant species.
Impacts
Trophic cascades and environmental degradation
When herbivores are affected by trophic cascades, plant communities can be indirectly affected. Often these effects are felt when predator populations decline and herbivore populations are no longer limited, which leads to intense herbivore foraging that can suppress plant communities. Because body size affects the amount of energy intake needed, larger herbivores must forage on higher-quality or greater quantities of plants to gain the optimal amount of nutrients and energy compared to smaller herbivores. Environmental degradation from white-tailed deer (Odocoileus virginianus) in the US alone has the potential to both change vegetative communities through over-browsing and cost forest restoration projects upwards of $750 million annually. Another example of a trophic cascade involving plant–herbivore interactions is found in coral reef ecosystems. Herbivorous fish and marine animals are important algae and seaweed grazers, and in the absence of plant-eating fish, corals are outcompeted and seaweeds deprive corals of sunlight.
Economic impacts
Agricultural crop damage by white-tailed deer alone totals approximately $100 million every year. Insect crop damage also contributes largely to annual crop losses in the U.S. Herbivores also affect economics through the revenue generated by hunting and ecotourism. For example, the hunting of herbivorous game species such as white-tailed deer, cottontail rabbits, antelope, and elk in the U.S. contributes greatly to the billion-dollar annual hunting industry. Ecotourism is a major source of revenue, particularly in Africa, where many large mammalian herbivores such as elephants, zebras, and giraffes help to bring in the equivalent of millions of US dollars to various nations annually.
See also
Carnivore and omnivore
Consumer-resource systems
List of feeding behaviours
List of herbivorous animals
Plant-based diet
Productivity (ecology)
Seed predation
Tritrophic interactions in plant defense
Veganism
Vegetarianism
References
Further reading
Bob Strauss, 2008, Herbivorous Dinosaurs, The New York Times
Danell, K., R. Bergström, P. Duncan, J. Pastor (Editors)(2006) Large herbivore ecology, ecosystem dynamics and conservation Cambridge, UK : Cambridge University Press. 506 p.
Crawley, M. J. (1983) Herbivory : the dynamics of animal-plant interactions Oxford : Blackwell Scientific. 437 p.
Olff, H., V.K. Brown, R.H. Drent (editors) (1999) Herbivores : between plants and predators Oxford; Malden, Ma. : Blackwell Science. 639 p.
External links
Herbivore information resource website
The herbivore defenses of Senecio viscosus
Herbivore defense in Lindera benzoin
website of the herbivory lab at Cornell University
Ecology terminology
Animals by eating behaviors | Herbivore | [
"Biology"
] | 5,042 | [
"Ecology terminology",
"Behavior",
"Ethology",
"Animals by eating behaviors",
"Herbivory",
"Eating behaviors"
] |
44,578 | https://en.wikipedia.org/wiki/Big%20O%20notation | Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by German mathematicians Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation.
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem. Big O notation is also used in many other fields to provide similar estimates.
Big O notation characterizes functions according to their growth rates: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Associated with big O notation are several related notations, using the symbols $o$, $\Omega$, $\omega$, and $\Theta$, to describe other kinds of bounds on asymptotic growth rates.
Formal definition
Let $f$, the function to be estimated, be a real or complex valued function, and let $g$, the comparison function, be a real valued function. Let both functions be defined on some unbounded subset of the positive real numbers, and $g(x)$ be non-zero (often, but not necessarily, strictly positive) for all large enough values of $x$. One writes
$$f(x) = O(g(x)) \quad \text{as } x \to \infty$$
and it is read "$f(x)$ is big O of $g(x)$" or more often "$f(x)$ is of the order of $g(x)$" if the absolute value of $f(x)$ is at most a positive constant multiple of the absolute value of $g(x)$ for all sufficiently large values of $x$. That is, $f(x) = O(g(x))$ if there exists a positive real number $M$ and a real number $x_0$ such that
$$|f(x)| \le M\,|g(x)| \quad \text{for all } x \ge x_0.$$
In many contexts, the assumption that we are interested in the growth rate as the variable $x$ goes to infinity or to zero is left unstated, and one writes more simply that $f(x) = O(g(x))$.
The notation can also be used to describe the behavior of $f$ near some real number $a$ (often, $a = 0$): we say
$$f(x) = O(g(x)) \quad \text{as } x \to a$$
if there exist positive numbers $\delta$ and $M$ such that $|f(x)| \le M\,|g(x)|$ for all defined $x$ with $0 < |x - a| < \delta$.
As $g(x)$ is non-zero for adequately large (or small) values of $x$, both of these definitions can be unified using the limit superior:
$$f(x) = O(g(x)) \quad \text{as } x \to a$$
if
$$\limsup_{x \to a} \frac{|f(x)|}{|g(x)|} < \infty.$$
And in both of these definitions the limit point $a$ (whether $\infty$ or not) is a cluster point of the domains of $f$ and $g$, i.e., in every neighbourhood of $a$ there have to be infinitely many points in common. Moreover, as pointed out in the article about the limit inferior and limit superior, the $\limsup_{x \to a} \frac{|f(x)|}{|g(x)|}$ (at least on the extended real number line) always exists.
In computer science, a slightly more restrictive definition is common: $f$ and $g$ are both required to be functions from some unbounded subset of the positive integers to the nonnegative real numbers; then $f(n) = O(g(n))$ if there exist positive integer numbers $M$ and $n_0$ such that $f(n) \le M\,g(n)$ for all $n \ge n_0$.
Example
In typical usage the $O$ notation is asymptotical, that is, it refers to very large $x$. In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied:
If $f(x)$ is a sum of several terms, if there is one with largest growth rate, it can be kept, and all others omitted.
If $f(x)$ is a product of several factors, any constants (factors in the product that do not depend on $x$) can be omitted.
For example, let $f(x) = 6x^4 - 2x^3 + 5$, and suppose we wish to simplify this function, using $O$ notation, to describe its growth rate as $x$ approaches infinity. This function is the sum of three terms: $6x^4$, $-2x^3$, and $5$. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of $x$, namely $6x^4$. Now one may apply the second rule: $6x^4$ is a product of $6$ and $x^4$ in which the first factor does not depend on $x$. Omitting this factor results in the simplified form $x^4$. Thus, we say that $f(x)$ is a "big O" of $x^4$. Mathematically, we can write $f(x) = O(x^4)$. One may confirm this calculation using the formal definition: let $f(x) = 6x^4 - 2x^3 + 5$ and $g(x) = x^4$. Applying the formal definition from above, the statement that $f(x) = O(x^4)$ is equivalent to its expansion,
$$|f(x)| \le M x^4$$
for some suitable choice of a real number $x_0$ and a positive real number $M$ and for all $x > x_0$. To prove this, let $x_0 = 1$ and $M = 13$. Then, for all $x > x_0$:
$$|6x^4 - 2x^3 + 5| \le 6x^4 + |{-2x^3}| + 5 \le 6x^4 + 2x^4 + 5x^4 = 13x^4$$
so
$$|6x^4 - 2x^3 + 5| \le 13x^4.$$
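The bound can also be spot-checked numerically. The following minimal sketch simply samples the inequality with the constants $M = 13$ and $x_0 = 1$ chosen in the proof above; it is an illustration, not a proof.

```python
# Numerically spot-check |6x^4 - 2x^3 + 5| <= 13 * x^4 for sampled x > 1.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

M, x0 = 13, 1  # constants chosen in the proof above
samples = [1.001, 2.0, 5.0, 10.0, 100.0, 1e6]
assert all(abs(f(x)) <= M * x**4 for x in samples if x > x0)
print("bound holds at all sampled points")
```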
Use
Big O notation has two main areas of application:
In mathematics, it is commonly used to describe how closely a finite series approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion.
In computer science, it is useful in the analysis of algorithms.
In both applications, the function $g(x)$ appearing within the $O(\cdot)$ is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
There are two formally close, but noticeably different, usages of this notation:
infinite asymptotics
infinitesimal asymptotics.
This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.
Infinite asymptotics
Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size $n$ might be found to be $T(n) = 4n^2 - 2n + 2$. As $n$ grows large, the $n^2$ term will come to dominate, so that all other terms can be neglected—for instance when $n = 500$, the term $4n^2$ is 1000 times as large as the $2n$ term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term $n^3$ or $n^4$. Even if $T(n) = 1{,}000{,}000\,n^2$, if $U(n) = n^3$, the latter will always exceed the former once $n$ grows larger than $1{,}000{,}000$. Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either
$$T(n) = O(n^2)$$
or
$$T(n) \in O(n^2)$$
and say that the algorithm has order of $n^2$ time complexity. The sign "$=$" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the "Equals sign" discussion below) while the first is considered by some as an abuse of notation.
Infinitesimal asymptotics
Big O can also be used to describe the error term in an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, the exponential series and two expressions of it that are valid when $x$ is small:
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \dotsb \quad \text{for all } x$$
$$= 1 + x + \frac{x^2}{2} + O(x^3) \quad \text{as } x \to 0$$
$$= 1 + x + O(x^2) \quad \text{as } x \to 0$$
The middle expression (the one with $O(x^3)$) means the absolute value of the error $e^x - (1 + x + x^2/2)$ is at most some constant times $|x^3|$ when $x$ is close enough to $0$.
Properties
If the function can be written as a finite sum of other functions, then the fastest growing one determines the order of $f(n)$. For example,
$$f(n) = 9 \log n + 5 (\log n)^4 + 3n^2 + 2n^3 = O(n^3) \quad \text{as } n \to \infty.$$
In particular, if a function may be bounded by a polynomial in $n$, then as $n$ tends to infinity, one may disregard lower-order terms of the polynomial. The sets $O(n^c)$ and $O(c^n)$ are very different. If $c$ is greater than one, then the latter grows much faster. A function that grows faster than $n^c$ for any $c$ is called superpolynomial. One that grows more slowly than any exponential function of the form $c^n$ is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization and the function $n^{\log n}$.
We may ignore any powers of $n$ inside of the logarithms. The set $O(\log n)$ is exactly the same as $O(\log(n^c))$. The logarithms differ only by a constant factor (since $\log(n^c) = c \log n$) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. On the other hand, exponentials with different bases are not of the same order. For example, $2^n$ and $3^n$ are not of the same order.
Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order of $n^2$, replacing $n$ by $cn$ means the algorithm runs in the order of $c^2 n^2$, and the big O notation ignores the constant $c^2$. This can be written as $c^2 n^2 = O(n^2)$. If, however, an algorithm runs in the order of $2^n$, replacing $n$ with $cn$ gives $2^{cn} = (2^c)^n$. This is not equivalent to $2^n$ in general. Changing variables may also affect the order of the resulting algorithm. For example, if an algorithm's run time is $O(n)$ when measured in terms of the number $n$ of digits of an input number $x$, then its run time is $O(\log x)$ when measured as a function of the input number $x$ itself, because $n = O(\log x)$.
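The asymmetry between polynomial and exponential orders under this kind of scaling is easy to see numerically; in the following sketch the scale factor $c = 2$ is an arbitrary illustrative choice.

```python
# Scaling n -> 2n changes n^2 only by the constant factor 4,
# but changes 2^n by the unbounded factor 2^n.
for n in (4, 8, 16):
    print(n, (2 * n)**2 / n**2, 2**(2 * n) / 2**n)
# Prints a constant 4.0 in the first column and a growing 2^n in the second.
```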
Product
$$f_1 = O(g_1) \text{ and } f_2 = O(g_2) \Rightarrow f_1 f_2 = O(g_1 g_2)$$
$$f \cdot O(g) = O(f g)$$
Sum
If $f_1 = O(g_1)$ and $f_2 = O(g_2)$ then $f_1 + f_2 = O(\max(g_1, g_2))$. It follows that if $f_1 = O(g)$ and $f_2 = O(g)$ then $f_1 + f_2 = O(g)$. In other words, this second statement says that $O(g)$ is a convex cone.
Multiplication by a constant
Let $k$ be a nonzero constant. Then $O(|k| \cdot g) = O(g)$. In other words, if $f = O(g)$, then $k \cdot f = O(g)$.
Multiple variables
Big O (and little o, Ω, etc.) can also be used with multiple variables. To define big O formally for multiple variables, suppose $f$ and $g$ are two functions defined on some subset of $\mathbb{R}^n$. We say
$$f(\mathbf{x}) \text{ is } O(g(\mathbf{x})) \quad \text{as } \mathbf{x} \to \infty$$
if and only if there exist constants $M$ and $C > 0$ such that $|f(\mathbf{x})| \le C\,|g(\mathbf{x})|$ for all $\mathbf{x}$ with $x_i \ge M$ for some $i$.
Equivalently, the condition that $x_i \ge M$ for some $i$ can be written $\|\mathbf{x}\|_\infty \ge M$, where $\|\mathbf{x}\|_\infty$ denotes the Chebyshev norm. For example, the statement
$$f(n, m) = n^2 + m^3 + O(n + m) \quad \text{as } n, m \to \infty$$
asserts that there exist constants C and M such that
$$|f(n, m) - (n^2 + m^3)| \le C\,|n + m|$$
whenever either $m \ge M$ or $n \ge M$ holds. This definition allows all of the coordinates of $\mathbf{x}$ to increase to infinity. In particular, the statement
$$f(n, m) = O(n^m) \quad \text{as } n, m \to \infty$$
(i.e., $\exists C\,\exists M\,\forall n\,\forall m\,\dots$) is quite different from
$$\forall m\colon\ f(n, m) = O(n^m) \quad \text{as } n \to \infty$$
(i.e., $\forall m\,\exists C\,\exists M\,\forall n\,\dots$).
Under this definition, the subset on which a function is defined is significant when generalizing statements from the univariate setting to the multivariate setting. For example, if $f(n, m) = 1$ and $g(n, m) = n$, then $f(n, m) = O(g(n, m))$ if we restrict $f$ and $g$ to $[1, \infty)^2$, but not if they are defined on $[0, \infty)^2$.
This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition.
Matters of notation
Equals sign
The statement " is " as defined above is usually written as . Some consider this to be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, is true but is not. Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like from the identities and ". In another letter, Knuth also pointed out that
"the equality sign is not symmetric with respect to such notations", [as, in this notation,] "mathematicians customarily use the '=' sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle".
For these reasons, it would be more precise to use set notation and write $f(x) \in O(g(x))$ (read as: "$f(x)$ is an element of $O(g(x))$", or "$f(x)$ is in the set $O(g(x))$"), thinking of $O(g(x))$ as the class of all functions $h(x)$ such that $|h(x)| \le C\,|g(x)|$ for some positive real number $C$. However, the use of the equals sign is customary.
Other arithmetic operators
Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example, $h(x) + O(f(x))$ denotes the collection of functions having the growth of $h(x)$ plus a part whose growth is limited to that of $f(x)$. Thus,
$$g(x) = h(x) + O(f(x))$$
expresses the same as
$$g(x) - h(x) = O(f(x)).$$
Example
Suppose an algorithm is being developed to operate on a set of n elements. Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity of $O(n^2)$, and after the subroutine runs the algorithm must take an additional $55n^3 + 2n + 10$ steps before it terminates. Thus the overall time complexity of the algorithm can be expressed as $T(n) = 55n^3 + O(n^2)$. Here the terms $2n + 10$ are subsumed within the faster-growing $O(n^2)$. Again, this usage disregards some of the formal meaning of the "=" symbol, but it does allow one to use the big O notation as a kind of convenient placeholder.
Multiple uses
In more complicated usage, O(·) can appear in different places in an equation, even several times on each side. For example, the following are true for $n \to \infty$:
$$(n+1)^2 = n^2 + O(n)$$
$$(n + O(n^{1/2}))\,(n + O(\log n))^2 = n^3 + O(n^{5/2})$$
$$n^{O(1)} = O(e^n)$$
The meaning of such statements is as follows: for any functions which satisfy each O(·) on the left side, there are some functions satisfying each O(·) on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$." In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side. In this use the "=" is a formal symbol that unlike the usual use of "=" is not a symmetric relation. Thus for example $n^{O(1)} = O(e^n)$ does not imply the false statement $O(e^n) = n^{O(1)}$.
Typesetting
Big O is typeset as an italicized uppercase "O", as in the following example: $O(n^2)$. In TeX, it is produced by simply typing O inside math mode. Unlike Greek-named Bachmann–Landau notations, it needs no special symbol. However, some authors use the calligraphic variant $\mathcal{O}$ instead.
Orders of common functions
Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first.
The statement $f(n) = O(n!)$ is sometimes weakened to $f(n) = O(n^n)$ to derive simpler formulas for asymptotic complexity. For any $k > 0$ and $c > 0$, $O(n^c (\log n)^k)$ is a subset of $O(n^{c+\varepsilon})$ for any $\varepsilon > 0$, so may be considered as a polynomial with some bigger order.
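Although the table itself is not reproduced here, the ordering of common growth classes can be illustrated by evaluating representative functions at a few sizes; the selection below is an illustrative sample, not the article's full list.

```python
import math

# Representative growth orders evaluated at increasing n (illustrative sample).
funcs = {
    "log n": lambda n: math.log(n),
    "n": lambda n: float(n),
    "n log n": lambda n: n * math.log(n),
    "n^2": lambda n: float(n**2),
    "2^n": lambda n: 2.0**n,
}
for n in (10, 20, 30):
    print(n, {name: round(f(n), 1) for name, f in funcs.items()})
```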
Related asymptotic notations
Big O is widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations.
Little-o notation
Intuitively, the assertion "$f(x)$ is $o(g(x))$" (read "$f(x)$ is little-o of $g(x)$" or "$f(x)$ is of inferior order to $g(x)$") means that $g(x)$ grows much faster than $f(x)$, or equivalently $f(x)$ grows much slower than $g(x)$. As before, let f be a real or complex valued function and g a real valued function, both defined on some unbounded subset of the positive real numbers, such that g(x) is strictly positive for all large enough values of x. One writes
$$f(x) = o(g(x)) \quad \text{as } x \to \infty$$
if for every positive constant $\varepsilon$ there exists a constant $x_0$ such that
$$|f(x)| \le \varepsilon\,g(x) \quad \text{for all } x \ge x_0.$$
For example, one has
$$2x = o(x^2)$$
and
$$\frac{1}{x} = o(1),$$ both as $x \to \infty.$
The difference between the definition of the big-O notation and the definition of little-o is that while the former has to be true for at least one constant M, the latter must hold for every positive constant $\varepsilon$, however small. In this way, little-o notation makes a stronger statement than the corresponding big-O notation: every function that is little-o of g is also big-O of g, but not every function that is big-O of g is little-o of g. For example, $2x^2 = O(x^2)$ but $2x^2 \ne o(x^2)$.
If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation $f(x) = o(g(x))$ is equivalent to
$$\lim_{x \to \infty} \frac{f(x)}{g(x)} = 0$$
(and this is in fact how Landau originally defined the little-o notation).
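The limit characterization is easy to explore numerically; the sketch below samples $f(x)/g(x)$ for the functions $f(x) = 2x$ and $g(x) = x^2$ from the example above.

```python
# For f = o(g), the ratio f(x)/g(x) tends to 0; here f(x) = 2x, g(x) = x^2.
for x in (10, 100, 1000, 10**6):
    print(x, (2 * x) / x**2)  # ratios shrink toward 0 as x grows
```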
Little-o respects a number of arithmetic operations. For example,
if $c$ is a nonzero constant and $f = o(g)$ then $c \cdot f = o(g)$, and
if $f = o(F)$ and $g = o(G)$ then $f \cdot g = o(F \cdot G).$
It also satisfies a transitivity relation:
if $f = o(g)$ and $g = o(h)$ then $f = o(h).$
Big Omega notation
Another asymptotic notation is $\Omega$, read "big omega". There are two widespread and incompatible definitions of the statement
$$f(x) = \Omega(g(x)) \quad \text{as } x \to a,$$
where a is some real number, $\infty$, or $-\infty$, where f and g are real functions defined in a neighbourhood of a, and where g is positive in this neighbourhood.
The Hardy–Littlewood definition is used mainly in analytic number theory, and the Knuth definition mainly in computational complexity theory; the definitions are not equivalent.
The Hardy–Littlewood definition
In 1914 G.H. Hardy and J.E. Littlewood introduced the new symbol $\Omega$, which is defined as follows:
$$f(x) = \Omega(g(x)) \quad \text{as } x \to \infty \quad \text{if} \quad \limsup_{x \to \infty} \left|\frac{f(x)}{g(x)}\right| > 0.$$
Thus $f(x) = \Omega(g(x))$ is the negation of $f(x) = o(g(x))$.
In 1916 the same authors introduced the two new symbols $\Omega_R$ and $\Omega_L$, defined as:
$$f(x) = \Omega_R(g(x)) \quad \text{as } x \to \infty \quad \text{if} \quad \limsup_{x \to \infty} \frac{f(x)}{g(x)} > 0;$$
$$f(x) = \Omega_L(g(x)) \quad \text{as } x \to \infty \quad \text{if} \quad \liminf_{x \to \infty} \frac{f(x)}{g(x)} < 0.$$
These symbols were used by E. Landau, with the same meanings, in 1924. Authors that followed Landau, however, use a different notation for the same definitions: the symbol $\Omega_R$ has been replaced by the current notation $\Omega_+$ with the same definition, and $\Omega_L$ became $\Omega_-$.
These three symbols $\Omega$, $\Omega_+$, $\Omega_-$, as well as $\Omega_\pm$ (meaning that $\Omega_+$ and $\Omega_-$ are both satisfied), are now currently used in analytic number theory.
Simple examples
We have
$$\sin x = \Omega(1) \quad \text{as } x \to \infty,$$
and more precisely
$$\sin x = \Omega_\pm(1) \quad \text{as } x \to \infty.$$
We have
$$\sin x + 1 = \Omega(1) \quad \text{as } x \to \infty,$$
and more precisely
$$\sin x + 1 = \Omega_+(1) \quad \text{as } x \to \infty;$$
however
$$\sin x + 1 \ne \Omega_-(1) \quad \text{as } x \to \infty.$$
The Knuth definition
In 1976 Donald Knuth published a paper to justify his use of the $\Omega$-symbol to describe a stronger property. Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". He defined
$$f(x) = \Omega(g(x)) \iff g(x) = O(f(x))$$
with the comment: "Although I have changed Hardy and Littlewood's definition of , I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."
Family of Bachmann–Landau notations
The limit definitions assume $g(n) > 0$ for sufficiently large $n$. The table is (partly) sorted from smallest to largest, in the sense that $o$, $O$, $\Theta$, $\sim$, (Knuth's version of) $\Omega$, and $\omega$ on functions correspond to $<$, $\le$, $\approx$, $=$, $\ge$, and $>$ on the real line (the Hardy–Littlewood version of $\Omega$, however, doesn't correspond to any such description).
Computer science uses the big $O$, big Theta $\Theta$, little $o$, little omega $\omega$ and Knuth's big Omega $\Omega$ notations. Analytic number theory often uses the big $O$, small $o$, Hardy's $\asymp$, Hardy–Littlewood's big Omega $\Omega$ (with or without the +, − or ± subscripts) and $\ll$ notations. The small omega $\omega$ notation is not used as often in analysis.
Use in computer science
Informally, especially in computer science, the big O notation often can be used somewhat differently to describe an asymptotic tight bound where using big Theta Θ notation might be more factually appropriate in a given context. For example, when considering a function $T(n) = 73n^3 + 22n^2 + 58$, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below):
$T(n) = O(n^{100})$
$T(n) = O(n^3)$
$T(n) = \Theta(n^3)$
The equivalent English statements are respectively:
T(n) grows asymptotically no faster than $n^{100}$
T(n) grows asymptotically no faster than $n^3$
T(n) grows asymptotically as fast as $n^3$.
So while all three statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above). For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.
Other notation
In their book Introduction to Algorithms, Cormen, Leiserson, Rivest and Stein consider the set of functions f which satisfy
$$0 \le f(n) \le c\,g(n) \quad \text{for all } n \ge n_0.$$
In a correct notation this set can, for instance, be called O(g), where
$$O(g) = \{ f \mid \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0 \}.$$
The authors state that the use of equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages. Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set O(g), which eliminates lower-order terms, and helps to reduce inessential clutter in equations, for example:
$$2n^2 + 3n + 1 = 2n^2 + \Theta(n).$$
Extensions to the Bachmann–Landau notations
Another notation sometimes used in computer science is Õ (read soft-O), which hides polylogarithmic factors. There are two definitions in use: some authors use f(n) = Õ(g(n)) as shorthand for $f(n) = O(g(n) \log^k g(n))$ for some k, while others use it as shorthand for $f(n) = O(g(n) \log^k n)$. When $g(n)$ is polynomial in n, there is no difference; however, the latter definition allows one to say, e.g. that $n 2^n = \tilde{O}(2^n)$ while the former definition allows for $\log^k n = \tilde{O}(1)$ for any constant k. Some authors write O* for the same purpose as the latter definition. Essentially, it is big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since $\log^k n$ is always $o(n^\varepsilon)$ for any constant $k$ and any $\varepsilon > 0$).
Also, the L notation, defined as
$$L_n[\alpha, c] = e^{(c + o(1))(\ln n)^\alpha (\ln\ln n)^{1-\alpha}},$$
is convenient for functions that are between polynomial and exponential in terms of $\ln n$.
Generalizations and related usages
The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group is also possible.
The "limiting process" can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g. The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,
which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above. (It reduces to lim f / g = 1 if f and g are positive real valued functions.) For example, 2x is Θ(x), but is not o(x).
History (Bachmann–Landau, Hardy, and Vinogradov notations)
The symbol O was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie ("analytic number theory"). The number theorist Edmund Landau adopted it, and was thus inspired to introduce in 1909 the notation o; hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.
The symbol $\Omega$ (in the sense "is not an o of") was introduced in 1914 by Hardy and Littlewood. Hardy and Littlewood also introduced in 1916 the symbols $\Omega_R$ ("right") and $\Omega_L$ ("left"), precursors of the modern symbols $\Omega_+$ ("is not smaller than a small o of") and $\Omega_-$ ("is not larger than a small o of"). Thus the Omega symbols (with their original meanings) are sometimes also referred to as "Landau symbols". This notation became commonly used in number theory at least since the 1950s.
The symbol $\sim$, although it had been used before with different meanings, was given its modern definition by Landau in 1909 and by Hardy in 1910. Just above on the same page of his tract Hardy defined the symbol $\asymp$, where $f \asymp g$ means that both $f = O(g)$ and $g = O(f)$ are satisfied. The notation $\asymp$ is still currently used in analytic number theory. In his tract Hardy also proposed the symbol , where means that for some constant .
In the 1970s the big O was popularized in computer science by Donald Knuth, who proposed the different notation $f(x) = \Theta(g(x))$ for Hardy's $f(x) \asymp g(x)$, and proposed a different definition for the Hardy and Littlewood Omega notation.
Two other symbols coined by Hardy were (in terms of the modern O notation)
$$f \preccurlyeq g \iff f = O(g)$$
and
$$f \prec g \iff f = o(g);$$
(Hardy however never defined or used the notation $\prec\!\!\prec$, nor $\ll$, as it has been sometimes reported).
Hardy introduced the symbols $\preccurlyeq$ and $\prec$ (as well as the already mentioned other symbols) in his 1910 tract "Orders of Infinity", and made use of them only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently used the Landau symbols O and o.
Hardy's symbols $\preccurlyeq$ and $\prec$ (as well as ) are not used anymore. On the other hand, in the 1930s, the Russian number theorist Ivan Matveyevich Vinogradov introduced his notation $\ll$, which has been increasingly used in number theory instead of the $O$ notation. We have
$$f \ll g \iff f = O(g),$$
and frequently both notations are used in the same paper.
The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever call it "Omicron". The symbol was much later on (1976) viewed by Knuth as a capital omicron, probably in reference to his definition of the symbol Omega. The digit zero should not be used.
See also
Asymptotic computational complexity
Asymptotic expansion: Approximation of functions generalizing Taylor's formula
Asymptotically optimal algorithm: A phrase frequently used to describe an algorithm that has an upper bound asymptotically within a constant of a lower bound for the problem
Big O in probability notation: Op, op
Limit inferior and limit superior: An explanation of some of the limit notation used in this article
Master theorem (analysis of algorithms): For analyzing divide-and-conquer recursive algorithms using Big O notation
Nachbin's theorem: A precise method of bounding complex analytic functions so that the domain of convergence of integral transforms can be stated
Order of approximation
Order of accuracy
Computational complexity of mathematical operations
References and notes
Further reading
External links
Growth of sequences — OEIS (Online Encyclopedia of Integer Sequences) Wiki
Introduction to Asymptotic Notations
Big-O Notation – What is it good for
An example of Big O in accuracy of central divided difference scheme for first derivative
A Gentle Introduction to Algorithm Complexity Analysis
Mathematical notation
Asymptotic analysis
Analysis of algorithms | Big O notation | [
"Mathematics"
] | 5,476 | [
"Mathematical analysis",
"Asymptotic analysis",
"nan"
] |
44,585 | https://en.wikipedia.org/wiki/Cyclotron | A cyclotron is a type of particle accelerator invented by Ernest Lawrence in 1929–1930 at the University of California, Berkeley, and patented in 1932. A cyclotron accelerates charged particles outwards from the center of a flat cylindrical vacuum chamber along a spiral path. The particles are held to a spiral trajectory by a static magnetic field and accelerated by a rapidly varying electric field. Lawrence was awarded the 1939 Nobel Prize in Physics for this invention.
The cyclotron was the first "cyclical" accelerator. The primary accelerators before the development of the cyclotron were electrostatic accelerators, such as the Cockcroft–Walton generator and the Van de Graaff generator. In these accelerators, particles would cross an accelerating electric field only once. Thus, the energy gained by the particles was limited by the maximum electrical potential that could be achieved across the accelerating region. This potential was in turn limited by electrostatic breakdown to a few million volts. In a cyclotron, by contrast, the particles encounter the accelerating region many times by following a spiral path, so the output energy can be many times the energy gained in a single accelerating step.
Cyclotrons were the most powerful particle accelerator technology until the 1950s, when they were surpassed by the synchrotron. Nonetheless, they are still widely used to produce particle beams for nuclear medicine and basic research. As of 2020, close to 1,500 cyclotrons were in use worldwide for the production of radionuclides for nuclear medicine. In addition, cyclotrons can be used for particle therapy, where particle beams are directly applied to patients.
History
Origins
In 1927, while a student at Kiel, German physicist Max Steenbeck was the first to formulate the concept of the cyclotron, but he was discouraged from pursuing the idea further. In late 1928 and early 1929, Hungarian physicist Leo Szilárd filed patent applications in Germany for the linear accelerator, cyclotron, and betatron. In these applications, Szilárd became the first person to discuss the resonance condition (what is now called the cyclotron frequency) for a circular accelerating apparatus. However, neither Steenbeck's ideas nor Szilard's patent applications were ever published and therefore did not contribute to the development of the cyclotron. Several months later, in the early summer of 1929, Ernest Lawrence independently conceived the cyclotron concept after reading a paper by Rolf Widerøe describing a drift tube accelerator. He published a paper in Science in 1930 (the first published description of the cyclotron concept), after a student of his built a crude model in April of that year. He patented the device in 1932.
To construct the first such device, Lawrence used large electromagnets recycled from obsolete arc converters provided by the Federal Telegraph Company. He was assisted by a graduate student, M. Stanley Livingston. Their first working cyclotron became operational on January 2, 1931. This machine had a diameter of 4.5 inches (11 cm), and accelerated protons to an energy up to 80 keV.
At the Radiation Laboratory on the campus of the University of California, Berkeley (now the Lawrence Berkeley National Laboratory), Lawrence and his collaborators went on to construct a series of cyclotrons which were the most powerful accelerators in the world at the time; a 4.8 MeV machine (1932), an 8 MeV machine (1937), and a 16 MeV machine (1939). Lawrence received the 1939 Nobel Prize in Physics for the invention and development of the cyclotron and for results obtained with it.
The first European cyclotron was constructed in 1934 in the Soviet Union by Mikhail Alekseevich Eremeev, at the Leningrad Physico-Technical Institute. It was a small design based on a prototype by Lawrence, with a 28 cm diameter capable of achieving 530 keV proton energies. Research quickly refocused around the construction of a larger MeV-level cyclotron, in the physics department of the V.G. Khlopin Radium Institute in Leningrad, headed by . This instrument was first proposed in 1932 by George Gamow and and was installed and became operative in March 1937 at 100 cm (39 in) diameter and 3.2 MeV proton energies.
The first Asian cyclotron was constructed at the Riken laboratory in Tokyo, by a team including Yoshio Nishina, Sukeo Watanabe, Tameichi Yasaki, and Ryokichi Sagane. Yasaki and Sagane had been sent to Berkeley Radiation Laboratory to work with Lawrence. The device had a 26 in diameter and the first beam was produced on April 2, 1937, at 2.9 MeV deuteron energies.
During World War II
Cyclotrons played a key role in the Manhattan Project. The published 1940 discovery of neptunium and the withheld 1941 discovery of plutonium both used bombardment in the Berkeley Radiation Laboratory's 60 in cyclotron. Furthermore Lawrence invented the calutron (California University cyclotron), which was industrially developed at the Y-12 National Security Complex from 1942. This provided the bulk of the uranium enrichment process, taking low-enriched uranium (<5% uranium-235) from the S-50 and K-25 plants and electromagnetically separating isotopes up to 84.5% highly enriched uranium. This was the first production of HEU in history, and was shipped to Los Alamos and used in the Little Boy bomb dropped on Hiroshima, and its precursor Water Boiler and Dragon test reactors.
In France, Frédéric Joliot-Curie constructed a large 7 MeV cyclotron at the Collège de France in Paris, achieving the first beam in March 1939. With the Nazi occupation of Paris in June 1940 and an incoming contingent of German scientists, Joliot ceased research into uranium fission, and obtained an understanding with his German former colleague Wolfgang Gentner that no research of military use would be carried out. In 1943 Gentner was recalled for weakness, and a new German contingent attempted to operate the cyclotron. However, it is likely that Joliot, a member of French Communist Party and in fact president of the National Front resistance movement, sabotaged the cyclotron to prevent its use to the Nazi German nuclear program.
One cyclotron was built within Nazi Germany, in Heidelberg, under the supervision of Walther Bothe and Wolfgang Gentner, with support from the Heereswaffenamt. At the end of 1938, Gentner was sent to Berkeley Radiation Laboratory and worked most closely with Emilio Segrè and Donald Cooksey, returning before the start of the war. Construction was slowed by the war and completed in January 1944, but difficulties in testing made it unusable until the war's end.
Post-war
By the late 1930s it had become clear that there was a practical limit on the beam energy that could be achieved with the traditional cyclotron design, due to the effects of special relativity. As particles reach relativistic speeds, their effective mass increases, which causes the resonant frequency for a given magnetic field to change. To address this issue and reach higher beam energies using cyclotrons, two primary approaches were taken, synchrocyclotrons (which hold the magnetic field constant, but decrease the accelerating frequency) and isochronous cyclotrons (which hold the accelerating frequency constant, but alter the magnetic field).
Lawrence's team built one of the first synchrocyclotrons in 1946. This machine eventually achieved a maximum beam energy of 350 MeV for protons. However, synchrocyclotrons suffer from low beam intensities (< 1 μA), and must be operated in a "pulsed" mode, further decreasing the available total beam. As such, they were quickly overtaken in popularity by isochronous cyclotrons.
The first isochronous cyclotron (other than classified prototypes) was built by F. Heyn and K.T. Khoe in Delft, the Netherlands, in 1956. Early isochronous cyclotrons were limited to energies of ~50 MeV per nucleon, but as manufacturing and design techniques gradually improved, the construction of "spiral-sector" cyclotrons allowed the acceleration and control of more powerful beams. Later developments included the use of more compact and power-efficient superconducting magnets and the separation of the magnets into discrete sectors, as opposed to a single large magnet.
Principle of operation
Cyclotron principle
In a particle accelerator, charged particles are accelerated by applying an electric field across a gap. The force on a particle crossing this gap is given by the Lorentz force law:
$$\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)$$
where $q$ is the charge on the particle, $\mathbf{E}$ is the electric field, $\mathbf{v}$ is the particle velocity, and $\mathbf{B}$ is the magnetic flux density. It is not possible to accelerate particles using only a static magnetic field, as the magnetic force always acts perpendicularly to the direction of motion, and therefore can only change the direction of the particle, not the speed.
In practice, the magnitude of an unchanging electric field which can be applied across a gap is limited by the need to avoid electrostatic breakdown. As such, modern particle accelerators use alternating (radio frequency) electric fields for acceleration. Since an alternating field across a gap only provides an acceleration in the forward direction for a portion of its cycle, particles in RF accelerators travel in bunches, rather than a continuous stream. In a linear particle accelerator, in order for a bunch to "see" a forward voltage every time it crosses a gap, the gaps must be placed further and further apart, in order to compensate for the increasing speed of the particle.
A cyclotron, by contrast, uses a magnetic field to bend the particle trajectories into a spiral, thus allowing the same gap to be used many times to accelerate a single bunch. As the bunch spirals outward, the increasing distance between transits of the gap is exactly balanced by the increase in speed, so a bunch will reach the gap at the same point in the RF cycle every time.
The frequency at which a particle will orbit in a perpendicular magnetic field is known as the cyclotron frequency, and depends, in the non-relativistic case, solely on the charge and mass of the particle, and the strength of the magnetic field:
$$f = \frac{qB}{2\pi m}$$
where $f$ is the (linear) frequency, $q$ is the charge of the particle, $B$ is the magnitude of the magnetic field that is perpendicular to the plane in which the particle is travelling, and $m$ is the particle mass. The property that the frequency is independent of particle velocity is what allows a single, fixed gap to be used to accelerate a particle travelling in a spiral.
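As a quick numerical check of this formula, the sketch below evaluates the cyclotron frequency for a proton; the 1.5 T field strength is an illustrative assumption, not a value quoted in this article.

```python
# Cyclotron frequency f = qB / (2*pi*m) for a proton; B = 1.5 T is assumed.
import math

q = 1.602176634e-19  # proton charge, C
m = 1.67262192e-27   # proton mass, kg
B = 1.5              # magnetic flux density, T (illustrative)

f = q * B / (2 * math.pi * m)
print(f"cyclotron frequency: {f / 1e6:.1f} MHz")  # about 22.9 MHz
```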
Particle energy
Each time a particle crosses the accelerating gap in a cyclotron, it is given an accelerating force by the electric field across the gap, and the total particle energy gain can be calculated by multiplying the increase per crossing by the number of times the particle crosses the gap.
However, given the typically high number of revolutions, it is usually simpler to estimate the energy by combining the equation for frequency in circular motion:
$$f = \frac{v}{2\pi r}$$
with the cyclotron frequency equation to yield:
$$v = \frac{qBr}{m}.$$
The kinetic energy for particles with speed $v$ is therefore given by:
$$E = \frac{1}{2}mv^2 = \frac{q^2 B^2 r^2}{2m}$$
where $r$ is the radius at which the energy is to be determined. The limit on the beam energy which can be produced by a given cyclotron thus depends on the maximum radius which can be reached by the magnetic field and the accelerating structures, and on the maximum strength of the magnetic field which can be achieved.
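For a rough sense of scale, the sketch below evaluates this energy formula for a proton; the 1.5 T field and 0.5 m extraction radius are illustrative assumptions.

```python
# E = q^2 B^2 r^2 / (2m) for a proton; B and r are assumed example values.
q = 1.602176634e-19  # C
m = 1.67262192e-27   # kg
B, r = 1.5, 0.5      # T and m (illustrative)

E_joules = (q * B * r) ** 2 / (2 * m)
E_MeV = E_joules / q / 1e6  # J -> eV -> MeV
print(f"maximum kinetic energy: {E_MeV:.1f} MeV")  # about 27 MeV
```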
K-factor
In the nonrelativistic approximation, the maximum kinetic energy per atomic mass $T/A$ for a given cyclotron is given by:
$$\frac{T}{A} = K \left(\frac{Q}{A}\right)^2, \qquad K = \frac{\left(e B r_{\max}\right)^2}{2\,m_u},$$
where $e$ is the elementary charge, $B$ is the strength of the magnet, $r_{\max}$ is the maximum radius of the beam, $m_u$ is an atomic mass unit, $Q$ is the charge of the beam particles, and $A$ is the atomic mass of the beam particles. The value of $K$ is known as the "K-factor", and is used to characterize the maximum kinetic beam energy of protons (quoted in MeV). It represents the theoretical maximum energy of protons (with $Q$ and $A$ equal to 1) accelerated in a given machine.
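A sketch of the K-factor for the same illustrative magnet parameters (1.5 T, 0.5 m) used above:

```python
# K = (e * B * r_max)^2 / (2 * m_u), quoted in MeV; B and r_max are assumed.
e = 1.602176634e-19   # elementary charge, C
m_u = 1.66053907e-27  # atomic mass unit, kg
B, r_max = 1.5, 0.5   # T and m (illustrative)

K_joules = (e * B * r_max) ** 2 / (2 * m_u)
K_MeV = K_joules / e / 1e6
print(f"K-factor: {K_MeV:.0f} MeV")  # about 27 MeV for these values
```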
Particle trajectory
While the trajectory followed by a particle in the cyclotron is conventionally referred to as a "spiral", it is more accurately described as a series of arcs of constant radius. The particle speed, and therefore orbital radius, only increases at the accelerating gaps. Away from those regions, the particle will orbit (to a first approximation) at a fixed radius.
Assuming a uniform energy gain per orbit (which is only valid in the non-relativistic case), the average orbit may be approximated by a simple spiral. If the energy gain per turn is given by $\Delta E$, the particle energy after $n$ turns will be:
$$E(n) = n\,\Delta E.$$
Combining this with the non-relativistic equation for the kinetic energy of a particle in a cyclotron gives:
$$r(n) = \frac{\sqrt{2\,m\,n\,\Delta E}}{qB}.$$
This is the equation of a Fermat spiral.
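The square-root growth of the radius with turn number can be tabulated directly; in the sketch below the 100 keV energy gain per turn and the 1.5 T field are illustrative assumptions.

```python
# r(n) = sqrt(2*m*n*dE) / (q*B): the radius grows like sqrt(n) (Fermat spiral).
import math

q = 1.602176634e-19  # C
m = 1.67262192e-27   # kg (proton)
B = 1.5              # T (assumed)
dE = 100e3 * q       # 100 keV energy gain per turn, in joules (assumed)

for n in (1, 10, 100, 270):
    r = math.sqrt(2 * m * n * dE) / (q * B)
    print(f"turn {n:4d}: r = {r * 100:6.2f} cm")
```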
Stability and focusing
As a particle bunch travels around a cyclotron, two effects tend to make its particles spread out. The first is simply the particles injected from the ion source having some initial spread of positions and velocities. This spread tends to get amplified over time, making the particles move away from the bunch center. The second is the mutual repulsion of the beam particles due to their electrostatic charges. Keeping the particles focused for acceleration requires confining the particles to the plane of acceleration (in-plane or "vertical" focusing), preventing them from moving inward or outward from their correct orbit ("horizontal" focusing), and keeping them synchronized with the accelerating RF field cycle (longitudinal focusing).
Transverse stability and focusing
The in-plane or "vertical" focusing is typically achieved by varying the magnetic field around the orbit, i.e. with azimuth. A cyclotron using this focusing method is thus called an azimuthally-varying field (AVF) cyclotron. The variation in field strength is provided by shaping the steel poles of the magnet into sectors which can have a shape reminiscent of a spiral and also have a larger area towards the outer edge of the cyclotron to improve the vertical focus of the particle beam. This solution for focusing the particle beam was proposed by L. H. Thomas in 1938 and almost all modern cyclotrons use azimuthally-varying fields.
The "horizontal" focusing happens as a natural result of cyclotron motion. Since for identical particles travelling perpendicularly to a constant magnetic field the trajectory curvature radius is only a function of their speed, all particles with the same speed will travel in circular orbits of the same radius, and a particle with a slightly incorrect trajectory will simply travel in a circle with a slightly offset center. Relative to a particle with a centered orbit, such a particle will appear to undergo a horizontal oscillation relative to the centered particle. This oscillation is stable for particles with a small deviation from the reference energy.
Longitudinal stability
The instantaneous level of synchronization between a particle and the RF field is expressed by phase difference between the RF field and the particle. In the first harmonic mode (i.e. particles make one revolution per RF cycle) it is the difference between the instantaneous phase of the RF field and the instantaneous azimuth of the particle. Fastest acceleration is achieved when the phase difference equals 90° (modulo 360°). Poor synchronization, i.e. phase difference far from this value, leads to the particle being accelerated slowly or even decelerated (outside of the 0–180° range).
As the time taken by a particle to complete an orbit depends only on the particle's type, magnetic field (which may vary with the radius), and Lorentz factor (see below), cyclotrons have no longitudinal focusing mechanism which would keep the particles synchronized to the RF field. The phase difference, that the particle had at the moment of its injection into the cyclotron, is preserved throughout the acceleration process, but errors from imperfect match between the RF field frequency and the cyclotron frequency at a given radius accumulate on top of it. Failure of the particle to be injected with phase difference within about ±20° from the optimum may make its acceleration too slow and its stay in the cyclotron too long. As a consequence, half-way through the process the phase difference escapes the 0–180° range, the acceleration turns into deceleration, and the particle fails to reach the target energy. Grouping of the particles into correctly synchronized bunches before their injection into the cyclotron thus greatly increases the injection efficiency.
Relativistic considerations
In the non-relativistic approximation, the cyclotron frequency does not depend upon the particle's speed or the radius of the particle's orbit. As the beam spirals outward, the rotation frequency stays constant, and the beam continues to accelerate as it travels a greater distance in the same time period. In contrast to this approximation, as particles approach the speed of light, the cyclotron frequency decreases due to the change in relativistic mass. This change is proportional to the particle's Lorentz factor.
The relativistic mass can be written as:
$$m = \frac{m_0}{\sqrt{1 - \left(\frac{v}{c}\right)^2}} = \frac{m_0}{\sqrt{1 - \beta^2}} = \gamma m_0,$$
where:
$m_0$ is the particle rest mass,
$\beta = \frac{v}{c}$ is the relative velocity, and
$\gamma = \frac{1}{\sqrt{1 - \beta^2}}$ is the Lorentz factor.
Substituting this into the equations for cyclotron frequency and angular frequency gives:
$$f = \frac{qB}{2\pi \gamma m_0}, \qquad \omega = \frac{qB}{\gamma m_0}.$$
The gyroradius for a particle moving in a static magnetic field is then given by:
$$r = \frac{\gamma \beta m_0 c}{qB}.$$
Expressing the speed in this equation in terms of frequency and radius,
$$v = 2\pi f r,$$
yields the connection between the magnetic field strength, frequency, and radius:
$$B = \frac{2\pi \gamma m_0 f}{q}.$$
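The falling orbital frequency can be tabulated against kinetic energy; the sketch below does this for a proton in an assumed fixed 1.5 T field.

```python
# f = qB / (2*pi*gamma*m0): orbital frequency drops as gamma grows; B assumed.
import math

q = 1.602176634e-19     # C
m0 = 1.67262192e-27     # proton rest mass, kg
c2 = 2.99792458e8 ** 2  # c^2, m^2/s^2
B = 1.5                 # T (illustrative)

for T_MeV in (1, 10, 100, 500):  # kinetic energies
    gamma = 1 + (T_MeV * 1e6 * q) / (m0 * c2)
    f = q * B / (2 * math.pi * gamma * m0)
    print(f"{T_MeV:4d} MeV: gamma = {gamma:.3f}, f = {f / 1e6:.2f} MHz")
```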
Approaches to relativistic cyclotrons
Synchrocyclotron
Since $\gamma$ increases as the particle reaches relativistic velocities, acceleration of relativistic particles requires modification of the cyclotron to ensure the particle crosses the gap at the same point in each RF cycle. If the frequency of the accelerating electric field is varied while the magnetic field is held constant, this leads to the synchrocyclotron.
In this type of cyclotron, the accelerating frequency is varied as a function of particle orbit radius such that:
$$f(r) = \frac{qB}{2\pi\,\gamma(r)\,m_0}.$$
The decrease in accelerating frequency is tuned to match the increase in gamma for a constant magnetic field.
Isochronous cyclotron
If instead the magnetic field is varied with radius while the frequency of the accelerating field is held constant, this leads to the isochronous cyclotron.
Keeping the frequency constant allows isochronous cyclotrons to operate in a continuous mode, which makes them capable of producing much greater beam current than synchrocyclotrons. On the other hand, as precise matching of the orbital frequency to the accelerating field frequency is the responsibility of the magnetic field variation with radius, the variation must be precisely tuned.
Fixed-field alternating gradient accelerator (FFA)
An approach which combines static magnetic fields (as in the synchrocyclotron) and alternating gradient focusing (as in a synchrotron) is the fixed-field alternating gradient accelerator (FFA). In an isochronous cyclotron, the magnetic field is shaped by using precisely machined steel magnet poles. This variation provides a focusing effect as the particles cross the edges of the poles. In an FFA, separate magnets with alternating directions are used to focus the beam using the principle of strong focusing. The field of the focusing and bending magnets in an FFA is not varied over time, so the beam chamber must still be wide enough to accommodate a changing beam radius within the field of the focusing magnets as the beam accelerates.
Classifications
Cyclotron types
There are a number of basic types of cyclotron:
Beam types
The particles for cyclotron beams are produced in ion sources of various types.
Target types
To make use of the cyclotron beam, it must be directed to a target.
Usage
Basic research
For several decades, cyclotrons were the best source of high-energy beams for nuclear physics experiments. With the advent of strong focusing synchrotrons, cyclotrons were supplanted as the accelerators capable of producing the highest energies. However, due to their compactness, and therefore lower expense compared to high energy synchrotrons, cyclotrons are still used to create beams for research where the primary consideration is not achieving the maximum possible energy. Cyclotron based nuclear physics experiments are used to measure basic properties of isotopes (particularly short lived radioactive isotopes) including half life, mass, interaction cross sections, and decay schemes.
Medical uses
Radioisotope production
Cyclotron beams can be used to bombard other atoms to produce short-lived isotopes with a variety of medical uses, including medical imaging and radiotherapy. Positron and gamma emitting isotopes, such as fluorine-18, carbon-11, and technetium-99m are used for PET and SPECT imaging. While cyclotron produced radioisotopes are widely used for diagnostic purposes, therapeutic uses are still largely in development. Proposed isotopes include astatine-211, palladium-103, rhenium-186, and bromine-77, among others.
Beam therapy
The first suggestion that energetic protons could be an effective treatment method was made by Robert R. Wilson in a paper published in 1946 while he was involved in the design of the Harvard Cyclotron Laboratory.
Beams from cyclotrons can be used in particle therapy to treat cancer. Ion beams from cyclotrons can be used, as in proton therapy, to penetrate the body and kill tumors by radiation damage, while minimizing damage to healthy tissue along their path.
As of 2020, there were approximately 80 facilities worldwide for radiotherapy using beams of protons and heavy ions, consisting of a mixture of cyclotrons and synchrotrons. Cyclotrons are primarily used for proton beams, while synchrotrons are used to produce heavier ions.
Advantages and limitations
The most obvious advantage of a cyclotron over a linear accelerator is that because the same accelerating gap is used many times, it is both more space efficient and more cost efficient; particles can be brought to higher energies in less space, and with less equipment. The compactness of the cyclotron reduces other costs as well, such as foundations, radiation shielding, and the enclosing building. Cyclotrons have a single electrical driver, which saves both equipment and power costs. Furthermore, cyclotrons are able to produce a continuous beam of particles at the target, so the average power passed from a particle beam into a target is relatively high compared to the pulsed beam of a synchrotron.
However, as discussed above, a constant frequency acceleration method is only possible when the accelerated particles are approximately obeying Newton's laws of motion. If the particles become fast enough that relativistic effects become important, the beam becomes out of phase with the oscillating electric field, and cannot receive any additional acceleration. The classical cyclotron (constant field and frequency) is therefore only capable of accelerating particles up to a few percent of the speed of light. Synchro-, isochronous, and other types of cyclotrons can overcome this limitation, with the tradeoff of increased complexity and cost.
An additional limitation of cyclotrons is due to space charge effects – the mutual repulsion of the particles in the beam. As the amount of particles (beam current) in a cyclotron beam is increased, the effects of electrostatic repulsion grow stronger until they disrupt the orbits of neighboring particles. This puts a functional limit on the beam intensity, or the number of particles which can be accelerated at one time, as distinct from their energy.
Notable examples
Superconducting cyclotron examples
A superconducting cyclotron uses superconducting magnets to achieve high magnetic field in a small diameter and with lower power requirements. These cyclotrons require a cryostat to house the magnet and cool it to superconducting temperatures. Some of these cyclotrons are being built for medical therapy.
Related technologies
The spiraling of electrons in a cylindrical vacuum chamber within a transverse magnetic field is also employed in the magnetron, a device for producing high frequency radio waves (microwaves). In the magnetron, electrons are bent into a circular path by a magnetic field, and their motion is used to excite resonant cavities, producing electromagnetic radiation.
A betatron uses the change in the magnetic field to accelerate electrons in a circular path. While static magnetic fields cannot provide acceleration, as the force always acts perpendicularly to the direction of particle motion, changing fields can be used to induce an electromotive force in the same manner as in a transformer. The betatron was developed in 1940, although the idea had been proposed substantially earlier.
A synchrotron is another type of particle accelerator that uses magnets to bend particles into a circular trajectory. Unlike in a cyclotron, the particle path in a synchrotron has a fixed radius. Particles in a synchrotron pass accelerating stations at increasing frequency as they get faster. To compensate for this frequency increase, both the frequency of the applied accelerating electric field and the magnetic field must be increased in tandem, leading to the "synchro" portion of the name.
In fiction
The United States Department of War famously asked for dailies of the Superman comic strip to be pulled in April 1945 for having Superman bombarded with the radiation from a cyclotron.
In the 1984 film Ghostbusters, a miniature cyclotron forms part of the proton pack used for catching ghosts.
See also
Cyclotron radiation – radiation produced by non-relativistic charged particles bent by a magnetic field
Fast neutron therapy – a type of beam therapy that may use accelerator produced beams
Microtron – an accelerator concept similar to the cyclotron which uses a linear-accelerator-type accelerating structure with a constant magnetic field
Radiation reaction force – a braking force on beams that are bent in a magnetic field
Notes
References
Further reading
About a neighborhood cyclotron in Anchorage, Alaska.
An experiment done by Fred M. Niell III during his senior year of high school (1994–95), with which he won the overall grand prize in the ISEF.
External links
Current facilities
The 88-Inch Cyclotron at Lawrence Berkeley National Laboratory
PSI Proton Accelerator – the highest beam current cyclotron in the world.
The Superconducting Ring Cyclotron at the RIKEN Nishina Center for Accelerator Based Science – the highest energy cyclotron in the world
Rutgers Cyclotron – Students at Rutgers University built a 1 MeV cyclotron as an undergraduate project, which is now used for a senior-level undergraduate and a graduate lab course.
TRIUMF – the largest single-magnet cyclotron in the world.
Historic cyclotrons
Ernest Lawrence's Cyclotron A history of cyclotron development at the Berkeley Radiation Laboratory, now Lawrence Berkeley National Laboratory
National Superconducting Cyclotron Laboratory of the Michigan State University – Home of coupled K500 and K1200 superconducting cyclotrons; the K500, the first superconducting cyclotron, and the K1200, formerly the most powerful in the world.
1932 introductions
Accelerator physics
American inventions
Nuclear medicine
Particle accelerators | Cyclotron | [
"Physics"
] | 5,712 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
44,590 | https://en.wikipedia.org/wiki/Radius%20of%20gyration | The radius of gyration or gyradius of a body about the axis of rotation is defined as the radial distance to a point which would have a moment of inertia the same as the body's actual distribution of mass, if the total mass of the body were concentrated there. The radius of gyration has dimensions of distance [L] or [M0LT0] and the SI unit is the metre (m).
Formulation
Mathematically, the radius of gyration is the root mean square distance of the object's parts from either its center of mass or a given axis, depending on the relevant application. Equivalently, it is the perpendicular distance from the axis of rotation at which a single point mass would reproduce the body's moment of inertia. One can also represent the trajectory of a moving point as a body; the radius of gyration then characterizes the typical distance travelled by this point from its mean position.
Suppose a body consists of $n$ particles each of mass $m$. Let $r_1, r_2, \ldots, r_n$ be their perpendicular distances from the axis of rotation. Then, the moment of inertia $I$ of the body about the axis of rotation is
$$I = m_1 r_1^2 + m_2 r_2^2 + \cdots + m_n r_n^2.$$
If all the masses are the same ($m$), then the moment of inertia is $I = m(r_1^2 + r_2^2 + \cdots + r_n^2)$.
Since $m = M/n$ ($M$ being the total mass of the body),
$$I = \frac{M}{n}\left(r_1^2 + r_2^2 + \cdots + r_n^2\right).$$
From the above equations, we have
$$M R_g^2 = \frac{M}{n}\left(r_1^2 + r_2^2 + \cdots + r_n^2\right),$$
so the radius of gyration is the root mean square distance of the particles from the axis:
$$R_g = \sqrt{\frac{r_1^2 + r_2^2 + \cdots + r_n^2}{n}}.$$
Therefore, the radius of gyration of a body about a given axis may also be defined as the root mean square distance of the various particles of the body from the axis of rotation. It is also known as a measure of the way in which the mass of a rotating rigid body is distributed about its axis of rotation.
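For a discrete mass distribution, the definition above translates directly into code. A minimal sketch (hypothetical point masses about an arbitrary axis in 3D):

```python
# Minimal sketch: radius of gyration of a set of point masses, computed as the
# RMS perpendicular distance from a chosen rotation axis, R_g = sqrt(I / M).
import numpy as np

def radius_of_gyration(points, masses, axis_point, axis_dir):
    """R_g about an arbitrary axis through axis_point with direction axis_dir."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    # Perpendicular component of each position relative to the axis.
    perp = rel - np.outer(rel @ axis_dir, axis_dir)
    d2 = np.sum(perp**2, axis=1)
    inertia = np.sum(masses * d2)        # I = sum over m_i * d_i^2
    return float(np.sqrt(inertia / masses.sum()))

# Four unit masses at the corners of a unit square, spun about its central axis:
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
m = np.ones(4)
print(radius_of_gyration(pts, m, np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0])))
# -> 0.7071..., since each corner sits sqrt(1/2) from the centre.
```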
Applications in structural engineering
In structural engineering, the two-dimensional radius of gyration is used to describe the distribution of cross-sectional area in a column around its centroidal axis. In this context the radius of gyration is given by
$$r = \sqrt{\frac{I}{A}}$$
where $I$ is the second moment of area and $A$ is the total cross-sectional area.
The gyration radius is useful in estimating the stiffness of a column. If the principal moments of the two-dimensional gyration tensor are not equal, the column will tend to buckle around the axis with the smaller principal moment. For example, a column with an elliptical cross-section will tend to buckle in the direction of the smaller semiaxis.
In engineering, where continuous bodies of matter are generally the objects of study, the radius of gyration is usually calculated as an integral.
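For simple shapes those integrals have closed forms. A minimal sketch (hypothetical 100 mm × 300 mm rectangular section) computing the two principal radii of gyration and hence the weak buckling axis:

```python
# Minimal sketch: principal radii of gyration of a rectangular column section.
# For a b x h rectangle, I_x = b*h**3/12 and I_y = h*b**3/12 about the centroid,
# so r_x = h/sqrt(12) and r_y = b/sqrt(12); buckling favors the smaller radius.
import math

def rect_radii_of_gyration(b, h):
    area = b * h
    r_x = math.sqrt((b * h**3 / 12) / area)   # = h / sqrt(12)
    r_y = math.sqrt((h * b**3 / 12) / area)   # = b / sqrt(12)
    return r_x, r_y

r_x, r_y = rect_radii_of_gyration(b=0.10, h=0.30)   # hypothetical 100 x 300 mm
print(f"r_x = {r_x * 1e3:.1f} mm, r_y = {r_y * 1e3:.1f} mm")
# r_y is smaller, so this column tends to buckle about that (weak) axis first.
```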
Applications in mechanics
The radius of gyration about a given axis, $r_{\mathrm{gyration}}$, can be calculated in terms of the mass moment of inertia $I$ around that axis and the total mass $m$:
$$r_{\mathrm{gyration}} = \sqrt{\frac{I}{m}}.$$
Here $I$ is a scalar moment of inertia about the chosen axis, not the moment of inertia tensor.
Molecular applications
In polymer physics, the radius of gyration is used to describe the dimensions of a polymer chain. The radius of gyration of an individual homopolymer with degree of polymerization $N$ at a given time is defined as:
$$R_g^2 \;\stackrel{\text{def}}{=}\; \frac{1}{N}\sum_{k=1}^{N}\left(\mathbf{r}_k - \mathbf{r}_{\text{mean}}\right)^2$$
where $\mathbf{r}_{\text{mean}} = \frac{1}{N}\sum_{k=1}^{N}\mathbf{r}_k$ is the mean position of the monomers.
As detailed below, the radius of gyration is also proportional to the root mean square distance between the monomers:
$$R_g^2 = \frac{1}{2N^2}\sum_{i,j}\left(\mathbf{r}_i - \mathbf{r}_j\right)^2.$$
As a third method, the radius of gyration can also be computed by summing the principal moments of the gyration tensor.
Since the chain conformations of a polymer sample are quasi infinite in number and constantly change over time, the "radius of gyration" discussed in polymer physics must usually be understood as a mean over all polymer molecules of the sample and over time. That is, the radius of gyration which is measured is an average over time or ensemble:
$$R_g^2 \;\stackrel{\text{def}}{=}\; \frac{1}{N}\left\langle \sum_{k=1}^{N}\left(\mathbf{r}_k - \mathbf{r}_{\text{mean}}\right)^2 \right\rangle$$
where the angular brackets denote the ensemble average.
An entropically governed polymer chain (i.e. in so-called theta conditions) follows a random walk in three dimensions. The radius of gyration for this case is given by
$$R_g = \frac{1}{\sqrt{6}}\,\sqrt{N}\,b,$$
where $b$ is the statistical (Kuhn) segment length. Note that although $bN$ represents the contour length of the polymer, $b$ is strongly dependent on polymer stiffness and can vary over orders of magnitude; $N$ is reduced accordingly.
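A minimal sketch (assumed segment length b = 1, with a freely jointed chain standing in for theta conditions) that checks this scaling numerically against random walks:

```python
# Minimal sketch: check R_g = b*sqrt(N/6) for an ideal (theta-condition) chain,
# modelled here as a freely jointed 3D random walk with assumed bond length b = 1.
import numpy as np

rng = np.random.default_rng(0)

def ideal_chain_rg(n_monomers, n_samples=1000):
    """Ensemble-averaged radius of gyration of freely jointed 3D chains."""
    rg2 = 0.0
    for _ in range(n_samples):
        # Random unit bond vectors; cumulative sum gives monomer positions.
        bonds = rng.normal(size=(n_monomers - 1, 3))
        bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)
        pos = np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])
        rg2 += np.mean(np.sum((pos - pos.mean(axis=0)) ** 2, axis=1))
    return float(np.sqrt(rg2 / n_samples))

N = 1000
print("simulated R_g:", round(ideal_chain_rg(N), 2))        # ~12.9
print("theory    R_g:", round(float(np.sqrt(N / 6)), 2))    # 12.91
```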
One reason that the radius of gyration is an interesting property is that it can be determined experimentally with static light scattering as well as with small angle neutron- and x-ray scattering. This allows theoretical polymer physicists to check their models against reality.
The hydrodynamic radius is numerically similar, and can be measured with Dynamic Light Scattering (DLS).
Derivation of identity
To show that the two definitions of $R_g^2$ are identical, we first multiply out the summand in the first definition:
$$R_g^2 = \frac{1}{N}\sum_{k=1}^{N}\left(\mathbf{r}_k - \mathbf{r}_{\text{mean}}\right)^2 = \frac{1}{N}\sum_{k=1}^{N}\left(\mathbf{r}_k\cdot\mathbf{r}_k - 2\,\mathbf{r}_k\cdot\mathbf{r}_{\text{mean}} + \mathbf{r}_{\text{mean}}\cdot\mathbf{r}_{\text{mean}}\right).$$
Carrying out the summation over the last two terms and using the definition $\mathbf{r}_{\text{mean}} = \frac{1}{N}\sum_k \mathbf{r}_k$ gives the formula
$$R_g^2 = \frac{1}{N}\sum_{k=1}^{N}\left(\mathbf{r}_k\cdot\mathbf{r}_k\right) - \mathbf{r}_{\text{mean}}\cdot\mathbf{r}_{\text{mean}}.$$
On the other hand, the second definition can be calculated in the same way:
$$\frac{1}{2N^2}\sum_{i,j}\left(\mathbf{r}_i - \mathbf{r}_j\right)^2 = \frac{1}{2N^2}\sum_{i,j}\left(\mathbf{r}_i\cdot\mathbf{r}_i - 2\,\mathbf{r}_i\cdot\mathbf{r}_j + \mathbf{r}_j\cdot\mathbf{r}_j\right) = \frac{1}{N}\sum_{k=1}^{N}\left(\mathbf{r}_k\cdot\mathbf{r}_k\right) - \mathbf{r}_{\text{mean}}\cdot\mathbf{r}_{\text{mean}}.$$
Thus, the two definitions are the same. The last transformation uses the relationship
$$\mathbf{r}_{\text{mean}}\cdot\mathbf{r}_{\text{mean}} = \frac{1}{N^2}\sum_{i,j}\mathbf{r}_i\cdot\mathbf{r}_j.$$
Applications in geographical data analysis
In data analysis, the radius of gyration is used to calculate many different statistics, including the spread of geographical locations. Such locations have recently been collected from social media users, for example to investigate the typical geographic spread of an individual user's activity, which can be useful for understanding how a certain group of users on social media use the platform.
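A minimal sketch (equirectangular projection, hypothetical check-in coordinates) of how such a geographic spread can be computed:

```python
# Minimal sketch: radius of gyration of geographic points (lat/lon in degrees).
# Points are projected to local tangent-plane coordinates around their centroid;
# the equirectangular approximation is adequate for city-scale spreads.
import math

EARTH_RADIUS_KM = 6371.0

def geo_radius_of_gyration(points_deg):
    lat0 = sum(p[0] for p in points_deg) / len(points_deg)
    lon0 = sum(p[1] for p in points_deg) / len(points_deg)
    xy = [(math.radians(lon - lon0) * math.cos(math.radians(lat0)) * EARTH_RADIUS_KM,
           math.radians(lat - lat0) * EARTH_RADIUS_KM)
          for lat, lon in points_deg]
    return math.sqrt(sum(x * x + y * y for x, y in xy) / len(xy))

# Check-ins of a hypothetical user around central London:
checkins = [(51.5074, -0.1278), (51.5155, -0.0922), (51.5033, -0.1196)]
print(f"{geo_radius_of_gyration(checkins):.2f} km")
```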
Notes
References
Grosberg AY and Khokhlov AR. (1994) Statistical Physics of Macromolecules (translated by Atanov YA), AIP Press.
Flory PJ. (1953) Principles of Polymer Chemistry, Cornell University, pp. 428–429 (Appendix C of Chapter X).
Solid mechanics
Polymer physics
Radii | Radius of gyration | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,118 | [
"Polymer physics",
"Solid mechanics",
"Mechanics",
"Polymer chemistry"
] |
44,596 | https://en.wikipedia.org/wiki/Positronium | Positronium (Ps) is a system consisting of an electron and its anti-particle, a positron, bound together into an exotic atom, specifically an onium. Unlike hydrogen, the system has no protons. The system is unstable: the two particles annihilate each other to predominantly produce two or three gamma-rays, depending on the relative spin states. The energy levels of the two particles are similar to that of the hydrogen atom (which is a bound state of a proton and an electron). However, because of the reduced mass, the frequencies of the spectral lines are less than half of those for the corresponding hydrogen lines.
States
The mass of positronium is 1.022 MeV/c2, which is twice the electron mass minus the binding energy of a few eV. The lowest energy orbital state of positronium is 1S, and, as with hydrogen, it has a hyperfine structure arising from the relative orientations of the spins of the electron and the positron.
The singlet state, 1S0, with antiparallel spins (S = 0, Ms = 0) is known as para-positronium (p-Ps). It has a mean lifetime of about 0.12 ns and decays preferentially into two gamma rays with an energy of 511 keV each (in the center-of-mass frame). Para-positronium can decay into any even number of photons (2, 4, 6, ...), but the probability quickly decreases with the number: the branching ratio for decay into 4 photons is of order 10⁻⁶.
The para-positronium lifetime in vacuum is approximately
$$t_0 = \frac{2\hbar}{m_e c^2 \alpha^5} \approx 0.124~\text{ns}.$$
The triplet states, 3S1, with parallel spins (S = 1, Ms = −1, 0, 1) are known as ortho-positronium (o-Ps), and have an energy that is approximately 0.001 eV higher than the singlet. These states have a mean lifetime of about 142 ns, and the leading decay is three gammas. Other modes of decay are negligible; for instance, the five-photon mode has a branching ratio of order 10⁻⁶.
The ortho-positronium lifetime in vacuum can be calculated approximately, to leading order, as
$$t_1 = \frac{9h}{4 m_e c^2 \alpha^6 \left(\pi^2 - 9\right)} \approx 138.6~\text{ns}.$$
However, more accurate calculations with corrections to O(α2) yield a decay rate of 7.040 μs⁻¹, corresponding to a lifetime of 142.05 ns.
Positronium in the 2S state is metastable, having a lifetime of about 1.1 μs against annihilation. Positronium created in such an excited state will quickly cascade down to the ground state, where annihilation occurs more quickly.
Measurements
Measurements of these lifetimes and energy levels have been used in precision tests of quantum electrodynamics, confirming quantum electrodynamics (QED) predictions to high precision.
Annihilation can proceed via a number of channels, each producing gamma rays with a total energy of 1.022 MeV (the sum of the electron and positron mass-energy), usually 2 or 3 photons, with up to 5 gamma-ray photons recorded from a single annihilation.
The annihilation into a neutrino–antineutrino pair is also possible, but the probability is predicted to be negligible. In Standard Model predictions the branching ratio for o-Ps decay through this channel is of order 10⁻¹⁸ for an electron neutrino–antineutrino pair and smaller still for the other flavours, but it can be increased by non-standard neutrino properties, like a relatively high magnetic moment. The experimental upper limits on the branching ratio for this decay (as well as for a decay into any "invisible" particles) are of order 10⁻⁷ for both p-Ps and o-Ps.
Energy levels
While precise calculation of positronium energy levels uses the Bethe–Salpeter equation or the Breit equation, the similarity between positronium and hydrogen allows a rough estimate. In this approximation, the energy levels are different because of a different effective mass, μ, in the energy equation (see electron energy levels for a derivation):
$$E_n = -\frac{\mu q_e^4}{8 h^2 \varepsilon_0^2}\,\frac{1}{n^2}$$
where:
$q_e$ is the charge magnitude of the electron (same as the positron),
$h$ is the Planck constant,
$\varepsilon_0$ is the electric constant (otherwise known as the permittivity of free space),
$\mu = \dfrac{m_e m_p}{m_e + m_p}$ is the reduced mass, where $m_e$ and $m_p$ are, respectively, the mass of the electron and the positron (which are the same by definition, as antiparticles).
Thus, for positronium, the reduced mass is $\mu = m_e/2$, differing from the electron mass by a factor of 2. This causes the energy levels also to be roughly half of what they are for the hydrogen atom.
So finally, the energy levels of positronium are given by
$$E_n = -\frac{1}{2}\,\frac{m_e q_e^4}{8 h^2 \varepsilon_0^2}\,\frac{1}{n^2} = \frac{-6.8~\text{eV}}{n^2}.$$
The lowest energy level of positronium (n = 1) is −6.8 eV; the next level (n = 2) is −1.7 eV. The negative sign is a convention that implies a bound state. Positronium can also be considered by a particular form of the two-body Dirac equation: two particles with a Coulomb interaction can be exactly separated in the (relativistic) center-of-momentum frame, and the resulting ground-state energy has been obtained very accurately using the finite element methods of Janine Shertzer. Their results led to the discovery of anomalous states.
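As a quick check on the numbers above, here is a minimal sketch (standard CODATA constants; plain Bohr-model formula only, with no relativistic or QED corrections) that evaluates the reduced-mass energy levels:

```python
# Minimal sketch: hydrogen-like Bohr levels with a reduced mass, applied to
# positronium (mu = m_e / 2). Reproduces E_1 ~ -6.8 eV, half the hydrogen value.
import math

M_E = 9.1093837015e-31      # electron (= positron) mass, kg
Q_E = 1.602176634e-19       # elementary charge, C
H = 6.62607015e-34          # Planck constant, J s
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def bohr_level_ev(mu_kg, n):
    """E_n = -mu * q^4 / (8 h^2 eps0^2 n^2), returned in eV."""
    e_joule = -mu_kg * Q_E**4 / (8 * H**2 * EPS0**2 * n**2)
    return e_joule / Q_E

mu_ps = M_E / 2             # reduced mass of the e+ e- pair
for n in (1, 2, 3):
    print(f"n={n}: {bohr_level_ev(mu_ps, n):+.2f} eV")
# n=1: -6.80 eV, n=2: -1.70 eV, n=3: -0.76 eV
```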
The Dirac equation whose Hamiltonian comprises two Dirac particles and a static Coulomb potential is not relativistically invariant. But if one adds the $1/c^{2n}$ (or $\alpha^{2n}$, where $\alpha$ is the fine-structure constant) terms, with $n = 1, 2, \ldots$, then the result is relativistically invariant. Only the leading term is included. The $\alpha^2$ contribution is the Breit term; workers rarely go to $\alpha^4$ because at $\alpha^3$ one has the Lamb shift, which requires quantum electrodynamics.
Formation and decay in materials
After a radioactive atom in a material undergoes a β+ decay (positron emission), the resulting high-energy positron slows down by colliding with atoms, and eventually annihilates with one of the many electrons in the material. It may however first form positronium before the annihilation event. The understanding of this process is of some importance in positron emission tomography. Approximately:
~60% of positrons will directly annihilate with an electron without forming positronium. The annihilation usually results in two gamma rays. In most cases this direct annihilation occurs only after the positron has lost its excess kinetic energy and has thermalized with the material.
~10% of positrons form para-positronium, which then promptly (in ~0.12 ns) decays, usually into two gamma rays.
~30% of positrons form ortho-positronium but then annihilate within a few nanoseconds by 'picking off' another nearby electron with opposing spin. This usually produces two gamma rays. During this time, the very lightweight positronium atom exhibits a strong zero-point motion, that exerts a pressure and is able to push out a tiny nanometer-sized bubble in the medium.
Only ~0.5% of positrons form ortho-positronium that self-decays (usually into three gamma rays). This natural decay rate of ortho-positronium is relatively slow (~140 ns decay lifetime), compared to the aforementioned pick-off process, which is why the three-gamma decay rarely occurs.
History
The Croatian physicist Stjepan Mohorovičić predicted the existence of positronium in a 1934 article published in Astronomische Nachrichten, in which he called it the "electrum". Other sources incorrectly credit Carl Anderson as having predicted its existence in 1932 while at Caltech. It was experimentally discovered by Martin Deutsch at MIT in 1951 and became known as positronium. Many subsequent experiments have precisely measured its properties and verified predictions of quantum electrodynamics.
A discrepancy known as the ortho-positronium lifetime puzzle persisted for some time, but was resolved with further calculations and measurements. The measurements were in error because they included the decay of unthermalised positronium, which was produced at only a small rate; this had yielded lifetimes that were too long. Also, calculations using relativistic quantum electrodynamics are difficult, so they had been done to only the first order. Corrections that involved higher orders were then calculated in a non-relativistic quantum electrodynamics framework.
In 2024, the AEgIS collaboration at CERN was the first to cool positronium by laser light, leaving it available for experimental use. The substance was brought to about 170 K using laser cooling.
Exotic compounds
Molecular bonding was predicted for positronium. Molecules of positronium hydride (PsH) can be made. Positronium can also form a cyanide and can form bonds with halogens or lithium.
The first observation of di-positronium () molecules—molecules consisting of two positronium atoms—was reported on 12 September 2007 by David Cassidy and Allen Mills from University of California, Riverside.
Unlike muonium, positronium does not have a nucleus analogue, because the electron and the positron have equal masses. Consequently, while muonium tends to behave like a light isotope of hydrogen, positronium shows large differences in size, polarisability, and binding energy from hydrogen.
Natural occurrence
The events in the early universe leading to baryon asymmetry predate the formation of atoms (including exotic varieties such as positronium) by around a third of a million years, so no positronium atoms occurred then.
Likewise, the naturally occurring positrons in the present day result from high-energy interactions such as in cosmic ray–atmosphere interactions, and so are too hot (thermally energetic) to form electrical bonds before annihilation.
See also
Breit equation
Antiprotonic helium
Di-positronium
Exciton — solid-state analog
Protonium
Quantum electrodynamics
Two-body Dirac equations
Quarkonium
References
External links
The annihilation of positronium - The Feynman Lectures on Physics
The Search for Positronium
Obituary of Martin Deutsch, discoverer of Positronium
Molecular physics
Quantum electrodynamics
Spintronics
Onia
Antimatter
Substances discovered in the 1950s | Positronium | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,129 | [
"Antimatter",
"Molecular physics",
"Spintronics",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Matter",
" and optical physics"
] |
44,603 | https://en.wikipedia.org/wiki/Calcite | Calcite is a carbonate mineral and the most stable polymorph of calcium carbonate (CaCO3). It is a very common mineral, particularly as a component of limestone. Calcite defines hardness 3 on the Mohs scale of mineral hardness, based on scratch hardness comparison. Large calcite crystals are used in optical equipment, and limestone composed mostly of calcite has numerous uses.
Other polymorphs of calcium carbonate are the minerals aragonite and vaterite. Aragonite will change to calcite over timescales of days or less at temperatures exceeding 300 °C, and vaterite is even less stable.
Etymology
Calcite is derived from the German Calcit, a term coined in the 19th century from the Latin word for lime, calx (genitive calcis), with the suffix -ite used to name minerals. It is thus a doublet of the word chalk.
When applied by archaeologists and stone trade professionals, the term alabaster is used not only in its geological and mineralogical sense, where it is reserved for a variety of gypsum, but also for a similar-looking, translucent variety of fine-grained banded calcite deposit.
Unit cell and Miller indices
In publications, two different sets of Miller indices are used to describe directions in hexagonal and rhombohedral crystals, including calcite crystals: three Miller indices in the (hkl) notation, or four Bravais–Miller indices in the (hkil) notation, where i = −(h + k) is redundant but useful in visualizing permutation symmetries.
To add to the complications, there are also two definitions of unit cell for calcite. One, an older "morphological" unit cell, was inferred by measuring angles between faces of crystals, typically with a goniometer, and looking for the smallest numbers that fit. Later, a "structural" unit cell was determined using X-ray crystallography. The morphological unit cell is rhombohedral, having approximate dimensions a = 10 Å and c = 8.5 Å, while the structural unit cell is hexagonal (i.e. a rhombic prism), having approximate dimensions a = 5 Å and c = 17 Å. For the same orientation, the index l must be multiplied by 4 to convert from morphological to structural units. As an example, calcite cleavage is given as "perfect on {1 0 1}" in morphological coordinates and "perfect on {1 0 4}" in structural units. In four-index (hkil) notation these are {1 0 1 1} and {1 0 1 4} (the third index negative, conventionally written with an overbar), respectively. Twinning, cleavage and crystal forms are often given in morphological units.
Properties
The diagnostic properties of calcite include a defining Mohs hardness of 3, a specific gravity of 2.71 and, in crystalline varieties, a vitreous luster. Color is white or none, though shades of gray, red, orange, yellow, green, blue, violet, brown, or even black can occur when the mineral is charged with impurities.
Crystal habits
Calcite has numerous habits, representing combinations of over 1000 crystallographic forms. Most common are scalenohedra, with faces in the hexagonal {2 1 1} directions (morphological unit cell) or {2 1 4} directions (structural unit cell); and rhombohedra, with faces in the {1 0 1} or {1 0 4} directions (the most common cleavage plane). Habits include acute to obtuse rhombohedra, tabular habits, prisms, and various scalenohedra. Calcite exhibits several twinning types that add to the observed habits. It may occur as fibrous, granular, lamellar, or compact. A fibrous, efflorescent habit is known as lublinite. Cleavage is usually in three directions parallel to the rhombohedron form. Its fracture is conchoidal, but difficult to obtain.
Scalenohedral faces are chiral and come in pairs with mirror-image symmetry; their growth can be influenced by interaction with chiral biomolecules such as L- and D-amino acids. Rhombohedral faces are not chiral.
Optical
Calcite is transparent to opaque and may occasionally show phosphorescence or fluorescence. A transparent variety called "Iceland spar" is used for optical purposes. Acute scalenohedral crystals are sometimes referred to as "dogtooth spar" while the rhombohedral form is sometimes referred to as "nailhead spar". The rhombohedral form may also have been the "sunstone" whose use by Viking navigators is mentioned in the Icelandic Sagas.
Single calcite crystals display an optical property called birefringence (double refraction). This strong birefringence causes objects viewed through a clear piece of calcite to appear doubled. The birefringent effect (using calcite) was first described by the Danish scientist Rasmus Bartholin in 1669. At a wavelength of about 590 nm, calcite has ordinary and extraordinary refractive indices of 1.658 and 1.486, respectively. Between 190 and 1700 nm, the ordinary refractive index varies roughly between 1.9 and 1.5, while the extraordinary refractive index varies between 1.6 and 1.4.
Thermoluminescence
Calcite has thermoluminescent properties, mainly due to divalent manganese (Mn2+). An experiment was conducted by adding activators such as ions of Mn, Fe, Co, Ni, Cu, Zn, Ag, Pb, and Bi to calcite samples to observe whether they emitted light. The results showed that most of the added ions produced no response; a reaction occurred only when both manganese and lead ions were present in the calcite. By changing the temperature and observing the glow-curve peaks, it was found that Mn2+ and Pb2+ acted as activators in the calcite lattice, but Pb2+ was much less efficient than Mn2+.
Mineral thermoluminescence experiments usually use X-rays or gamma rays to activate the sample and record the changes in glow curves at temperatures of 700–750 K. Minerals can produce various glow curves under different conditions, such as temperature changes, because impurity ions or other crystal defects present in the mineral supply luminescence centers and trapping levels. Observing these curve changes can also help with geological correlation and age determination.
Chemical
Calcite, like most carbonates, dissolves in acids by the following reaction:
CaCO3 + 2 H+ → Ca2+ + H2O + CO2
The carbon dioxide released by this reaction produces a characteristic effervescence when a calcite sample is treated with an acid.
Due to its acidity, carbon dioxide has a slight solubilizing effect on calcite. The overall reaction is:
CaCO3 + CO2 + H2O → Ca2+ + 2 HCO3−
If the amount of dissolved carbon dioxide drops, the reaction reverses to precipitate calcite. As a result, calcite can be either dissolved by groundwater or precipitated by groundwater, depending on such factors as the water temperature, pH, and dissolved ion concentrations. When conditions are right for precipitation, calcite forms mineral coatings that cement rock grains together and can fill fractures. When conditions are right for dissolution, the removal of calcite can dramatically increase the porosity and permeability of the rock, and if it continues for a long period of time, may result in the formation of caves. Continued dissolution of calcium carbonate-rich formations can lead to the expansion and eventual collapse of cave systems, resulting in various forms of karst topography.
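A minimal sketch of this dissolution/precipitation balance (an approximate 25 °C solubility product, with activities taken equal to concentrations, so purely illustrative): it evaluates the saturation index SI = log10(IAP/Ksp), where positive SI favors precipitation and negative SI favors dissolution.

```python
# Minimal sketch (assumed 25 degC constant, no activity corrections):
# calcite saturation index SI = log10(IAP / Ksp).
import math

KSP_CALCITE = 10 ** -8.48     # [Ca2+][CO3 2-] solubility product, approximate

def saturation_index(ca_molar, co3_molar):
    iap = ca_molar * co3_molar          # ion activity product (concentrations here)
    return math.log10(iap / KSP_CALCITE)

# Hypothetical groundwater with 2 mM Ca2+ and 10 uM CO3 2-:
si = saturation_index(2e-3, 1e-5)
print(f"SI = {si:+.2f} ->", "precipitates calcite" if si > 0 else "dissolves calcite")
```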
Calcite exhibits an unusual characteristic called retrograde solubility: it is less soluble in water as the temperature increases. Calcite is also more soluble at higher pressures.
Pure calcite has the composition CaCO3. However, the calcite in limestone often contains a few percent of magnesium. Calcite in limestone is divided into low-magnesium and high-magnesium calcite, with the dividing line placed at a composition of 4% magnesium. High-magnesium calcite retains the calcite mineral structure, which is distinct from that of dolomite, CaMg(CO3)2. Calcite can also contain small quantities of iron and manganese. Manganese may be responsible for the fluorescence of impure calcite, as may traces of organic compounds.
Distribution
Calcite is found all over the world, and its leading global distribution is as follows:
United States
Calcite is found in many different areas in the United States. One of the best examples is the Calcite Quarry in Michigan, the largest carbonate mine in the world, which has been in use for more than 85 years. Large quantities of calcite can be mined from such sizeable open-pit mines.
Canada
Calcite can also be found throughout Canada, such as in Thorold Quarry and Madawaska Mine, Ontario, Canada.
Mexico
Abundant calcite is mined in the Santa Eulalia mining district, Chihuahua, Mexico.
Iceland
Large quantities of calcite in Iceland are concentrated in the Helgustadir mine. The mine was once the primary source of "Iceland spar"; it now serves as a nature reserve, and calcite mining is no longer allowed.
England
Calcite is found in parts of England, such as Alston Moor, Egremont, and Frizington, Cumbria.
Germany
Calcite can be found at St. Andreasberg in the Harz Mountains, and at Freiberg, Saxony.
Use and applications
Ancient Egyptians carved many items out of calcite, relating it to their goddess Bast, whose name contributed to the term alabaster because of the close association. Many other cultures have used the material for similar carved objects and applications.
A transparent variety of calcite known as Iceland spar may have been used by Vikings for navigating on cloudy days. A very pure crystal of calcite can split a beam of sunlight into dual images, as the polarized light deviates slightly from the main beam. By observing the sky through the crystal and then rotating it so that the two images are of equal brightness, the rings of polarized light that surround the sun can be seen even under overcast skies. Identifying the sun's location would give seafarers a reference point for navigating on their lengthy sea voyages.
In World War II, high-grade optical calcite was used for gun sights, specifically in bomb sights and anti-aircraft weaponry. It was used as a polarizer (in Nicol prisms) before the invention of Polaroid plates and still finds use in optical instruments. Also, experiments have been conducted to use calcite for a cloak of invisibility.
Microbiologically precipitated calcite has a wide range of applications, such as soil remediation, soil stabilization and concrete repair. It also can be used for tailings management and is designed to promote sustainable development in the mining industry.
Calcite can help synthesize precipitated calcium carbonate (PCC), mainly used in the paper industry, and increase carbonation. Furthermore, due to its particular crystal habits, such as the rhombohedron and hexagonal prism, it promotes the production of PCC with specific shapes and particle sizes.
Calcite, obtained from an 80 kg sample of Carrara marble, is used as the IAEA-603 isotopic standard in mass spectrometry for the calibration of δ18O and δ13C.
Calcite can be formed naturally or synthesized. However, artificial calcite is the preferred material to be used as a scaffold in bone tissue engineering due to its controllable and repeatable properties.
Calcite can be used to alleviate water pollution caused by the excessive growth of cyanobacteria. Lakes and rivers can lead to cyanobacteria blooms due to eutrophication, which pollutes water resources. Phosphorus (P) is the leading cause of excessive growth of cyanobacteria. As an active capping material, calcite can help reduce P release from sediments into the water, thus inhibiting cyanobacteria overgrowth.
Natural occurrence
Calcite is a common constituent of sedimentary rocks, limestone in particular, much of which is formed from the shells of dead marine organisms. Approximately 10% of sedimentary rock is limestone. It is the primary mineral in metamorphic marble. It also occurs in deposits from hot springs as a vein mineral; in caverns as stalactites and stalagmites; and in volcanic or mantle-derived rocks such as carbonatites, kimberlites, or rarely in peridotites.
Cacti contain Ca-oxalate biominerals. Their death releases these biominerals into the environment, which subsequently transform to calcite via a monohydrocalcite intermediate, sequestering carbon.
Calcite is often the primary constituent of the shells of marine organisms, such as plankton (such as coccoliths and planktic foraminifera), the hard parts of red algae, some sponges, brachiopods, echinoderms, some serpulids, most bryozoa, and parts of the shells of some bivalves (such as oysters and rudists). Calcite is found in spectacular form in the Snowy River Cave of New Mexico, where microorganisms are credited with natural formations. Trilobites, which became extinct a quarter billion years ago, had unique compound eyes that used clear calcite crystals to form the lenses. It also forms a substantial part of birds' eggshells, and the δ13C of the diet is reflected in the δ13C of the eggshell calcite.
The largest documented single crystal of calcite originated from Iceland, measured about 7 m × 7 m × 2 m, and weighed about 250 tons. Classic samples have been produced at Madawaska Mine, near Bancroft, Ontario.
Bedding-parallel veins of fibrous calcite, often referred to in quarrying parlance as "beef", occur in dark, organic-rich mudstones and shales; these veins are formed by increasing fluid pressure during diagenesis.
Formation processes
Calcite formation can proceed by several pathways, from the classical terrace ledge kink model to the crystallization of poorly ordered precursor phases like amorphous calcium carbonate (ACC) via an Ostwald ripening process, or via the agglomeration of nanocrystals.
The crystallization of ACC can occur in two stages. First, the ACC nanoparticles rapidly dehydrate and crystallize to form individual particles of vaterite. Second, the vaterite transforms to calcite via a dissolution and reprecipitation mechanism, with the reaction rate controlled by the surface area of a calcite crystal. The second stage of the reaction is approximately 10 times slower.
However, crystallization of calcite has been observed to be dependent on the starting pH and concentration of magnesium in solution. A neutral starting pH during mixing promotes the direct transformation of ACC into calcite without a vaterite intermediate. But when ACC forms in a solution with a basic initial pH, the transformation to calcite occurs via metastable vaterite, following the pathway outlined above. Magnesium has a noteworthy effect on both the stability of ACC and its transformation to crystalline CaCO3, resulting in the formation of calcite directly from ACC, as this ion destabilizes the structure of vaterite.
Epitaxial overgrowths of calcite precipitated on weathered cleavage surfaces have morphologies that vary with the type of weathering the substrate experienced: growth on physically weathered surfaces has a shingled morphology due to Volmer–Weber growth, growth on chemically weathered surfaces has characteristics of Stranski–Krastanov growth, and growth on pristine cleavage surfaces has characteristics of Frank–van der Merwe growth. These differences are apparently due to the influence of surface roughness on layer coalescence dynamics.
Calcite may form in the subsurface in response to microorganism activity, such as sulfate-dependent anaerobic oxidation of methane, where methane is oxidized and sulfate is reduced, leading to precipitation of calcite and pyrite from the produced bicarbonate and sulfide. These processes can be traced by the specific carbon isotope composition of the calcites, which are extremely depleted in the 13C isotope, by as much as −125 per mil PDB (δ13C).
In Earth history
Calcite seas existed in Earth's history when the primary inorganic precipitate of calcium carbonate in marine waters was low-magnesium calcite (lmc), as opposed to the aragonite and high-magnesium calcite (hmc) precipitated today. Calcite seas alternated with aragonite seas over the Phanerozoic, being most prominent in the Ordovician and Jurassic periods. Lineages evolved to use whichever morph of calcium carbonate was favourable in the ocean at the time they became mineralised, and retained this mineralogy for the remainder of their evolutionary history. Petrographic evidence for these calcite sea conditions consists of calcitic ooids, lmc cements, hardgrounds, and rapid early seafloor aragonite dissolution. The evolution of marine organisms with calcium carbonate shells may have been affected by the calcite and aragonite sea cycle.
Calcite is one of the minerals that has been shown to catalyze an important biological reaction, the formose reaction, and may have had a role in the origin of life. Interaction of its chiral surfaces (see Form) with aspartic acid molecules results in a slight bias in chirality; this is one possible mechanism for the origin of homochirality in living cells.
Climate change
Climate change is exacerbating ocean acidification, possibly leading to lower natural calcite production. The oceans absorb large amounts of CO2 from fossil fuel emissions; the total amount of anthropogenic CO2 absorbed by the oceans is calculated to be 118 ± 19 Gt C. If a large amount of CO2 dissolves in the sea, it increases the acidity of the seawater, thereby affecting the pH value of the ocean. Calcifying organisms in the sea, such as molluscs, foraminifera, crustaceans, echinoderms and corals, are susceptible to pH changes. Meanwhile, these calcifying organisms are also an essential source of calcite. As ocean acidification causes pH to drop, carbonate ion concentrations will decline, potentially reducing natural calcite production.
Gallery
See also
Carbonate rock
Ikaite, CaCO3·6H2O
List of minerals
Lysocline
Manganoan calcite, (Ca,Mn)CO3
Monohydrocalcite, CaCO3·H2O
Nitratine
Ocean acidification
Ulexite
References
Further reading
Calcium minerals
Carbonate minerals
Limestone
Optical materials
Transparent materials
Calcite group
Cave minerals
Trigonal minerals
Minerals in space group 167
Evaporite
Luminescent minerals
Polymorphism (materials science)
Bastet | Calcite | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,841 | [
"Physical phenomena",
"Luminescence",
"Polymorphism (materials science)",
"Luminescent minerals",
"Materials science",
"Optical phenomena",
"Materials",
"Optical materials",
"Transparent materials",
"Matter"
] |
44,633 | https://en.wikipedia.org/wiki/Ultramarine | Ultramarine is a deep blue color pigment which was originally made by grinding lapis lazuli into a powder. Its lengthy grinding and washing process makes the natural pigment quite valuable—roughly ten times more expensive than the stone it comes from and as expensive as gold.
The name ultramarine comes from the Latin ultramarinus. The word means 'beyond the sea', as the pigment was imported by Italian traders during the 14th and 15th centuries from mines in Afghanistan. Much of the expansion of ultramarine can be attributed to Venice, which historically was the port of entry for lapis lazuli in Europe.
Ultramarine was the finest and most expensive blue used by Renaissance painters. It was often used for the robes of the Virgin Mary and symbolized holiness and humility. It remained an extremely expensive pigment until a synthetic ultramarine was invented in 1826.
Ultramarine is a permanent pigment when under ideal preservation conditions. Otherwise, it is susceptible to discoloration and fading.
Structure
The pigment consists primarily of a zeolite-based mineral containing small amounts of polysulfides. It occurs in nature as a proximate component of lapis lazuli containing a blue cubic mineral called lazurite. In the Colour Index International, the pigment of ultramarine is identified as Pigment Blue 29 (Colour Index 77007).
The major component of lazurite is a complex sulfur-containing sodium-silicate (Na8–10Al6Si6O24S2–4), which makes ultramarine the most complex of all mineral pigments. Some chloride is often present in the crystal lattice as well. The blue color of the pigment is due to the trisulfur radical anion (S3−), which contains an unpaired electron.
Visual properties
The best samples of ultramarine are a uniform deep blue while other specimens are of paler color.
Particle size distribution has been found to vary among samples of ultramarine from various workshops. Numerous grinding techniques used by painters have resulted in different pigment/medium ratios and particle size distributions. The grinding and purification process results in pigment with particles of various geometries. Different grades of pigment may have been used for different areas in a painting, a characteristic that is sometimes used in art authentication.
Shades and variations
International Klein Blue (IKB) is a deep blue hue first mixed by the French artist Yves Klein.
Electric
Electric ultramarine is the tone of ultramarine that is halfway between blue and violet on the RGB color wheel, as expressed in the HSV color space of the RGB color model.
Production
Natural production
Historically, lapis lazuli stone was mined in Afghanistan and shipped overseas to Europe.
A method to produce ultramarine from lapis lazuli was introduced and later described by Cennino Cennini in the 15th century. This process consisted of grinding the lapis lazuli mineral, mixing the ground material with melted wax, resins, and oils, wrapping the resulting mass in a cloth, and then kneading it in a dilute lye solution, a potassium carbonate solution prepared by combining wood ash with water. The blue lazurite particles collect at the bottom of the pot, while the colorless crystalline material and other impurities remain at the top. This process was performed at least three times, with each successive extraction generating a lower quality material. The final extraction, consisting largely of colorless material as well as a few blue particles, brings forth ultramarine ash which is prized as a glaze for its pale blue transparency. This extensive process was specific to ultramarine because the mineral it comes from has a combination of both blue and colorless pigments. If an artist were to simply grind and wash lapis lazuli, the resulting powder would be a greyish-blue color that lacks purity and depth of color since lapis lazuli contains a high proportion of colorless material.
Although the lapis lazuli stone itself is relatively inexpensive, the lengthy process of pulverizing, sifting, and washing to produce ultramarine makes the natural pigment quite valuable and roughly ten times more expensive than the stone it comes from. The high cost of the imported raw material and the long laborious process of extraction combined has been said to make high-quality ultramarine as expensive as gold.
Synthetic production
In 1990, an estimated 20,000 tons of ultramarine were produced industrially. The raw materials used in the manufacture of synthetic ultramarine are the following:
white kaolin,
anhydrous sodium sulfate (Na2SO4),
anhydrous sodium carbonate (Na2CO3),
powdered sulfur,
powdered charcoal or relatively ash-free coal, or colophony in lumps.
The preparation is typically made in steps:
The first part of the process takes place at 700 to 750 °C in a closed furnace, so that sulfur, carbon and organic substances give reducing conditions. This yields a yellow-green product sometimes used as a pigment.
In the second step, air or sulfur dioxide at 350 to 450 °C is used to oxidize sulfide in the intermediate product to S2 and Sn chromophore molecules, resulting in the blue (or purple, pink or red) pigment.
The mixture is heated in a kiln, sometimes in brick-sized amounts.
The resultant solids are then ground and washed, as is the case in any other insoluble pigment's manufacturing process; the chemical reaction produces large amounts of sulfur dioxide. (Flue-gas desulfurization is thus essential to its manufacture where SO2 pollution is regulated.)
Ultramarine poor in silica is obtained by fusing a mixture of soft clay, sodium sulfate, charcoal, sodium carbonate, and sulfur. The product is at first white, but soon turns green ("green ultramarine") when it is mixed with sulfur and heated. The sulfur burns, and a fine blue pigment is obtained. Ultramarine rich in silica is generally obtained by heating a mixture of pure clay, very fine white sand, sulfur, and charcoal in a muffle furnace. A blue product is obtained at once, but a red tinge often results. The different ultramarines—green, blue, red, and violet—are finely ground and washed with water.
Synthetic ultramarine is a more vivid blue than natural ultramarine, since the particles in synthetic ultramarine are smaller and more uniform than the particles in natural ultramarine and therefore diffuse light more evenly. Its color is not affected by light, nor by contact with oil or lime as used in painting. Hydrochloric acid immediately bleaches it with liberation of hydrogen sulfide. Even a small addition of zinc oxide, especially to the reddish varieties, causes a considerable diminution in the intensity of the color. Modern, synthetic ultramarine blue is a non-toxic, soft pigment that does not need much mulling to disperse into a paint formulation.
Structure and classification
Ultramarine is an aluminosilicate zeolite with a sodalite structure. Sodalite consists of interconnected aluminosilicate cages. Some of these cages contain polysulfide (Sn) groups that are the chromophore (color centre). The negative charge on these ions is balanced by Na+ ions that also occupy these cages.
The chromophore is proposed to be the trisulfur radical anion S3− or an S4 species.
History
Antiquity and Middle Ages
The name derives from Middle Latin ultramarinus, literally "beyond the sea", because it was imported from Asia by sea. In the past, it has also been known as azzurrum ultramarine, among other names. The current terminology for ultramarine includes natural ultramarine (English), outremer lapis (French), Ultramarin echt (German), oltremare genuino (Italian), and ultramar verdadero (Spanish). The first recorded use of ultramarine as a color name in English was in 1598.
The first noted use of lapis lazuli as a pigment can be seen in 6th and 7th-century paintings in Zoroastrian and Buddhist cave temples in Afghanistan, near the most famous source of the mineral. Lapis lazuli has been identified in Chinese paintings from the 10th and 11th centuries, in Indian mural paintings from the 11th, 12th, and 17th centuries, and on Anglo-Saxon and Norman illuminated manuscripts from .
Ancient Egyptians used lapis lazuli in solid form for ornamental applications in jewelry, however, there is no record of them successfully formulating lapis lazuli into paint. Archaeological evidence and early literature reveal that lapis lazuli was used as a semi-precious stone and decorative building stone from early Egyptian times. The mineral is described by the classical authors Theophrastus and Pliny. There is no evidence that lapis lazuli was used ground as a painting pigment by ancient Greeks and Romans. Like ancient Egyptians, they had access to a satisfactory blue colorant in the synthetic copper silicate pigment, Egyptian blue.
Renaissance
Venice was central to both the manufacturing and distribution of ultramarine during the early modern period. The pigment was imported by Italian traders during the 14th and 15th centuries from mines in Afghanistan. Other European countries employed the pigment less extensively than in Italy; the pigment was not used even by wealthy painters in Spain at that time.
During the Renaissance, ultramarine was the finest and most expensive blue that could be used by painters. Colour infrared photographic studies of ultramarine in 13th and 14th-century Sienese panel paintings have revealed that historically, ultramarine has been diluted with white lead pigment in an effort to use the color more sparingly given its high price. The 15th century artist Cennino Cennini wrote in his painters' handbook: "Ultramarine blue is a glorious, lovely and absolutely perfect pigment beyond all the pigments. It would not be possible to say anything about or do anything to it which would not make it more so." Natural ultramarine is a difficult pigment to grind by hand, and for all except the highest quality of mineral, sheer grinding and washing produces only a pale grayish blue powder.
The pigment was most extensively used during the 14th through 15th centuries, as its brilliance complemented the vermilion and gold of illuminated manuscripts and Italian panel paintings. It was valued chiefly on account of its brilliancy of tone and its inertness to sunlight, oil, and slaked lime. It is, however, extremely susceptible to even minute amounts of dilute mineral acids and acid vapors. Dilute HCl, HNO3, and H2SO4 rapidly destroy the blue color, producing hydrogen sulfide (H2S) in the process. Acetic acid attacks the pigment at a much slower rate than mineral acids.
Ultramarine was only used for frescoes when it was applied secco because frescoes' absorption rate made its use cost prohibitive. The pigment was mixed with a binding medium like egg to form a tempera and applied over dry plaster, such as in Giotto di Bondone's frescos in the Cappella degli Scrovegni or the Arena Chapel in Padua.
European artists used the pigment sparingly, reserving their highest quality blues for the robes of Mary and the Christ child, possibly in an effort to show piety, spending as a means of expressing devotion. As a result of the high price, artists sometimes economized by using a cheaper blue, azurite, for under painting. Most likely imported to Europe through Venice, the pigment was seldom seen in German art or art from countries north of Italy. Due to a shortage of azurite in the late 16th and 17th century, the price for the already-expensive ultramarine increased dramatically.
17th and 18th centuries
Johannes Vermeer made extensive use of ultramarine in his paintings. The turban of the Girl with a Pearl Earring is painted with a mixture of ultramarine and lead white, with a thin glaze of pure ultramarine over it. In Lady Standing at a Virginal, the young woman's dress is painted with a mixture of ultramarine and green earth, and ultramarine was used to add shadows in the flesh tones. Scientific analysis by the National Gallery in London of Lady Standing at a Virginal showed that the ultramarine in the blue seat cushion in the foreground had degraded and become paler with time; it would have been a deeper blue when originally painted.
19th century (invention of synthetic ultramarine)
The beginning of the development of artificial ultramarine blue is known from Goethe. In about 1787, he observed the blue deposits on the walls of lime kilns near Palermo in Sicily. He was aware of the use of these glassy deposits as a substitute for lapis lazuli in decorative applications. He did not mention if it was suitable to grind for a pigment.
In 1814, Tassaert observed the spontaneous formation of a blue compound, very similar to ultramarine, if not identical with it, in a lime kiln at St. Gobain. In 1824, this caused the French Société d'Encouragement pour l'Industrie Nationale to offer a prize for the artificial production of the precious color. Processes were devised by Jean Baptiste Guimet (1826) and by Christian Gmelin (1828), then professor of chemistry in Tübingen. While Guimet kept his process a secret, Gmelin published his, and became the originator of the "artificial ultramarine" industry.
Permanence
Easel paintings and illuminated manuscripts have revealed natural ultramarine in a perfect state of preservation even though the art may be several centuries old. In general, ultramarine is a permanent pigment. Although it is a sulfur-containing compound from which sulfur is readily emitted as H2S, historically, it has been mixed with lead white with no reported occurrences of the lead pigment blackening to become lead sulfide.
A defect known as "ultramarine sickness" has occasionally been observed in ultramarine oil paintings as a grayish or yellowish-gray discoloration of the paint surface. This can occur with artificial ultramarine that is used industrially. The cause has been debated among experts; potential causes include atmospheric sulfur dioxide and moisture, acidity of an oil- or oleo-resinous paint medium, or slow drying of the oil, during which time water may have been absorbed, creating swelling, opacity of the medium, and therefore whitening of the paint film.
Both natural and artificial ultramarine are stable to ammonia and caustic alkalis in ordinary conditions. Artificial ultramarine has been found to fade when in contact with lime when it is used to color concrete or plaster. These observations have led experts to speculate if the natural pigment's fading may be the result of contact with the lime plaster of fresco paintings.
Synthetic applications
Synthetic ultramarine, being very cheap, is used for wall painting, the printing of paper hangings, and calico. It also is used as a corrective for the yellowish tinge often present in things meant to be white, such as linen and paper. Bluing or "laundry blue" is a suspension of synthetic ultramarine, or the chemically different Prussian blue, that is used for this purpose when washing white clothes. It is often found in makeup such as mascaras or eye shadows.
Large quantities are used in the manufacture of paper, and especially for producing a kind of pale blue writing paper which was popular in Britain. During World War I, the RAF painted the outer roundels with a color made from ultramarine blue. This became BS 108(381C) aircraft blue. It was replaced in the 1960s by a new color based on phthalocyanine blue, called BS110(381C) roundel blue.
Terminology
Ultramarine is a blue made from natural lapis lazuli, or its synthetic equivalent which is sometimes called "French Ultramarine". More generally "ultramarine blue" can refer to a vivid blue.
The term ultramarine can also refer to other pigments. Variants of the pigment such as "ultramarine red," "ultramarine green," and "ultramarine violet" all resemble ultramarine with respect to their chemistry and crystal structure.
The term "ultramarine green" indicates a dark green while barium chromate is sometimes referred to as "ultramarine yellow". Ultramarine pigment has also been termed "Gmelin's Blue," "Guimet's Blue," "New blue," "Oriental Blue," and "Permanent Blue".
See also
Blue pigments
RAL 5002 Ultramarine blue
Notes
Further reading
Mangla, Ravi (8 June 2015), "True blue: a brief history of ultramarine", Paris Review—Daily.
Plesters, J. (1993), "Ultramarine Blue, Natural and Artificial", in Artists' Pigments. A Handbook of Their History and Characteristics, Vol. 2: A. Roy (Ed.) Oxford University Press, p. 37–66
References
External links
Discussion of ultramarine in an article on blue pigments in early Sienese paintings from The Journal of the American Institute for Conservation
National Gallery essay on the altered appearance of ultramarine in the paintings of Vermeer
Ultramarine natural, ColourLex
Ultramarine artificial, ColourLex
Shades and tints and color harmonies of ultramarine, HTMLCSScolor.com
More shades and tints and color harmonies of ultramarine, HTMLCSScolor.com
An alternative ultramarine color (#5A7CC2) from Pantone, pantone.com
Quaternary colors
Aluminosilicates
Inorganic pigments
Zeolites
Sulfides
Shades of blue | Ultramarine | [
"Chemistry"
] | 3,537 | [
"Inorganic pigments",
"Inorganic compounds"
] |
44,653 | https://en.wikipedia.org/wiki/Lapis%20lazuli | Lapis lazuli (; ), or lapis for short, is a deep-blue metamorphic rock used as a semi-precious stone that has been prized since antiquity for its intense color. Originating from the Persian word for the gem, lāžward, lapis lazuli is a rock composed primarily of the minerals lazurite, pyrite and calcite. As early as the 7th millennium BC, lapis lazuli was mined in the Sar-i Sang mines, in Shortugai, and in other mines in Badakhshan province in modern northeast Afghanistan. Lapis lazuli artifacts, dated to 7570 BC, have been found at Bhirrana, which is the oldest site of Indus Valley civilisation. Lapis was highly valued by the Indus Valley Civilisation (3300–1900 BC). Lapis beads have been found at Neolithic burials in Mehrgarh, the Caucasus, and as far away as Mauritania. It was used in the funeral mask of Tutankhamun (1341–1323 BC).
By the end of the Middle Ages, lapis lazuli began to be exported to Europe, where it was ground into powder and made into the pigment ultramarine. Ultramarine was used by some of the most important artists of the Renaissance and Baroque, including Masaccio, Perugino, Titian and Vermeer, and was often reserved for the clothing of the central figures of their paintings, especially the Virgin Mary. Ultramarine has also been found in dental tartar of medieval nuns and scribes, perhaps as a result of licking their painting brushes while producing medieval texts and manuscripts.
History
Excavations from Tepe Gawra show that lapis lazuli was introduced to Mesopotamia approximately in the late Ubaid period, c. 4900–4000 BCE. A traditional understanding was that the lapis was mined some fifteen hundred miles to the east, in Badakhshan. Indeed, the Persian lāžward, also written lāzaward, is commonly interpreted as having an origin in a local place name.
From the Persian, the Arabic lāzaward is the etymological source of both the English word azure (via Old French azur) and Medieval Latin lazulum, which came to mean 'heaven' or 'sky'. To disambiguate, lapis lazuli ("stone of lazulum") was used to refer to the stone itself, and this is the term ultimately imported into Middle English. Lazulum is etymologically related to the color blue, and used as a root for the word for blue in several languages, including Spanish and Portuguese azul.
Mines in northeast Afghanistan continue to be a major source of lapis lazuli. Important amounts are also produced from mines west of Lake Baikal in Russia, and in the Andes mountains in Chile which is the source that the Inca used to carve artifacts and jewelry. Smaller quantities are mined in Pakistan, Italy, Mongolia, the United States, and Canada.
Science and uses
Composition
The most important mineral component of lapis lazuli is lazurite (25% to 40%), a blue feldspathoid silicate mineral of the sodalite family, with the formula (Na,Ca)8(AlSiO4)6(SO4,S,Cl)2. Most lapis lazuli also contains calcite (white) and pyrite (metallic yellow). Some samples of lapis lazuli contain augite, diopside, enstatite, mica, hauynite, hornblende, nosean, and sulfur-rich löllingite geyerite.
Lapis lazuli usually occurs in crystalline marble as a result of contact metamorphism.
Color
The intense blue color is due to the presence of the trisulfur radical anion (S3−) in the crystal. The presence of disulfur (S2−) and tetrasulfur (S4) radicals can shift the color towards yellow or red, respectively. These radical anions substitute for the chloride anions within the sodalite structure. The S3− radical anion exhibits a visible absorption band in the range 595–620 nm with high molar absorptivity, leading to its bright blue color.
Sources
Lapis lazuli is found in limestone in the Kokcha River valley of Badakhshan province in north-eastern Afghanistan, where the Sar-i Sang mine deposits have been worked for more than 6,000 years. Afghanistan was the source of lapis for the ancient Persian, Egyptian and Mesopotamian civilizations, as well as the later Greeks and Romans. Ancient Egyptians obtained the material through trade with Mesopotamians, as part of Egypt–Mesopotamia relations. During the height of the Indus Valley civilisation, approximately 2000 BC, the Harappan colony, now known as Shortugai, was established near the lapis mines.
In addition to the Afghan deposits, lapis is also extracted in the Andes (near Ovalle, Chile); and to the west of Lake Baikal in Siberia, Russia, at the Tultui lazurite deposit. It is mined in smaller amounts in Angola, Argentina, Burma, Pakistan, Canada, Italy, India, and in the United States in California and Colorado.
Uses and substitutes
Lapis takes an excellent polish and can be made into jewellery, carvings, boxes, mosaics, ornaments, small statues, and vases. It is also used for interior items and architectural finishes. During the Renaissance, lapis was ground and processed to make the pigment ultramarine for use in frescoes and oil painting. Its usage as a pigment in oil paint largely ended during the early 19th century, when a chemically identical synthetic variety became available.
Lapis lazuli is commercially synthesized or simulated by the Gilson process, which is used to make artificial ultramarine and hydrous zinc phosphates. Spinel or sodalite, or dyed jasper or howlite, can be substituted for lapis.
History and art
In the ancient world
Lapis lazuli has been mined in Afghanistan and exported to the Mediterranean world and South Asia since the Neolithic age, along the ancient trade route between Afghanistan and the Indus Valley dating to the 7th millennium BC. Quantities of these beads have also been found at 4th millennium BC settlements in Northern Mesopotamia, and at the Bronze Age site of Shahr-e Sukhteh in southeast Iran (3rd millennium BC). A dagger with a lapis handle, a bowl inlaid with lapis, amulets, beads, and inlays representing eyebrows and beards, were found in the Royal Tombs of the Sumerian city-state of Ur from the 3rd millennium BC.
Lapis was also used in ancient Persia, Mesopotamia by the Akkadians, Assyrians, and Babylonians for seals and jewelry. It is mentioned several times in the Mesopotamian poem, the Epic of Gilgamesh (17th–18th century BC), one of the oldest known works of literature. The Statue of Ebih-Il, a 3rd millennium BC statue found in the ancient city-state of Mari in modern-day Syria, now in the Louvre, uses lapis lazuli inlays for the irises of the eyes.
In ancient Egypt, lapis lazuli was a favorite stone for amulets and ornaments such as scarabs. Lapis jewellery has been found at excavations of the Predynastic Egyptian site Naqada (3300–3100 BC). At Karnak, the relief carvings of Thutmose III (1479–1429 BC) show fragments and barrel-shaped pieces of lapis lazuli being delivered to him as tribute. Powdered lapis was used as eyeshadow by Cleopatra.
Jewelry made of lapis lazuli has also been found at Mycenae, attesting to relations between the Mycenaeans and the developed civilizations of Egypt and the East.
Pliny the Elder wrote that lapis lazuli is "opaque and sprinkled with specks of gold". Because the stone combines the blue of the heavens and golden glitter of the sun, it was emblematic of success in the old Jewish tradition. In the early Christian tradition lapis lazuli was regarded as the stone of Virgin Mary.
In late classical times and as late as the Middle Ages, lapis lazuli was often called sapphire (sapphirus in Latin, sappir in Hebrew), though it had little to do with the stone today known as the blue corundum variety sapphire. In his book on stones, the Greek scientist Theophrastus described "the sapphirus, which is speckled with gold," a description which matches lapis lazuli.
There are many references to "sapphire" in the Old Testament, but most scholars agree that, since sapphire was not known before the Roman Empire, they most likely are references to lapis lazuli. For instance, Exodus 24:10: "And they saw the God of Israel, and there was under his feet as it were a paved work of a sapphire stone..." (KJV). The words used in the Latin Vulgate Bible in this citation are "quasi opus lapidis sapphirini", the terms for lapis lazuli. Modern translations of the Bible, such as the New Living Translation Second Edition, refer to lapis lazuli in most instances instead of sapphire.
Vermeer
Johannes Vermeer used lapis lazuli paint in his painting Girl with a Pearl Earring.
Yeats
The poet, William Butler Yeats, describes a figurine of sculpted lapis lazuli in a poem entitled "Lapis Lazuli". The sculpture of three men from China, a bird, and a musical instrument serves in the poem as a reminder of "gaiety" in the face of tragedy.
Gallery
See also
References
Bibliography
Bakhtiar, Lailee McNair, Afghanistan's Blue Treasure Lapis Lazuli, Front Porch Publishing, 2011.
Bariand, Pierre, "Lapis Lazuli", Mineral Digest, Vol 4 Winter 1972.
Herrmann, Georgina, "Lapis Lazuli: The Early Phases of Its Trade", Oxford University Dissertation, 1966.
Korzhinskij, D. S., "Gisements bimétasomatiques de phlogopite et de lazurite de l'Archéen du Pribaïkal", traduction par Jean Sagarzky, B.R.G.M., 1944.
Lapparent, A. F., Bariand, P. et Blaise, J., "Une visite au gisement de lapis lazuli de Sar-e-Sang du Hindu Kouch, Afghanistan", C.R. Somm. S.G.P., p. 30, 1964.
Wise, Richard W., Secrets of the Gem Trade: The Connoisseur's Guide to Precious Gemstones, 2016.
Wyart, J., Bariand, P., Filippi, J., "Le Lapis Lazuli de Sar-e-Sang", Revue de Géographie Physique et de Géologie Dynamique (2), Vol. XIV, Fasc. 4, pp. 443–448, Paris, 1972.
External links
Lapis lazuli at Gemstone.org
Documentation from online course produced by University of California at Berkeley
Lapislazuli: Occurrence, Mining and Market Potential of a blue Mineral Pigment
"Why a Medieval Woman Had Lapis Lazuli Hidden in Her Teeth", The Atlantic, January 2019
Lapis Lazuli birthstone virtues and story at birthstone.guide
Gemstones
Metamorphic rocks
Archaeological sites in Rajasthan | Lapis lazuli | [
"Physics"
] | 2,379 | [
"Materials",
"Gemstones",
"Matter"
] |
44,665 | https://en.wikipedia.org/wiki/Arago%20spot | In optics, the Arago spot, Poisson spot, or Fresnel spot is a bright point that appears at the center of a circular object's shadow due to Fresnel diffraction. This spot played an important role in the discovery of the wave nature of light and is a common way to demonstrate that light behaves as a wave.
The basic experimental setup requires a point source, such as an illuminated pinhole or a diverging laser beam. The dimensions of the setup must comply with the requirements for Fresnel diffraction. Namely, the Fresnel number must satisfy
$$F = \frac{d^2}{4\lambda\ell} \gtrsim 1$$
where
$d$ is the diameter of the circular object,
$\ell$ is the distance between the object and the screen, and
$\lambda$ is the wavelength of the source.
Finally, the edge of the circular object must be sufficiently smooth.
These conditions together explain why the bright spot is not encountered in everyday life. However, with the laser sources available today, it is undemanding to perform an Arago-spot experiment.
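As a quick sanity check of this criterion, the following minimal sketch (assuming the $F = d^2/(4\lambda\ell)$ form given above, with illustrative tabletop parameters) estimates the Fresnel number for a typical laser demonstration:

```python
# Feasibility check for a tabletop Arago-spot setup: the Fresnel number
# F = d^2 / (4 * wavelength * distance) should be of order 1 or larger.
def fresnel_number(d, wavelength, distance):
    return d**2 / (4 * wavelength * distance)

# Example: 4 mm disc, HeNe laser (633 nm), screen 1 m behind the disc
print(fresnel_number(4e-3, 633e-9, 1.0))   # ~6.3, comfortably in the Fresnel regime
```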
In astronomy, the Arago spot can also be observed in the strongly defocussed image of a star in a Newtonian telescope. There, the star provides an almost ideal point source at infinity, and the secondary mirror of the telescope constitutes the circular obstacle.
When light shines on the circular obstacle, Huygens' principle says that every point in the plane of the obstacle acts as a new point source of light. The light coming from points on the circumference of the obstacle and going to the center of the shadow travels exactly the same distance, so all the light passing close by the object arrives at the screen in phase and constructively interferes. This results in a bright spot at the shadow's center, where geometrical optics and particle theories of light predict that there should be no light at all.
History
At the beginning of the 19th century, the idea that light does not simply propagate along straight lines gained traction. Thomas Young published his double-slit experiment in 1807. The original Arago spot experiment was carried out a decade later and was the deciding experiment on the question of whether light is a particle or a wave. It is thus an example of an experimentum crucis.
At that time, many favored Isaac Newton's corpuscular theory of light, among them the theoretician Siméon Denis Poisson. In 1818 the French Academy of Sciences launched a competition to explain the properties of light, where Poisson was one of the members of the judging committee. The civil engineer Augustin-Jean Fresnel entered this competition by submitting a new wave theory of light.
Poisson studied Fresnel's theory in detail and, being a supporter of the particle theory of light, looked for a way to prove it wrong. Poisson thought that he had found a flaw when he argued that a consequence of Fresnel's theory was that there would exist an on-axis bright spot in the shadow of a circular obstacle, where there should be complete darkness according to the particle theory of light. This prediction was seen as an absurd consequence of the wave theory, and the failure of that prediction should be a strong argument to reject Fresnel's theory.
However, the head of the committee, Dominique-François-Jean Arago, decided to actually perform the experiment. He fixed a 2 mm metallic disk to a glass plate with wax, and succeeded in observing the predicted spot, confirming Fresnel's prediction.
Arago later noted that the phenomenon (later known as "Poisson's spot" or the "spot of Arago") had already been observed by Delisle and Maraldi a century earlier.
Although Arago's experimental result was overwhelming evidence in favor of the wave theory, a century later, in conjunction with the birth of quantum mechanics (and first suggested in one of Albert Einstein's Annus Mirabilis papers), it became understood that light (as well as all forms of matter and energy) must be described as both a particle and a wave (wave–particle duality). However the particle associated with electromagnetic waves, the photon, has nothing in common with the particles imagined in the corpuscular theory that had been dominant before the rise of the wave theory and Arago's powerful demonstration. Before the advent of quantum theory in the late 1920s, only the wave nature of light could explain phenomena such as diffraction and interference. Today it is known that a diffraction pattern appears through the mosaic-like buildup of bright spots caused by single photons, as predicted by Dirac's quantum theory. With increasing light intensity the bright dots in the mosaic diffraction pattern just assemble faster. In contrast, the wave theory predicts the formation of an extended continuous pattern whose overall brightness increases with light intensity.
Theory
At the heart of Fresnel's wave theory is the Huygens–Fresnel principle, which states that every unobstructed point of a wavefront becomes the source of a secondary spherical wavelet and that the amplitude of the optical field E at a point on the screen is given by the superposition of all those secondary wavelets taking into account their relative phases. This means that the field at a point P1 on the screen is given by a surface integral:
$$U(P_1) = -\frac{iA}{\lambda}\,\frac{e^{ikr_0}}{r_0}\iint_S \frac{e^{ikr_1}}{r_1}\,K(\chi)\,dS$$
where the inclination factor, which ensures that the secondary wavelets do not propagate backwards, is given by
$$K(\chi) = \frac{1}{2}\left(1 + \cos\chi\right)$$
and
A is the amplitude of the source wave,
$k = \frac{2\pi}{\lambda}$ is the wavenumber,
S is the unobstructed surface.
The first term outside of the integral represents the oscillations from the source wave at a distance r0. Similarly, the term inside the integral represents the oscillations from the secondary wavelets at distances r1.
In order to derive the intensity behind the circular obstacle using this integral one assumes that the experimental parameters fulfill the requirements of the near-field diffraction regime (the size of the circular obstacle is large compared to the wavelength and small compared to the distances g = P0C and b = CP1). Going to polar coordinates then reduces the problem to a one-dimensional integral over the radial coordinate for a circular object of radius a (see for example Born and Wolf).
This integral can be solved numerically (see below). If g is large and b is small, so that the inclination angle is not negligible, one can write the integral for the on-axis case (P1 is at the center of the shadow) as (see Sommerfeld):
$$U(P_1) = \frac{A\,e^{ikg}}{g}\,\frac{b}{\sqrt{b^2 + a^2}}\,e^{ik\sqrt{b^2 + a^2}}$$
The source intensity, which is the square of the field amplitude, is $I_0 = \left|A/g\right|^2$, and the intensity at the screen is $I = \left|U(P_1)\right|^2$. The on-axis intensity as a function of the distance b is hence given by:
$$I = \frac{b^2}{b^2 + a^2}\,I_0$$
This shows that the on-axis intensity at distances b much greater than the diameter of the circular obstacle is the same as the source intensity, as if the circular object were not present at all. At larger distances b, the bright spot is also wider (as can be seen in the simulations below, where b/a is increased in successive images), making the spot easier to discern.
Calculation of diffraction images
To calculate the full diffraction image that is visible on the screen one has to consider the surface integral of the previous section. One cannot exploit circular symmetry anymore, since the line between the source and an arbitrary point on the screen does not pass through the center of the circular object. The integral is therefore weighted by an aperture function, which is 1 for transparent parts of the object plane and 0 otherwise (i.e. it is 0 if the direct line between the source and the point on the screen passes through the blocking circular object).
Numerical calculation of the integral using the trapezoidal rule or Simpson's rule is not efficient and becomes numerically unstable, especially for configurations with large Fresnel number. However, it is possible to solve the radial part of the integral analytically, so that only the integration over the azimuth angle remains to be done numerically. For a particular angle one must solve the line integral for the ray with origin at the intersection point of the line P0P1 with the circular object plane; the contribution of a ray with a given azimuth angle, passing through a transparent part of the object plane between two radii, can then be written in closed form.
So for each angle one has to compute the intersection point(s) of the ray with the circular object and then sum the contributions for a certain number of angles between 0 and $2\pi$. Results of such a calculation are shown in the following images.
The images are simulations of the Arago spot in the shadow of discs of diameter 4 mm, 2 mm, and 1 mm, imaged 1 m behind each disc. The disks are illuminated by light of wavelength of 633 nm, diverging from a point 1 m in front of each disc. Each image is 16 mm wide.
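A minimal numerical sketch of this azimuthal method follows, for a point source and screen points inside the geometric shadow. It assumes the Fresnel-regime reduction in which the analytically solved radial integral leaves, for each azimuth, only a phase set by the distance from the projected source–screen line to the disc edge; it is an illustrative implementation, not the code used to produce the images above:

```python
import numpy as np

# Cross-section of the Arago spot behind an opaque disc, point source.
# After the radial integral is solved analytically (Fresnel regime), each
# azimuth contributes only a phase set by the distance rho(phi) from the
# projected source-screen line to the disc edge:
#   U ~ (1/2pi) * integral of exp(i*pi*rho(phi)^2 / (lambda*L)) dphi,
# with L = g*b/(g+b) the reduced distance. Valid for screen points inside
# the geometric shadow (projected point inside the disc).
lam = 633e-9                # wavelength (m)
a = 1e-3                    # disc radius (m), i.e. a 2 mm disc
g = b = 1.0                 # source-disc and disc-screen distances (m)
L = g * b / (g + b)

phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

def relative_field(r_screen):
    p = r_screen * g / (g + b)          # projection into the disc plane
    # distance from the projected point to the circle edge along each phi
    rho = -p * np.cos(phi) + np.sqrt(a**2 - (p * np.sin(phi))**2)
    return np.mean(np.exp(1j * np.pi * rho**2 / (lam * L)))

for r in (0.0, 50e-6, 100e-6, 200e-6):  # screen positions off the axis
    print(f"r = {r*1e6:5.0f} um   I/I0 = {abs(relative_field(r))**2:.3f}")
```

On the axis the edge distance is constant, so |U| = 1 and the spot is as bright as the unobstructed wave; the off-axis values fall off following the Bessel-function profile discussed below.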
Experimental aspects
Intensity and size
For an ideal point source, the intensity of the Arago spot equals that of the undisturbed wave front. Only the width of the Arago spot intensity peak depends on the distances between source, circular object and screen, as well as the source's wavelength and the diameter of the circular object. This means that one can compensate for a reduction in the source's wavelength by increasing the distance between the circular object and screen or reducing the circular object's diameter.
The lateral intensity distribution on the screen has in fact the shape of a squared zeroth-order Bessel function of the first kind when close to the optical axis and using a plane wave source (point source at infinity):
$$I(r, b) = I_0\,J_0^2\!\left(\frac{\pi r d}{\lambda b}\right)$$
where
r is the distance of the point P1 on the screen from the optical axis
d is the diameter of the circular object
λ is the wavelength
b is the distance between circular object and screen.
The following images show the radial intensity distribution of the simulated Arago spot images above:
The red lines in these three graphs correspond to the simulated images above, and the green lines were computed by applying the corresponding parameters to the squared Bessel function given above.
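The squared-Bessel profile can be evaluated directly; a minimal sketch for one of the simulated cases (d = 2 mm, b = 1 m, 633 nm, parameters taken from the simulations described above):

```python
import numpy as np
from scipy.special import j0

# Radial Arago-spot profile from the squared-Bessel expression above,
# for the simulated case d = 2 mm, b = 1 m, wavelength 633 nm.
lam, d, b = 633e-9, 2e-3, 1.0

for r in np.linspace(0.0, 200e-6, 5):        # distance from the axis (m)
    rel_intensity = j0(np.pi * r * d / (lam * b)) ** 2
    print(f"r = {r*1e6:6.1f} um   I/I0 = {rel_intensity:.3f}")
```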
Finite source size and spatial coherence
The main reason why the Arago spot is hard to observe in circular shadows from conventional light sources is that such light sources are bad approximations of point sources. If the wave source has a finite size S then the Arago spot will have an extent that is given by Sb/g, as if the circular object acted like a lens. At the same time the intensity of the Arago spot is reduced with respect to the intensity of the undisturbed wave front. Defining the relative intensity as the intensity divided by the intensity of the undisturbed wavefront, the relative intensity for an extended circular source of diameter w can be expressed exactly using the following equation:
$$I_{\mathrm{rel}} = J_0^2\!\left(\frac{\pi R w}{\lambda g}\right) + J_1^2\!\left(\frac{\pi R w}{\lambda g}\right)$$
where $J_0$ and $J_1$ are the Bessel functions of the first kind, $R$ is the radius of the disc casting the shadow, $\lambda$ the wavelength and $g$ the distance between source and disc. For large sources the following asymptotic approximation applies:
$$I_{\mathrm{rel}} \approx \frac{2\lambda g}{\pi^2 R w}$$
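A small numerical check of this expression and its asymptote, with illustrative parameters (the asymptote follows from $J_0^2(x) + J_1^2(x) \approx 2/(\pi x)$ at large $x$):

```python
import numpy as np
from scipy.special import j0, j1

lam, R, g = 633e-9, 1e-3, 1.0   # wavelength, disc radius, source-disc distance

for w in (1e-6, 100e-6, 1000e-6):           # source diameters
    x = np.pi * R * w / (lam * g)
    exact = j0(x)**2 + j1(x)**2
    asymptote = 2.0 / (np.pi * x)           # equals 2*lam*g / (pi^2 * R * w)
    print(f"w = {w*1e6:6.0f} um   I_rel = {exact:.4f}   asymptote = {asymptote:.4f}")
```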
Deviation from circularity
If the cross-section of the circular object deviates slightly from its circular shape (but it still has a sharp edge on a smaller scale) the shape of the point-source Arago spot changes. In particular, if the object has an ellipsoidal cross-section the Arago spot has the shape of an evolute. Note that this is only the case if the source is close to an ideal point source. From an extended source the Arago spot is only affected marginally, since one can interpret the Arago spot as a point-spread function. Therefore, the image of the extended source only becomes washed out due to the convolution with the point-spread function, but it does not decrease in overall intensity.
Surface roughness of circular object
The Arago spot is very sensitive to small-scale deviations from the ideal circular cross-section. This means that a small amount of surface roughness of the circular object can completely cancel out the bright spot. This is shown in the following three diagrams, which are simulations of the Arago spot from a 4 mm diameter disc:
The simulation includes a regular sinusoidal corrugation of the circular shape of amplitude 10 μm, 50 μm and 100 μm, respectively. Note, that the 100 μm edge corrugation almost completely removes the central bright spot.
This effect can be best understood using the Fresnel zone concept. The field transmitted by a radial segment that stems from a point on the obstacle edge provides a contribution whose phase is tied to the position of the edge point relative to the Fresnel zones. If the variations in the radius of the obstacle are much smaller than the width of the Fresnel zone adjacent to the edge, the contributions from the radial segments are approximately in phase and interfere constructively. However, if random edge corrugations have amplitudes comparable to or greater than the width of that adjacent Fresnel zone, the contributions from the radial segments are no longer in phase and cancel each other, reducing the Arago spot intensity.
The width of the Fresnel zone adjacent to the disc edge is approximately given by:
$$\Delta r = \sqrt{a^2 + \lambda\,\frac{g b}{g + b}} - a$$
The edge corrugation should not be much more than 10% of this width to see a close to ideal Arago spot. In the above simulations with the 4 mm diameter disc the adjacent Fresnel zone has a width of about 77 μm.
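Plugging in the parameters of the simulations above (a = 2 mm, g = b = 1 m, 633 nm) reproduces the quoted figure; a minimal check assuming the expression just given:

```python
import math

# Width of the Fresnel zone adjacent to the disc edge for the simulated
# 4 mm disc (a = 2 mm) with source and screen each 1 m away.
lam, a, g, b = 633e-9, 2e-3, 1.0, 1.0

delta_r = math.sqrt(a**2 + lam * g * b / (g + b)) - a
print(f"adjacent Fresnel zone width: {delta_r*1e6:.0f} um")   # about 77 um
```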
Arago spot with matter waves
In 2009, the Arago spot experiment was demonstrated with a supersonic expansion beam of deuterium molecules (an example of neutral matter waves). That material particles can behave like waves is known from quantum mechanics; the wave nature of particles dates back to de Broglie's hypothesis as well as to the experiments of Davisson and Germer. An Arago spot of electrons, which also constitute matter waves, can be observed in transmission electron microscopes when examining circular structures of a certain size.
The observation of an Arago spot with large molecules, thus proving their wave-nature, is a topic of current research.
Other applications
Besides the demonstration of wave behavior, the Arago spot also has a few other applications. One idea is to use the Arago spot as a straight-line reference in alignment systems. Another is to probe aberrations in laser beams by exploiting the spot's sensitivity to beam aberrations. Finally, the aragoscope has been proposed as a method for dramatically improving the diffraction-limited resolution of space-based telescopes.
See also
Aragoscope
Occulting disk
References
Diffraction | Arago spot | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,920 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
44,682 | https://en.wikipedia.org/wiki/CMYK%20color%20model | The CMYK color model (also known as process color, or four color) is a subtractive color model, based on the CMY color model, used in color printing, and is also used to describe the printing process itself. The abbreviation CMYK refers to the four ink plates used: cyan, magenta, yellow, and key (most often black).
The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks subtract some colors from white light; in the CMY model, white light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow.
In additive color models, such as RGB, white is the additive combination of all primary colored lights, and black is the absence of light. In the CMYK model, it is the opposite: white is the natural color of the paper or other background, and black results from a full combination of colored inks. To save cost on ink, and to produce deeper black tones, unsaturated and dark colors are produced by using black ink instead of or in addition to combinations of cyan, magenta, and yellow.
The CMYK printing process was invented in the 1890s, when newspapers began to publish color comic strips.
Halftoning
With CMYK printing, halftoning (also called screening) allows for less than full saturation of the primary colors; tiny dots of each primary color are printed in a pattern small enough that humans perceive a solid color. Magenta printed with a 20% halftone, for example, produces a pink color, because the eye perceives the tiny magenta dots on the large white paper as lighter and less saturated than the color of pure magenta ink. Halftoning allows for a continuous variability of each color, which enables continuous color mixing of the primaries. Without halftoning, each primary would be binary, i.e. on/off, which only allows for the reproduction of eight colors: white, the three primaries (cyan, magenta, yellow), the three secondaries (red, green, blue), and black.
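The perceived tint of a single-ink halftone can be estimated by area-weighted averaging of ink and paper. A minimal sketch, assuming idealized non-overlapping dots and linear color mixing (real presses exhibit dot gain and optical scattering):

```python
# Estimate the tint produced by halftoning a single ink on white paper.
# Assumes idealized, non-overlapping dots and linear area-weighted color
# averaging; real presses show dot gain and optical scattering.
WHITE = (1.0, 1.0, 1.0)             # unprinted paper (idealized RGB)
MAGENTA = (1.0, 0.0, 1.0)           # solid magenta ink (idealized RGB)

def halftone_tint(ink_rgb, coverage):
    """Area-weighted mix of ink dots (fraction coverage) and bare paper."""
    return tuple(coverage * ink + (1.0 - coverage) * paper
                 for ink, paper in zip(ink_rgb, WHITE))

print(halftone_tint(MAGENTA, 0.20))  # (1.0, 0.8, 1.0): a light pink
```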
Comparison to CMY
The CMYK color model is based on the CMY color model, which omits the black ink. Four-color printing uses black ink in addition to subtractive primaries for several reasons:
In traditional preparation of color separations, a red keyline on the black line art marked the outline of solid or tint color areas. In some cases a black keyline was used instead, since it served both as a color indicator and as an outline to be printed in black; the black plate usually contained the keyline. The K in CMYK represents the keyline, or black, plate, also sometimes called the key plate.
Text is typically printed in black and includes fine detail (such as serifs). Reproducing text or other finely detailed outlines using three inks would require impractically accurate registration to avoid even slight blurring.
A combination of 100% cyan, magenta, and yellow inks soaks the paper with ink, making it slower to dry, causing bleeding, or (especially on low-quality paper such as newsprint) weakening the paper so much that it tears.
Although a combination of 100% cyan, magenta, and yellow inks would, in theory, completely absorb the entire visible spectrum of light and produce a perfect black, practical inks fall short of their ideal characteristics, and the result is a dark, muddy color that is not quite black. Black ink absorbs more light and yields much better blacks.
Black ink is less expensive than the combination of colored inks that makes black.
A black made with just CMY inks is sometimes called a composite black.
When a very dark area is wanted, a colored or gray CMY "bedding" is applied first, then a full black layer is applied on top, making a rich, deep black; this is called rich black.
The amount of black to use to replace amounts of the other inks is variable, and the choice depends on the technology, paper and ink in use. Processes called under color removal, under color addition, and gray component replacement are used to decide on the final mix; different CMYK recipes will be used depending on the printing task.
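A minimal sketch of gray component replacement in this spirit; the function and its strength parameter are illustrative, and real workflows tune the replacement curve to the press, paper, and ink rather than using a fixed formula:

```python
def gray_component_replacement(c, m, y, strength=1.0):
    """Move part of the common gray component of a CMY mix into black.

    c, m, y: ink coverages in [0, 1]; strength: fraction of the gray
    component (the minimum of the three inks) replaced by K.
    """
    k = min(c, m, y) * strength
    return c - k, m - k, y - k, k

print(gray_component_replacement(0.9, 0.7, 0.6))        # full replacement
print(gray_component_replacement(0.9, 0.7, 0.6, 0.5))   # partial replacement
```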
Other printer color models
CMYK, as well as all other process color printing, is contrasted with spot color printing, in which specific colored inks are used to generate the colors seen. Some printing presses are capable of printing with both four-color process inks and additional spot color inks at the same time. High-quality printed materials, such as marketing brochures and books, often include photographs requiring process-color printing, other graphic effects requiring spot colors (such as metallic inks), and finishes such as varnish, which enhances the glossy appearance of the printed piece.
CMYK process printing often has a relatively small color gamut. Processes such as Pantone's proprietary six-color (CMYKOG) Hexachrome considerably expand the gamut. Light, saturated colors often cannot be created with CMYK, and light colors in general may make the halftone pattern visible. Using a CcMmYK process, with the addition of light cyan and magenta inks to CMYK, can solve these problems, and such a process is used by many inkjet printers, including desktop models.
Comparison with RGB displays
Comparisons between RGB displays and CMYK prints can be difficult, since the color reproduction technologies and properties are very different. A computer monitor mixes shades of red, green, and blue light to create color images. A CMYK printer instead uses light-absorbing cyan, magenta, and yellow inks, whose colors are mixed using dithering, halftoning, or some other optical technique.
Similar to electronic displays, the inks used in printing produce color gamuts that are only subsets of the visible spectrum, and the two color modes have their own specific ranges, each being capable of producing colors the other is not. As a result, an image rendered on an electronic display and the same image rendered in print can vary in appearance. When designing images to be printed, designers work in RGB color spaces (on electronic displays) capable of rendering colors a CMYK process cannot, and it is often difficult to accurately visualize a printed result that must fit into a different color space, one that both lacks some colors an electronic display can produce and includes colors it cannot.
Spectrum of printed paper
To reproduce color, the CMYK color model codes for absorbing light rather than emitting it (as is assumed by RGB). The K component ideally absorbs all wavelengths and is therefore achromatic. The cyan, magenta, and yellow components are used for color reproduction and they may be viewed as the inverse of RGB: Cyan absorbs red, magenta absorbs green, and yellow absorbs blue (−R,−G,−B).
Conversion
Since RGB and CMYK spaces are both device-dependent spaces, there is no simple or general conversion formula that converts between them. Conversions are generally done through color management systems, using color profiles that describe the spaces being converted. An ICC profile defines the bidirectional conversion between a neutral "profile connection" color space (CIE XYZ or Lab) and a selected colorspace, in this case both RGB and CMYK. The precision of the conversion depends on the profile itself, the exact methodology, and because the gamuts do not generally match, the rendering intent and constraints such as ink limit.
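Because both spaces are device-dependent, a formula-only conversion is at best a rough approximation, and production workflows go through ICC profiles as described here. Purely for illustration, a commonly quoted naive conversion (assuming idealized inks, no profile, and no ink limit) looks like this:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion with r, g, b in [0, 1].

    Real conversions use ICC profiles; this idealized formula ignores
    ink behavior, dot gain, rendering intent, and ink limits.
    """
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyk_to_rgb(c, m, y, k):
    """Inverse of the naive formula above."""
    return tuple((1.0 - x) * (1.0 - k) for x in (c, m, y))

print(rgb_to_cmyk(0.2, 0.4, 0.6))
```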
ICC profiles, internally built out of lookup tables and other transformation functions, are capable of handling many effects of ink blending. One example is dot gain, which shows up as a non-linear component in the color-to-density mapping. More complex interactions such as Neugebauer blending can be modelled in higher-dimension lookup tables.
The problem of computing a colorimetric estimate of the color that results from printing various combinations of ink has been addressed by many scientists. A general method that has emerged for the case of halftone printing is to treat each tiny overlap of color dots as one of 8 (combinations of CMY) or of 16 (combinations of CMYK) colors, which in this context are known as Neugebauer primaries. The resultant color would be an area-weighted colorimetric combination of these primary colors, except that the Yule–Nielsen effect of scattered light between and within the areas complicates the physics and the analysis; empirical formulas for such analysis have been developed, in terms of detailed dye combination absorption spectra and empirical parameters.
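A minimal sketch of the area-weighted Neugebauer idea for three inks (eight primaries), using the standard Demichel dot-overlap weights and a Yule–Nielsen correction with an assumed exponent; the primary reflectances below are illustrative placeholders, not measured values:

```python
# Hypothetical linear-RGB reflectances of the 8 CMY Neugebauer primaries
# (paper, single inks, two-ink overprints, three-ink overprint); real
# values would be measured from printed patches.
PRIMARIES = {
    (0, 0, 0): (0.95, 0.95, 0.95),  # bare paper
    (1, 0, 0): (0.05, 0.65, 0.90),  # cyan
    (0, 1, 0): (0.90, 0.10, 0.55),  # magenta
    (0, 0, 1): (0.95, 0.90, 0.10),  # yellow
    (1, 1, 0): (0.15, 0.10, 0.50),  # cyan + magenta (blue)
    (1, 0, 1): (0.05, 0.55, 0.15),  # cyan + yellow (green)
    (0, 1, 1): (0.80, 0.10, 0.10),  # magenta + yellow (red)
    (1, 1, 1): (0.08, 0.08, 0.08),  # all three (composite black)
}

def neugebauer(c, m, y, n=2.0):
    """Area-weighted Neugebauer estimate with a Yule-Nielsen exponent n."""
    coverages = (c, m, y)
    mixed = [0.0, 0.0, 0.0]
    for inks, reflectance in PRIMARIES.items():
        # Demichel weight: product of coverage or (1 - coverage) per ink,
        # i.e. the fractional area covered by exactly this ink combination.
        weight = 1.0
        for on, cov in zip(inks, coverages):
            weight *= cov if on else (1.0 - cov)
        for i in range(3):
            mixed[i] += weight * reflectance[i] ** (1.0 / n)
    return tuple(channel ** n for channel in mixed)

print(neugebauer(0.5, 0.2, 0.1))
```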
Standardization of printing practices allows some profiles to be predefined. One of them is the US Specifications for Web Offset Publications (SWOP), which has its ICC color profile built into some software, including Microsoft Office (as Agfa RSWOP.icm).
See also
CMY color model
CcMmYK color model
Cycolor
RGB color model
Gray component replacement
Jacob Christoph Le Blon
SWOP CMYK standard
Color management
Technicolor, the three-strip version of which is based on the CMYK model
References
External links
XCmyk– A Windows software with source code for converting CMYK to RGB.
RGB to CMYK converter– Tool for RGB to CMYK color converter online.
Color Space Fundamentals– animated illustration of RGB vs. CMYK
ICC profile registry, which lists some standard CMYK profiles, their paper types, and color separation limits
Color space
Printing
Printing terminology | CMYK color model | [
"Mathematics"
] | 1,988 | [
"Color space",
"Space (mathematics)",
"Metric spaces"
] |
44,688 | https://en.wikipedia.org/wiki/Almost%20everywhere | In measure theory (a branch of mathematical analysis), a property holds almost everywhere if, in a technical sense, the set for which the property holds takes up nearly all possibilities. The notion of "almost everywhere" is a companion notion to the concept of measure zero, and is analogous to the notion of almost surely in probability theory.
More specifically, a property holds almost everywhere if it holds for all elements in a set except a subset of measure zero, or equivalently, if the set of elements for which the property holds is conull. In cases where the measure is not complete, it is sufficient that the set be contained within a set of measure zero. When discussing sets of real numbers, the Lebesgue measure is usually assumed unless otherwise stated.
The term almost everywhere is abbreviated a.e.; in older literature p.p. is used, to stand for the equivalent French language phrase presque partout.
A set with full measure is one whose complement is of measure zero. In probability theory, the terms almost surely, almost certain and almost always refer to events with probability 1 not necessarily including all of the outcomes. These are exactly the sets of full measure in a probability space.
Occasionally, instead of saying that a property holds almost everywhere, it is said that the property holds for almost all elements (though the term almost all can also have other meanings).
Definition
If $(X, \Sigma, \mu)$ is a measure space, a property $P$ is said to hold almost everywhere in $X$ if there exists a measurable set $N \in \Sigma$ with $\mu(N) = 0$, and all $x \in X \setminus N$ have the property $P$.
Another common way of expressing the same thing is to say that "almost every point satisfies $P$", or that "for almost every $x$, $P(x)$ holds".
It is not required that the set of points at which $P$ fails has measure zero; that set may not even be measurable.
By the above definition, it is sufficient that the set of points at which $P$ fails be contained in some set $N$ that is measurable and has measure zero.
However, this technicality vanishes when considering a complete measure space: if $\mu$ is complete, then such an $N$ exists if and only if the set of points at which $P$ fails is itself measurable with measure zero.
Properties
If a property $P$ holds almost everywhere and implies a property $Q$, then $Q$ holds almost everywhere. This follows from the monotonicity of measures.
If $P_1, P_2, \ldots$ is a finite or countable sequence of properties, each of which holds almost everywhere, then their conjunction holds almost everywhere. This follows from the countable subadditivity of measures.
By contrast, if $(P_y)_{y \in \mathbb{R}}$ is an uncountable family of properties, each of which holds almost everywhere, then their conjunction does not necessarily hold almost everywhere. For example, if $\mu$ is Lebesgue measure on $\mathbb{R}$ and $P_y$ is the property of not being equal to $y$ (i.e. $P_y(x)$ is true if and only if $x \neq y$), then each $P_y$ holds almost everywhere, but the conjunction of all of them does not hold anywhere.
As a consequence of the first two properties, it is often possible to reason about "almost every point" of a measure space as though it were an ordinary point rather than an abstraction. This is often done implicitly in informal mathematical arguments. However, one must be careful with this mode of reasoning because of the third bullet above: universal quantification over uncountable families of statements is valid for ordinary points but not for "almost every point".
Examples
If f : R → R is a Lebesgue integrable function and $f(x) \geq 0$ almost everywhere, then $\int_a^b f(x)\,dx \geq 0$ for all real numbers $a < b$, with equality if and only if $f(x) = 0$ almost everywhere. (A numerical illustration of why a measure-zero set cannot affect an integral appears after this list.)
If f : [a, b] → R is a monotonic function, then f is differentiable almost everywhere.
If f : R → R is Lebesgue measurable and $\int_a^b |f(x)|\,dx < \infty$ for all real numbers $a < b$, then there exists a set E (depending on f) such that, if x is in E, the Lebesgue mean $\frac{1}{2\varepsilon}\int_{x-\varepsilon}^{x+\varepsilon} f(t)\,dt$ converges to $f(x)$ as $\varepsilon$ decreases to zero. The set E is called the Lebesgue set of f. Its complement can be proved to have measure zero. In other words, the Lebesgue mean of f converges to f almost everywhere.
A bounded function is Riemann integrable if and only if it is continuous almost everywhere.
As a curiosity, the decimal expansion of almost every real number in the interval [0, 1] contains the complete text of Shakespeare's plays, encoded in ASCII; similarly for every other finite digit sequence, see Normal number.
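To make the idea that measure-zero sets do not affect integration concrete, here is a minimal numerical sketch; the function and the finite exceptional set are arbitrary illustrative choices:

```python
# Changing an integrable function on a finite (hence measure-zero) set
# does not change its integral: the altered points receive vanishing
# weight as the partition is refined.
def midpoint_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x                         # integral over [0, 1] is 1/3
bad = {0.3, 0.6, 0.9}                       # a finite exceptional set
g = lambda x: 1e6 if x in bad else f(x)     # g = f almost everywhere

for n in (10, 1000, 100000):
    print(n, midpoint_sum(f, 0.0, 1.0, n), midpoint_sum(g, 0.0, 1.0, n))
# Both columns approach 1/3: the midpoint grid essentially never lands on
# the exceptional points, and any single hit would contribute only h * 1e6,
# which vanishes as the mesh h goes to 0.
```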
Definition using ultrafilters
Outside of the context of real analysis, the notion of a property true almost everywhere is sometimes defined in terms of an ultrafilter. An ultrafilter on a set X is a maximal collection F of subsets of X such that:
If U ∈ F and U ⊆ V then V ∈ F
The intersection of any two sets in F is in F
The empty set is not in F
A property P of points in X holds almost everywhere, relative to an ultrafilter F, if the set of points for which P holds is in F.
For example, one construction of the hyperreal number system defines a hyperreal number as an equivalence class of sequences that are equal almost everywhere as defined by an ultrafilter.
The definition of almost everywhere in terms of ultrafilters is closely related to the definition in terms of measures, because each ultrafilter defines a finitely-additive measure taking only the values 0 and 1, where a set has measure 1 if and only if it is included in the ultrafilter.
See also
Dirichlet's function, a function that is equal to 0 almost everywhere.
References
Bibliography
Mathematical terminology
Measure theory
"Mathematics"
] | 1,126 | [
"nan"
] |
44,708 | https://en.wikipedia.org/wiki/Ferroelectricity | In physics and materials science, ferroelectricity is a characteristic of certain materials that have a spontaneous electric polarization that can be reversed by the application of an external electric field. All ferroelectrics are also piezoelectric and pyroelectric, with the additional property that their natural electrical polarization is reversible. The term is used in analogy to ferromagnetism, in which a material exhibits a permanent magnetic moment. Ferromagnetism was already known when ferroelectricity was discovered in 1920 in Rochelle salt by American physicist Joseph Valasek. Thus, the prefix ferro, meaning iron, was used to describe the property despite the fact that most ferroelectric materials do not contain iron. Materials that are both ferroelectric and ferromagnetic are known as multiferroics.
Polarization
When most materials are electrically polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E; so the polarization is a linear function. This is called linear dielectric polarization (see figure). Some materials, known as paraelectric materials, show a more enhanced nonlinear polarization (see figure). The electric permittivity, corresponding to the slope of the polarization curve, is not constant as in linear dielectrics but is a function of the external electric field.
In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero polarization (after entrainment, see figure) even when the applied field E is zero. The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by a suitably strong applied electric field in the opposite direction; the polarization is therefore dependent not only on the current electric field but also on its history, yielding a hysteresis loop. They are called ferroelectrics by analogy to ferromagnetic materials, which have spontaneous magnetization and exhibit similar hysteresis loops.
Typically, materials demonstrate ferroelectricity only below a certain phase transition temperature, called the Curie temperature (TC) and are paraelectric above this temperature: the spontaneous polarization vanishes, and the ferroelectric crystal transforms into the paraelectric state. Many ferroelectrics lose their pyroelectric properties above TC completely, because their paraelectric phase has a centrosymmetric crystal structure.
Applications
The nonlinear nature of ferroelectric materials can be used to make capacitors with adjustable capacitance. Typically, a ferroelectric capacitor simply consists of a pair of electrodes sandwiching a layer of ferroelectric material. The permittivity of ferroelectrics is not only adjustable but commonly also very high, especially when close to the phase transition temperature. Because of this, ferroelectric capacitors are small in physical size compared to dielectric (non-tunable) capacitors of similar capacitance.
The spontaneous polarization of ferroelectric materials implies a hysteresis effect which can be used as a memory function, and ferroelectric capacitors are indeed used to make ferroelectric RAM for computers and RFID cards. In these applications thin films of ferroelectric materials are typically used, as this allows the field required to switch the polarization to be achieved with a moderate voltage. However, when using thin films a great deal of attention needs to be paid to the interfaces, electrodes and sample quality for devices to work reliably.
Ferroelectric materials are required by symmetry considerations to be also piezoelectric and pyroelectric. The combined properties of memory, piezoelectricity, and pyroelectricity make ferroelectric capacitors very useful, e.g. for sensor applications. Ferroelectric capacitors are used in medical ultrasound machines (the capacitors generate and then listen for the ultrasound ping used to image the internal organs of a body), high quality infrared cameras (the infrared image is projected onto a two dimensional array of ferroelectric capacitors capable of detecting temperature differences as small as millionths of a degree Celsius), fire sensors, sonar, vibration sensors, and even fuel injectors on diesel engines.
Another idea of recent interest is the ferroelectric tunnel junction (FTJ) in which a contact is made up by nanometer-thick ferroelectric film placed between metal electrodes. The thickness of the ferroelectric layer is small enough to allow tunneling of electrons. The piezoelectric and interface effects as well as the depolarization field may lead to a giant electroresistance (GER) switching effect.
Yet another burgeoning application is multiferroics, where researchers are looking for ways to couple magnetic and ferroelectric ordering within a material or heterostructure; there are several recent reviews on this topic.
Catalytic properties of ferroelectrics have been studied since 1952, when Parravano observed anomalies in CO oxidation rates over ferroelectric sodium and potassium niobates near the Curie temperature of these materials. The surface-perpendicular component of the ferroelectric polarization can dope polarization-dependent charges onto the surfaces of ferroelectric materials, changing their chemistry. This opens the possibility of performing catalysis beyond the limits of the Sabatier principle. The Sabatier principle states that the surface–adsorbate interaction has to be of an optimal strength: not so weak as to be inert toward the reactants, and not so strong as to poison the surface and prevent desorption of the products; a compromise situation. This set of optimum interactions is usually referred to as the "top of the volcano" in activity volcano plots. On the other hand, ferroelectric polarization-dependent chemistry can offer the possibility of switching the surface–adsorbate interaction from strong adsorption to strong desorption, so that a compromise between desorption and adsorption is no longer needed. Ferroelectric polarization can also act as an energy harvester: the polarization can help to separate photo-generated electron–hole pairs, leading to enhanced photocatalysis. Also, due to the pyroelectric and piezoelectric effects, under varying temperature (heating/cooling cycles) or varying strain (vibrations) conditions, extra charges can appear on the surface and drive various (electro)chemical reactions forward.
Photoferroelectric imaging is a technique to record optical information on pieces of ferroelectric material. The images are nonvolatile and selectively erasable.
Materials
The internal electric dipoles of a ferroelectric material are coupled to the material lattice so anything that changes the lattice will change the strength of the dipoles (in other words, a change in the spontaneous polarization). The change in the spontaneous polarization results in a change in the surface charge. This can cause current flow in the case of a ferroelectric capacitor even without the presence of an external voltage across the capacitor. Two stimuli that will change the lattice dimensions of a material are force and temperature. The generation of a surface charge in response to the application of an external stress to a material is called piezoelectricity. A change in the spontaneous polarization of a material in response to a change in temperature is called pyroelectricity.
Generally, there are 230 space groups, which fall into 32 crystalline classes. There are 21 non-centrosymmetric classes, of which 20 are piezoelectric. Among the piezoelectric classes, 10 have a spontaneous electric polarization which varies with temperature; these are pyroelectric. The ferroelectrics are a subset of the pyroelectrics, with a spontaneous polarization that is in addition reversible.
Ferroelectric phase transitions are often characterized as either displacive (such as BaTiO3) or order-disorder (such as NaNO2), though often phase transitions will demonstrate elements of both behaviors. In barium titanate, a typical ferroelectric of the displacive type, the transition can be understood in terms of a polarization catastrophe, in which, if an ion is displaced from equilibrium slightly, the force from the local electric fields due to the ions in the crystal increases faster than the elastic-restoring forces. This leads to an asymmetrical shift in the equilibrium ion positions and hence to a permanent dipole moment. The ionic displacement in barium titanate concerns the relative position of the titanium ion within the oxygen octahedral cage. In lead titanate, another key ferroelectric material, although the structure is rather similar to barium titanate the driving force for ferroelectricity is more complex with interactions between the lead and oxygen ions also playing an important role. In an order-disorder ferroelectric, there is a dipole moment in each unit cell, but at high temperatures they are pointing in random directions. Upon lowering the temperature and going through the phase transition, the dipoles order, all pointing in the same direction within a domain.
An important ferroelectric material for applications is lead zirconate titanate (PZT), which is part of the solid solution formed between ferroelectric lead titanate and anti-ferroelectric lead zirconate. Different compositions are used for different applications; for memory applications, PZT closer in composition to lead titanate is preferred, whereas piezoelectric applications make use of the diverging piezoelectric coefficients associated with the morphotropic phase boundary that is found close to the 50/50 composition.
Ferroelectric crystals often show several transition temperatures and domain structure hysteresis, much as do ferromagnetic crystals. The nature of the phase transition in some ferroelectric crystals is still not well understood.
In 1974 R.B. Meyer used symmetry arguments to predict ferroelectric liquid crystals, and the prediction could immediately be verified by several observations of behavior connected to ferroelectricity in smectic liquid-crystal phases that are chiral and tilted. The technology allows the building of flat-screen monitors. Mass production between 1994 and 1999 was carried out by Canon. Ferroelectric liquid crystals are used in production of reflective LCoS.
In 2010 David Field found that prosaic films of chemicals such as nitrous oxide or propane exhibited ferroelectric properties. This new class of ferroelectric materials exhibit "spontelectric" properties, and may have wide-ranging applications in device and nano-technology and also influence the electrical nature of dust in the interstellar medium.
Other ferroelectric materials used include triglycine sulfate, polyvinylidene fluoride (PVDF) and lithium tantalate. A single atom thick ferroelectric monolayer can be created using pure bismuth.
It should be possible to produce materials which combine both ferroelectric and metallic properties simultaneously, at room temperature. According to research published in 2018 in Nature Communications, scientists were able to produce a two-dimensional sheet of material which was both ferroelectric (had a polar crystal structure) and which conducted electricity.
Theory
Based on Ginzburg–Landau theory, the free energy of a ferroelectric material, in the absence of an electric field and applied stress, may be written as a Taylor expansion in terms of the order parameter, the polarization P. If a sixth order expansion is used (i.e. 8th order and higher terms truncated), the free energy is given by:
$$\Delta E = \tfrac{1}{2}\alpha_0(T-T_0)\left(P_x^2+P_y^2+P_z^2\right) + \tfrac{1}{4}\alpha_{11}\left(P_x^4+P_y^4+P_z^4\right) + \tfrac{1}{2}\alpha_{12}\left(P_x^2P_y^2+P_y^2P_z^2+P_z^2P_x^2\right) + \tfrac{1}{6}\alpha_{111}\left(P_x^6+P_y^6+P_z^6\right) + \tfrac{1}{2}\alpha_{112}\left[P_x^4\left(P_y^2+P_z^2\right)+P_y^4\left(P_x^2+P_z^2\right)+P_z^4\left(P_x^2+P_y^2\right)\right] + \tfrac{1}{2}\alpha_{123}P_x^2P_y^2P_z^2$$
where $P_x$, $P_y$, and $P_z$ are the components of the polarization vector in the $x$, $y$, and $z$ directions respectively, and the coefficients $\alpha_i$, $\alpha_{ij}$, $\alpha_{ijk}$ must be consistent with the crystal symmetry. To investigate domain formation and other phenomena in ferroelectrics, these equations are often used in the context of a phase field model. Typically, this involves adding a gradient term, an electrostatic term and an elastic term to the free energy. The equations are then discretized onto a grid using the finite difference method or finite element method and solved subject to the constraints of Gauss's law and linear elasticity.
In all known ferroelectrics, $\alpha_0 > 0$ and $\alpha_{111} > 0$. These coefficients may be obtained experimentally or from ab-initio simulations. For ferroelectrics with a first order phase transition, $\alpha_{11} < 0$, whereas $\alpha_{11} > 0$ for a second order phase transition.
The spontaneous polarization, $P_s$, of a ferroelectric for a cubic to tetragonal phase transition may be obtained by considering the 1D expression of the free energy, which is:
$$\Delta E = \tfrac{1}{2}\alpha_0(T-T_0)P_x^2 + \tfrac{1}{4}\alpha_{11}P_x^4 + \tfrac{1}{6}\alpha_{111}P_x^6$$
This free energy has the shape of a double well potential with two free energy minima at $P_x = \pm P_s$, the spontaneous polarization. We find the derivative of the free energy, and set it equal to zero in order to solve for $P_x$:
$$\frac{\partial \Delta E}{\partial P_x} = \alpha_0(T-T_0)P_x + \alpha_{11}P_x^3 + \alpha_{111}P_x^5 = 0$$
Since the solution $P_x = 0$ of this equation corresponds to a free energy maximum in the ferroelectric phase, the desired solutions for $P_s$ correspond to setting the remaining factor to zero:
$$\alpha_0(T-T_0) + \alpha_{11}P_x^2 + \alpha_{111}P_x^4 = 0$$
whose solution is:
$$P_x^2 = \frac{-\alpha_{11} \pm \sqrt{\alpha_{11}^2 - 4\alpha_0\alpha_{111}(T-T_0)}}{2\alpha_{111}}$$
and eliminating solutions which take the square root of a negative number (for either the first or second order phase transitions) gives:
$$P_s = \sqrt{\frac{-\alpha_{11} + \sqrt{\alpha_{11}^2 - 4\alpha_0\alpha_{111}(T-T_0)}}{2\alpha_{111}}}$$
If $\alpha_{111} \to 0$, the solution for the spontaneous polarization reduces to:
$$P_s = \sqrt{\frac{\alpha_0(T_0 - T)}{\alpha_{11}}}$$
The hysteresis loop ($P_x$ versus $E_x$) may be obtained from the free energy expansion by including the term $-E_x P_x$ corresponding to the energy due to an external electric field $E_x$ interacting with the polarization, as follows:
$$\Delta E = \tfrac{1}{2}\alpha_0(T-T_0)P_x^2 + \tfrac{1}{4}\alpha_{11}P_x^4 + \tfrac{1}{6}\alpha_{111}P_x^6 - E_x P_x$$
We find the stable polarization values of $P_x$ under the influence of the external field, now denoted as $P_e$, again by setting the derivative of the energy with respect to $P_x$ to zero:
$$E_x = \alpha_0(T-T_0)P_e + \alpha_{11}P_e^3 + \alpha_{111}P_e^5$$
Plotting $E_x$ (on the X axis) as a function of $P_e$ (on the Y axis) gives an S-shaped curve which is multi-valued in $P_e$ for some values of $E_x$. The central part of the 'S' corresponds to a free energy local maximum (since $\partial^2 \Delta E / \partial P_x^2 < 0$ there). Elimination of this region, and connection of the top and bottom portions of the 'S' curve by vertical lines at the discontinuities, gives the hysteresis loop of internal polarization due to an external electric field.
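A minimal numerical sketch of this construction, with illustrative (not measured) coefficients for a first-order ferroelectric; it traces the S-curve $E_x(P_e)$ and reads off the coercive field from the unstable branch:

```python
import numpy as np

# Illustrative Landau coefficients for a first-order ferroelectric
# (alpha_11 < 0); the numbers are placeholders, not measured values.
a0, a11, a111 = 1.0, -2.0, 1.0
T, T0 = 0.8, 1.0                  # temperature below the transition

P = np.linspace(-1.6, 1.6, 2001)  # trial polarization values P_e
E = a0 * (T - T0) * P + a11 * P**3 + a111 * P**5     # the S-shaped E_x(P_e)

# Stability: branches with dE/dP > 0 are free-energy minima; the middle
# branch (dE/dP < 0) is the local maximum that is cut out of the loop.
dEdP = a0 * (T - T0) + 3 * a11 * P**2 + 5 * a111 * P**4
Ec = E[dEdP < 0].max()            # field at the tip of the unstable branch
print(f"coercive field (illustrative units): {Ec:.3f}")
```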
Sliding ferroelectricity
Sliding ferroelectricity occurs widely, but only in two-dimensional (2D) van der Waals stacked layers. The vertical electric polarization is switched by in-plane interlayer sliding.
See also
:Category:Ferroelectric materials
Physics
Lists
References
Further reading
External links
Ferroelectric Materials at University of Cambridge
Ferroelectric materials
Electric and magnetic fields in matter
Electrical phenomena
Phases of matter | Ferroelectricity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,014 | [
"Physical phenomena",
"Ferroelectric materials",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Materials",
"Electrical phenomena",
"Condensed matter physics",
"Hysteresis",
"Matter"
] |
44,716 | https://en.wikipedia.org/wiki/Razor%20wire | Barbed tape or razor wire is a mesh of metal strips with sharp edges whose purpose is to prevent trespassing by humans. The term "razor wire", through long usage, has generally been used to describe barbed tape products. Razor wire is much sharper than the standard barbed wire; it is named after its appearance but is not razor sharp. The points are very sharp and made to rip and snag clothing and flesh.
The multiple blades of a razor-wire fence are designed to inflict serious cuts on anyone attempting to climb through or over it and therefore also has a strong psychological deterrent effect. Razor wire is used in many security applications because, although it can be circumvented relatively quickly by humans with tools, penetrating a razor-wire barrier without tools is very slow and typically injurious, often thwarting such attempts or giving security forces more time to respond.
Use
The first use of barbed wire for warfare was in 1898 during the Spanish–American War, thirty-one years after the first patents were issued in 1867. One of the most notable examples during the Spanish–American War is the defense provided by the Júcaro–Morón trocha. The trocha (or trench line) stretched for fifty miles between Morón and Júcaro. Within this trench, and in addition to fallen trees, barbed wire was used. The barbed wire was arranged in a cat's cradle formation such that for every 12 yards of barbed fence built, 420 yards of barbed wire was strung (or 35 yards of wire per yard of fence).
Later versions of this type of barbed wire were manufactured by Germany during the First World War. The reason for this was a wartime shortage of wire to make conventional barbed wire. Therefore, flat wire with triangular cutting edges began to be punched out of steel strips ("band barbed wire"). A welcome side effect was that a comparable length of barbed wire of this new type could be produced in less time. These precursors to NATO wire did not yet have an inner wire for stabilization, were therefore easy to cut with tin snips, and were also not as robust as normal barbed wire. However, they withstood the wire cutters used at the time to cut normal barbed wire, as was common at the front.
An article in a 1918 issue of The Hardware Trade Journal tells the story under the headline: "This Cruel War’s Abuse of Our Old Friend ‘Bob Wire.'" After telling a little about Glidden and his invention, the article goes on as follows: "Quite naturally some animals enclosed by Glidden’s fencing gashed themselves on the barbs. Just as naturally, men and boys tried to climb over or under those fences and had their clothes and flesh torn...These wounds upon man and beast and the suddenness with which Glidden’s barbs halted all living things came to the attention of military men, and the barbed wire entanglement of which we now read almost every day in the war news was born...And it may be said right here that soldiers who have been halted by wire entanglements while making a charge say the devil never invented anything nastier."
Starting in the late 1960s, barbed tape was typically found in prisons and secure mental hospitals, where the increased breaching time for a poorly equipped potential escapee was a definite advantage. Until the development of reinforced barbed tape in the early 1980s (and especially after the September 11 attacks), it was rarely used for military purposes or genuine high security facilities because, with the correct tools, it was easier to breach than ordinary barbed wire. Since then, some military forces have replaced barbed wire with barbed tape for many applications, mainly because it is slightly lighter for the same effective coverage, and it takes up very little space compared to barbed wire or reinforced barbed tape when stored on drums.
More recently, barbed tape has been used in more commercial and residential security applications. This is often primarily a visual deterrent since a well-prepared burglar can breach barbed wire and barbed tape barriers in similar amounts of time, using simple techniques such as cutting the wire or throwing a piece of carpet over its strands.
Due to its dangerous nature, razor wire/barbed tape and similar fencing/barrier materials are prohibited in some locales. Norway prohibits any barbed wire except in combination with other fencing, in order to protect domesticated animals from exposure.
Construction
Razor wire has a central strand of high tensile strength wire, and a steel tape punched into a shape with barbs. The steel tape is then cold-crimped tightly to the wire everywhere except for the barbs. Flat barbed tape is very similar, but has no central reinforcement wire. The process of combining the two is called roll forming.
Types
Like barbed wire, razor wire is available as either straight wire, spiral (helical) coils, concertina (clipped) coils, flat wrapped panels or welded mesh panels. Unlike barbed wire, which usually is available only as plain steel or galvanized, barbed tape razor wire is also manufactured in stainless steel to reduce corrosion from rusting. The core wire can be galvanized and the tape stainless, although fully stainless barbed tape is used for permanent installations in harsh climatic environments or under water.
Barbed tape is also characterized by the shape and length of the barbs. Although there are no formal definitions, products are commonly classed as short, medium, or long barb tape according to barb length.
According to the structure
Helical type: Helical razor wire is the simplest pattern. There are no concertina attachments, and each spiral loop is left unclipped, so the coil extends freely as a natural spiral.
Concertina type: The most widely used type in security and defense applications. The adjacent loops of the helical coil are attached by clips at specified points on the circumference, giving an accordion-like configuration.
Blade type: The razor wire is produced in straight lengths and cut to a certain length to be welded onto a galvanized or powder-coated frame. It can be used individually as a security barrier.
Flat type: A popular razor wire type with a flat, smooth configuration (the loops overlap like Olympic rings). Depending on the manufacturing technology, it can be of the clipped or the welded type.
Welded type: The razor wire strips are welded into panels, and the panels are then connected by clips or tie wires to form a continuous razor wire fence.
Flattened type: A transformation of single coil concertina razor wire. The concertina wire is flattened to form the flat-type razor wire.
According to the coil type
Single coil: Commonly seen and widely used type, which is available in both helical and concertina types.
Double coil: A complex razor wire type to supply higher security grade. A smaller diameter coil is placed inside of the larger diameter coil. It is also available in both helical and concertina types.
Common specifications of razor wire
See also
Access control
Environmental design
Physical security
Wire obstacle
Concertina wire
References
External links
Fortification (architectural elements)
Engineering barrages
Area denial weapons
Wire | Razor wire | [
"Engineering"
] | 1,448 | [
"Area denial weapons",
"Military engineering",
"Engineering barrages"
] |
44,718 | https://en.wikipedia.org/wiki/Soapstone | Soapstone (also known as steatite or soaprock) is a talc-schist, which is a type of metamorphic rock. It is composed largely of the magnesium-rich mineral talc. It is produced by dynamothermal metamorphism and metasomatism, which occur in subduction zones, changing rocks by heat and pressure, with influx of fluids but without melting. It has been a carving medium for thousands of years.
Terminology
The definitions of the terms "steatite" and "soapstone" vary with the field of study. In geology, steatite is a rock that is, to a very large extent, composed of talc. The mining industry defines steatite as a high-purity talc rock suitable for the manufacturing of, for example, insulators; lesser grades of the mineral may be called simply "talc rock". Steatite can be used both in lumps ("block steatite", "lava steatite", "lava grade talc") and in ground form. While geologists logically use "steatite" to designate both forms, in industry "steatite" without additional qualification typically means steatite that is either already ground or destined to be ground. If the ground steatite is pressed together into blocks, these are called "synthetic block steatite", "artificial block steatite", or "artificial lava talc".
In industrial applications soapstone refers to dimension stone that consists of either amphibole-chlorite-carbonate-talc rock, talc-carbonate rock, or simply talc rock and is sold in the form of sawn slabs. "Ground soapstone" sometimes designates the ground waste product of the slab manufacturing.
Petrology
Petrologically, soapstone is composed predominantly of talc, with varying amounts of chlorite and amphiboles (typically tremolite, anthophyllite, and cummingtonite, hence its obsolete name, magnesiocummingtonite), and traces of minor iron-chromium oxides. It may be schistose or massive. Soapstone is formed by the metamorphism of ultramafic protoliths (e.g. dunite or serpentinite) and the metasomatism of siliceous dolomites.
By mass, "pure" steatite is roughly 63.37% silica, 31.88% magnesia, and 4.74% water. It commonly contains minor quantities of other oxides such as CaO or Al2O3.
Pyrophyllite, a mineral very similar to talc, is sometimes called soapstone in the generic sense, since its physical characteristics and industrial uses are similar, and because it is also commonly used as a carving material. However, this mineral typically does not have such a soapy feel as soapstone.
Physical characteristics
Soapstone is relatively soft because of its high talc content—talc has a definitional value of 1 on the Mohs hardness scale. Softer grades may feel similar to soap when touched, hence the name. No fixed hardness is given for soapstone because the amount of talc it contains varies widely, from as little as 30% for architectural grades such as those used on countertops, to as much as 80% for carving grades.
Soapstone is easy to carve; it is also durable and heat-resistant and has a high heat storage capacity. It has therefore been used for cooking and heating equipment for thousands of years.
Soapstone is often used as an insulator for housing and electrical components, due to its durability and electrical characteristics and because it can be pressed into complex shapes before firing. When fired, soapstone transforms into enstatite and cristobalite; on the Mohs scale, this corresponds to an increase in hardness to 5.5–6.5. The resulting material, harder than glass, is sometimes called "lava".
Historical usage
Africa
Ancient Egyptian scarab signets and amulets were most commonly made from glazed steatite. The Yoruba people of West Nigeria used soapstone for several statues, most notably at Esie, where archaeologists have uncovered hundreds of male and female statues about half life size. The Yoruba of Ife also produced a miniature soapstone obelisk with metal studs called "the staff of Oranmiyan".
Soapstone mining in Tabaka, Kenya occurs in relatively shallow and accessible quarries in the surrounding areas of Sameta, Nyabigege and Bomware. These quarries were historically open to anyone with the labor resources to work them. In practice this meant that the men did the mining, as they were the custodians of the community's ancestral lands in Riamosioma, Itumbe, Nyatike and elsewhere.
Americas
Native Americans have used soapstone since the Late Archaic period. During the Archaic archaeological period (8000–1000 BC), bowls, cooking slabs, and other objects were made from soapstone. The use of soapstone cooking vessels during this period has been attributed to the rock's thermal qualities: compared to clay or metal containers, soapstone retains heat more effectively. Use of soapstone in Native American cultures continues to the modern day. Later cultures carved soapstone smoking pipes, a practice that continues today; the stone's low heat conduction allows prolonged smoking without the pipe heating up uncomfortably.
Indigenous peoples of the Arctic have traditionally used soapstone for carvings of both practical objects and art. The qulliq, a type of oil lamp, is carved out of soapstone and used by the Inuit and Dorset peoples. The soapstone oil lamps indicate these people had easy access to oils derived from marine mammals.
In the modern period, soapstone is commonly used for carvings in Inuit art.
In the United States, locally quarried soapstone was used for gravemarkers in 19th century northeast Georgia, around Dahlonega, and Cleveland as simple field stone and "slot and tab" tombs.
In Canada, soapstone was quarried in Arctic regions, such as the western part of Ungava Bay, and in the Appalachian Mountain system from Newfoundland.
Asia
The ancient trading city of Tepe Yahya in southeastern Iran was a center for the production and distribution of soapstone in the 5th to 3rd millennia BC.
Soapstone has been used in India as a medium for sculptures since at least the time of the Hoysala Empire, the Western Chalukya Empire and to an extent Vijayanagara Empire.
Even earlier, steatite was used as the substrate for Indus-Harappan seals. After the intricate carving of icons and (as yet undeciphered) symbols, the seals were fired for several days to make them hard and durable, producing the final seals used for making impressions on clay.
In China, during the Spring and Autumn period (771–476 BC), soapstone was carved into ceremonial knives. Soapstone was also used to carve Chinese seals.
Soapstone was used as a writing pencil in Myanmar as early as the 11th-century Pagan period. After that, it was still used as a pencil to write on Black Parabaik until the end of the Mandalay period (19th century).
Australia
Pipes and decorative carvings of local animals were made out of soapstone by the Australian Aboriginal artist Erlikilyika in Central Australia.
Europe
The Minoan civilization on Crete used soapstone. At the Palace of Knossos, a steatite libation table was found. Soapstone is relatively abundant in northern Europe. Vikings hewed soapstone directly from the stone face, shaped it into cooking pots, and sold these at home and abroad. In Shetland, there is evidence that these vessels were used for processing marine and dairy fats. Several surviving medieval buildings in northern Europe are constructed with soapstone, amongst them Nidaros Cathedral.
Modern usage
In modern times, soapstone is most commonly used for architectural applications, such as counter tops, floor tiles, showerbases, and interior surfacing.
Soapstone is sometimes used for construction of fireplace surrounds, cladding on wood-burning stoves, and as the preferred material for woodburning masonry heaters because it can absorb, store, and evenly radiate heat due to its high density and magnesite (MgCO3) content. It is also used for countertops and bathroom tiling because of the ease of working the material and its property as the "quiet stone". A weathered or aged appearance occurs naturally over time as the patina is enhanced.
Soapstone can be used to create molds for casting objects from soft metals, such as pewter or silver. The soft stone is easily carved and is not degraded by heating. The slick surface of soapstone allows the finished object to be easily removed.
Welders and fabricators use soapstone as a marker due to its resistance to heat; it remains visible when heat is applied. It has also been used for many years by seamstresses, carpenters, and other craftspeople as a marking tool, because its marks are visible but not permanent.
Resistance to heat made steatite suitable for manufacturing gas burner tips, spark plugs, and electrical switchboards.
Ceramics
Steatite ceramics are low-cost biaxial porcelains of nominal composition (MgO)3(SiO2)4. Steatite is used primarily for its dielectric and thermally-insulating properties in applications such as tile, substrates, washers, bushings, beads, and pigments. It is also used for high-voltage insulators, which have to stand large mechanical loads, such as insulators of mast radiators.
Crafts
Soapstone continues to be used for carvings and sculptures by artists and indigenous peoples. In Brazil, especially in the state of Minas Gerais, the abundance of soapstone mines allows local artisans to craft pots, pans, wine glasses, statues, jewel boxes, coasters, and vases from soapstone. These handicrafts are commonly sold in street markets found in cities across the state. Some of the oldest towns, notably Congonhas, Tiradentes, and Ouro Preto, still have some of their streets paved with soapstone from colonial times.
Mining
Architectural soapstone is mined in Canada, Brazil, India, and Finland and imported into the United States. Active North American mines include one south of Quebec City with products marketed by Canadian Soapstone, the Treasure and Regal mines in Beaverhead County, Montana mined by the Barretts Minerals Company, and another in central Virginia operated by the Alberene Soapstone Company.
Mining to meet worldwide demand for soapstone is threatening the habitat of India's tigers.
Other
Soapstones can be put in a freezer and later used in place of ice cubes to chill alcoholic beverages without diluting. Sometimes called whiskey stones, these were first introduced around 2007. Most whiskey stones feature a semipolished finish, retaining the soft look of natural soapstone, while others are highly polished.
Safety
People can be exposed to soapstone dust in the workplace via inhalation and skin or eye contact. Exposure above safe limits can lead to symptoms including coughing, shortness of breath, cyanosis, crackles, and pulmonary heart disease. Due to the potential presence of tremolite and crystalline silica in the dust, precautions should be taken to avoid occupational diseases such as asbestosis, silicosis, mesothelioma, and lung cancer.
United States
The Occupational Safety and Health Administration has set the legal limit (permissible exposure limit) for soapstone exposure in the workplace as 20 million particles per cubic foot over an 8-hour workday. The National Institute for Occupational Safety and Health has set a recommended exposure limit of 6 mg/m3 total exposure and 3 mg/m3 respiratory exposure over an 8-hour workday. At levels of 3000 mg/m3, soapstone is immediately dangerous to life and health.
Other names
The local names for the soapstone vary: in Vermont, "grit" is used, in Georgia "white-grinding" and "dark-grinding" varieties are distinguished, and California has "soft", "hard", and "blue" talc. Also:
Combarbalite stone, exclusively mined in Combarbalá, Chile, is known for its many colors. While they are not visible during mining, they appear after refining.
Palewa and gorara stones are types of Indian soapstone.
A variety of other regional and marketing names for soapstone are used.
Gallery
See also
List of minerals
List of rocks
Talc carbonate
Archeological Site 38CK1, Archeological Site 38CK44, and Archeological Site 38CK45
Citations
General and cited references
Further reading
Felce, Robert (2011). Soaprock Coast... The origins of English porcelain.
External links
Soapstone Calculated Refractory Data w/ Technical Properties Converter (Incl. Soapstone Volume vs. Weight measuring units)
Ancient soapstone bowl (The Central States Archaeological Journal)
Soapstone Native American quarries, Maryland (Geological Society of America)
Prehistoric soapstone use in northeastern Maryland (Antiquity Journal)
The Blue Rock Soapstone Quarry, Yancey County, NC (North Carolina Office of State Archaeology)
CDC - NIOSH Pocket Guide to Chemical Hazards
Steatite historical marker in Decatur, Georgia
Ceramic materials
Dielectrics
Metamorphic rocks
Petrology
Phyllosilicates
Sculpture materials
Stone (material) | Soapstone | [
"Physics",
"Engineering"
] | 2,781 | [
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Dielectrics",
"Matter"
] |
44,726 | https://en.wikipedia.org/wiki/Magnetoresistance | Magnetoresistance is the tendency of a material (often ferromagnetic) to change the value of its electrical resistance in an externally-applied magnetic field. There are a variety of effects that can be called magnetoresistance. Some occur in bulk non-magnetic metals and semiconductors, such as geometrical magnetoresistance, Shubnikov–de Haas oscillations, or the common positive magnetoresistance in metals. Other effects occur in magnetic metals, such as negative magnetoresistance in ferromagnets or anisotropic magnetoresistance (AMR). Finally, in multicomponent or multilayer systems (e.g. magnetic tunnel junctions), giant magnetoresistance (GMR), tunnel magnetoresistance (TMR), colossal magnetoresistance (CMR), and extraordinary magnetoresistance (EMR) can be observed.
The first magnetoresistive effect was discovered in 1856 by William Thomson, better known as Lord Kelvin, but he was unable to lower the electrical resistance of anything by more than 5%. Today, systems including semimetals and concentric ring EMR structures are known. In these, a magnetic field can adjust the resistance by orders of magnitude. Since different mechanisms can alter the resistance, it is useful to separately consider situations where it depends on a magnetic field directly (e.g. geometric magnetoresistance and multiband magnetoresistance) and those where it does so indirectly through magnetization (e.g. AMR and TMR).
Discovery
William Thomson (Lord Kelvin) first discovered ordinary magnetoresistance in 1856. He experimented with pieces of iron and discovered that the resistance increases when the current is in the same direction as the magnetic force and decreases when the current is at 90° to the magnetic force. He then did the same experiment with nickel and found that it was affected in the same way but the magnitude of the effect was greater. This effect is referred to as anisotropic magnetoresistance (AMR).
In 2007, Albert Fert and Peter Grünberg were jointly awarded the Nobel Prize for the discovery of giant magnetoresistance.
Geometrical magnetoresistance
An example of magnetoresistance due to direct action of magnetic field on electric current can be studied on a Corbino disc (see Figure).
It consists of a conducting annulus with perfectly conducting rims. Without a magnetic field, the battery drives a radial current between the rims. When a magnetic field perpendicular to the plane of the annulus is applied, (either into or out of the page) a circular component of current flows as well, due to Lorentz force. Initial interest in this problem began with Boltzmann in 1886, and independently was re-examined by Corbino in 1911.
In a simple model, supposing the response to the Lorentz force is the same as for an electric field, the carrier velocity is given by:

$$\mathbf{v} = \mu\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right),$$

where $\mu$ is the carrier mobility. Solving for the velocity, we find:

$$\mathbf{v} = \frac{\mu}{1 + (\mu B)^2}\left(\mathbf{E} + \mu\,\mathbf{E} \times \mathbf{B} + \mu^2 (\mathbf{E} \cdot \mathbf{B})\,\mathbf{B}\right),$$

where the effective reduction in mobility due to the $\mathbf{B}$-field (for motion perpendicular to this field) is apparent. Electric current (proportional to the radial component of velocity) will decrease with increasing magnetic field and hence the resistance of the device will increase. Critically, this magnetoresistive scenario depends sensitively on the device geometry and current lines, and it does not rely on magnetic materials.
In a semiconductor with a single carrier type, the magnetoresistance is proportional to $(\mu B)^2$, where $\mu$ is the semiconductor mobility (units m2·V−1·s−1, equivalently m2·Wb−1, or T −1) and $B$ is the magnetic field (units teslas). Indium antimonide, an example of a high-mobility semiconductor, has an electron mobility of several m2·V−1·s−1 at room temperature, so a field of only a fraction of a tesla suffices to make $\mu B = 1$, at which point the magnetoresistance increase is 100%.
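As a quick numeric illustration of the $(\mu B)^2$ scaling, here is a minimal sketch; the mobility and field values are assumed for illustration, not taken from measurements:

```python
def geometric_magnetoresistance(mobility, b_field):
    """Fractional resistance increase (mu*B)**2 predicted by the
    single-carrier model for the Corbino geometry described above."""
    return (mobility * b_field) ** 2

# Assumed InSb-like electron mobility of 4 m^2/(V*s): a 0.25 T field
# gives mu*B = 1, i.e. a 100% increase in resistance.
print(geometric_magnetoresistance(4.0, 0.25))  # -> 1.0
```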
Anisotropic magnetoresistance (AMR)
Thomson's experiments are an example of AMR, a property of a material in which a dependence of electrical resistance on the angle between the direction of electric current and direction of magnetization is observed. The effect arises in most cases from the simultaneous action of magnetization and spin–orbit interaction (exceptions related to non-collinear magnetic order notwithstanding) and its detailed mechanism depends on the material. It can be for example due to a larger probability of s-d scattering of electrons in the direction of magnetization (which is controlled by the applied magnetic field). The net effect (in most materials) is that the electrical resistance has maximum value when the direction of current is parallel to the applied magnetic field. AMR of new materials is being investigated and magnitudes up to 50% have been observed in some uranium (but otherwise quite conventional) ferromagnetic compounds. Materials with extreme AMR have been identified driven by unconventional mechanisms such as a metal-insulator transition triggered by rotating the magnetic moments (while for some directions of magnetic moments, the system is semimetallic, for other directions a gap opens).
In polycrystalline ferromagnetic materials, the AMR can only depend on the angle $\varphi$ between the magnetization and current direction and (as long as the resistivity of the material can be described by a rank-two tensor), it must follow

$$\rho(\varphi) = \rho_\perp + (\rho_\parallel - \rho_\perp)\cos^2\varphi,$$

where $\rho$ is the (longitudinal) resistivity of the film and $\rho_\parallel$ and $\rho_\perp$ are the resistivities for $\varphi = 0°$ and $\varphi = 90°$, respectively. Associated with longitudinal resistivity, there is also a transversal resistivity dubbed (somewhat confusingly) the planar Hall effect. In monocrystals, the resistivity depends also on the orientations of the current and magnetization relative to the crystal axes individually, not only on the angle between them.
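A minimal numeric sketch of this $\cos^2$ law follows; the resistivity values are hypothetical, chosen only to show the angular dependence:

```python
import numpy as np

def amr_resistivity(phi, rho_perp, rho_par):
    """Longitudinal resistivity at angle phi (radians) between current
    and magnetization: rho_perp + (rho_par - rho_perp) * cos(phi)**2."""
    return rho_perp + (rho_par - rho_perp) * np.cos(phi) ** 2

# Hypothetical permalloy-like values: resistivity is maximal when the
# current is parallel to the magnetization (phi = 0).
phis = np.linspace(0.0, np.pi / 2, 4)
print(amr_resistivity(phis, rho_perp=1.00, rho_par=1.02))
```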
To compensate for the non-linear characteristics and inability to detect the polarity of a magnetic field, the following structure is used for sensors. It consists of stripes of aluminum or gold placed on a thin film of permalloy (a ferromagnetic material exhibiting the AMR effect) inclined at an angle of 45°. This structure forces the current not to flow along the “easy axes” of thin film, but at an angle of 45°. The dependence of resistance now has a permanent offset which is linear around the null point. Because of its appearance, this sensor type is called 'barber pole'.
The AMR effect is used in a wide array of sensors for measurement of Earth's magnetic field (electronic compass), for electric current measuring (by measuring the magnetic field created around the conductor), for traffic detection and for linear position and angle sensing. The biggest AMR sensor manufacturers are Honeywell, NXP Semiconductors, STMicroelectronics, and Sensitec GmbH.
As theoretical background, I. A. Campbell, A. Fert, and O. Jaoul derived an expression for the AMR ratio of Ni-based alloys using the two-current model with s-s and s-d scattering processes, where 's' denotes a conduction electron and 'd' the 3d states with spin-orbit interaction. The AMR ratio is expressed as

$$\frac{\Delta\rho}{\rho} = \gamma(\alpha - 1),$$

with $\gamma = (3/4)(A/H)^2$ and $\alpha = \rho_\downarrow / \rho_\uparrow$, where $A$, $H$, and $\rho_\sigma$ are the spin-orbit coupling constant, the exchange field, and the resistivity for spin $\sigma$, respectively. More recently, Satoshi Kokado et al. have obtained a general expression of the AMR ratio for 3d transition-metal ferromagnets by extending this theory. The general expression can also be applied to half-metals.
See also
Giant magnetoresistance
Tunnel magnetoresistance
Colossal magnetoresistance
Extraordinary magnetoresistance
Magnetoresistive random-access memory
Footnotes
References
1856 introductions
1856 in science
Magnetic ordering
Spintronics
Articles containing video clips | Magnetoresistance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,585 | [
"Magnetoresistance",
"Physical quantities",
"Spintronics",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
44,758 | https://en.wikipedia.org/wiki/Goldbach%27s%20conjecture | Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics. It states that every even natural number greater than 2 is the sum of two prime numbers.
The conjecture has been shown to hold for all integers less than 4×10^18 but remains unproven despite considerable effort.
History
Origins
On 7 June 1742, the Prussian mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are units.
Goldbach was following the now-abandoned convention of considering 1 to be a prime number, so that a sum of units would be a sum of primes.
He then proposed a second conjecture in the margin of his letter, which implies the first: every integer greater than 2 can be written as the sum of three primes.
Euler replied in a letter dated 30 June 1742 and reminded Goldbach of an earlier conversation they had had, in which Goldbach had remarked that the first of those two conjectures would follow from the statement that every positive even integer can be written as the sum of two primes.
This is in fact equivalent to his second, marginal conjecture.
In the letter dated 30 June 1742, Euler stated that he regarded the statement that every even integer is a sum of two primes as an entirely certain theorem, although he could not prove it.
Similar conjecture by Descartes
René Descartes wrote that "Every even number can be expressed as the sum of at most three primes." The proposition is equivalent to Goldbach's conjecture, and Paul Erdős said that "Descartes actually discovered this before Goldbach... but it is better that the conjecture was named for Goldbach because, mathematically speaking, Descartes was infinitely rich and Goldbach was very poor."
Partial results
The strong Goldbach conjecture is much more difficult than the weak Goldbach conjecture, which says that every odd integer greater than 5 is the sum of three primes. Using Vinogradov's method, Nikolai Chudakov, Johannes van der Corput, and Theodor Estermann showed (1937–1938) that almost all even numbers can be written as the sum of two primes (in the sense that the fraction of even numbers up to some N which can be so written tends towards 1 as N increases). In 1930, Lev Schnirelmann proved that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant; see Schnirelmann density. Schnirelmann's constant is the lowest number C with this property. Schnirelmann himself obtained C < 800,000. This result was subsequently enhanced by many authors, such as Olivier Ramaré, who in 1995 showed that every even number is in fact the sum of at most 6 primes. The best known result currently stems from the proof of the weak Goldbach conjecture by Harald Helfgott, which directly implies that every even number is the sum of at most 4 primes.
In 1924, Hardy and Littlewood showed under the assumption of the generalized Riemann hypothesis that the number of even numbers up to X violating the Goldbach conjecture is much less than X^(1/2+c) for small c.
In 1948, using sieve theory methods, Alfréd Rényi showed that every sufficiently large even number can be written as the sum of a prime and an almost prime with at most K factors, for some fixed K. Chen Jingrun showed in 1973 using sieve theory that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). See Chen's theorem for further information.
In 1975, Hugh Lowell Montgomery and Bob Vaughan showed that "most" even numbers are expressible as the sum of two primes. More precisely, they showed that there exist positive constants c and C such that for all sufficiently large numbers X, every even number less than X is the sum of two primes, with at most CX^(1−c) exceptions. In particular, the set of even integers that are not the sum of two primes has density zero.
In 1951, Yuri Linnik proved the existence of a constant K such that every sufficiently large even number is the sum of two primes and at most K powers of 2. János Pintz and Imre Ruzsa found in 2020 that K = 8 works. Assuming the generalized Riemann hypothesis, K = 7 also works, as shown by Roger Heath-Brown and Jan-Christoph Schlage-Puchta in 2002.
A proof for the weak conjecture was submitted in 2013 by Harald Helfgott to the Annals of Mathematics Studies series. Although the article was accepted, Helfgott decided to undertake the major modifications suggested by the referee. Despite several revisions, Helfgott's proof has not yet appeared in a peer-reviewed publication. The weak conjecture is implied by the strong conjecture: if n − 3 is a sum of two primes, then n is a sum of three primes. However, the converse implication, and thus the strong Goldbach conjecture, would remain unproven even if Helfgott's proof is correct.
Computational results
For small values of n, the strong Goldbach conjecture (and hence the weak Goldbach conjecture) can be verified directly. For instance, in 1938, Nils Pipping laboriously verified the conjecture up to n = 100,000. With the advent of computers, many more values of n have been checked; T. Oliveira e Silva ran a distributed computer search that has verified the conjecture for n up to 4×10^18 (and double-checked up to 4×10^17) as of 2013. One record from this search is the discovery of the smallest number that cannot be written as a sum of two primes where one of them is smaller than 9781.
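Such direct verification amounts to a simple search for a prime pair. The following is a minimal sketch (using sympy's primality test, an assumed dependency), not the optimized sieving used in the distributed searches:

```python
from sympy import isprime

def goldbach_pair(n):
    """Return primes (p, n - p) witnessing the strong conjecture for an
    even n > 2, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if isprime(p) and isprime(n - p):
            return p, n - p
    return None

# Direct verification for all even numbers up to a small bound.
assert all(goldbach_pair(n) for n in range(4, 10_000, 2))
print(goldbach_pair(100))  # (3, 97)
```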
In popular culture
Goldbach's Conjecture is the title of the biography of Chinese mathematician and number theorist Chen Jingrun, written by Xu Chi.
The conjecture is a central point in the plot of the 1992 novel Uncle Petros and Goldbach's Conjecture by Greek author Apostolos Doxiadis, in the short story "Sixty Million Trillion Combinations" by Isaac Asimov and also in the 2008 mystery novel No One You Know by Michelle Richmond.
Goldbach's conjecture is part of the plot of the 2007 Spanish film Fermat's Room.
Goldbach's conjecture is featured as the main topic of research of actress Ella Rumpf's character Marguerite in the 2023 French-Swiss film Marguerite's Theorem.
Formal statement
Each of the three conjectures has a natural analog in terms of the modern definition of a prime, under which 1 is excluded. A modern version of the first conjecture is: every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until either all terms are two (if the integer is even) or one term is three and all other terms are two (if the integer is odd).

A modern version of the marginal conjecture is: every integer greater than 5 can be written as the sum of three primes.

And a modern version of Goldbach's older conjecture of which Euler reminded him is: every even integer greater than 2 can be written as the sum of two primes.
These modern versions might not be entirely equivalent to the corresponding original statements. For example, if there were an even integer n = p + 1 larger than 4, for p a prime, that could not be expressed as the sum of two primes in the modern sense, then it would be a counterexample to the modern version of the third conjecture (without being a counterexample to the original version). The modern version is thus probably stronger (but in order to confirm that, one would have to prove that the first version, freely applied to any positive even integer n, could not possibly rule out the existence of such a specific counterexample). In any case, the modern statements have the same relationships with each other as the older statements did. That is, the second and third modern statements are equivalent, and either implies the first modern statement.
The third modern statement (equivalent to the second) is the form in which the conjecture is usually expressed today. It is also known as the "strong", "even", or "binary" Goldbach conjecture. A weaker form of the second modern statement, known as "Goldbach's weak conjecture", the "odd Goldbach conjecture", or the "ternary Goldbach conjecture", asserts that every odd number greater than 5 can be expressed as the sum of three primes.
Heuristic justification
Statistical considerations that focus on the probabilistic distribution of prime numbers present informal evidence in favour of the conjecture (in both the weak and strong forms) for sufficiently large integers: the greater the integer, the more ways there are available for that number to be represented as the sum of two or three other numbers, and the more "likely" it becomes that at least one of these representations consists entirely of primes.
A very crude version of the heuristic probabilistic argument (for the strong form of the Goldbach conjecture) is as follows. The prime number theorem asserts that an integer m selected at random has roughly a 1/ln m chance of being prime. Thus if n is a large even integer and m is a number between 3 and n/2, then one might expect the probability of m and n − m simultaneously being prime to be 1/(ln m · ln(n − m)). If one pursues this heuristic, one might expect the total number of ways to write a large even integer as the sum of two odd primes to be roughly

$$\sum_{m=3}^{n/2} \frac{1}{\ln m}\,\frac{1}{\ln(n-m)} \approx \frac{n}{2\ln^2 n}.$$
Since this quantity goes to infinity as n increases, one would expect that every large even integer has not just one representation as the sum of two primes, but in fact very many such representations.
This heuristic argument is actually somewhat inaccurate because it assumes that the events of m and n − m being prime are statistically independent of each other. For instance, if m is odd, then n − m is also odd, and if m is even, then n − m is even, a non-trivial relation because, besides the number 2, only odd numbers can be prime. Similarly, if n is divisible by 3, and m was already a prime other than 3, then n − m would also be coprime to 3 and thus be slightly more likely to be prime than a general number. Pursuing this type of analysis more carefully, G. H. Hardy and John Edensor Littlewood in 1923 conjectured (as part of their Hardy–Littlewood prime tuple conjecture) that for any fixed c ≥ 2, the number of representations of a large integer n as the sum of c primes n = p_1 + ··· + p_c with p_1 ≤ ··· ≤ p_c should be asymptotically equal to

$$\left(\prod_p \frac{p\,\gamma_{c,p}(n)}{(p-1)^c}\right) \int_{2 \le x_1 \le \cdots \le x_c,\ x_1 + \cdots + x_c = n} \frac{dx_1 \cdots dx_{c-1}}{\ln x_1 \cdots \ln x_c},$$

where the product is over all primes p, and $\gamma_{c,p}(n)$ is the number of solutions to the equation $n = q_1 + \cdots + q_c \bmod p$ in modular arithmetic, subject to the constraints $q_1, \ldots, q_c \not\equiv 0 \bmod p$. This formula has been rigorously proven to be asymptotically valid for c ≥ 3 from the work of Ivan Matveevich Vinogradov, but is still only a conjecture when c = 2. In the latter case, the above formula simplifies to 0 when n is odd, and to

$$2 \Pi_2 \left(\prod_{p \mid n,\ p \ge 3} \frac{p-1}{p-2}\right) \int_2^n \frac{dx}{\ln^2 x} \;\approx\; 2 \Pi_2 \left(\prod_{p \mid n,\ p \ge 3} \frac{p-1}{p-2}\right) \frac{n}{\ln^2 n}$$

when n is even, where $\Pi_2$ is Hardy–Littlewood's twin prime constant

$$\Pi_2 = \prod_{\substack{p\ \mathrm{prime} \\ p \ge 3}} \left(1 - \frac{1}{(p-1)^2}\right) \approx 0.66016\ldots$$
This is sometimes known as the extended Goldbach conjecture. The strong Goldbach conjecture is in fact very similar to the twin prime conjecture, and the two conjectures are believed to be of roughly comparable difficulty.
The Goldbach partition function is the function that associates to each even integer the number of ways it can be decomposed into a sum of two primes. Its graph looks like a comet and is therefore called Goldbach's comet.
Goldbach's comet suggests tight upper and lower bounds on the number of representations of an even number as the sum of two primes, and also that the number of these representations depend strongly on the value modulo 3 of the number.
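The partition-counting function and the crude heuristic can be compared directly; a minimal sketch (again assuming sympy for primality testing):

```python
import math
from sympy import isprime

def goldbach_partitions(n):
    """Number of ways to write even n as p + q with primes p <= q."""
    return sum(1 for p in range(2, n // 2 + 1)
               if isprime(p) and isprime(n - p))

# Compare the actual count with the crude heuristic n / (2 ln^2 n);
# the heuristic undercounts, since the Hardy-Littlewood correction
# factor for each even n is greater than 1.
for n in (1_000, 10_000, 100_000):
    print(n, goldbach_partitions(n), round(n / (2 * math.log(n) ** 2)))
```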
Related problems
Although Goldbach's conjecture implies that every positive integer greater than one can be written as a sum of at most three primes, it is not always possible to find such a sum using a greedy algorithm that uses the largest possible prime at each step. The Pillai sequence tracks the numbers requiring the largest number of primes in their greedy representations.
Similar problems to Goldbach's conjecture exist in which primes are replaced by other particular sets of numbers, such as the squares:
It was proven by Lagrange that every positive integer is the sum of four squares. See Waring's problem and the related Waring–Goldbach problem on sums of powers of primes.
Hardy and Littlewood listed as their Conjecture I: "Every large odd number (n > 5) is the sum of a prime and the double of a prime". This conjecture is known as Lemoine's conjecture and is also called Levy's conjecture.
The Goldbach conjecture for practical numbers, a prime-like sequence of integers, was stated by Margenstern in 1984, and proved by Melfi in 1996: every even number is a sum of two practical numbers.
Harvey Dubner proposed a strengthening of the Goldbach conjecture that states that every even integer greater than 4208 is the sum of two twin primes (not necessarily belonging to the same pair). Only 34 even integers less than 4208 are not the sum of two twin primes; Dubner has verified computationally that this list is complete up to the bound of his search. A proof of this stronger conjecture would not only imply Goldbach's conjecture, but also the twin prime conjecture.
According to Bertrand's postulate, for every integer n > 1, there is always at least one prime p such that n < p < 2n. If the postulate were false, there would exist some integer n for which no prime numbers lie between n and 2n, making it impossible to express 2n as a sum of two primes.
Goldbach's conjecture is used when studying computation complexity. The connection is made through the Busy Beaver function, where BB(n) is the maximum number of steps taken by any n-state Turing machine that halts. There is a 27-state Turing machine that halts if and only if Goldbach's conjecture is false. Hence if BB(27) were known, and the Turing machine did not stop in that number of steps, it would be known to run forever, and hence no counterexamples would exist (which would prove the conjecture true). This is a completely impractical way to settle the conjecture; instead it is used to suggest that BB(27) will be very hard to compute, at least as difficult as settling the Goldbach conjecture.
References
Further reading
Terence Tao proved that all odd numbers greater than 1 are the sum of at most five primes.
Goldbach Conjecture at MathWorld.
External links
Goldbach's original letter to Euler — PDF format (in German and Latin)
Goldbach's conjecture, part of Chris Caldwell's Prime Pages.
Goldbach conjecture verification, Tomás Oliveira e Silva's distributed computer search.
Additive number theory
Analytic number theory
Conjectures about prime numbers
Unsolved problems in number theory
Hilbert's problems | Goldbach's conjecture | [
"Mathematics"
] | 2,849 | [
"Analytic number theory",
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Hilbert's problems",
"Mathematical problems",
"Number theory"
] |
44,775 | https://en.wikipedia.org/wiki/Complete%20measure | In mathematics, a complete measure (or, more precisely, a complete measure space) is a measure space in which every subset of every null set is measurable (having measure zero). More formally, a measure space (X, Σ, μ) is complete if and only if

$$S \subseteq N \in \Sigma \ \text{ and } \ \mu(N) = 0 \implies S \in \Sigma.$$
Motivation
The need to consider questions of completeness can be illustrated by considering the problem of product spaces.
Suppose that we have already constructed Lebesgue measure on the real line: denote this measure space by $(\mathbb{R}, B, \lambda)$. We now wish to construct some two-dimensional Lebesgue measure $\lambda^2$ on the plane $\mathbb{R}^2$ as a product measure. Naively, we would take the $\sigma$-algebra on $\mathbb{R}^2$ to be $B \otimes B$, the smallest $\sigma$-algebra containing all measurable "rectangles" $A_1 \times A_2$ for $A_1, A_2 \in B$.

While this approach does define a measure space, it has a flaw. Since every singleton set has one-dimensional Lebesgue measure zero,

$$\lambda^2(\{0\} \times A) = 0$$

for "any" subset $A$ of $\mathbb{R}$. However, suppose that $A$ is a non-measurable subset of the real line, such as the Vitali set. Then the $\lambda^2$-measure of $\{0\} \times A$ is not defined but

$$\{0\} \times A \subseteq \{0\} \times \mathbb{R},$$

and this larger set does have $\lambda^2$-measure zero. So this "two-dimensional Lebesgue measure" as just defined is not complete, and some kind of completion procedure is required.
Construction of a complete measure
Given a (possibly incomplete) measure space (X, Σ, μ), there is an extension (X, Σ0, μ0) of this measure space that is complete. The smallest such extension (i.e. the smallest σ-algebra Σ0) is called the completion of the measure space.
The completion can be constructed as follows:
let Z be the set of all the subsets of the zero-μ-measure subsets of X (intuitively, those elements of Z that are not already in Σ are the ones preventing completeness from holding true);
let Σ0 be the σ-algebra generated by Σ and Z (i.e. the smallest σ-algebra that contains every element of Σ and of Z);
μ has an extension μ0 to Σ0 (which is unique if μ is σ-finite), called the outer measure of μ, given by the infimum

$$\mu_0(C) := \inf\{\mu(D) : C \subseteq D \in \Sigma\}.$$
Then (X, Σ0, μ0) is a complete measure space, and is the completion of (X, Σ, μ).
In the above construction it can be shown that every member of Σ0 is of the form A ∪ B for some A ∈ Σ and some B ∈ Z, and

$$\mu_0(A \cup B) = \mu(A).$$
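As a toy illustration of the construction (an assumed example, not from the source), take the smallest non-complete space and complete it:

```latex
% X = \{a, b\}, \Sigma = \{\varnothing, X\}, \mu(\varnothing) = \mu(X) = 0.
% X itself is a null set, yet its subset \{a\} \notin \Sigma, so the
% space is not complete. The completion adjoins all subsets of null sets:
Z = \mathcal{P}(X), \qquad
\Sigma_0 = \sigma(\Sigma \cup Z) = \mathcal{P}(X), \qquad
\mu_0(C) = \inf\{\mu(D) : C \subseteq D \in \Sigma\} = 0
\quad \text{for every } C \subseteq X .
```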
Examples
Borel measure as defined on the Borel σ-algebra generated by the open intervals of the real line is not complete, and so the above completion procedure must be used to define the complete Lebesgue measure. This is illustrated by the fact that the set of all Borel sets over the reals has the same cardinality as the reals, while the Cantor set is a Borel set that has measure zero and whose power set has cardinality strictly greater than that of the reals. Thus there is a subset of the Cantor set that is not contained in the Borel sets; being a subset of a set of measure zero, it would have to be measurable if the measure were complete. Hence, the Borel measure is not complete.
n-dimensional Lebesgue measure is the completion of the n-fold product of the one-dimensional Lebesgue space with itself. It is also the completion of the Borel measure, as in the one-dimensional case.
Properties
Maharam's theorem states that every complete measure space is decomposable into measures on continua, and a finite or countable counting measure.
See also
References
Measures (measure theory) | Complete measure | [
"Physics",
"Mathematics"
] | 717 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
44,777 | https://en.wikipedia.org/wiki/Complete%20lattice | In mathematics, a complete lattice is a partially ordered set in which all subsets have both a supremum (join) and an infimum (meet). A conditionally complete lattice satisfies at least one of these properties for bounded subsets. For comparison, in a general lattice, only pairs of elements need to have a supremum and an infimum. Every non-empty finite lattice is complete, but infinite lattices may be incomplete.
Complete lattices appear in many applications in mathematics and computer science. Both order theory and universal algebra study them as a special class of lattices.
Complete lattices must not be confused with complete partial orders (CPOs), a more general class of partially ordered sets. More specific complete lattices are complete Boolean algebras and complete Heyting algebras (locales).
Formal definition
A complete lattice is a partially ordered set (L, ≤) such that every subset A of L has both a greatest lower bound (the infimum, or meet) and a least upper bound (the supremum, or join) in (L, ≤).
The meet is denoted by $\bigwedge A$, and the join by $\bigvee A$.
In the special case where A is the empty set, the meet of A is the greatest element of L. Likewise, the join of the empty set is the least element of L. Then, complete lattices form a special class of bounded lattices.
Complete sublattices
A sublattice M of a complete lattice L is called a complete sublattice of L if for every subset A of M the elements $\bigwedge A$ and $\bigvee A$, as defined in L, are actually in M.
If the above requirement is lessened to require only non-empty meet and joins to be in M, the sublattice M is called a closed sublattice of L.
Complete semilattices
The terms complete meet-semilattice or complete join-semilattice is another way to refer to complete lattices since arbitrary meets can be expressed in terms of arbitrary joins and vice versa (for details, see completeness).
Another usage of "complete meet-semilattice" refers to a meet-semilattice that is bounded complete and a complete partial order. This concept is arguably the "most complete" notion of a meet-semilattice that is not yet a lattice (in fact, only the top element may be missing).
See semilattices for further discussion between both definitions.
Conditionally Complete Lattices
A lattice is said to be "conditionally complete" if it satisfies either or both of the following properties:
Any subset bounded above has the least upper bound.
Any subset bounded below has the greatest lower bound.
Examples
Any non-empty finite lattice is trivially complete.
The power set of a given set when ordered by inclusion. The supremum is given by the union and the infimum by the intersection of subsets.
The non-negative integers ordered by divisibility. The least element of this lattice is the number 1, since it divides any other number. Perhaps surprisingly, the greatest element is 0, because it can be divided by any other number. The supremum of finite sets is given by the least common multiple and the infimum by the greatest common divisor (a short computational sketch of these operations appears after this list). For infinite sets, the supremum will always be 0 while the infimum can well be greater than 1. For example, the set of all even numbers has 2 as the greatest common divisor. If 0 is removed from this structure it remains a lattice but ceases to be complete.
The subgroups of any given group under inclusion. (While the infimum here is the usual set-theoretic intersection, the supremum of a set of subgroups is the subgroup generated by the set-theoretic union of the subgroups, not the set-theoretic union itself.) If e is the identity of G, then the trivial group {e} is the minimum subgroup of G, while the maximum subgroup is the group G itself.
The ideals of a ring, when ordered by inclusion. The supremum is given by the sum of ideals and the infimum by the intersection.
The open sets of a topological space, when ordered by inclusion. The supremum is given by the union of open sets and the infimum by the interior of the intersection.
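As flagged in the divisibility example above, the lattice operations there are just gcd and lcm; a minimal sketch:

```python
import math

def join(a, b):
    """Join (least upper bound) under divisibility: the lcm.
    0 acts as the top element, since every number divides 0."""
    return 0 if a == 0 or b == 0 else a * b // math.gcd(a, b)

def meet(a, b):
    """Meet (greatest lower bound) under divisibility: the gcd."""
    return math.gcd(a, b)

print(join(4, 6), meet(4, 6))  # 12 2
print(join(5, 0), meet(5, 0))  # 0 5  (0 is the top element)
```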
Non-examples
The empty set is not a complete lattice. If it were a complete lattice, then in particular the empty set would have an infimum and supremum in the empty set, a contradiction.
The rational numbers $\mathbb{Q}$ with the usual order ≤ do not form a complete lattice. They are a lattice, with the join of two elements given by their maximum and the meet by their minimum. However, $\mathbb{Q}$ itself has no infimum or supremum, nor does a bounded subset such as $\{q \in \mathbb{Q} : q^2 < 2\}$, whose supremum in $\mathbb{R}$ would be the irrational number $\sqrt{2}$.
Locally finite complete lattices
A complete lattice L is said to be locally finite if the supremum of any infinite subset is equal to the supremal element. Denoting this supremal element "1", the condition is equivalently that the set $\{ y \in L : y \le x \}$ is finite for any $x \ne 1$. This notation may clash with other notation, as in the case of the lattice (N, |), i.e., the non-negative integers ordered by divisibility. In this locally finite lattice, the infimal element denoted "0" for the lattice theory is the number 1 in the set N and the supremal element denoted "1" for the lattice theory is the number 0 in the set N.
Morphisms of complete lattices
The traditional morphisms between complete lattices, taking the complete lattices as the objects of a category, are the complete homomorphisms (or complete lattice homomorphisms). These are characterized as functions that preserve all joins and all meets. Explicitly, this means that a function f: L→M between two complete lattices L and M is a complete homomorphism if

$$f\left(\bigwedge A\right) = \bigwedge \{ f(a) \mid a \in A \}$$

and

$$f\left(\bigvee A\right) = \bigvee \{ f(a) \mid a \in A \},$$

for all subsets A of L. Such functions are automatically monotonic, but the condition of being a complete homomorphism is in fact much more specific. For this reason, it can be useful to consider weaker notions of morphisms, such as those that are only required to preserve all joins (giving a category Sup) or all meets (giving a category Inf), which are indeed inequivalent conditions. These notions may also be considered as homomorphisms of complete meet-semilattices or complete join-semilattices, respectively.
Galois connections and adjoints
Furthermore, morphisms that preserve all joins are equivalently characterized as the lower adjoint part of a unique Galois connection. For any pair of preorders X and Y, a Galois connection is given by a pair of monotone functions f: X → Y and g: Y → X such that for each pair of elements x of X and y of Y

$$f(x) \le y \iff x \le g(y),$$
where f is called the lower adjoint and g is called the upper adjoint. By the adjoint functor theorem, a monotone map between any pair of preorders preserves all joins if and only if it is a lower adjoint and preserves all meets if and only if it is an upper adjoint.
As such, each join-preserving morphism determines a unique upper adjoint in the inverse direction that preserves all meets. Hence, considering complete lattices with complete semilattice morphisms (of either type, join-preserving or meet-preserving) boils down to considering Galois connections as one's lattice morphisms. This also yields the insight that three classes of morphisms discussed above basically describe just two different categories of complete lattices: one with complete homomorphisms and one with Galois connections that captures both the meet-preserving functions (upper adjoints) and their dual join-preserving mappings (lower adjoints).
A particularly important class of special cases arises between lattices of subsets of X and Y, i.e., the power sets $\mathcal{P}(X)$ and $\mathcal{P}(Y)$, given a function $f$ from X to Y. In these cases, the direct image and inverse image maps induced by $f$ between the power sets are lower and upper adjoints to each other, respectively.
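For the power-set case, the adjunction condition can be checked concretely; a minimal sketch with an assumed function and domain:

```python
def direct_image(f, A):
    """Lower adjoint between power sets: f[A] = {f(a) : a in A}."""
    return {f(a) for a in A}

def inverse_image(f, B, domain):
    """Upper adjoint: the preimage {x in domain : f(x) in B}."""
    return {x for x in domain if f(x) in B}

# The Galois-connection condition: f[A] <= B  iff  A <= f^{-1}[B].
domain = set(range(10))
f = lambda x: x % 3
A, B = {1, 4, 7}, {0, 1}
assert (direct_image(f, A) <= B) == (A <= inverse_image(f, B, domain))
```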
Free construction and completion
Free "complete semilattices"
The construction of free objects depends on the chosen class of morphisms. For functions that preserve all joins (i.e. lower adjoints of Galois connections), the resulting free objects are called free complete join-semilattices.
The standard definition from universal algebra states that a free complete lattice over a generating set S is a complete lattice L together with a function ι: S → L, such that any function f from S to the underlying set of some complete lattice M can be factored uniquely through a morphism f° from L to M. This means that f(s) = f°(ι(s)) for every element s of S, and that f° is the only morphism with this property. Hence, there is a functor from the category of sets and functions to the category of complete lattices and join-preserving functions which is left adjoint to the forgetful functor from complete lattices to their underlying sets.

Free complete lattices in this sense can thus be constructed such that the complete lattice generated by some set S is just the powerset 2^S, the set of all subsets of S ordered by subset inclusion. The required unit ι maps any element s of S to the singleton set {s}. Given a mapping f as above, the function f°: 2^S → M is defined by

$$f^\circ(X) = \bigvee \{ f(s) \mid s \in X \}.$$

Then f° transforms unions into suprema and thus preserves joins.
These considerations also yield a free construction for morphisms that preserve meets instead of joins (i.e. upper adjoints of Galois connections). The above can be dualized: free objects are given as powersets ordered by reverse inclusion, such that set union provides the meet operation, and the function f° is defined in terms of meets instead of joins. The result of this construction is known as a free complete meet-semilattice. It can be noted that these free constructions extend those that are used to obtain free semilattices, where only finite sets need to be considered.
Free complete lattices
The situation for complete lattices with complete homomorphisms is more intricate. In fact, free complete lattices generally do not exist. Of course, one can formulate a word problem similar to the one for the case of lattices, but the collection of all possible words (or "terms") in this case would be a proper class, because arbitrary meets and joins comprise operations for argument sets of every cardinality.
This property in itself is not a problem: as the case of free complete semilattices above shows, it can well be that the solution of the word problem leaves only a set of equivalence classes. In other words, it is possible that the proper classes of all terms have the same meaning and are thus identified in the free construction. However, the equivalence classes for the word problem of complete lattices are "too small," such that the free complete lattice would still be a proper class, which is not allowed.
Now, one might still hope that there are some useful cases where the set of generators is sufficiently small for a free, complete lattice to exist. Unfortunately, the size limit is very low, and we have the following theorem:
The free complete lattice on three generators does not exist; it is a proper class.
A proof of this statement is given by Johnstone. The original argument is attributed to Alfred W. Hales; see also the article on free lattices.
Completion
If a complete lattice is freely generated from a given poset used in place of the set of generators considered above, then one speaks of a completion of the poset. The definition of the result of this operation is similar to the above definition of free objects, where "sets" and "functions" are replaced by "posets" and "monotone mappings". Likewise, one can describe the completion process as a functor from the category of posets with monotone functions to some category of complete lattices with appropriate morphisms that are left adjoint to the forgetful functor in the converse direction.
As long as one considers meet- or join-preserving functions as morphisms, this can easily be achieved through the so-called Dedekind–MacNeille completion. For this process, elements of the poset are mapped to (Dedekind-) cuts, which can then be mapped to the underlying posets of arbitrary complete lattices in much the same way as done for sets and free complete (semi-) lattices above.
The aforementioned result that free complete lattices do not exist entails that an according free construction from a poset is not possible either. This is easily seen by considering posets with a discrete order, where every element only relates to itself. These are exactly the free posets on an underlying set. If there were a free construction of complete lattices from posets, then both constructions could be composed, which contradicts the negative result above.
Representation
G. Birkhoff's book Lattice Theory contains a very useful representation method. It associates a complete lattice to any binary relation between two sets by constructing a Galois connection from the relation, which then leads to two dually isomorphic closure systems. Closure systems are intersection-closed families of sets. When ordered by the subset relation ⊆, they are complete lattices.
A special instance of Birkhoff's construction starts from an arbitrary poset (P,≤) and constructs the Galois connection from the order relation ≤ between P and itself. The resulting complete lattice is the Dedekind-MacNeille completion. When this completion is applied to a poset that already is a complete lattice, then the result is isomorphic to the original one. Thus, we immediately find that every complete lattice is represented by Birkhoff's method, up to isomorphism.
The construction is utilized in formal concept analysis, where one represents real-world data by binary relations (called formal contexts) and uses the associated complete lattices (called concept lattices) for data analysis. The mathematics behind formal concept analysis therefore is the theory of complete lattices.
Another representation is obtained as follows: A subset of a complete lattice is itself a complete lattice (when ordered with the induced order) if and only if it is the image of an increasing and idempotent (but not necessarily extensive) self-map.
The identity mapping has these two properties. Thus all complete lattices occur.
Further results
Besides the previous representation results, there are some other statements that can be made about complete lattices, or that take a particularly simple form in this case. An example is the Knaster–Tarski theorem, which states that the set of fixed points of a monotone function on a complete lattice is again a complete lattice. This is easily seen to be a generalization of the above observation about the images of increasing and idempotent functions.
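On a finite lattice, the least fixed point promised by the Knaster–Tarski theorem can be reached by simple iteration from the bottom element; a minimal sketch with an assumed monotone map on a small power-set lattice:

```python
def least_fixed_point(f, bottom):
    """Kleene iteration from the bottom element. On a finite complete
    lattice with monotone f this terminates in the least fixed point,
    whose existence the Knaster-Tarski theorem guarantees."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Assumed monotone map on the power set of {0, 1, 2, 3} under inclusion.
f = lambda S: S | {0} | {s + 1 for s in S if s < 3}
print(sorted(least_fixed_point(f, set())))  # [0, 1, 2, 3]
```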
References
Closure operators
Lattice theory | Complete lattice | [
"Mathematics"
] | 3,049 | [
"Fields of abstract algebra",
"Order theory",
"Lattice theory",
"Closure operators"
] |
44,785 | https://en.wikipedia.org/wiki/Dream | A dream is a succession of images, ideas, emotions, and sensations that usually occur involuntarily in the mind during certain stages of sleep. Humans spend about two hours dreaming per night, and each dream lasts around 5–20 minutes, although the dreamer may perceive the dream as being much longer than this.
The content and function of dreams have been topics of scientific, philosophical and religious interest throughout recorded history. Dream interpretation, practiced by the Babylonians in the third millennium BCE and even earlier by the ancient Sumerians, figures prominently in religious texts in several traditions, and has played a lead role in psychotherapy. The scientific study of dreams is called oneirology. Most modern dream study focuses on the neurophysiology of dreams and on proposing and testing hypotheses regarding dream function. It is not known where in the brain dreams originate, if there is a single origin for dreams or if multiple regions of the brain are involved, or what the purpose of dreaming is for the body or mind.
The human dream experience and what to make of it has undergone sizable shifts over the course of history. Long ago, according to writings from Mesopotamia and Ancient Egypt, dreams dictated post-dream behaviors to an extent that was sharply reduced in later millennia. These ancient writings about dreams highlight visitation dreams, where a dream figure, usually a deity or a prominent forebear, commands the dreamer to take specific actions, and which may predict future events. Framing the dream experience varies across cultures as well as through time.
Dreaming and sleep are intertwined. Dreams occur mainly in the rapid-eye movement (REM) stage of sleep—when brain activity is high and resembles that of being awake. Because REM sleep is detectable in many species, and because research suggests that all mammals experience REM, linking dreams to REM sleep has led to conjectures that animals dream. However, humans dream during non-REM sleep, also, and not all REM awakenings elicit dream reports. To be studied, a dream must first be reduced to a verbal report, which is an account of the subject's memory of the dream, not the subject's dream experience itself. So, dreaming by non-humans is currently unprovable, as is dreaming by human fetuses and pre-verbal infants.
Subjective experience and content
Preserved writings from early Mediterranean civilizations indicate a relatively abrupt change in subjective dream experience between Bronze Age antiquity and the beginnings of the classical era.
In visitation dreams reported in ancient writings, dreamers were largely passive in their dreams, and visual content served primarily to frame authoritative auditory messaging. Gudea, the king of the Sumerian city-state of Lagash (reigned 2144–2124 BCE), rebuilt the temple of Ningirsu as the result of a dream in which he was told to do so. After antiquity, the passive hearing of visitation dreams essentially gave way to visualized narratives in which the dreamer becomes a character who actively participates.
From the 1940s to 1985, Calvin S. Hall collected more than 50,000 dream reports at Western Reserve University. In 1966, Hall and Robert Van de Castle published The Content Analysis of Dreams, outlining a coding system to study 1,000 dream reports from college students. Results indicated that participants from varying parts of the world demonstrated similarity in their dream content. The only residue of antiquity's authoritative dream figure in the Hall and Van de Castle listing of dream characters is the inclusion of God in the category of prominent persons. Hall's complete dream reports were made publicly available in the mid-1990s by his protégé William Domhoff. More recent studies of dream reports, while providing more detail, continue to cite the Hall study favorably.
In the Hall study, the most common emotion experienced in dreams was anxiety. Other emotions included abandonment, anger, fear, joy, and happiness. Negative emotions were much more common than positive ones. The Hall data analysis showed that sexual dreams occur no more than 10% of the time and are more prevalent in young to mid-teens. Another study showed that 8% of both men's and women's dreams have sexual content. In some cases, sexual dreams may result in orgasms or nocturnal emissions. These are colloquially known as "wet dreams".
The visual nature of dreams is generally highly phantasmagoric; that is, different locations and objects continuously blend into each other. The visuals (including locations, people, and objects) are generally reflective of a person's memories and experiences, but conversation can take on highly exaggerated and bizarre forms. Some dreams may even tell elaborate stories wherein the dreamer enters entirely new, complex worlds and awakes with ideas, thoughts, and feelings never experienced before the dream.
People who are blind from birth do not have visual dreams. Their dream contents are related to other senses, such as hearing, touch, smell, and taste, whichever are present since birth.
Effects of regional or global catastrophes
The COVID-19 pandemic influenced the content of people's dreams, according to a scientific study of over 15,000 dream reports by Deirdre Barrett. This analysis revealed that themes involving fear, illness, and death were two to four times more prevalent in dreams following the onset of the pandemic than they were before.
Neurophysiology
Dream study is popular with scientists exploring the mind–brain problem. Some "propose to reduce aspects of dream phenomenology to neurobiology." But current science cannot specify dream physiology in detail. Protocols in most nations restrict human brain research to non-invasive procedures. In the United States, invasive brain procedures with a human subject are allowed only when these are deemed necessary in surgical treatment to address medical needs of the same human subject. Non-invasive measures of brain activity like electroencephalogram (EEG) voltage averaging or cerebral blood flow cannot identify small but influential neuronal populations. Also, fMRI signals are too slow to explain how brains compute in real time.
Scientists researching some brain functions can work around current restrictions by examining animal subjects. As stated by the Society for Neuroscience, "Because no adequate alternatives exist, much of this research must [sic] be done on animal subjects." However, since animal dreaming can be only inferred, not confirmed, animal studies yield no hard facts to illuminate the neurophysiology of dreams. Examining human subjects with brain lesions can provide clues, but the lesion method cannot discriminate between the effects of destruction and disconnection and cannot target specific neuronal groups in heterogeneous regions like the brain stem.
Generation
Denied precision tools and obliged to depend on imaging, much dream research has succumbed to the law of the instrument. Studies detect an increase of blood flow in a specific brain region and then credit that region with a role in generating dreams. But pooling study results has led to the newer conclusion that dreaming involves large numbers of regions and pathways, which likely are different for different dream events.
Image creation in the brain involves significant neural activity downstream from eye intake, and it is theorized that "the visual imagery of dreams is produced by activation during sleep of the same structures that generate complex visual imagery in waking perception."
Dreams present a running narrative rather than exclusively visual imagery. Following their work with split-brain subjects, Gazzaniga and LeDoux postulated, without attempting to specify the neural mechanisms, a "left-brain interpreter" that seeks to create a plausible narrative from whatever electro-chemical signals reach the brain's left hemisphere. Sleep research has determined that some brain regions fully active during waking are, during REM sleep, activated only in a partial or fragmentary way. Drawing on this knowledge, textbook author James W. Kalat explains, "[A] dream represents the brain's effort to make sense of sparse and distorted information.... The cortex combines this haphazard input with whatever other activity was already occurring and does its best to synthesize a story that makes sense of the information." Neuroscientist Indre Viskontas is even more blunt, calling often bizarre dream content "just the result of your interpreter trying to create a story out of random neural signaling."
Theories on function
For many humans across multiple eras and cultures, dreams are believed to have functioned as revealers of truths sourced during sleep from gods or other external entities. Ancient Egyptians believed that dreams were the best way to receive divine revelation, and thus they would induce (or "incubate") dreams. They went to sanctuaries and slept on special "dream beds" in hope of receiving advice, comfort, or healing from the gods. From a Darwinian perspective dreams would have to fulfill some kind of biological requirement, provide some benefit for natural selection to take place, or at least have no negative impact on fitness. Robert (1886), a physician from Hamburg, was the first to suggest that dreams are a need and that their function is to erase (a) sensory impressions that were not fully worked up, and (b) ideas that were not fully developed during the day. In dreams, incomplete material is either removed (suppressed) or deepened and included into memory. Freud, whose dream studies focused on interpreting dreams, not explaining how or why humans dream, disputed Robert's hypothesis and proposed that dreams preserve sleep by representing as fulfilled those wishes that otherwise would awaken the dreamer. Freud wrote that dreams "serve the purpose of prolonging sleep instead of waking up. Dreams are the GUARDIANS of sleep and not its disturbers."
A turning point in theorizing about dream function came in 1953, when Science published the Aserinsky and Kleitman paper establishing REM sleep as a distinct phase of sleep and linking dreams to REM sleep. Until and even after publication of the Solms 2000 paper that certified the separability of REM sleep and dream phenomena, many studies purporting to uncover the function of dreams have in fact been studying not dreams but measurable REM sleep.
Theories of dream function since the identification of REM sleep include:
Hobson's and McCarley's 1977 activation-synthesis hypothesis, which proposed "a functional role for dreaming sleep in promoting some aspect of the learning process...." In 2010 a Harvard study was published showing experimental evidence that dreams were correlated with improved learning.
Crick's and Mitchison's 1983 "reverse learning" theory, which states that dreams are like the cleaning-up operations of computers when they are offline, removing (suppressing) parasitic nodes and other "junk" from the mind during sleep.
Hartmann's 1995 proposal that dreams serve a "quasi-therapeutic" function, enabling the dreamer to process trauma in a safe place.
Revonsuo's 2000 threat simulation hypothesis, whose premise is that during much of human evolution, physical and interpersonal threats were serious, giving reproductive advantage to those who survived them. Dreaming aided survival by replicating these threats and providing the dreamer with practice in dealing with them. In 2015, Revonsuo proposed social simulation theory, which describes dreams as a simulation for training social skills and bonds.
Eagleman's and Vaughn's 2021 defensive activation theory, which says that, given the brain's neuroplasticity, dreams evolved as a visual hallucinatory activity during sleep's extended periods of darkness, busying the occipital lobe and thereby protecting it from possible appropriation by other, non-vision, sense operations.
Erik Hoel proposes, based on artificial neural networks, that dreams prevent overfitting to past experiences; that is, they enable the dreamer to learn from novel situations.
Religious and other cultural contexts
Dreams figure prominently in major world religions. The dream experience for early humans, according to one interpretation, gave rise to the notion of a human "soul", a central element in much religious thought. J. W. Dunne wrote: "But there can be no reasonable doubt that the idea of a soul must have first arisen in the mind of primitive man as a result of observation of his dreams. Ignorant as he was, he could have come to no other conclusion but that, in dreams, he left his sleeping body in one universe and went wandering off into another. It is considered that, but for that savage, the idea of such a thing as a 'soul' would never have even occurred to mankind...."
Hindu
In the Mandukya Upanishad, part of the Veda scriptures of Indian Hinduism, a dream is one of three states that the soul experiences during its lifetime, the other two states being the waking state and the sleep state. The earliest Upanishads, written before 300 BCE, emphasize two meanings of dreams. The first says that dreams are merely expressions of inner desires. The second is the belief of the soul leaving the body and being guided until awakened.
Abrahamic
In Judaism, dreams are considered part of the experience of the world that can be interpreted and from which lessons can be garnered. They are discussed in the Talmud, Tractate Berachot 55–60.
The ancient Hebrews connected their dreams heavily with their religion, though the Hebrews were monotheistic and believed that dreams were the voice of one God alone. Hebrews also differentiated between good dreams (from God) and bad dreams (from evil spirits). The Hebrews, like many other ancient cultures, incubated dreams in order to receive a divine revelation. For example, the Hebrew prophet Samuel would "lie down and sleep in the temple at Shiloh before the Ark and receive the word of the Lord", and Joseph interpreted a Pharaoh's dream of seven lean cows swallowing seven fat cows as meaning the subsequent seven years would be bountiful, followed by seven years of famine. Most of the dreams in the Bible are in the Book of Genesis.
Christians mostly shared the beliefs of the Hebrews and thought that dreams were of a supernatural character because the Old Testament includes frequent stories of dreams with divine inspiration. The most famous of these dream stories was Jacob's dream of a ladder that stretches from Earth to Heaven. Many Christians preach that God can speak to people through their dreams. The famous glossary, the Somniale Danielis, written in the name of Daniel, attempted to teach Christian populations to interpret their dreams.
Iain R. Edgar has researched the role of dreams in Islam. He has argued that dreams play an important role in the history of Islam and the lives of Muslims, since dream interpretation is the only way that Muslims can receive revelations from God since the death of the last prophet, Muhammad. According to Edgar, Islam classifies three types of dreams. Firstly, there is the true dream (al-ru’ya), then the false dream, which may come from the devil (shaytan), and finally, the meaningless everyday dream (hulm). This last dream could be brought forth by the dreamer's ego or base appetite based on what they experienced in the real world. The true dream is often indicated by Islam's hadith tradition. In one narration by Aisha, the wife of the Prophet, it is said that the Prophet's dreams would come true like the ocean's waves. Just as in its predecessors, the Quran also recounts the story of Joseph and his unique ability to interpret dreams.
In both Christianity and Islam, dreams feature in conversion stories. According to ancient authors, Constantine the Great started his conversion to Christianity because he had a dream which prophesied that he would win the battle of the Milvian Bridge if he adopted the Chi-Rho as his battle standard.
Buddhist
In Buddhism, ideas about dreams are similar to the classical and folk traditions in South Asia. The same dream is sometimes experienced by multiple people, as in the case of the Buddha-to-be before he leaves his home. It is described in the Mahāvastu that several of the Buddha's relatives had premonitory dreams preceding this. Some dreams are also seen to transcend time: the Buddha-to-be has certain dreams that are the same as those of previous Buddhas, the Lalitavistara states. In Buddhist literature, dreams often function as a "signpost" motif to mark certain stages in the life of the main character.
Buddhist views about dreams are expressed in the Pāli Commentaries and the Milinda Pañhā.
Other
In Chinese history, people wrote of two vital aspects of the soul, one of which is freed from the body during slumber to journey in a dream realm, while the other remains in the body. This belief and dream interpretation had been questioned since early times, such as by the philosopher Wang Chong.
The Babylonians and Assyrians divided dreams into "good," which were sent by the gods, and "bad," sent by demons. A surviving collection of dream omens entitled Iškar Zaqīqu records various dream scenarios as well as prognostications of what will happen to the person who experiences each dream, apparently based on previous cases. Some list different possible outcomes, based on occasions in which people experienced similar dreams with different results. The Greeks shared their beliefs with the Egyptians on how to interpret good and bad dreams, and the idea of incubating dreams. Morpheus, the Greek god of dreams, also sent warnings and prophecies to those who slept at shrines and temples. The earliest Greek beliefs about dreams were that their gods physically visited the dreamers, where they entered through a keyhole, exiting the same way after the divine message was given.
Antiphon wrote the first known Greek book on dreams in the 5th century BCE. In that century, other cultures influenced Greeks to develop the belief that souls left the sleeping body. The father of modern medicine, Hippocrates (c. 460–370 BCE), thought dreams could reveal illness and predict disease. For instance, a dream of a dim star high in the night sky indicated problems in the head region, while one low in the night sky indicated bowel issues. Galen (129–216 AD) believed the same thing. The Greek philosopher Plato (427–347 BCE) wrote that people harbor secret, repressed desires, such as incest, murder, adultery, and conquest, which build up during the day and run rampant during the night in dreams. Plato's student, Aristotle (384–322 BCE), believed dreams were caused by processing incomplete physiological activity during sleep, such as eyes trying to see while the sleeper's eyelids were closed. Marcus Tullius Cicero, for his part, believed that all dreams are produced by thoughts and conversations a dreamer had during the preceding days. Cicero's Somnium Scipionis described a lengthy dream vision, which in turn was commented on by Macrobius in his Commentarii in Somnium Scipionis.
Herodotus, in his The Histories, writes: "The visions that occur to us in dreams are, more often than not, the things we have been concerned about during the day."
The Dreaming is a common term within the animist creation narrative of indigenous Australians for a personal, or group, creation and for what may be understood as the "timeless time" of formative creation and perpetual creating.
Some Indigenous American tribes and Mexican populations believe that dreams are a way of visiting and having contact with their ancestors. Some Native American tribes have used vision quests as a rite of passage, fasting and praying until an anticipated guiding dream was received, to be shared with the rest of the tribe upon their return.
Interpretation
Beginning in the late 19th century, Austrian neurologist Sigmund Freud, founder of psychoanalysis, theorized that dreams reflect the dreamer's unconscious mind and specifically that dream content is shaped by unconscious wish fulfillment. He argued that important unconscious desires often relate to early childhood memories and experiences. Carl Jung and others expanded on Freud's idea that dream content reflects the dreamer's unconscious desires.
Dream interpretation can be a result of subjective ideas and experiences. One study found that most people believe that "their dreams reveal meaningful hidden truths". The researchers surveyed students in the United States, South Korea, and India, and found that 74% of Indians, 65% of South Koreans and 56% of Americans believed their dream content provided them with meaningful insight into their unconscious beliefs and desires. This Freudian view of dreaming was endorsed significantly more than theories that attribute dream content to memory consolidation, problem-solving, or a byproduct of unrelated brain activity. The same study found that people attribute more importance to dream content than to similar thought content that occurs while they are awake. Americans were more likely to report that they would intentionally miss their flight if they dreamt of their plane crashing than if they thought of their plane crashing the night before flying (while awake), and they were about as likely to miss their flight after dreaming of a crash the night before as after learning of an actual crash on their intended route. Participants in the study were more likely to perceive dreams to be meaningful when the content of dreams was in accordance with their beliefs and desires while awake. They were more likely to view a positive dream about a friend as meaningful than a positive dream about someone they disliked, for example, and were more likely to view a negative dream about a person they disliked as meaningful than a negative dream about a person they liked.
According to surveys, it is common for people to feel their dreams are predicting subsequent life events. Psychologists have explained these experiences in terms of memory biases, namely a selective memory for accurate predictions and distorted memory so that dreams are retrospectively fitted onto life experiences. The multi-faceted nature of dreams makes it easy to find connections between dream content and real events. The term "veridical dream" has been used to indicate dreams that reveal or contain truths not yet known to the dreamer, whether future events or secrets.
In one experiment, subjects were asked to write down their dreams in a diary. This prevented the selective memory effect, and the dreams no longer seemed accurate about the future. Another experiment gave subjects a fake diary of a student with apparently precognitive dreams. This diary described events from the person's life, as well as some predictive dreams and some non-predictive dreams. When subjects were asked to recall the dreams they had read, they remembered more of the successful predictions than unsuccessful ones.
Images and literature
Graphic artists, writers and filmmakers all have found dreams to offer a rich vein for creative expression. In the West, artists' depictions of dreams in Renaissance and Baroque art often were related to Biblical narrative. Especially preferred by visual artists were the Jacob's Ladder dream in Genesis and St. Joseph's dreams in the Gospel according to Matthew.
Many later graphic artists have depicted dreams, including Japanese woodblock artist Hokusai (1760–1849) and Western European painters Rousseau (1844–1910), Picasso (1881–1973), and Dalí (1904–1989).
In literature, dream frames were frequently used in medieval allegory to justify the narrative; The Book of the Duchess and The Vision Concerning Piers Plowman are two such dream visions. Even before them, in antiquity, the same device had been used by Cicero and Lucian of Samosata.
Dreams have also featured in fantasy and speculative fiction since the 19th century. One of the best-known dream worlds is Wonderland from Lewis Carroll's Alice's Adventures in Wonderland, as well as Looking-Glass Land from its sequel, Through the Looking-Glass. Unlike many dream worlds, Carroll's logic is like that of actual dreams, with transitions and flexible causality.
Other fictional dream worlds include the Dreamlands of H. P. Lovecraft's Dream Cycle and The Neverending Story's world of Fantastica, which includes places like the Desert of Lost Dreams, the Sea of Possibilities and the Swamps of Sadness. Dreamworlds, shared hallucinations and other alternate realities feature in a number of works by Philip K. Dick, such as The Three Stigmata of Palmer Eldritch and Ubik. Similar themes were explored by Jorge Luis Borges, for instance in The Circular Ruins.
Modern popular culture often conceives of dreams, as did Freud, as expressions of the dreamer's deepest fears and desires. In speculative fiction, the line between dreams and reality may be blurred even more in service to the story. Dreams may be psychically invaded or manipulated (Dreamscape, 1984; the Nightmare on Elm Street films, 1984–2010; Inception, 2010) or even come literally true (as in The Lathe of Heaven, 1971).
Lucidity
Lucid dreaming is the conscious perception of one's state while dreaming. In this state the dreamer may often have some degree of control over their own actions within the dream or even the characters and the environment of the dream. Dream control has been reported to improve with practiced deliberate lucid dreaming, but the ability to control aspects of the dream is not necessary for a dream to qualify as "lucid"—a lucid dream is any dream during which the dreamer knows they are dreaming. The occurrence of lucid dreaming has been scientifically verified.
"Oneironaut" is a term sometimes used for those who lucidly dream.
In 1975, psychologist Keith Hearne successfully recorded a communication from a dreamer experiencing a lucid dream. On April 12, 1975, after agreeing to move his eyes left and right upon becoming lucid, the subject and Hearne's co-author on the resulting article, Alan Worsley, successfully carried out this task. Years later, psychophysiologist Stephen LaBerge conducted similar work including:
Using eye signals to map the subjective sense of time in dreams.
Comparing the electrical activity of the brain while singing awake and while dreaming.
Studies comparing in-dream sex, arousal, and orgasm.
Communication between two dreamers has also been documented. The processes involved included EEG monitoring, ocular signaling, incorporation of reality in the form of red light stimuli and a coordinating website. The website tracked when both dreamers were dreaming and sent the stimulus to one of the dreamers where it was incorporated into the dream. This dreamer, upon becoming lucid, signaled with eye movements; this was detected by the website whereupon the stimulus was sent to the second dreamer, invoking incorporation into that dreamer's dream.
Recollection
The recollection of dreams is extremely unreliable, though it is a skill that can be trained. Dreams can usually be recalled if a person is awakened while dreaming. Women tend to have more frequent dream recall than men. Dreams that are difficult to recall may be characterized by relatively little affect, and factors such as salience, arousal, and interference play a role in dream recall. Often, a dream may be recalled upon viewing or hearing a random trigger or stimulus. The salience hypothesis proposes that dream content that is salient, that is, novel, intense, or unusual, is more easily remembered. There is considerable evidence that vivid, intense, or unusual dream content is more frequently recalled. A dream journal can be used to assist dream recall, for personal interest or psychotherapy purposes.
Adults report remembering around two dreams per week, on average. Unless a dream is particularly vivid and one wakes during or immediately after it, its content is typically not remembered.
In line with the salience hypothesis, there is considerable evidence that people who have more vivid, intense or unusual dreams show better recall. There is evidence that continuity of consciousness is related to recall. Specifically, people who have vivid and unusual experiences during the day tend to have more memorable dream content and hence better dream recall. People who score high on measures of personality traits associated with creativity, imagination, and fantasy, such as openness to experience, daydreaming, fantasy proneness, absorption, and hypnotic susceptibility, tend to show more frequent dream recall. There is also evidence for continuity between the bizarre aspects of dreaming and waking experience. That is, people who report more bizarre experiences during the day, such as people high in schizotypy (psychosis proneness), have more frequent dream recall and also report more frequent nightmares.
Dream-recording machine
Recording or reconstructing dreams may one day assist with dream recall. Using the permitted non-invasive technologies, functional magnetic resonance imaging (fMRI) and electromyography (EMG), researchers have been able to identify basic dream imagery, dream speech activity and dream motor behavior (such as walking and hand movements).
Miscellany
Illusion of reality
Some philosophers have proposed that what we think of as the "real world" could be or is an illusion (an idea known as the skeptical hypothesis about ontology). The first recorded mention of the idea was in the 4th century BCE by Zhuangzi, and in Eastern philosophy, the problem has been named the "Zhuangzi Paradox."
He who dreams of drinking wine may weep when morning comes; he who dreams of weeping may in the morning go off to hunt. While he is dreaming he does not know it is a dream, and in his dream he may even try to interpret a dream. Only after he wakes does he know it was a dream. And someday there will be a great awakening when we know that this is all a great dream. Yet the stupid believe they are awake, busily and brightly assuming they understand things, calling this man ruler, that one herdsman—how dense! Confucius and you are both dreaming! And when I say you are dreaming, I am dreaming, too. Words like these will be labeled the Supreme Swindle. Yet, after ten thousand generations, a great sage may appear who will know their meaning, and it will still be as though he appeared with astonishing speed.
The idea also is discussed in Hindu and Buddhist writings. It was formally introduced to Western philosophy by Descartes in the 17th century in his Meditations on First Philosophy.
Absent-minded transgression
Dreams of absent-minded transgression (DAMT) are dreams wherein the dreamer absent-mindedly performs an action that he or she has been trying to stop (one classic example is of a quitting smoker having dreams of lighting a cigarette). Subjects who have had DAMT have reported waking with intense feelings of guilt. One study found a positive association between having these dreams and successfully stopping the behavior.
Non-REM dreams
Hypnagogic and hypnopompic dreams (dreamlike states occurring shortly after falling asleep and shortly before awakening, respectively), as well as dreams during stage 2 of NREM sleep, also occur, but these are shorter than REM dreams.
Daydreams
A daydream is a visionary fantasy, especially one of happy, pleasant thoughts, hopes or ambitions, imagined as coming to pass, and experienced while awake. There are many different types of daydreams, and there is no consistent definition amongst psychologists. The general public also uses the term for a broad variety of experiences. Research by Harvard psychologist Deirdre Barrett has found that people who experience vivid dreamlike mental images reserve the word for these, whereas many other people refer to milder imagery, realistic future planning, review of memories or just "spacing out"—i.e. one's mind going relatively blank—when they talk about "daydreaming".
While daydreaming has long been derided as a lazy, non-productive pastime, it is now commonly acknowledged that daydreaming can be constructive in some contexts. There are numerous examples of people in creative or artistic careers, such as composers, novelists and filmmakers, developing new ideas through daydreaming. Similarly, research scientists, mathematicians and physicists have developed new ideas by daydreaming about their subject areas.
Hallucination
A hallucination, in the broadest sense of the word, is a perception in the absence of a stimulus. In a stricter sense, hallucinations are perceptions in a conscious and awake state, in the absence of external stimuli, and have qualities of real perception, in that they are vivid, substantial, and located in external objective space. The latter definition distinguishes hallucinations from the related phenomenon of dreaming, which does not involve wakefulness.
Nightmare
A nightmare is an unpleasant dream that can cause a strong negative emotional response from the mind, typically fear or horror, but also despair, anxiety and great sadness. The dream may contain situations of danger, discomfort, psychological or physical terror. Sufferers usually awaken in a state of distress and may be unable to return to sleep for a prolonged period of time.
Night terror
A night terror, also known as a sleep terror or pavor nocturnus, is a parasomnia disorder that predominantly affects children, causing feelings of terror or dread. Night terrors should not be confused with nightmares, which are bad dreams that cause the feeling of horror or fear.
Déjà vu
One theory of déjà vu attributes the feeling of having previously seen or experienced something to having dreamed about a similar situation or place, and forgetting about it until one seems to be mysteriously reminded of the situation or the place while awake.
Melatonin
Melatonin is a natural hormone secreted by the brain's pineal gland, inducing nocturnal behaviors in animals and sleep in humans during nighttime. Chemically isolated in 1958, melatonin has been marketed as a sleep aid since the 1990s and is currently sold in the United States as an over-the-counter product requiring no prescription. Anecdotal reports and formal research studies over the past few decades have established a link between melatonin supplementation and more vivid dreams.
See also
Dream dictionary
Dream incubation
Dream of Macsen Wledig
Dream pop
Dream sequence
Dream yoga
Dreamcatcher
Dreams in analytical psychology
Dreamwork
False awakening
Hatsuyume
Incubus
Lilith, a Sumerian dream demoness
List of dream diaries
Mare (folklore)
Mabinogion
Recurring dream
Sleep in animals
Sleep paralysis
Spirit spouse
Succubus
Works based on dreams
References
Further reading
Dreaming – journal published by the American Psychological Association
Harris, William V. (2009) Dreams and Experience in Classical Antiquity. Cambridge, MA & London: Harvard University Press.
External links
Archive for Research in Archetypal Symbolism website
The International Association for the Study of Dreams
alt.dreams – a long-running USENET forum wherein readers post and analyze dreams
LSDBase – online sleep research database documenting physiological effects of dreams through biofeedback
Night
Sleep
Symbols | Dream | [
"Astronomy",
"Mathematics",
"Biology"
] | 7,015 | [
"Time in astronomy",
"Behavior",
"Dream",
"Symbols",
"Night",
"Sleep"
] |
44,787 | https://en.wikipedia.org/wiki/Up%20to | Two mathematical objects a and b are called "equal up to an equivalence relation R"
if a and b are related by R, that is,
if aRb holds, that is,
if the equivalence classes of a and b with respect to R are equal.
This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count.
For example, " is unique up to " means that all objects under consideration are in the same equivalence class with respect to the relation .
Moreover, the equivalence relation is often designated rather implicitly by a generating condition or transformation.
For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permuting) the other. As another example, the statement "the solution to an indefinite integral is f(x), up to addition of a constant" tacitly employs the equivalence relation R between functions, defined by gRh if the difference g − h is a constant function, and means that the solution and the function f(x) are equal up to this R.
In the picture, "there are 4 partitions up to rotation" means that the set P has 4 equivalence classes with respect to the relation R defined by aRb if a can be obtained from b by rotation; one representative from each class is shown in the bottom left picture part.
Equivalence relations are often used to disregard possible differences of objects, so "up to R" can be understood informally as "ignoring the same subtleties as R ignores".
In the factorization example, "up to ordering" means "ignoring the particular ordering".
Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section.
In informal contexts, mathematicians often use the word modulo (or simply mod) for similar purposes, as in "modulo isomorphism".
Objects that are distinct up to an equivalence relation defined by a group action, such as rotation, reflection, or permutation, can be counted using Burnside's lemma or its generalization, Pólya enumeration theorem.
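As a concrete illustration (not from the original article), Burnside's lemma counts the orbits of a group action by averaging, over the group elements, the number of objects each element fixes. The Python sketch below, with names chosen for illustration, counts two-colorings of a square's four corners up to rotation: a rotation whose permutation of the corners has c cycles fixes 2^c colorings, and the average over the four rotations is (16 + 2 + 4 + 2) / 4 = 6.

```python
# Minimal sketch of Burnside's lemma for 2-colorings of a square's corners,
# counted up to rotation. Function and variable names are illustrative.
def cycle_count(perm):
    """Number of cycles in a permutation given as a tuple (perm[i] = image of i)."""
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

ROTATIONS = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]  # 0, 90, 180, 270 degrees
COLORS = 2

orbits = sum(COLORS ** cycle_count(p) for p in ROTATIONS) // len(ROTATIONS)
print(orbits)  # 6 colorings up to rotation
```

The six classes can be checked by hand: all white, all black, one black corner, two adjacent black corners, two opposite black corners, and three black corners.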
Examples
Tetris
Consider the seven Tetris pieces (I, J, L, O, S, T, Z), known mathematically as the tetrominoes. If you consider all the possible rotations of these pieces — for example, if you consider the "I" oriented vertically to be distinct from the "I" oriented horizontally — then you find there are 19 distinct possible shapes to be displayed on the screen. (These 19 are the so-called "fixed" tetrominoes.) But if rotations are not considered distinct — so that we treat both "I vertically" and "I horizontally" indifferently as "I" — then there are only seven. We say that "there are seven tetrominoes, up to rotation". One could also say that "there are five tetrominoes, up to rotation and reflection", which accounts for the fact that L reflected gives J, and S reflected gives Z.
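A short Python sketch (illustrative, not from the article; the coordinate sets below are an assumed encoding of the seven one-sided pieces) reproduces all three counts by normalizing each piece's rotations to a canonical position and collecting equivalence classes:

```python
# Sketch: counting tetrominoes up to rotation (and reflection).
def normalize(cells):
    xs = [x for x, y in cells]
    ys = [y for x, y in cells]
    return frozenset((x - min(xs), y - min(ys)) for x, y in cells)

def rotations(cells):
    shapes, cur = [], cells
    for _ in range(4):
        cur = frozenset((y, -x) for x, y in cur)  # rotate 90 degrees
        shapes.append(normalize(cur))
    return shapes

PIECES = {
    "I": {(0, 0), (1, 0), (2, 0), (3, 0)},
    "O": {(0, 0), (1, 0), (0, 1), (1, 1)},
    "T": {(0, 0), (1, 0), (2, 0), (1, 1)},
    "S": {(1, 0), (2, 0), (0, 1), (1, 1)},
    "Z": {(0, 0), (1, 0), (1, 1), (2, 1)},
    "J": {(0, 0), (1, 0), (2, 0), (0, 1)},
    "L": {(0, 0), (1, 0), (2, 0), (2, 1)},
}

def orbit(cells, with_reflection):
    shapes = set(rotations(frozenset(cells)))
    if with_reflection:
        shapes |= set(rotations(frozenset((-x, y) for x, y in cells)))
    return frozenset(shapes)

print(len({s for p in PIECES.values() for s in rotations(frozenset(p))}))  # 19 fixed tetrominoes
print(len({orbit(p, False) for p in PIECES.values()}))                     # 7 up to rotation
print(len({orbit(p, True) for p in PIECES.values()}))                      # 5 up to rotation and reflection
```

The drop from 7 to 5 in the last count comes from S merging with its mirror image Z, and J merging with L, exactly as the text describes.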
Eight queens
In the eight queens puzzle, if the queens are considered to be distinct (e.g. if they are colored with eight different colors), then there are 3,709,440 distinct solutions. Normally, however, the queens are considered to be interchangeable, and one usually says "there are 92 unique solutions up to permutation of the queens", or that "there are 92 solutions modulo the names of the queens", signifying that two different arrangements of the queens are considered equivalent if the queens have been permuted, as long as the set of occupied squares remains the same.
If, in addition to treating the queens as identical, rotations and reflections of the board were allowed, we would have only 12 distinct solutions "up to symmetry and the naming of the queens".
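The following Python sketch (illustrative; function names are ours) reproduces both counts: it enumerates permutation boards, filters by the two diagonal constraints, and then groups the 92 solutions into orbits under the eight symmetries of the board.

```python
# Sketch: eight queens solutions, counted exactly and up to board symmetry.
from itertools import permutations

N = 8

def is_solution(perm):
    return (len({perm[i] + i for i in range(N)}) == N
            and len({perm[i] - i for i in range(N)}) == N)

solutions = [p for p in permutations(range(N)) if is_solution(p)]
print(len(solutions))  # 92 solutions with interchangeable queens

def symmetry_orbit(board):
    """All 8 images of a board (a set of (col, row) cells) under rotation/reflection."""
    images, b = [], board
    for _ in range(4):
        b = frozenset((r, N - 1 - c) for c, r in b)             # rotate 90 degrees
        images.append(b)
        images.append(frozenset((N - 1 - c, r) for c, r in b))  # reflect
    return frozenset(images)

boards = {frozenset(enumerate(p)) for p in solutions}
print(len({symmetry_orbit(b) for b in boards}))  # 12 solutions up to symmetry
```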
Polygons
The regular n-gon, for a fixed n, is unique up to similarity. In other words, the "similarity" equivalence relation over the regular n-gons (for a fixed n) has only one equivalence class; it is impossible to produce two regular n-gons which are not similar to each other.
Group theory
In group theory, one may have a group G acting on a set X, in which case, one might say that two elements of X are equivalent "up to the group action" if they lie in the same orbit.
Another typical example is the statement that "there are two different groups of order 4 up to isomorphism", or "modulo isomorphism, there are two groups of order 4". This means that, if one considers isomorphic groups "equivalent", there are only two equivalence classes of groups of order 4.
Nonstandard analysis
A hyperreal and its standard part are equal up to an infinitesimal difference.
See also
Abuse of notation
Adequality
Essentially unique
List of mathematical jargon
Modulo (jargon)
Quotient group
Quotient set
References
Mathematical terminology | Up to | [
"Mathematics"
] | 1,032 | [
"nan"
] |
44,790 | https://en.wikipedia.org/wiki/Luminosity | Luminosity is an absolute measure of radiated electromagnetic energy per unit time, and is synonymous with the radiant power emitted by a light-emitting object. In astronomy, luminosity is the total amount of electromagnetic energy emitted per unit of time by a star, galaxy, or other astronomical objects.
In SI units, luminosity is measured in joules per second, or watts. In astronomy, values for luminosity are often given in terms of the luminosity of the Sun, L⊙. Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude (Mbol) of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band.
In contrast, the term brightness in astronomy is generally used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on both the luminosity of the object and the distance between the object and observer, and also on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness. The distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance.
Measurement
When not qualified, the term "luminosity" means bolometric luminosity, which is measured either in the SI units, watts, or in terms of solar luminosities (L⊙). A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos, which carry off some energy (about 2% in the case of the Sun), contributing to the star's total luminosity. The IAU has defined a nominal solar luminosity of 3.828×10²⁶ W to promote publication of consistent and comparable values in units of the solar luminosity.
While bolometers do exist, they cannot be used to measure even the apparent brightness of a star because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements. In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf-Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband.
The term luminosity is also used in relation to particular passbands, such as a visual luminosity or K-band luminosity. These are not generally luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist. Some such as the UBV or Johnson system are defined against photometric standard stars, while others such as the AB system are defined in terms of a spectral flux density.
Stellar luminosity
A star's luminosity can be determined from two stellar characteristics: size and effective temperature. The former is typically represented in terms of solar radii, R⊙, while the latter is represented in kelvins, but in most cases neither can be measured directly. To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI. However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is merely a number that represents the temperature of a black body that would reproduce the luminosity, it obviously cannot be measured directly, but it can be estimated from the spectrum.
An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition that usually arises because of gas and dust present in the interstellar medium (ISM), the Earth's atmosphere, and circumstellar matter. Consequently, one of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium.
In the current system of stellar classification, stars are grouped according to temperature, with the massive, very young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, typically older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even vaster variation in stellar luminosity. Because the luminosity depends on a high power of the stellar mass, high mass luminous stars have much shorter lifetimes. The most luminous stars are always young stars, no more than a few million years for the most extreme. In the Hertzsprung–Russell diagram, the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, more luminous or cooler than their equivalents on the main sequence. Increased luminosity at the same temperature, or alternatively cooler temperature at the same luminosity, indicates that these stars are larger than those on the main sequence and they are called giants or supergiants.
Blue and white supergiants are high luminosity stars somewhat cooler than the most luminous main sequence stars. A star like Deneb, for example, has a luminosity around 200,000 L⊙, a spectral type of A2, and an effective temperature around 8,500 K, meaning it has a radius of roughly 200 R⊙. For comparison, the red supergiant Betelgeuse has a luminosity around 100,000 L⊙, a spectral type of M2, and a temperature around 3,500 K, meaning its radius is roughly 900 R⊙. Red supergiants are the largest type of star, but the most luminous are much smaller and hotter, with temperatures up to 50,000 K and more and luminosities of several million L⊙, meaning their radii are just a few tens of R⊙. For example, R136a1 has a temperature over 46,000 K and a luminosity of more than 6,100,000 L⊙ (mostly in the UV), yet its radius is only about 40 R⊙.
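As a check on these figures (a sketch, not from the article), the Stefan–Boltzmann relation given below under Luminosity formulae can be inverted to R/R⊙ = (L/L⊙)^1/2 × (T⊙/T)²; the solar effective temperature of 5,772 K used here is an assumption.

```python
# Sketch: stellar radius in solar radii from luminosity (in L_sun) and
# effective temperature (in K), via L = 4*pi*R^2*sigma*T^4.
T_SUN = 5772.0  # K, nominal solar effective temperature (assumed)

def radius_solar(L_solar, T_kelvin):
    return L_solar ** 0.5 * (T_SUN / T_kelvin) ** 2

print(radius_solar(200_000, 8_500))     # Deneb-like star: ~206 R_sun
print(radius_solar(100_000, 3_500))     # Betelgeuse-like star: ~860 R_sun
print(radius_solar(6_100_000, 46_000))  # R136a1-like star: ~39 R_sun
```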
Radio luminosity
The luminosity of a radio source is measured in W Hz⁻¹, to avoid having to specify a bandwidth over which it is measured. The observed strength, or flux density, of a radio source is measured in janskys (Jy), where 1 Jy = 10⁻²⁶ W m⁻² Hz⁻¹.
For example, consider a 10 W transmitter at a distance of 1 million metres, radiating over a bandwidth of 1 MHz. By the time that power has reached the observer, the power is spread over the surface of a sphere with area 4π × (10⁶ m)², or about 1.26×10¹³ m², so its flux density is (10 W / 10⁶ Hz) / 1.26×10¹³ m² ≈ 8×10⁻¹⁹ W m⁻² Hz⁻¹, or about 10⁸ Jy.
More generally, for sources at cosmological distances, a k-correction must be made for the spectral index α of the source, and a relativistic correction must be made for the fact that the frequency scale in the emitted rest frame is different from that in the observer's rest frame. So the full expression for radio luminosity, assuming isotropic emission, is
L_ν = S_obs × 4π D_L² / (1 + z)^(1 + α)
where L_ν is the luminosity in W Hz⁻¹, S_obs is the observed flux density in W m⁻² Hz⁻¹, D_L is the luminosity distance in metres, z is the redshift, and α is the spectral index (in the sense S ∝ ν^(−α); in radio astronomy, assuming thermal emission, the spectral index is typically equal to 2).
For example, consider a 1 Jy signal from a radio source at a redshift of 1, at a frequency of 1.4 GHz.
Ned Wright's cosmology calculator calculates a luminosity distance for a redshift of 1 to be 6701 Mpc = 2×10²⁶ m, giving a radio luminosity of about 6×10²⁶ W Hz⁻¹.
To calculate the total radio power, this luminosity must be integrated over the bandwidth of the emission. A common assumption is to set the bandwidth to the observing frequency, which effectively assumes the power radiated has uniform intensity from zero frequency up to the observing frequency. In the case above, the total power is 6×10²⁶ W Hz⁻¹ × 1.4×10⁹ Hz ≈ 9×10³⁵ W. This is sometimes expressed in terms of the total (i.e. integrated over all wavelengths) luminosity of the Sun, which is 3.828×10²⁶ W, giving a radio power of roughly 2×10⁹ L⊙.
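A sketch of this section's arithmetic (Python; the constants are those quoted above, and the function name is ours):

```python
import math

JY = 1e-26  # W m^-2 Hz^-1 per jansky

# Transmitter example: 10 W over a 1 MHz bandwidth at 1e6 m.
area = 4 * math.pi * (1e6) ** 2         # sphere surface, ~1.26e13 m^2
flux_density = (10 / 1e6) / area        # ~8e-19 W m^-2 Hz^-1
print(flux_density, flux_density / JY)  # ~1e8 Jy

# k-corrected radio luminosity: L_nu = S_obs * 4*pi*D_L^2 / (1 + z)^(1 + alpha)
def radio_luminosity(S_obs_jy, D_L_m, z, alpha=2.0):
    return S_obs_jy * JY * 4 * math.pi * D_L_m ** 2 / (1 + z) ** (1 + alpha)

L_nu = radio_luminosity(1.0, 2e26, z=1.0)  # ~6e26 W Hz^-1
total_power = L_nu * 1.4e9                 # integrated over a 1.4 GHz bandwidth, ~9e35 W
print(L_nu, total_power, total_power / 3.828e26)  # last value ~2e9 L_sun
```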
Luminosity formulae
The Stefan–Boltzmann equation applied to a black body gives the value for luminosity for a black body, an idealized object which is perfectly opaque and non-reflecting:
L = σAT⁴
where A is the surface area, T is the temperature (in kelvins) and σ is the Stefan–Boltzmann constant, with a value of 5.670374×10⁻⁸ W m⁻² K⁻⁴.
Imagine a point source of light of luminosity that radiates equally in all directions. A hollow sphere centered on the point would have its entire interior surface illuminated. As the radius increases, the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness.
F = L / A
where
A is the area of the illuminated surface.
F is the flux density of the illuminated surface.
The surface area of a sphere with radius r is A = 4πr², so for stars and other point sources of light:
F = L / (4πd²)
where d is the distance from the observer to the light source.
For stars on the main sequence, luminosity is also related to mass approximately as below:
L/L⊙ ≈ (M/M⊙)^3.5
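A minimal sketch tying these formulae together (values assumed: the IAU nominal solar luminosity and radius, and 1 au ≈ 1.496×10¹¹ m; the flux line recovers the solar constant of roughly 1,361 W/m²):

```python
import math

SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26     # W, IAU nominal solar luminosity
R_SUN = 6.957e8      # m, IAU nominal solar radius (assumed)
AU = 1.496e11        # m, Earth-Sun distance (assumed)

def black_body_luminosity(radius_m, T_kelvin):
    """L = sigma * A * T^4 for a spherical black body of the given radius."""
    return SIGMA * 4 * math.pi * radius_m ** 2 * T_kelvin ** 4

def flux(L_watts, distance_m):
    """Inverse-square law: F = L / (4*pi*d^2)."""
    return L_watts / (4 * math.pi * distance_m ** 2)

def main_sequence_luminosity(mass_solar):
    """Approximate mass-luminosity relation: L/L_sun ~ (M/M_sun)^3.5."""
    return mass_solar ** 3.5

print(black_body_luminosity(R_SUN, 5772))  # ~3.8e26 W, close to L_SUN
print(flux(L_SUN, AU))                     # ~1361 W/m^2, the solar constant
print(main_sequence_luminosity(2.0))       # ~11 L_sun for a 2 M_sun star
```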
Relationship to magnitude
Luminosity is an intrinsic measurable property of a star independent of distance. The concept of magnitude, on the other hand, incorporates distance. The apparent magnitude is a measure of the diminishing flux of light as a result of distance according to the inverse-square law. The Pogson logarithmic scale is used to measure both apparent and absolute magnitudes, the latter corresponding to the brightness of a star or other celestial body as seen if it were located at an interstellar distance of 10 parsecs. In addition to this brightness decrease from increased distance, there is an extra decrease of brightness due to extinction from intervening interstellar dust.
By measuring the width of certain absorption lines in the stellar spectrum, it is often possible to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure of its absolute magnitude can be determined without knowing either its distance or the interstellar extinction.
In measuring star brightnesses, absolute magnitude, apparent magnitude, and distance are interrelated parameters—if two are known, the third can be determined. Since the Sun's luminosity is the standard, comparing these parameters with the Sun's apparent magnitude and distance is the easiest way to remember how to convert between them, although officially, zero point values are defined by the IAU.
The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible brightness. The apparent magnitude is the observed visible brightness from Earth, which depends on the distance of the object. The absolute magnitude is the apparent magnitude at a distance of 10 pc; therefore, the bolometric absolute magnitude is a logarithmic measure of the bolometric luminosity.
The difference in bolometric magnitude between two objects is related to their luminosity ratio according to:
M_bol1 − M_bol2 = −2.5 log₁₀(L1 / L2)
where:
M_bol1 is the bolometric magnitude of the first object
M_bol2 is the bolometric magnitude of the second object.
L1 is the first object's bolometric luminosity
L2 is the second object's bolometric luminosity
The zero point of the absolute magnitude scale is actually defined as a fixed luminosity of 3.0128×10²⁸ W. Therefore, the absolute magnitude can be calculated from a luminosity in watts:
M_bol = −2.5 log₁₀(L / L0)
where L0 is the zero point luminosity 3.0128×10²⁸ W,
and the luminosity in watts can be calculated from an absolute magnitude (although absolute magnitudes are often not measured relative to an absolute flux):
L = L0 × 10^(−0.4 M_bol)
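A sketch of these two conversions (Python; the zero-point constant is the IAU value quoted above):

```python
import math

L0 = 3.0128e28  # W, zero point of the absolute bolometric magnitude scale

def bolometric_magnitude(L_watts):
    """M_bol = -2.5 * log10(L / L0)."""
    return -2.5 * math.log10(L_watts / L0)

def luminosity_from_magnitude(M_bol):
    """Inverse relation: L = L0 * 10^(-0.4 * M_bol)."""
    return L0 * 10 ** (-0.4 * M_bol)

print(bolometric_magnitude(3.828e26))  # the Sun: M_bol ~ +4.74
print(luminosity_from_magnitude(0.0))  # 3.0128e28 W, by definition
```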
See also
Glossary of astronomy
List of brightest stars
List of most luminous stars
Orders of magnitude (power)
Solar luminosity
References
Further reading
External links
Luminosity calculator
Ned Wright's cosmology calculator
Concepts in astrophysics
Physical quantities | Luminosity | [
"Physics",
"Mathematics"
] | 2,688 | [
"Physical phenomena",
"Concepts in astrophysics",
"Physical quantities",
"Quantity",
"Astrophysics",
"Physical properties"
] |
44,883 | https://en.wikipedia.org/wiki/Welding | Welding is a fabrication process that joins materials, usually metals or thermoplastics, primarily by using high temperature to melt the parts together and allow them to cool, causing fusion. Common alternative methods include solvent welding (of thermoplastics) using chemicals to melt materials being bonded without heat, and solid-state welding processes which bond without melting, such as pressure, cold welding, and diffusion bonding.
Metal welding is distinct from lower temperature bonding techniques such as brazing and soldering, which do not melt the base metal (parent metal) and instead require flowing a filler metal to solidify their bonds.
In addition to melting the base metal in welding, a filler material is typically added to the joint to form a pool of molten material (the weld pool) that cools to form a joint that can be stronger than the base material. Welding also requires a form of shield to protect the filler metals or melted metals from being contaminated or oxidized.
Many different energy sources can be used for welding, including a gas flame (chemical), an electric arc (electrical), a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding may be performed in many different environments, including in open air, under water, and in outer space. Welding is a hazardous undertaking and precautions are required to avoid burns, electric shock, vision damage, inhalation of poisonous gases and fumes, and exposure to intense ultraviolet radiation.
Until the end of the 19th century, the only welding process was forge welding, which blacksmiths had used for millennia to join iron and steel by heating and hammering. Arc welding and oxy-fuel welding were among the first processes to develop late in the century, and electric resistance welding followed soon after. Welding technology advanced quickly during the early 20th century, as world wars drove the demand for reliable and inexpensive joining methods. Following the wars, several modern welding techniques were developed, including manual methods like shielded metal arc welding, now one of the most popular welding methods, as well as semi-automatic and automatic processes such as gas metal arc welding, submerged arc welding, flux-cored arc welding and electroslag welding. Developments continued with the invention of laser beam welding, electron beam welding, magnetic pulse welding, and friction stir welding in the latter half of the century. Today, as the science continues to advance, robot welding is commonplace in industrial settings, and researchers continue to develop new welding methods and gain greater understanding of weld quality.
Etymology
The term weld is derived from the Middle English verb well or welling, meaning 'to heat' (to the maximum temperature possible) or 'to bring to a boil'. The modern word was probably derived from the past-tense participle welled, with the addition of d for this purpose being common in the Germanic languages of the Angles and Saxons. It was first recorded in English in 1590. A fourteenth-century translation of the Christian Bible into English by John Wycliffe renders Isaiah 2:4 with a phrase meaning "they shall beat together their swords into plowshares"; in the 1590 version this was changed to "they shall weld together their swords into plowshares", suggesting this particular use of the word probably became popular in English sometime between these periods.
The Old English words for welding iron meant 'to bring together' or 'to bring together hot'.
The word is related to an Old Swedish word meaning 'to boil', which could refer to joining metals, as in a phrase meaning literally 'to boil iron'. Sweden was a large exporter of iron during the Middle Ages, so the word may have entered English from the Swedish iron trade, or may have been imported with the thousands of Viking settlements that arrived in England before and during the Viking Age, as more than half of the most common English words in everyday use are Scandinavian in origin.
History
The history of joining metals goes back several millennia. The earliest examples of this come from the Bronze and Iron Ages in Europe and the Middle East. The ancient Greek historian Herodotus states in The Histories of the 5th century BC that Glaucus of Chios "was the man who single-handedly invented iron welding". Forge welding was used in the construction of the Iron pillar of Delhi, erected in Delhi, India about 310 AD and weighing 5.4 metric tons.
The Middle Ages brought advances in forge welding, in which blacksmiths pounded heated metal repeatedly until bonding occurred. In 1540, Vannoccio Biringuccio published De la pirotechnia, which includes descriptions of the forging operation. Renaissance craftsmen were skilled in the process, and the industry continued to grow during the following centuries.
In 1800, Sir Humphry Davy discovered the short-pulse electrical arc and presented his results in 1801. In 1802, Russian scientist Vasily Petrov created the continuous electric arc, and subsequently published "News of Galvanic-Voltaic Experiments" in 1803, in which he described experiments carried out in 1802. Of great importance in this work was the description of a stable arc discharge and the indication of its possible use for many applications, one being melting metals. In 1808, Davy, who was unaware of Petrov's work, rediscovered the continuous electric arc. In 1881–82, the inventors Nikolai Benardos (Russian) and Stanisław Olszewski (Polish) created the first electric arc welding method, known as carbon arc welding, using carbon electrodes. The advances in arc welding continued with the invention of metal electrodes in the late 1800s by a Russian, Nikolai Slavyanov (1888), and an American, C. L. Coffin (1890). Around 1900, A. P. Strohmenger released a coated metal electrode in Britain, which gave a more stable arc. In 1905, Russian scientist Vladimir Mitkevich proposed using a three-phase electric arc for welding. Alternating current welding was invented by C. J. Holslag in 1919, but did not become popular for another decade.
Resistance welding was also developed during the final decades of the 19th century, with the first patents going to Elihu Thomson in 1885, who produced further advances over the next 15 years. Thermite welding was invented in 1893, and around that time another process, oxyfuel welding, became well established. Acetylene was discovered in 1836 by Edmund Davy, but its use was not practical in welding until about 1900, when a suitable torch was developed. At first, oxyfuel welding was one of the more popular welding methods due to its portability and relatively low cost. As the 20th century progressed, however, it fell out of favor for industrial applications. It was largely replaced with arc welding, as advances in metal coverings (known as flux) were made. Flux covering the electrode primarily shields the base material from impurities, but also stabilizes the arc and can add alloying components to the weld metal.
World War I caused a major surge in the use of welding, with the various military powers attempting to determine which of the several new welding processes would be best. The British primarily used arc welding, even constructing a ship, the Fullagar, with an entirely welded hull. Arc welding was first applied to aircraft during the war as well, as some German airplane fuselages were constructed using the process. Also noteworthy is the first welded road bridge in the world, the Maurzyce Bridge in Poland (1928).
During the 1920s, significant advances were made in welding technology, including the introduction of automatic welding in 1920, in which electrode wire was fed continuously. Shielding gas became a subject receiving much attention, as scientists attempted to protect welds from the effects of oxygen and nitrogen in the atmosphere. Porosity and brittleness were the primary problems, and the solutions that developed included the use of hydrogen, argon, and helium as welding atmospheres. During the following decade, further advances allowed for the welding of reactive metals like aluminum and magnesium. This in conjunction with developments in automatic welding, alternating current, and fluxes fed a major expansion of arc welding during the 1930s and then during World War II. In 1930, the first all-welded merchant vessel, M/S Carolinian, was launched.
During the middle of the century, many new welding methods were invented. In 1930, Kyle Taylor was responsible for the release of stud welding, which soon became popular in shipbuilding and construction. Submerged arc welding was invented the same year and continues to be popular today. In 1932, the Russian Konstantin Khrenov implemented the first underwater electric arc welding. Gas tungsten arc welding, after decades of development, was finally perfected in 1941, and gas metal arc welding followed in 1948, allowing for fast welding of non-ferrous materials but requiring expensive shielding gases. Shielded metal arc welding was developed during the 1950s, using a flux-coated consumable electrode, and it quickly became the most popular metal arc welding process. In 1957, the flux-cored arc welding process debuted, in which the self-shielded wire electrode could be used with automatic equipment, resulting in greatly increased welding speeds, and that same year, plasma arc welding was invented by Robert Gage. Electroslag welding was introduced in 1958, and it was followed by its cousin, electrogas welding, in 1961. In 1953, the Soviet scientist N. F. Kazakov proposed the diffusion bonding method.
Other recent developments in welding include the 1958 breakthrough of electron beam welding, making deep and narrow welding possible through the concentrated heat source. Following the invention of the laser in 1960, laser beam welding debuted several decades later, and has proved to be especially useful in high-speed, automated welding. Magnetic pulse welding (MPW) has been industrially used since 1967. Friction stir welding was invented in 1991 by Wayne Thomas at The Welding Institute (TWI, UK) and has since found high-quality applications all over the world. All four of these newer processes continue to be quite expensive due to the high cost of the necessary equipment, and this has limited their applications.
Processes
Welding joins two pieces of metal using heat, pressure, or both. The most common modern welding methods use heat sufficient to melt the base metals to be joined and the filler metal. This includes gas welding and all forms of arc welding. The area where the base and filler metals melt is called the weld pool or puddle. Most welding methods involve pushing the puddle along a joint to create a weld bead. Overlapping pieces of metal can be joined by forming the weld pool within a hole made in the topmost piece of base metal. This is called a plug weld. Overlapping base metals are commonly joined using electric resistance welding, a process that combines heat and pressure and does not require a filler metal. Solid-state welding processes join two pieces of metal using pressure.
Gas welding
The most common gas welding process is oxyfuel welding, also known as oxyacetylene welding. It is one of the oldest and most versatile welding processes, but in recent years it has become less popular in industrial applications. It is still widely used for welding pipes and tubes, as well as repair work.
The equipment is relatively inexpensive and simple, generally employing the combustion of acetylene in oxygen to produce a welding flame temperature of about 3100 °C (5600 °F). The flame, since it is less concentrated than an electric arc, causes slower weld cooling, which can lead to greater residual stresses and weld distortion, though it eases the welding of high alloy steels. A similar process, generally called oxyfuel cutting, is used to cut metals.
Arc welding
These processes use a welding power supply to create and maintain an electric arc between an electrode and the base material to melt metals at the welding point. They can use either direct current (DC) or alternating current (AC), and consumable or non-consumable electrodes. The welding region is sometimes protected by some type of inert or semi-inert gas, known as a shielding gas, and filler material is sometimes used as well.
Arc welding processes
One of the most common types of arc welding is shielded metal arc welding (SMAW); it is also known as manual metal arc welding (MMAW) or stick welding. Electric current is used to strike an arc between the base material and consumable electrode rod, which is made of filler material (typically steel) and is covered with a flux that protects the weld area from oxidation and contamination by producing carbon dioxide (CO2) gas during the welding process. The electrode core itself acts as filler material, making a separate filler unnecessary.
The process is versatile and can be performed with relatively inexpensive equipment, making it well suited to shop jobs and field work. An operator can become reasonably proficient with a modest amount of training and can achieve mastery with experience. Weld times are rather slow, since the consumable electrodes must be frequently replaced and because slag, the residue from the flux, must be chipped away after welding. Furthermore, the process is generally limited to welding ferrous materials, though special electrodes have made possible the welding of cast iron, stainless steel, aluminum, and other metals.
Gas metal arc welding (GMAW), also known as metal inert gas or MIG welding, is a semi-automatic or automatic process that uses a continuous wire feed as an electrode and an inert or semi-inert gas mixture to protect the weld from contamination. Since the electrode is continuous, welding speeds are greater for GMAW than for SMAW.
A related process, flux-cored arc welding (FCAW), uses similar equipment but uses wire consisting of a steel electrode surrounding a powder fill material. This cored wire is more expensive than the standard solid wire and can generate fumes and/or slag, but it permits even higher welding speed and greater metal penetration.
Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a non-consumable tungsten electrode, an inert or semi-inert gas mixture, and a separate filler material. Especially useful for welding thin materials, this method is characterized by a stable arc and high-quality welds, but it requires significant operator skill and can only be accomplished at relatively low speeds.
GTAW can be used on nearly all weldable metals, though it is most often applied to stainless steel and light metals. It is often used when quality welds are extremely important, such as in bicycle, aircraft and naval applications. A related process, plasma arc welding, also uses a tungsten electrode but uses plasma gas to make the arc. The arc is more concentrated than the GTAW arc, making transverse control more critical and thus generally restricting the technique to a mechanized process. Because of its stable current, the method can be used on a wider range of material thicknesses than can the GTAW process and it is much faster. It can be applied to all of the same materials as GTAW except magnesium, and automated welding of stainless steel is one important application of the process. A variation of the process is plasma cutting, an efficient steel cutting process.
Submerged arc welding (SAW) is a high-productivity welding method in which the arc is struck beneath a covering layer of flux. This increases arc quality since contaminants in the atmosphere are blocked by the flux. The slag that forms on the weld generally comes off by itself, and combined with the use of a continuous wire feed, the weld deposition rate is high. Working conditions are much improved over other arc welding processes, since the flux hides the arc and almost no smoke is produced. The process is commonly used in industry, especially for large products and in the manufacture of welded pressure vessels. Other arc welding processes include atomic hydrogen welding, electroslag welding (ESW), electrogas welding, and stud arc welding. ESW is a highly productive, single-pass welding process for thicker materials between 1 inch (25 mm) and 12 inches (300 mm) in a vertical or close to vertical position.
Arc welding power supplies
To supply the electrical power necessary for arc welding processes, a variety of different power supplies can be used. The most common welding power supplies are constant current power supplies and constant voltage power supplies. In arc welding, the length of the arc is directly related to the voltage, and the amount of heat input is related to the current. Constant current power supplies are most often used for manual welding processes such as gas tungsten arc welding and shielded metal arc welding, because they maintain a relatively constant current even as the voltage varies. This is important because in manual welding, it can be difficult to hold the electrode perfectly steady, and as a result, the arc length and thus voltage tend to fluctuate. Constant voltage power supplies hold the voltage constant and vary the current, and as a result, are most often used for automated welding processes such as gas metal arc welding, flux-cored arc welding, and submerged arc welding. In these processes, arc length is kept constant, since any fluctuation in the distance between the wire and the base material is quickly rectified by a large change in current. For example, if the wire and the base material get too close, the current will rapidly increase, which in turn causes the heat to increase and the tip of the wire to melt, returning it to its original separation distance.
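To make the self-regulation described above concrete, the following minimal sketch simulates a constant-voltage process in which the current rises as the arc shortens, so the wire burns back toward its set arc length. All constants (gain, melt rate per ampere, feed speed) are invented illustrative numbers, not values from any welding standard or from this article:

```python
# Sketch of constant-voltage self-regulation: a shorter arc draws more
# current, melting the wire faster and restoring the arc length.
FEED_RATE = 50.0      # wire feed speed, mm/s (held constant)
MELT_PER_AMP = 0.25   # mm/s of wire melted per ampere (assumed)
GAIN = 400.0          # extra amperes per mm of arc-length error (assumed)
SET_CURRENT = 200.0   # current at the equilibrium arc length, A
L_SET = 4.0           # equilibrium arc length, mm

arc_length = 2.0      # start with the wire too close to the base material
dt = 0.001            # time step, s
for step in range(500):
    current = SET_CURRENT + GAIN * (L_SET - arc_length)  # shorter arc -> more current
    melt_rate = MELT_PER_AMP * current                   # wire burns off faster
    arc_length += (melt_rate - FEED_RATE) * dt           # net change in arc length
    if step % 100 == 0:
        print(f"t={step * dt:.2f}s  arc={arc_length:.2f} mm  I={current:.0f} A")
```

Running the loop shows the arc length converging back to its 4 mm set point within a fraction of a second, which is the behavior the text attributes to constant-voltage supplies.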
The type of current used plays an important role in arc welding. Consumable electrode processes such as shielded metal arc welding and gas metal arc welding generally use direct current, but the electrode can be charged either positively or negatively. In welding, the positively charged anode will have a greater heat concentration, and as a result, changing the polarity of the electrode affects weld properties. If the electrode is positively charged, the base metal will be hotter, increasing weld penetration and welding speed. Alternatively, a negatively charged electrode results in more shallow welds. Non-consumable electrode processes, such as gas tungsten arc welding, can use either type of direct current, as well as alternating current. However, with direct current, because the electrode only creates the arc and does not provide filler material, a positively charged electrode causes shallow welds, while a negatively charged electrode makes deeper welds. Alternating current rapidly moves between these two, resulting in medium-penetration welds. One disadvantage of AC, the fact that the arc must be re-ignited after every zero crossing, has been addressed with the invention of special power units that produce a square wave pattern instead of the normal sine wave, making rapid zero crossings possible and minimizing the effects of the problem.
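The benefit of a square-wave output can be illustrated with a rough calculation: for a sinusoidal current, the fraction of each cycle spent below an arc-sustaining threshold is (2/π)·arcsin(threshold/peak), whereas an ideal square wave passes through zero almost instantaneously. The 20 A threshold and 200 A peak below are assumed values chosen only for illustration:

```python
# Fraction of each AC cycle the arc current spends near zero, where the
# arc risks extinguishing and must be re-ignited.
import math

def sine_fraction_below(threshold: float, peak: float) -> float:
    """Fraction of a sine-wave cycle with |i(t)| < threshold."""
    return (2.0 / math.pi) * math.asin(threshold / peak)

print(f"sine wave:   {sine_fraction_below(20.0, 200.0):.1%} of each cycle near zero")
print("square wave: ~0% (the current jumps through zero almost instantly)")
```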
Resistance welding
Resistance welding involves the generation of heat by passing current through the resistance caused by the contact between two or more metal surfaces. Small pools of molten metal are formed at the weld area as high current (1,000–100,000 A) is passed through the metal. In general, resistance welding methods are efficient and cause little pollution, but their applications are somewhat limited and the equipment cost can be high.
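The heat generation described above follows Joule's law, Q = I²Rt. A minimal sketch follows; only the current range comes from the text, while the contact resistance and weld time are illustrative assumptions:

```python
# Joule heating at the faying surfaces in resistance welding: Q = I^2 * R * t.
def joule_heat(current_a: float, resistance_ohm: float, time_s: float) -> float:
    """Heat generated at the contact interface, in joules."""
    return current_a ** 2 * resistance_ohm * time_s

# A mid-range spot weld: 10 kA through an assumed 100 micro-ohms for 0.2 s.
q = joule_heat(10_000, 100e-6, 0.2)
print(f"Heat delivered to the weld nugget: {q:.0f} J")  # prints 2000 J
```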
Resistance spot welding is a popular method used to join overlapping metal sheets up to 3 mm thick. Two electrodes are simultaneously used to clamp the metal sheets together and to pass current through the sheets. The advantages of the method include efficient energy use, limited workpiece deformation, high production rates, easy automation, and no required filler materials. Weld strength is significantly lower than with other welding methods, making the process suitable for only certain applications. It is used extensively in the automotive industry—ordinary cars can have several thousand spot welds made by industrial robots. A specialized process called shot welding can be used to spot weld stainless steel.
Seam welding also relies on two electrodes to apply pressure and current to join metal sheets. However, instead of pointed electrodes, wheel-shaped electrodes roll along and often feed the workpiece, making it possible to make long continuous welds. In the past, this process was used in the manufacture of beverage cans, but now its uses are more limited. Other resistance welding methods include butt welding, flash welding, projection welding, and upset welding.
Energy beam welding
Energy beam welding methods, namely laser beam welding and electron beam welding, are relatively new processes that have become quite popular in high production applications. The two processes are quite similar, differing most notably in their source of power. Laser beam welding employs a highly focused laser beam, while electron beam welding is done in a vacuum and uses an electron beam. Both have a very high energy density, making deep weld penetration possible and minimizing the size of the weld area. Both processes are extremely fast, and are easily automated, making them highly productive. The primary disadvantages are their very high equipment costs (though these are decreasing) and a susceptibility to thermal cracking. Developments in this area include laser-hybrid welding, which uses principles from both laser beam welding and arc welding for even better weld properties, laser cladding, and x-ray welding.
Solid-state welding
Like forge welding (the earliest welding process discovered), some modern welding methods do not involve the melting of the materials being joined. One of the most popular, ultrasonic welding, is used to connect thin sheets or wires made of metal or thermoplastic by vibrating them at high frequency and under high pressure. The equipment and methods involved are similar to those of resistance welding, but instead of electric current, vibration provides the energy input. When welding metals, the vibrations are introduced horizontally and the materials are not melted; when welding plastics, which should have similar melting temperatures, the vibrations are introduced vertically. Ultrasonic welding is commonly used for making electrical connections out of aluminum or copper, and it is also a very common polymer welding process.
Another common process, explosion welding, involves the joining of materials by pushing them together under extremely high pressure. The energy from the impact plasticizes the materials, forming a weld, even though only a limited amount of heat is generated. The process is commonly used for welding dissimilar materials, including bonding aluminum to carbon steel in ship hulls and stainless steel or titanium to carbon steel in petrochemical pressure vessels.
Other solid-state welding processes include friction welding (including friction stir welding and friction stir spot welding), magnetic pulse welding, co-extrusion welding, cold welding, diffusion bonding, exothermic welding, high frequency welding, hot pressure welding, induction welding, and roll bonding.
Geometry
Welds can be geometrically prepared in many different ways. The five basic types of weld joints are the butt joint, lap joint, corner joint, edge joint, and T-joint (a variant of this last is the cruciform joint). Other variations exist as well—for example, double-V preparation joints are characterized by the two pieces of material each tapering to a single center point at one-half their height. Single-U and double-U preparation joints are also fairly common—instead of having straight edges like the single-V and double-V preparation joints, they are curved, forming the shape of a U. Lap joints are also commonly more than two pieces thick—depending on the process used and the thickness of the material, many pieces can be welded together in a lap joint geometry.
Many welding processes require the use of a particular joint design; for example, resistance spot welding, laser beam welding, and electron beam welding are most frequently performed on lap joints. Other welding methods, like shielded metal arc welding, are extremely versatile and can weld virtually any type of joint. Some processes can also be used to make multipass welds, in which one weld is allowed to cool, and then another weld is performed on top of it. This allows for the welding of thick sections arranged in a single-V preparation joint, for example.
After welding, a number of distinct regions can be identified in the weld area. The weld itself is called the fusion zone—more specifically, it is where the filler metal was laid during the welding process. The properties of the fusion zone depend primarily on the filler metal used, and its compatibility with the base materials. It is surrounded by the heat-affected zone, the area that had its microstructure and properties altered by the weld. These properties depend on the base material's behavior when subjected to heat. The metal in this area is often weaker than both the base material and the fusion zone, and is also where residual stresses are found.
Quality
Many distinct factors influence the strength of welds and the material around them, including the welding method, the amount and concentration of energy input, the weldability of the base material, filler material, and flux material, the design of the joint, and the interactions between all these factors.
For example, welding position influences weld quality, so welding codes and specifications may require that both welding procedures and welders be tested using specified welding positions: 1G (flat), 2G (horizontal), 3G (vertical), 4G (overhead), 5G (horizontal fixed pipe), or 6G (inclined fixed pipe).
To test the quality of a weld, either destructive or nondestructive testing methods are commonly used to verify that welds are free of defects, have acceptable levels of residual stresses and distortion, and have acceptable heat-affected zone (HAZ) properties. Types of welding defects include cracks, distortion, gas inclusions (porosity), non-metallic inclusions, lack of fusion, incomplete penetration, lamellar tearing, and undercutting.
The metalworking industry has instituted codes and specifications to guide welders, weld inspectors, engineers, managers, and property owners in proper welding technique, design of welds, how to judge the quality of a welding procedure specification, how to judge the skill of the person performing the weld, and how to ensure the quality of a welding job. Methods such as visual inspection, radiography, ultrasonic testing, phased-array ultrasonics, dye penetrant inspection, magnetic particle inspection, or industrial computed tomography can help with detection and analysis of certain defects.
Heat-affected zone
The heat-affected zone (HAZ) is a ring surrounding the weld in which the temperature of the welding process, combined with the stresses of uneven heating and cooling, alters the heat-treatment properties of the alloy. The effects of welding on the material surrounding the weld can be detrimental—depending on the materials used and the heat input of the welding process used, the HAZ can be of varying size and strength. The thermal diffusivity of the base material plays a large role—if the diffusivity is high, the material cooling rate is high and the HAZ is relatively small. Conversely, a low diffusivity leads to slower cooling and a larger HAZ. The amount of heat injected by the welding process plays an important role as well, as processes like oxyacetylene welding have an unconcentrated heat input and increase the size of the HAZ. Processes like laser beam welding give a highly concentrated, limited amount of heat, resulting in a small HAZ. Arc welding falls between these two extremes, with the individual processes varying somewhat in heat input. To calculate the heat input for arc welding procedures, the following formula can be used:
Q = (V × I × 60) / (1000 × S) × (efficiency factor)

where Q = heat input (kJ/mm), V = voltage (V), I = current (A), and S = welding speed (mm/min). The efficiency is dependent on the welding process used, with shielded metal arc welding having a value of 0.75, gas metal arc welding and submerged arc welding, 0.9, and gas tungsten arc welding, 0.8. Methods of alleviating the stresses and brittleness created in the HAZ include stress relieving and tempering.
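As a worked example of the formula above, the following sketch computes the heat input for the quoted process efficiencies; the welding parameters (24 V, 150 A, 300 mm/min) are arbitrary illustrative values, not recommendations:

```python
# Arc welding heat input: Q = (60 * V * I) / (1000 * S) * efficiency,
# with Q in kJ/mm, V in volts, I in amperes, S in mm/min.
EFFICIENCY = {
    "SMAW": 0.75,  # shielded metal arc welding
    "GMAW": 0.90,  # gas metal arc welding
    "SAW": 0.90,   # submerged arc welding
    "GTAW": 0.80,  # gas tungsten arc welding
}

def heat_input_kj_per_mm(volts: float, amps: float,
                         speed_mm_per_min: float, process: str) -> float:
    """Return the arc welding heat input in kJ/mm."""
    return (60.0 * volts * amps) / (1000.0 * speed_mm_per_min) * EFFICIENCY[process]

for proc in EFFICIENCY:
    q = heat_input_kj_per_mm(24.0, 150.0, 300.0, proc)
    print(f"{proc}: {q:.2f} kJ/mm")  # e.g. SMAW: 0.54 kJ/mm
```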
One major defect concerning the HAZ is cracking at the weld toes: because of the rapid expansion on heating and contraction on cooling, the material may be unable to withstand the resulting stress and can crack. One method of controlling these stresses is to control the heating and cooling rates, for example through pre-heating and post-heating.
Lifetime extension with after treatment methods
The durability and life of dynamically loaded, welded steel structures are determined in many cases by the welds, in particular the weld transitions. Through selective treatment of the transitions by grinding (abrasive cutting), shot peening, high-frequency impact treatment, ultrasonic impact treatment, and similar methods, the durability of many designs increases significantly.
Metallurgy
Most solids used as engineering materials are crystalline solids, in which the atoms or ions are arranged in a repetitive geometric pattern known as a lattice structure. The exceptions are glasses, which have the structure of a supercooled liquid, and polymers, which are aggregates of large organic molecules.
Cohesion in crystalline solids is obtained by a metallic or chemical bond formed between the constituent atoms. Chemical bonds can be grouped into two types: ionic and covalent. To form an ionic bond, either a valence or bonding electron separates from one atom and becomes attached to another atom, forming oppositely charged ions. The bonding in the static position occurs when the ions occupy an equilibrium position where the resulting force between them is zero. When the ions are subjected to a tensile force, the inter-ionic spacing increases, creating an electrostatic attractive force, while under a compressive force the repulsive force between the atomic nuclei becomes dominant.
Covalent bonding takes place when the constituent atoms share one or more electrons, resulting in an electron cloud that is shared by the molecule as a whole. In both ionic and covalent bonding the locations of the ions and electrons are constrained relative to each other, thereby resulting in the bond being characteristically brittle.
Metallic bonding can be classified as a type of covalent bonding in which the constituent atoms are of the same type and do not combine with one another to form localized chemical bonds. Each atom loses one or more electrons, forming an array of positive ions. The electrons are shared by the lattice as a whole, making the electron cloud mobile: the electrons are free to move, as are the ions. This gives metals their relatively high thermal and electrical conductivity and makes them characteristically ductile.
Three of the most commonly used crystal lattice structures in metals are the body-centred cubic, face-centred cubic and close-packed hexagonal. Ferritic steel has a body-centred cubic structure, while austenitic steel and non-ferrous metals like aluminium, copper and nickel have the face-centred cubic structure.
Ductility is an important factor in ensuring the integrity of structures by enabling them to sustain local stress concentrations without fracture. In addition, structures are required to be of an acceptable strength, which is related to a material's yield strength. In general, as the yield strength of a material increases, there is a corresponding reduction in fracture toughness.
A reduction in fracture toughness may also be attributed to the embrittlement effect of impurities, or, for body-centred cubic metals, to a reduction in temperature. Metals, and in particular steels, have a transitional temperature range: above this range the metal has acceptable notch-ductility, while below it the material becomes brittle. Within the range, the material's behavior is unpredictable. The reduction in fracture toughness is accompanied by a change in the fracture appearance. Above the transition, the fracture is primarily due to micro-void coalescence, which results in the fracture appearing fibrous. When the temperature falls, the fracture shows signs of cleavage facets. These two appearances are visible to the naked eye. Brittle fracture in steel plates may appear as chevron markings under the microscope. These arrow-like ridges on the crack surface point towards the origin of the fracture.
Fracture toughness is measured using a notched and pre-cracked rectangular specimen whose dimensions are specified in standards, for example ASTM E23. Other means of estimating or measuring fracture toughness include the Charpy impact test per ASTM A370, the crack-tip opening displacement (CTOD) test per BS 7448–1, the J integral test per ASTM E1820, and the Pellini drop-weight test per ASTM E208.
Unusual conditions
While many welding applications are done in controlled environments such as factories and repair shops, some welding processes are commonly used in a wide variety of conditions, such as open air, underwater, and vacuums (such as space). In open-air applications, such as construction and outdoor repair, shielded metal arc welding is the most common process. Processes that employ inert gases to protect the weld cannot be readily used in such situations, because unpredictable atmospheric movements can result in a faulty weld. Shielded metal arc welding is also often used in underwater welding in the construction and repair of ships, offshore platforms, and pipelines, but others, such as flux cored arc welding and gas tungsten arc welding, are also common. Welding in space is also possible—it was first attempted in 1969 by Russian cosmonauts during the Soyuz 6 mission, when they performed experiments to test shielded metal arc welding, plasma arc welding, and electron beam welding in a depressurized environment. Further testing of these methods was done in the following decades, and today researchers continue to develop methods for using other welding processes in space, such as laser beam welding, resistance welding, and friction welding. Advances in these areas may be useful for future endeavours similar to the construction of the International Space Station, which could rely on welding for joining in space the parts that were manufactured on Earth.
Safety issues
Welding can be dangerous and unhealthy if the proper precautions are not taken. However, using new technology and proper protection greatly reduces risks of injury and death associated with welding.
Since many common welding procedures involve an open electric arc or flame, the risk of burns and fire is significant; this is why it is classified as a hot work process. To prevent injury, welders wear personal protective equipment in the form of heavy leather gloves and protective long-sleeve jackets to avoid exposure to extreme heat and flames. Synthetic clothing such as polyester should not be worn since it may burn, causing injury. Additionally, the brightness of the weld area leads to a condition called arc eye or flash burns in which ultraviolet light causes inflammation of the cornea and can burn the retinas of the eyes. Goggles and welding helmets with dark UV-filtering face plates are worn to prevent this exposure. Since the 2000s, some helmets have included a face plate which instantly darkens upon exposure to the intense UV light. To protect bystanders, the welding area is often surrounded with translucent welding curtains. These curtains, made of a polyvinyl chloride plastic film, shield people outside the welding area from the UV light of the electric arc, but cannot replace the filter glass used in helmets. Depending on the type of material, welding varieties, and other factors, welding can produce over 100 dB(A) of noise. Long term or continuous exposure to higher decibels can lead to noise-induced hearing loss.
Welders are often exposed to dangerous gases and particulate matter. Processes like flux-cored arc welding and shielded metal arc welding produce smoke containing particles of various types of oxides. The size of the particles in question tends to influence the toxicity of the fumes, with smaller particles presenting a greater danger. This is because smaller particles have the ability to cross the blood–brain barrier. Fumes and gases, such as carbon dioxide, ozone, and fumes containing heavy metals, can be dangerous to welders lacking proper ventilation and training. Exposure to manganese welding fumes, for example, even at low levels (<0.2 mg/m3), may lead to neurological problems or to damage to the lungs, liver, kidneys, or central nervous system. Nanoparticles can become trapped in the alveolar macrophages of the lungs and induce pulmonary fibrosis. The use of compressed gases and flames in many welding processes poses an explosion and fire risk. Some common precautions include limiting the amount of oxygen in the air, and keeping combustible materials away from the workplace.
Costs and trends
As an industrial process, the cost of welding plays a crucial role in manufacturing decisions. Many different variables affect the total cost, including equipment cost, labor cost, material cost, and energy cost. Depending on the process, equipment cost can vary, from inexpensive for methods like shielded metal arc welding and oxyfuel welding, to extremely expensive for methods like laser beam welding and electron beam welding. Because of their high cost, they are only used in high production operations. Similarly, because automation and robots increase equipment costs, they are only implemented when high production is necessary. Labor cost depends on the deposition rate (the rate of welding), the hourly wage, and the total operation time, including time spent fitting, welding, and handling the part. The cost of materials includes the cost of the base and filler material, and the cost of shielding gases. Finally, energy cost depends on arc time and welding power demand.
For manual welding methods, labor costs generally make up the vast majority of the total cost. As a result, many cost-saving measures are focused on minimizing operation time. To do this, welding procedures with high deposition rates can be selected, and weld parameters can be fine-tuned to increase welding speed. Mechanization and automation are often implemented to reduce labor costs, but this frequently increases the cost of equipment and creates additional setup time. Material costs tend to increase when special properties are necessary, and energy costs normally do not amount to more than several percent of the total welding cost.
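A minimal sketch of this cost breakdown follows. All rates and times are invented placeholders; only the structure (equipment plus labor plus material plus energy, with labor driven by total operation time) follows the text:

```python
# Rough welding-job cost estimator mirroring the cost components in the text.
def weld_cost(labor_rate_per_h: float, operation_time_h: float,
              filler_kg: float, filler_price_per_kg: float,
              gas_cost: float, arc_time_h: float, power_kw: float,
              energy_price_per_kwh: float, equipment_share: float) -> float:
    labor = labor_rate_per_h * operation_time_h          # fitting + welding + handling
    material = filler_kg * filler_price_per_kg + gas_cost  # base/filler metal + gas
    energy = arc_time_h * power_kw * energy_price_per_kwh  # arc time x power demand
    return labor + material + energy + equipment_share

total = weld_cost(labor_rate_per_h=40.0, operation_time_h=2.0,
                  filler_kg=1.5, filler_price_per_kg=6.0,
                  gas_cost=4.0, arc_time_h=0.75, power_kw=8.0,
                  energy_price_per_kwh=0.15, equipment_share=5.0)
print(f"Estimated cost of the job: ${total:.2f}")
```

With these placeholder numbers, labor accounts for roughly 80% of the total and energy for under 1%, consistent with the proportions the text describes for manual methods.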
In recent years, in order to minimize labor costs in high production manufacturing, industrial welding has become increasingly automated, most notably with the use of robots in resistance spot welding (especially in the automotive industry) and in arc welding. In robot welding, mechanized devices both hold the material and perform the weld; at first, spot welding was its most common application, but robotic arc welding has been increasing in popularity as technology advances. Other key areas of research and development include the welding of dissimilar materials (such as steel and aluminum, for example) and new welding processes, such as friction stir, magnetic pulse, conductive heat seam, and laser-hybrid welding. Furthermore, progress is desired in making more specialized methods like laser beam welding practical for more applications, such as in the aerospace and automotive industries. Researchers also hope to better understand the often unpredictable properties of welds, especially microstructure, residual stresses, and a weld's tendency to crack or deform.
The trend of accelerating the speed at which welds are performed in the steel erection industry comes at a risk to the integrity of the connection. Without proper fusion to the base materials provided by sufficient arc time on the weld, a project inspector cannot ensure the effective diameter of the puddle weld; therefore, he or she cannot guarantee the published load capacities unless the actual installation is witnessed. This method of puddle welding is common in the United States and Canada for attaching steel sheets to bar joist and structural steel members. Regional agencies are responsible for ensuring the proper installation of puddle welding on steel construction sites. Currently there is no standard or weld procedure which can ensure the published holding capacity of any unwitnessed connection, but this is under review by the American Welding Society.
Glass and plastic welding
Glasses and certain types of plastics are commonly welded materials. Unlike metals, which have a specific melting point, glasses and plastics have a melting range, called the glass transition. When heating the solid material past the glass-transition temperature (Tg) into this range, it will generally become softer and more pliable. When it crosses through the range, above the glass-melting temperature (Tm), it will become a very thick, sluggish, viscous liquid, slowly decreasing in viscosity as temperature increases. Typically, this viscous liquid will have very little surface tension compared to metals, becoming a sticky, taffy to honey-like consistency, so welding can usually take place by simply pressing two melted surfaces together. The two liquids will generally mix and join at first contact. Upon cooling through the glass transition, the welded piece will solidify as one solid piece of amorphous material.
Glass welding
Glass welding is a common practice during glassblowing. It is used very often in the construction of lighting, neon signs, flashtubes, scientific equipment, and the manufacture of dishes and other glassware. It is also used during glass casting for joining the halves of glass molds, making items such as bottles and jars. Welding glass is accomplished by heating the glass through the glass transition, turning it into a thick, formable, liquid mass. Heating is usually done with a gas or oxy-gas torch, or a furnace, because the temperatures for melting glass are often quite high. This temperature may vary, depending on the type of glass. For example, lead glass becomes a weldable liquid at a comparatively low temperature and can be welded with a simple propane torch. Quartz glass (fused silica), on the other hand, must be heated to a far higher temperature, but quickly loses its viscosity and formability if overheated, so an oxyhydrogen torch must be used. Sometimes a tube may be attached to the glass, allowing it to be blown into various shapes, such as bulbs, bottles, or tubes. When two pieces of liquid glass are pressed together, they will usually weld very readily. Welding a handle onto a pitcher can usually be done with relative ease. However, when welding a tube to another tube, a combination of blowing and suction, and pressing and pulling is used to ensure a good seal, to shape the glass, and to keep the surface tension from closing the tube in on itself. Sometimes a filler rod may be used, but usually not.
Because glass is very brittle in its solid state, it is often prone to cracking upon heating and cooling, especially if the heating and cooling are uneven. This is because the brittleness of glass does not allow for uneven thermal expansion. Glass that has been welded will usually need to be cooled very slowly and evenly through the glass transition, in a process called annealing, to relieve any internal stresses created by a temperature gradient.
There are many types of glass, and it is most common to weld using the same types. Different glasses often have different rates of thermal expansion, which can cause them to crack upon cooling when they contract differently. For instance, quartz has very low thermal expansion, while soda-lime glass has very high thermal expansion. When welding different glasses to each other, it is usually important to closely match their coefficients of thermal expansion, to ensure that cracking does not occur. Also, some glasses will simply not mix with others, so welding between certain types may not be possible.
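The mismatch argument can be quantified roughly: on cooling by ΔT, two joined glasses develop a mismatch strain of about Δα × ΔT. The coefficients below are typical handbook figures used only for illustration, not values from the text:

```python
# Thermal-expansion mismatch strain between two joined glasses on cooling.
CTE = {  # assumed linear coefficients of thermal expansion, 1/K
    "fused quartz": 0.55e-6,
    "borosilicate": 3.3e-6,
    "soda-lime": 9.0e-6,
}

def mismatch_strain(material_a: str, material_b: str, delta_t_k: float) -> float:
    """Approximate mismatch strain: |alpha_a - alpha_b| * delta_T."""
    return abs(CTE[material_a] - CTE[material_b]) * delta_t_k

# Cooling a welded joint by 500 K:
for pair in [("fused quartz", "borosilicate"), ("soda-lime", "fused quartz")]:
    strain = mismatch_strain(*pair, 500.0)
    print(f"{pair[0]} / {pair[1]}: mismatch strain ~ {strain:.2e}")
```

The soda-lime/quartz pairing shows a mismatch strain several times larger than the quartz/borosilicate pairing, which is why closely matched coefficients are needed to avoid cracking.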
Glass can also be welded to metals and ceramics, although with metals the process usually involves adhesion to the surface of the metal rather than a commingling of the two materials. However, certain glasses will typically bond only to certain metals. For example, lead glass bonds readily to copper or molybdenum, but not to aluminum. Tungsten electrodes are often used in lighting but will not bond to quartz glass, so the tungsten is often wetted with molten borosilicate glass, which bonds to both tungsten and quartz. However, care must be taken to ensure that all materials have similar coefficients of thermal expansion to prevent cracking both when the object cools and when it is heated again. Special alloys are often used for this purpose, ensuring that the coefficients of expansion match, and sometimes thin, metallic coatings may be applied to a metal to create a good bond with the glass.
Plastic welding
Plastics are generally divided into two categories: "thermosets" and "thermoplastics." A thermoset is a plastic in which a chemical reaction sets the molecular bonds after first forming the plastic, and then the bonds cannot be broken again without degrading the plastic. Thermosets cannot be melted; therefore, once a thermoset has set, it is impossible to weld it. Examples of thermosets include epoxies, silicone, vulcanized rubber, polyester, and polyurethane.
Thermoplastics, by contrast, form long molecular chains, which are often coiled or intertwined, forming an amorphous structure without any long-range, crystalline order. Some thermoplastics may be fully amorphous, while others have a partially crystalline/partially amorphous structure. Both amorphous and semicrystalline thermoplastics have a glass transition, above which welding can occur, but semicrystallines also have a specific melting point which is above the glass transition. Above this melting point, the viscous liquid will become a free-flowing liquid (see rheological weldability for thermoplastics). Examples of thermoplastics include polyethylene, polypropylene, polystyrene, polyvinylchloride (PVC), and fluoroplastics like Teflon and Spectralon.
Welding thermoplastic with heat is very similar to welding glass. The plastic first must be cleaned and then heated through the glass transition, turning the weld-interface into a thick, viscous liquid. Two heated interfaces can then be pressed together, allowing the molecules to mix through intermolecular diffusion, joining them as one. Then the plastic is cooled through the glass transition, allowing the weld to solidify. A filler rod may often be used for certain types of joints. The main differences between welding glass and plastic are the types of heating methods, the much lower melting temperatures, and the fact that plastics will burn if overheated. Many different methods have been devised for heating plastic to a weldable temperature without burning it. Ovens or electric heating tools can be used to melt the plastic. Ultrasonic, laser, or friction heating are other methods. Resistive metals, which respond to induction heating, may be implanted in the plastic. Some plastics will begin to burn at temperatures lower than their glass transition, so welding can be performed by blowing a heated, inert gas onto the plastic, melting it while, at the same time, shielding it from oxygen.
Solvent welding
Many thermoplastics can also be welded using chemical solvents. When placed in contact with the plastic, the solvent begins to soften it, dissolving the surface into a thick, liquid solution. When two softened surfaces are pressed together, the molecules in the solution mix, joining them as one. Because the solvent can permeate the plastic, it evaporates out through the surface of the plastic, causing the weld to drop out of solution and solidify. A common use for solvent welding is for joining PVC (polyvinyl chloride) or ABS (acrylonitrile butadiene styrene) pipes during plumbing, or for welding styrene and polystyrene plastics in the construction of models. Solvent welding is especially effective on plastics like PVC which burn at or below their glass transition, but may be ineffective on plastics like Teflon or polyethylene that are resistant to chemical decomposition.
See also
Aluminium joining
Fasteners
List of welding codes
List of welding processes
Welding Procedure Specification
Welder certification
Welded sculpture
Welding table
References
Sources
External links
Pipes Joint Welding
Welding Process
Welding Ventilation at CCOHS
IARC Group 1 carcinogens
Articles containing video clips
Joining
Mechanical engineering | Welding | [
"Physics",
"Engineering"
] | 9,833 | [
"Welding",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
44,928 | https://en.wikipedia.org/wiki/Mahogany | Mahogany is a straight-grained, reddish-brown timber of three tropical hardwood species of the genus Swietenia, indigenous to the Americas and part of the pantropical chinaberry family, Meliaceae. Mahogany is used commercially for a wide variety of goods, due to its coloring and durable nature. It is naturally found within the Americas, but has also been imported to plantations across Asia and Oceania. The mahogany trade may have begun as early as the 16th century and flourished in the 17th and 18th centuries. In certain countries, mahogany is considered an invasive species.
Mahogany is wood from any of three tree species: Honduran or big-leaf mahogany (Swietenia macrophylla), West Indian or Cuban mahogany (Swietenia mahagoni), and Swietenia humilis. Honduran mahogany is the most widespread and the only genuine mahogany species commercially grown today. Mahogany is a valuable lumber used for paneling, furniture, boats, musical instruments, and other items. The United States is the leading importer of mahogany, while Peru is the largest exporter. Mahogany is the national tree of the Dominican Republic and Belize.
Swietenia species have been introduced in various countries outside the Americas since the 1800s, with many plantings becoming naturalized forests. All species of Swietenia are now listed by CITES and protected due to concerns over illegal logging and mismanagement. Mahogany species can crossbreed when they grow in proximity, and the hybrid between S. mahagoni and S. macrophylla is widely planted for timber.
The history of the American mahogany trade dates back to the 17th century when the wood was first noticed by Europeans during the Spanish colonization of the Americas. Mahogany became more popular in the 18th century when the British Parliament removed import duties on timber from British possessions, leading to increased exports to Europe and North America. Throughout the 18th and 19th centuries, mahogany from various regions was imported into Europe and North America, with Britain being the largest consumer.
By the late 19th century, African mahogany began to dominate the market, and by the early 20th century, the supply of American mahogany became scarcer. In response to concerns about the sustainability of mahogany, several species have been placed on CITES Appendices to regulate the trade.
Mahogany is known for its straight, fine grain and durability, making it a popular choice for fine furniture, boat construction, and musical instruments. However, the over-harvesting of mahogany and environmental concerns have led to a decrease in its use.
Etymology
The etymology of mahogany is uncertain and a subject of debate. The term first appeared in John Ogilby's "America" (1671), referring to a "curious and rich wood" from Jamaica. Initial mentions of the mahogany tree (as opposed to wood) date to 1731, with its first detailed description in 1743, attributed to Swietenia mahagoni by Kemp Malone in 1940. Malone suggested that mahogany originated as a generic term for 'wood' in a native Bahamian language. F. Bruce Lamb disagreed, pointing out that the Arawak language's word for wood is caoba. Lamb identified a West African origin for the word in the Yoruba oganwo, collectively m'oganwo (meaning one which is the tallest or most high) used for the Khaya genus of trees, whose timber is today called African mahogany. Lamb proposes that Yoruba and Igbo people brought to Jamaica as slaves identified the local trees of the Swietenia genus as m'oganwo, which developed into the Portuguese term mogano, which first appeared in print as the name of a river in 1661, before finally developing into the English mahogany in Jamaica between 1655 and 1670.
Malone criticized this etymology, arguing that the proposed metamorphosis from the Yoruba m'oganwo to the Portuguese mogano to the English mahogany was a logical and linguistic stretch relying on the conversion of the singular oganwo to the collective m'oganwo, which Malone finds unlikely considering the tree's generally solitary nature. He also argues that Lamb's earliest identified use of the Portuguese mog(a)no, which is for a river that Lamb asserts must have been so named for the mahogany oganwo trees on its banks, could just as well have been named for any tall tree, since oganwo only means tall. Lamb, in turn, criticized Malone's methodology and perceived bias, and maintained that there is no evidence for mahogany as a generic word.
Description
Mahogany is a commercially important lumber prized for its beauty, durability, and color, and used for paneling and to make furniture, boats, musical instruments and other items. The leading importer of mahogany is the United States, followed by Britain; while the largest exporter today is Peru, which surpassed Brazil after that country banned mahogany exports in 2001. It is estimated that some 80 or 90 percent of Peruvian mahogany exported to the United States is illegally harvested, with the economic cost of illegal logging in Peru placed conservatively at $40–70 million USD annually. It was estimated that in 2000, some 57,000 mahogany trees were harvested to supply the U.S. furniture trade alone.
Mahogany is the national tree of the Dominican Republic and Belize. A mahogany tree with two woodcutters bearing an axe and a paddle also appears on the Belizean national coat of arms, under the national motto, Sub Umbra Floreo, Latin for "under the shade I flourish."
The specific density of mahogany is about 0.55. Typical densities by variety: African mahogany, 500–850 kg/m3; Cuban mahogany, 660 kg/m3; Honduras mahogany, 650 kg/m3; Spanish mahogany, 850 kg/m3.
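As a trivial worked example using the densities quoted above (taken as given in the text), the following sketch computes the mass of a hypothetical 2.5 m × 0.15 m × 0.025 m board:

```python
# Mass of a mahogany board from the quoted densities: mass = density * volume.
DENSITY_KG_M3 = {
    "Cuban": 660,
    "Honduras": 650,
    "Spanish": 850,
}

volume_m3 = 2.5 * 0.15 * 0.025  # board dimensions in metres -> 0.009375 m^3
for kind, rho in DENSITY_KG_M3.items():
    print(f"{kind} mahogany board: {rho * volume_m3:.1f} kg")
```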
Species
The three species are:
Honduran or big-leaf mahogany (Swietenia macrophylla), with a range from Mexico to southern Amazonia in Brazil, the most widespread species of mahogany and the only genuine mahogany species commercially grown today. Illegal logging of S. macrophylla, and its highly destructive environmental effects, led to the species' placement in 2003 on Appendix II of Convention on International Trade in Endangered Species (CITES), the first time that a high-volume, high-value tree was listed on Appendix II.
West Indian or Cuban mahogany (Swietenia mahagoni), native to southern Florida and the Caribbean, formerly dominant in the mahogany trade, but not in widespread commercial use since World War II.
Swietenia humilis, a small and often twisted mahogany tree limited to seasonally dry forests in Pacific Central America that is of limited commercial utility. Some botanists believe that S. humilis is a mere variant of S. macrophylla.
Other species
While only the three Swietenia species are classified officially as "genuine mahogany", the Federal Trade Commission allows certain species of trees other than Swietenia to be sold as "mahoganies" in the U.S. timber trade, owing to the long-standing usage of these terms. However, such a name must include another descriptor, and these woods are not allowed to be sold under the name "mahogany" alone.
Two names are allowed. The first is "African mahogany" for the five species of the genus Khaya (which also belong to the mahogany family), namely: K. anthotheca, K. grandifoliola, K. ivorensis, K. madagascariensis, and K. senegalensis. All of them are native to Africa and Madagascar. The second is the name "Philippine mahogany" for seven species (all native to the Philippines) in the genera Shorea and Parashorea (which are unrelated dipterocarps, more commonly known as "lauan" or "meranti"), namely: S. polysperma, S. negrosensis, S. contorta, S. ovata, S. almon, S. palosapis, and P. malaanonan. The timber from both "African mahoganies" and "Philippine mahoganies", as defined by the FTC, is very close in terms of appearance and properties to true mahoganies. No other species are allowed to be sold in the United States under the name "mahogany", aside from the three Swietenia species and the aforementioned exceptions.
Within the mahogany family, other closely related members of other genera which also resemble mahoganies in terms of appearance and properties are sometimes known as "mahoganies", though they cannot be sold as such in the US timber trade. These include some members of the genus Toona, namely: "Philippine mahogany" (Toona calantas, different from the above usage); "Indian mahogany" (Toona ciliata); "Chinese mahogany" (Toona sinensis); and "Indonesian mahogany" (Toona sureni). However, members of this genus are more usually known as "toons" or "red cedars." They have similar properties to true mahoganies but differ in appearance. Other species in the same family sometimes known as "mahoganies" include "Indian mahogany" (Chukrasia velutina, different from T. ciliata); "sipo mahogany" (Entandrophragma utile); "sapele mahogany" (Entandrophragma cylindricum); "royal mahogany" (Carapa guianensis); "white mahogany" (Turraeanthus africanus); "New Zealand mahogany" (Dysoxylum spectabile); "pink mahogany" (Guarea spp.); and "demerara mahogany" (Carapa guianensis).
Multiple other unrelated species are also known as "mahogany". These include the aforementioned Shorea species, which do come close to true mahogany in terms of appearance and properties, but also species which do not resemble true mahogany at all and have very different wood properties, such as "Santos mahogany" (Myroxylon balsamum), "mountain mahogany" (Cercocarpus spp.), and "swamp mahogany" (Eucalyptus robusta).
Distribution
The natural distribution of these species within the Americas is geographically distinct. S. mahagoni grows on the West Indian islands as far north as the Bahamas, the Florida Keys and parts of Florida; S. humilis grows in the dry regions of the Pacific coast of Central America from south-western Mexico to Costa Rica; S. macrophylla grows in Central America from Yucatan southwards and into South America, extending as far as Peru, Bolivia and extreme western Brazil. In the 20th century various botanists attempted to further define S. macrophylla in South America as a new species, such as S. candollei Pittier and S. tessmannii Harms., but many authorities consider these spurious. According to Record and Hess, all of the mahogany of continental North and South America can be considered as one botanical species, Swietenia macrophylla King.
Both major species of Swietenia were introduced in several countries outside of the Americas during the 1800s and early 1900s using seeds from South America and the Caribbean. Many of these plantings became naturalized forests over time.
India had both S. macrophylla and S. mahagoni introduced in 1865 using seeds from the West Indies. Both eventually became naturalized forests. Bangladesh had Honduran S. macrophylla introduced in 1872 and, as with India, it became naturalized in some areas. S. mahagoni and S. macrophylla were introduced in Indonesia in 1870 using seeds from India. S. macrophylla was included in plantation forests planted in Indonesia from the 1920s to the 1940s. The Philippines had S. macrophylla introduced in 1907 and in 1913, as well as S. mahagoni in 1911, 1913, 1914, 1920 and 1922. Planting resumed in the late 1980s. It was planted with many other exotic tree species for the purpose of reforestation. S. macrophylla was planted in Sri Lanka in 1897 but was left unmanaged until the 1950s, when reforestation efforts initiated by the Sri Lankan government led to plantations being consciously developed. In the early 1900s S. mahagoni was planted on the islands of O'ahu and Maui in Hawaii but was neglected and became naturalized forests. Additionally, S. macrophylla was planted in 1922 on O'ahu and is now naturalized. Fiji had S. macrophylla introduced originally in 1911 as an ornamental species using seeds from Honduras and Belize. Fiji has become a major producer of mahogany in the 21st century due to a robust plantation program spanning over 50 years. Harvesting began in 2003.
History
The name mahogany was initially associated only with those islands in the West Indies under British control (French colonists used the term acajou, while in the Spanish territories it was called caoba). The origin of the name is uncertain, but it could be a corruption of 'm'oganwo', the name used by the Yoruba and Ibo people of West Africa to describe trees of the genus Khaya, which is closely related to Swietenia. When transported to Jamaica as slaves, they gave the same name to the similar trees they saw there. Though this interpretation has been disputed, no one has suggested a more plausible origin. The indigenous Arawak name for the tree is not known. In 1671 the word mahogany appeared in print for the first time, in John Ogilby's America. Among botanists and naturalists, however, the tree was considered a type of cedar, and in 1759 was classified by Carl Linnaeus (1707–1778) as Cedrela mahagoni. The following year it was assigned to a new genus by Nicholas Joseph Jacquin (1727–1817), and named Swietenia mahagoni.
Until the 19th century all of the mahogany was regarded as one species, although varying in quality and character according to soil and climate. In 1836 the German botanist Joseph Gerhard Zuccarini (1797–1848) identified a second species while working on specimens collected on the Pacific coast of Mexico, and named it Swietenia humilis. In 1886 a third species, Swietenia macrophylla, was named by Sir George King (1840–1909) after studying specimens of Honduras mahogany planted in the Botanic Gardens in Calcutta, India.
Today, all species of Swietenia grown in their native locations are listed by CITES, and are therefore protected. After S. mahagoni and S. macrophylla were added to the CITES appendices in 1992 and 1995 respectively, international conservation programs began in earnest, aided by a 1993 World Bank report entitled "Tropical Hardwood Marketing Strategies for Southeast Asia". Efforts to repopulate mahogany largely failed in its native locations due to attacks from the shoot borer Hypsipyla grandella, and similarly failed in Africa due to attacks by the equivalent Hypsipyla robusta. After so many years of mismanagement and illegal logging, Swietenia also suffered from genetic loss, which weakened the seed stock. Additionally, erosion in its native locations meant seeds could often no longer even be planted. However, both species grew well in Asia and the Pacific due to the absence of these shoot borers and of other limitations. Plantation management progressed throughout the 1990s and 2000s in Asia and the South Pacific. Global supply of genuine mahogany has been increasing from these plantations, notably in Fiji and the Philippines. For Swietenia macrophylla, the trees in these plantations are still relatively young compared to the trees being harvested from old-growth forests in South America. Thus, the illegal trade of bigleaf mahogany continues apace.
History of American mahogany trade
In the 17th century, the buccaneer Alexandre Exquemelin recorded the use of mahogany or Caoba (Cedrela being the Spanish name) on Hispaniola for making canoes: "The Indians make these canoes without the use of any iron instruments, by only burning the trees at the bottom near the root, and afterwards governing the fire with such industry that nothing is burnt more than what they would have..."
The wood first came to the notice of Europeans with the beginning of Spanish colonisation in the Americas. A cross in the Cathedral at Santo Domingo, bearing the date 1514, is said to be mahogany, and Philip II of Spain apparently used the wood for the interior joinery of the palace El Escorial, begun in 1584. However, caoba, as the Taino Natives called the wood, was principally reserved for shipbuilding, and it was declared a royal monopoly at Havana in 1622. Hence very little of the mahogany growing in Spanish controlled territory found its way to Europe.
After the French established a colony in Saint Domingue (now Haiti), some mahogany from that island probably found its way to France, where joiners in the port cities of Saint-Malo, Nantes, La Rochelle and Bordeaux used the wood to a limited extent from about 1700. On the English-controlled islands, especially Jamaica and the Bahamas, mahogany was abundant but not exported in any quantity before 1700.
18th century
While the trade in mahogany from the Spanish and French territories in America remained moribund for most of the 18th century, this was not true for those islands under British control. Parliament passed the Naval Stores Act 1721 (8 Geo. 1. c. 12), which removed all import duties from timber imported into Britain from British possessions in the Americas. This immediately stimulated the trade in West Indian timbers including, most importantly, mahogany. Importations of mahogany into England (excluding those to Scotland, which were recorded separately) reached 525 tons per annum by 1740, 3,688 tons by 1750, and more than 30,000 tons in 1788, the peak year of the 18th century trade.
At the same time, the Naval Stores Act 1721 had the effect of substantially increasing exports of mahogany from the West Indies to the British colonies in North America. Although initially regarded as a joinery wood, mahogany rapidly became the timber of choice for makers of high quality furniture in both the British Isles and the 13 colonies of North America.
Until the 1760s over 90 per cent of the mahogany imported into Britain came from Jamaica. Some of this was re-exported to continental Europe, but most was used by British furniture makers. Quantities of Jamaican mahogany also went to the North American colonies, but most of the wood used in American furniture came from the Bahamas. This was sometimes called Providence wood, after the main port of the islands, but more often madera or madeira, which was the West Indian name for mahogany.
In addition to Jamaica and the Bahamas, all the British-controlled islands exported some mahogany at various times, but the quantities were not large. The most significant third source was Black River and adjacent areas on the Mosquito Coast (now in the Republic of Honduras), from where quantities of mahogany were shipped from the 1740s onwards. This mahogany was known as 'Rattan mahogany', after the island of Ruatan (Roatán), which was the main offshore entrepôt for the British settlers in the area.
At the end of the Seven Years' War (1756–63), the mahogany trade began to change significantly. During the occupation of Havana by British forces between August 1762 and July 1763, quantities of Cuban or Havanna mahogany were sent to Britain, and after the city was restored to Spain in 1763, Cuba continued to export small quantities, mostly to ports on the north coast of Jamaica, from where it went to Britain. However, this mahogany was regarded as inferior to the Jamaican variety, and the trade remained sporadic until the 19th century.
Another variety new to the market was Hispaniola mahogany, also called 'Spanish' and 'St Domingo' mahogany. This was the result of the Free Port Act 1766 (6 Geo. 3. c. 49), which opened Kingston and other designated Jamaican ports to foreign vessels for the first time. The object was primarily to encourage importations of cotton from French plantations in Saint Domingue, but quantities of high quality mahogany were also shipped. These were then forwarded to Britain, where they entered the market in the late 1760s.
In terms of quantity, the most significant new addition to the mahogany trade was Honduras mahogany, also called 'baywood', after the Bay of Honduras. British settlers had been active in southern Yucatan since the beginning of the 18th century, despite the opposition of the Spanish, who claimed sovereignty over all of Central America.
Their main occupation was cutting logwood, a dyewood in high demand in Europe. The center of their activity and the primary point of export was Belize. Under Article XVII of the Treaty of Paris (1763), British cutters were for the first time given the right to cut logwood in Yucatan unmolested, within agreed limits. Such was the enthusiasm of the cutters that within a few years the European market was glutted, and the price of logwood collapsed.
However, the price of mahogany was still high after the war, and so the cutters turned to cutting mahogany. The first Honduras mahogany arrived in Kingston, Jamaica, in November 1763, and the first shipments arrived in Britain the following year.
By the 1790s most of the viable stocks of mahogany in Jamaica had been cut, and the market was divided between two principal sources or types of mahogany. Honduras mahogany was relatively cheap, plentiful, but rarely of the best quality. Hispaniola (also called Spanish or Santo Domingo) mahogany was the wood of choice for high quality work.
Data are lacking, but it is likely that the newly independent United States now received a good proportion of its mahogany from Cuba. In the last quarter of the 18th century France began to use mahogany more widely, as it had ample supplies of high quality wood from Saint Domingue. The rest of Europe, where the wood was increasingly fashionable, obtained most of its wood from Britain.
Recent history
The French Revolution of 1789 and the wars that followed radically changed the mahogany trade, primarily due to the progressive collapse of the French and Spanish colonial empires, which allowed British traders into areas previously closed to them. Saint Domingue became the independent republic of Haiti, and from 1808, Spanish-controlled Santo Domingo and Cuba were both open to British vessels for the first time.
From the 1820s mahogany from all these areas was imported into Europe and North America, with the majority going to Britain. In Central America British loggers moved northwest towards Mexico and south into Guatemala. Other areas of Central America as far south as Panama also began to be exploited.
The most important new development was the beginning of large scale logging in Mexico from the 1860s. Most mahogany was cut in the province of Tabasco and exported from a number of ports on the Gulf of Campeche, from Vera Cruz eastwards to Campeche and Sisal. By the end of the 19th century there was scarcely any part of Central America within reach of the coast untouched by logging, and activity also extended into Colombia, Venezuela, Peru and Brazil.
Trade in American mahogany probably reached a peak in the last quarter of the 19th century. Figures are not available for all countries, but Britain alone imported more than 80,000 tons in 1875. This figure was not matched again. From the 1880s, African mahogany (Khaya spp.), a related genus, began to be exported in increasing quantities from West Africa, and by the early 20th century it dominated the market.
In 1907 the total of mahogany from all sources imported into Europe was 159,830 tons, of which 121,743 tons were from West Africa. By this time mahogany from Cuba, Haiti and other West Indian sources had become increasingly difficult to obtain in commercial sizes, and by the late 20th century Central American and even South American mahogany was heading in a similar direction. In 1975 S. humilis was placed on CITES Appendix II (a list of species that would be in danger of extinction without strict regulation) followed by S. mahagoni in 1992. The most abundant species, S. macrophylla, was placed on Appendix III in 1995 and moved to Appendix II in 2003.
Uses
Mahogany has a straight, fine, and even grain, and is relatively free of voids and pockets. Its reddish-brown color darkens over time, and displays a reddish sheen when polished. It has excellent workability, and is very durable. Historically, the tree's girth allowed for wide boards from traditional mahogany species. These properties make it a favorable wood for crafting cabinets and furniture.
Much of the first-quality furniture made in the American colonies from the mid-18th century, when the wood first became available to American craftsmen, was made of mahogany. Mahogany is still widely used for fine furniture; however, the rarity of Cuban mahogany, the over-harvesting of Honduras and Brazilian mahogany, and protests by indigenous peoples and environmental organizations from the 1980s into the 2000s have diminished its use. Recent mahogany production from Mexico and Fiji has a lighter color and density than South American production from the early 20th century.
Mahogany also resists wood rot, making it attractive in boat construction and outdoor decking. It is a tonewood, often used for musical instruments, particularly the backs, sides and necks of acoustic guitars, electric guitar bodies, and drum shells because of its ability to produce a very deep, warm tone compared to other commonly used woods, such as maple, alder, ash (Fraxinus) or spruce. Guitars featuring mahogany in their construction include many acoustic guitars from Martin, Taylor, and Gibson, and Gibson electric guitars such as the Les Paul and SG. In the 1930s Gibson used the wood to make banjo necks as well.
Mahogany as an invasive species
In the Philippines, environmentalists are calling for an end to the planting of mahogany because of its negative impact on the environment and wildlife, including possible soil acidification and a lack of net benefit to wildlife.
References
External links
Antiques
Forestry in Asia
Forestry in Central America
Furniture
History of forestry
Plant common names
Swietenia
Wood | Mahogany | [
"Biology"
] | 5,317 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
44,932 | https://en.wikipedia.org/wiki/Carlo%20Rubbia | Carlo Rubbia (born 31 March 1934) is an Italian particle physicist and inventor who shared the Nobel Prize in Physics in 1984 with Simon van der Meer for work leading to the discovery of the W and Z particles at CERN.
Early life and education
Rubbia was born in 1934 in Gorizia, an Italian town on the border with Slovenia. His family moved to Venice then Udine because of wartime disruption. His father was an electrical engineer and encouraged him to study the same, though he stated his wish to study physics. In the local countryside, he collected and experimented with abandoned military communications equipment. After taking an entrance exam for the Scuola Normale Superiore di Pisa to study physics, he failed to get into the required top ten (coming eleventh), so began an engineering course in Milan in 1953. Soon after, a Pisa student dropped out, presenting Rubbia with his opportunity. He gained a degree and doctorate in a relatively short time with a thesis on cosmic ray experimentation; his adviser was Marcello Conversi. At Pisa, he met his future wife, Marisa, also a physics student.
Career and research
Columbia University
Following his degree, he went to the United States to do postdoctoral research, where he spent about one and a half years at Columbia University performing experiments on the decay and the nuclear capture of muons. This was the first of a long series of experiments that Rubbia has performed in the field of weak interactions and which culminated in the Nobel Prize-winning work at CERN.
CERN
He moved back to Europe for a placement at the University of Rome before joining the newly founded CERN in 1960, where he worked on experiments on the structure of weak interactions. CERN had just commissioned a new type of accelerator, the Intersecting Storage Rings, using counter-rotating beams of protons colliding against each other. Rubbia and his collaborators conducted experiments there, again studying the weak force. The main results in this field were the observation of the structure in the elastic scattering process and the first observation of the charmed baryons. These experiments were crucial in order to perfect the techniques needed later for the discovery of more exotic particles in a different type of particle collider.
In 1976, he suggested adapting CERN's Super Proton Synchrotron (SPS) to collide protons and antiprotons in the same ring – the Proton-Antiproton Collider. Using Simon van der Meer's technology of stochastic cooling, the Antiproton Accumulator was also built. The collider started running in 1981 and, in early 1983, an international team of more than 100 physicists headed by Rubbia and known as the UA1 Collaboration detected the intermediate vector bosons, the W and Z bosons, which had become a cornerstone of modern theories of elementary particle physics long before this direct observation. They carry the weak force that causes radioactive decay in the atomic nucleus and controls the combustion of the Sun, just as photons, massless particles of light, carry the electromagnetic force which causes most physical and biochemical reactions. The weak force also plays a fundamental role in the nucleosynthesis of the elements, as studied in theories of stellar evolution. These particles have a mass almost 100 times greater than the proton. In 1984 Carlo Rubbia and Simon van der Meer were awarded the Nobel Prize "for their decisive contributions to the large project, which led to the discovery of the field particles W and Z, communicators of weak interaction".
To achieve energies high enough to create these particles, Rubbia, together with David Cline and Peter McIntyre, proposed a radically new particle accelerator design. They proposed to use a beam of protons and a beam of antiprotons, their antimatter twins, counter-rotating in the vacuum pipe of the accelerator and colliding head-on. The idea of creating particles by colliding beams of more "ordinary" particles was not new: electron-positron and proton-proton colliders were already in use. However, by the late 1970s / early 1980s those could not approach the needed energies in the centre of mass to explore the W/Z region predicted by theory. At those energies, protons colliding with anti-protons were the best candidates, but how to obtain sufficiently intense (and well-collimated) beams of anti-protons, which are normally produced by impinging a beam of protons on a fixed target? Van der Meer had in the meantime developed the concept of "stochastic cooling", in which particles, like anti-protons, could be kept in a circular array, and their beam divergence reduced progressively by sending signals to bending magnets downstream. Since decreasing the divergence of the beam meant reducing transverse velocity or energy components, the suggestive term "stochastic cooling" was given to the scheme. The scheme could then be used to "cool" (to collimate) the anti-protons, which could thus be forced into a well-focused beam, suitable for acceleration to high energies, without losing too many anti-protons to collisions with the structure. "Stochastic" expresses the fact that the signals to be taken resemble random noise, which was called "Schottky noise" when first encountered in vacuum tubes. Without van der Meer's technique, UA1 would never have had the sufficiently high-intensity anti-protons it needed. Without Rubbia's realisation of its usefulness, stochastic cooling would have been the subject of a few publications and nothing else. Simon van der Meer developed and tested the technology in the proton Intersecting Storage Rings at CERN, but it is most effective on rather low-intensity beams, such as the anti-protons which were prepared for use in the SPS when configured as a collider.
Harvard University
In 1970, Rubbia was appointed Higgins Professor of Physics at Harvard University, where he spent one semester per year for 18 years, while continuing his research activities at CERN. In 1989, he was appointed Director-General of the CERN Laboratory. During his mandate, in 1993, "CERN agreed to allow anybody to use the Web protocol and code free of charge … without any royalty or other constraint".
Gran Sasso Laboratory
Rubbia has also been one of the leaders in a collaboration effort deep in the Gran Sasso Laboratory, designed to detect any sign of decay of the proton. The experiment seeks evidence that would disprove the conventional belief that matter is stable. The most widely accepted version of the unified field theories predicts that protons do not last forever, but gradually decay into energy after an average lifetime of at least 10³² years. The same experiment, known as ICARUS and based on a new technique of electronic detection of ionizing events in ultra-pure liquid argon, is aiming at the direct detection of the neutrinos emitted from the Sun, a first rudimentary neutrino telescope to explore neutrino signals of cosmic nature.
Rubbia further proposed the concept of an energy amplifier, a novel and safe way of producing nuclear energy exploiting present-day accelerator technologies, which is actively being studied worldwide in order to incinerate high-activity waste from nuclear reactors, and produce energy from natural thorium and depleted uranium. In 2013 he proposed building a large number of small-scale thorium power plants.
Other organisational affiliations
Rubbia was principal Scientific Adviser of CIEMAT (Spain), a member of the high-level Advisory Group on global warming set up by EU's President Barroso in 2007 and of the board of trustees at the IMDEA Energy Institute. In 2009–2010, he was Special Adviser for Energy to the Secretary General of ECLAC, the United Nations Economic Commission for Latin America, based in Santiago (Chile). In June 2010, Rubbia was appointed Scientific Director of the Institute for Advanced Sustainability Studies in Potsdam (Germany). He is a member of the Italy-USA Foundation. During his term as President of ENEA (1999–2005) he promoted a novel method for concentrating solar power at high temperatures for energy production, known as the Archimede Project, which is being developed by industry for commercial use.
Personal life
Marisa and Carlo Rubbia have two children.
Rubbia is also openly a believer, as shown by his book The Temptation to Believe, published by Rizzoli. He is also a member of the Pontifical Academy of Sciences.
Awards and honours
In December 1984, Rubbia was appointed Cavaliere di Gran Croce of the Order of Merit of the Italian Republic (OMRI).
On 30 August 2013, Rubbia was appointed to the Senate of Italy as a Senator for Life by President Giorgio Napolitano.
On 8 January 2016, he was awarded the International Scientific and Technological Cooperation Award by the People's Republic of China.
Asteroid 8398 Rubbia is named in his honour. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1984.
In 1984, Rubbia received the Golden Plate Award of the American Academy of Achievement.
References
External links
including the Nobel Lecture, 8 December 1984 Experimental Observation of the Intermediate Vector Bosons W+, W− and Z0
1934 births
People associated with CERN
Columbia University people
Commanders of the Order of Merit of the Republic of Poland
Experimental physicists
Foreign members of the Royal Society
Foreign members of the Russian Academy of Sciences
Harvard University faculty
Italian Nobel laureates
20th-century Italian physicists
Italian life senators
Living people
Members of the Pontifical Academy of Sciences
Foreign associates of the National Academy of Sciences
Nobel laureates in Physics
Particle physicists
People from Gorizia
People from the Province of Gorizia
University of Pisa alumni
Scuola Normale Superiore di Pisa alumni | Carlo Rubbia | [
"Physics"
] | 1,990 | [
"Particle physicists",
"Particle physics"
] |
317,419 | https://en.wikipedia.org/wiki/Game%20complexity | Combinatorial game theory measures game complexity in several ways:
State-space complexity (the number of legal game positions from the initial position)
Game tree size (total number of possible games)
Decision complexity (number of leaf nodes in the smallest decision tree for initial position)
Game-tree complexity (number of leaf nodes in the smallest full-width decision tree for initial position)
Computational complexity (asymptotic difficulty of a game as it grows arbitrarily large)
These measures involve understanding the game positions, possible outcomes, and computational complexity of various game scenarios.
Measures of game complexity
State-space complexity
The state-space complexity of a game is the number of legal game positions reachable from the initial position of the game.
When this is too hard to calculate, an upper bound can often be computed by also counting (some) illegal positions (positions that can never arise in the course of a game).
Game tree size
The game tree size is the total number of possible games that can be played. This is the number of leaf nodes in the game tree rooted at the game's initial position.
The game tree is typically vastly larger than the state-space because the same positions can occur in many games by making moves in a different order (for example, in a tic-tac-toe game with two X and one O on the board, this position could have been reached in two different ways depending on where the first X was placed). An upper bound for the size of the game tree can sometimes be computed by simplifying the game in a way that only increases the size of the game tree (for example, by allowing illegal moves) until it becomes tractable.
For games where the number of moves is not limited (for example by the size of the board, or by a rule about repetition of position) the game tree is generally infinite.
Decision trees
A decision tree is a subtree of the game tree, with each position labelled "player A wins", "player B wins", or "draw" if that position can be proved to have that value (assuming best play by both sides) by examining only other positions in the graph. Terminal positions can be labelled directly—with player A to move, a position can be labelled "player A wins" if any successor position is a win for A; "player B wins" if all successor positions are wins for B; or "draw" if all successor positions are either drawn or wins for B. (With player B to move, corresponding positions are marked similarly.)
The following two methods of measuring game complexity use decision trees:
Decision complexity
Decision complexity of a game is the number of leaf nodes in the smallest decision tree that establishes the value of the initial position.
Game-tree complexity
Game-tree complexity of a game is the number of leaf nodes in the smallest full-width decision tree that establishes the value of the initial position. A full-width tree includes all nodes at each depth. This is an estimate of the number of positions one would have to evaluate in a minimax search to determine the value of the initial position.
It is hard even to estimate the game-tree complexity, but for some games an approximation can be given by GTC ≈ b^d, where b is the game's average branching factor and d is the number of plies in an average game.
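As a rough illustration of this estimate (using the often-quoted chess figures b ≈ 35 and d ≈ 80, which are assumptions for the example rather than values given here), the computation is easiest in logarithms:

```python
import math

# Game-tree complexity estimate GTC ≈ b**d, using often-quoted chess
# figures (assumed for illustration): average branching factor b ≈ 35
# and an average game length of d ≈ 80 plies.
b, d = 35, 80

# Work in log10 to avoid printing a 124-digit integer.
log10_gtc = d * math.log10(b)
print(f"GTC ≈ 10^{log10_gtc:.1f}")  # GTC ≈ 10^123.5, i.e. about 10^123
```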
Computational complexity
The computational complexity of a game describes the asymptotic difficulty of a game as it grows arbitrarily large, expressed in big O notation or as membership in a complexity class. This concept doesn't apply to particular games, but rather to games that have been generalized so they can be made arbitrarily large, typically by playing them on an n-by-n board. (From the point of view of computational complexity, a game on a fixed size of board is a finite problem that can be solved in O(1), for example by a look-up table from positions to the best move in each position.)
The asymptotic complexity is defined by the most efficient algorithm for solving the game (in terms of whatever computational resource one is considering). The most common complexity measure, computation time, is always lower-bounded by the logarithm of the asymptotic state-space complexity, since a solution algorithm must work for every possible state of the game. It will be upper-bounded by the complexity of any particular algorithm that works for the family of games. Similar remarks apply to the second-most commonly used complexity measure, the amount of space or computer memory used by the computation. It is not obvious that there is any lower bound on the space complexity for a typical game, because the algorithm need not store game states; however many games of interest are known to be PSPACE-hard, and it follows that their space complexity will be lower-bounded by the logarithm of the asymptotic state-space complexity as well (technically the bound is only a polynomial in this quantity; but it is usually known to be linear).
The depth-first minimax strategy will use computation time proportional to the game's tree-complexity (since it must explore the whole tree), and an amount of memory polynomial in the logarithm of the tree-complexity (since the algorithm must always store one node of the tree at each possible move-depth, and the number of nodes at the highest move-depth is precisely the tree-complexity).
Backward induction will use both memory and time proportional to the state-space complexity, as it must compute and record the correct move for each possible position.
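The contrast between the two strategies can be made concrete with a short sketch over a generic game interface. The parameter names (moves, value) are illustrative assumptions, not part of this article, and states are assumed hashable for the memoised variant:

```python
def minimax(state, moves, value, maximizing=True):
    """Depth-first minimax: time grows with the game-tree size, while
    memory holds only one root-to-leaf path of the tree at a time."""
    successors = list(moves(state))
    if not successors:                       # terminal position
        return value(state)
    results = (minimax(s, moves, value, not maximizing) for s in successors)
    return max(results) if maximizing else min(results)


def solve(state, moves, value, maximizing=True, table=None):
    """Backward-induction-style solver: memoising every position makes
    both time and memory proportional to the state-space size instead."""
    if table is None:
        table = {}
    key = (state, maximizing)
    if key not in table:
        successors = list(moves(state))
        if not successors:
            table[key] = value(state)
        else:
            results = [solve(s, moves, value, not maximizing, table)
                       for s in successors]
            table[key] = max(results) if maximizing else min(results)
    return table[key]
```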
Example: tic-tac-toe (noughts and crosses)
For tic-tac-toe, a simple upper bound for the size of the state space is 3⁹ = 19,683. (There are three states for each of the nine cells.) This count includes many illegal positions, such as a position with five crosses and no noughts, or a position in which both players have a row of three. A more careful count, removing these illegal positions, gives 5,478. And when rotations and reflections of positions are considered identical, there are only 765 essentially different positions.
To bound the game tree, there are 9 possible initial moves, 8 possible responses, and so on, so that there are at most 9! or 362,880 total games. However, games may take less than 9 moves to resolve, and an exact enumeration gives 255,168 possible games. When rotations and reflections of positions are considered the same, there are only 26,830 possible games.
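Both counts are small enough to verify by brute force. The following sketch (with helper names of our own choosing, not from the article) reproduces the 5,478 reachable positions and the 255,168 complete games:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def successors(board):
    """Yield all positions reachable in one move (none once the game is over)."""
    if winner(board) or ' ' not in board:
        return
    player = 'X' if board.count('X') == board.count('O') else 'O'
    for i in range(9):
        if board[i] == ' ':
            yield board[:i] + player + board[i + 1:]

def count_positions():
    """Count positions reachable from the empty board (the state space)."""
    seen, frontier = {' ' * 9}, [' ' * 9]
    while frontier:
        for nxt in successors(frontier.pop()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

def count_games(board=' ' * 9):
    """Count complete games, i.e. leaves of the game tree (move sequences)."""
    children = list(successors(board))
    if not children:
        return 1
    return sum(count_games(child) for child in children)

print(count_positions())  # 5478
print(count_games())      # 255168
```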
The computational complexity of tic-tac-toe depends on how it is generalized. A natural generalization is to m,n,k-games: played on an m by n board with winner being the first player to get k in a row. This game can be solved in DSPACE(mn) by searching the entire game tree. This places it in the important complexity class PSPACE; with more work, it can be shown to be PSPACE-complete.
Complexities of some well-known games
Due to the large size of game complexities, this table gives the ceiling of their logarithm to base 10. (In other words, the number of digits). All of the following numbers should be considered with caution: seemingly minor changes to the rules of a game can change the numbers (which are often rough estimates anyway) by tremendous factors, which might easily be much greater than the numbers shown.
Notes
References
See also
Go and mathematics
Solved game
Solving chess
Shannon number
list of NP-complete games and puzzles
list of PSPACE-complete games and puzzles
External links
David Eppstein's Computational Complexity of Games and Puzzles
Combinatorial game theory
Game theory | Game complexity | [
"Mathematics"
] | 1,586 | [
"Recreational mathematics",
"Game theory",
"Combinatorial game theory",
"Combinatorics"
] |
317,458 | https://en.wikipedia.org/wiki/Object%20request%20broker | In distributed computing, an object request broker (ORB) is a concept of a middleware, which allows program calls to be made from one computer to another via a computer network, providing location transparency through remote procedure calls. ORBs promote interoperability of distributed object systems, enabling such systems to be built by piecing together objects from different vendors, while different parts communicate with each other via the ORB. Common Object Request Broker Architecture standardizes the way ORB may be implemented.
Overview
ORBs handle the transformation of in-process data structures to and from the raw byte sequence that is transmitted over the network. This is called marshalling or serialization. In addition to marshalling data, ORBs often expose many more features, such as distributed transactions, directory services or real-time scheduling. Some ORBs, such as CORBA-compliant systems, use an interface description language to describe the data that is to be transmitted on remote calls.
In object-oriented languages (e.g. Java), an ORB actually provides a framework which enables remote objects to be used over the network as if they were local and part of the same process. On the client side, so-called stub objects are created and invoked, serving as the only part visible and used inside the client application. After the stub's methods are invoked, the client-side ORB performs the marshalling of invocation data and forwards the request to the server-side ORB. On the server side, the ORB locates the targeted object, executes the requested operation, and returns the results. Once the results are available, the client's ORB performs the demarshalling and passes the results back into the invoked stub, making them available to the client application. The whole process is transparent, so remote objects appear as if they were local.
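The round trip described above can be illustrated with a deliberately tiny sketch. Everything here (the class names, the JSON wire format, the in-process "network") is an assumption made for illustration; a real ORB adds an interface description language, actual networking, and object location on top:

```python
import json

class Calculator:
    """The 'remote' servant object living on the server side."""
    def add(self, a, b):
        return a + b

class ServerORB:
    """Demarshals requests, locates the target, invokes it, marshals results."""
    def __init__(self):
        self.objects = {"calc": Calculator()}
    def handle(self, raw_request):
        request = json.loads(raw_request)                    # demarshal
        obj = self.objects[request["object"]]                # locate target object
        result = getattr(obj, request["method"])(*request["args"])
        return json.dumps({"result": result})                # marshal result

class Stub:
    """Client-side proxy: looks like a local object but marshals every call."""
    def __init__(self, orb, name):
        self._orb, self._name = orb, name
    def __getattr__(self, method):
        def remote_call(*args):
            raw = json.dumps({"object": self._name,
                              "method": method, "args": list(args)})
            reply = self._orb.handle(raw)                    # network hop elided
            return json.loads(reply)["result"]
        return remote_call

calc = Stub(ServerORB(), "calc")
print(calc.add(2, 3))  # 5, computed via marshalled request/reply messages
```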
Implementations
CORBA - Common Object Request Broker Architecture.
ICE - the Internet Communications Engine
.NET Remoting - object remoting library within Microsoft's .NET Framework
Windows Communication Foundation (WCF)
ORBexpress - Real-time and Enterprise ORBs by Objective Interface Systems
Orbix - An Enterprise-level CORBA ORB originally developed by IONA Technologies
DCOM - the Distributed Component Object Model from Microsoft
RMI - the Remote Method Invocation Protocol from Sun Microsystems
ORBit - an open-source CORBA ORB used as middleware for GNOME
The ACE ORB - a CORBA implementation from the Distributed Object Computing (DOC) Group
See also
References
Middleware | Object request broker | [
"Technology",
"Engineering"
] | 510 | [
"Software engineering",
"Middleware",
"IT infrastructure"
] |
317,471 | https://en.wikipedia.org/wiki/Silastic | Silastic (a portmanteau of 'silicone' and 'plastic') is a trademark registered in 1948 by Dow Corning Corporation for flexible, inert silicone elastomer.
Composition
The Silastic trademark refers to silicone elastomers, silicone tubing and some cross-linked polydimethylsiloxane materials manufactured by Dow Corning, the owner of the global trademark.
Applications
Silastic-brand silicone elastomers have a range of applications. In the automotive industry they are used for making gaskets, spark plug boots, hoses and other components that must operate over a broad temperature range and resist oil and coolants. The elastomers are widely used in the architectural, aerospace, electronic, food and beverage, textile, and transportation industries for molding, coating, adhesion and sealing. Due to their inert nature, medical-grade Silastic-brand silicone elastomers are important materials in numerous medical and pharmaceutical devices including catheters, pacemaker leads, tubing, wound dressings, silos for abdominal wall defects, and nasolacrimal duct obstruction. These medical-grade elastomers are also used in the manufacture of hyper-realistic masks, where they closely mimic the texture of human skin and follow facial movements and expressions.
References
External links
Dow Corning Corporation
Silicones
Elastomers
Brand name materials
Dow Chemical Company | Silastic | [
"Chemistry"
] | 294 | [
"Synthetic materials",
"Elastomers"
] |
317,508 | https://en.wikipedia.org/wiki/Volition%20%28psychology%29 | Volition, also known as will or conation, is the cognitive process by which an individual decides on and commits to a particular course of action. It is defined as purposive striving and is one of the primary human psychological functions. Others include affect (feeling or emotion), motivation (goals and expectations), and cognition (thinking). Volitional processes can be applied consciously or they can be automatized as habits over time.
Most modern conceptions of volition address it as a process of conscious action control which becomes automatized (e.g. see Heckhausen and Kuhl; Gollwitzer; Boekaerts and Corno).
Overview
Many researchers treat volition and willpower as scientific and colloquial terms (respectively) for the same process. When a person makes up their mind to do a thing, that state is termed 'immanent volition'. When we put forth any particular act of choice, that act is called an emanant, executive, or imperative volition. When an immanent or settled state of choice controls or governs a series of actions, that state is termed predominant volition. Subordinate volitions are particular acts of choice which carry into effect the object sought for by the governing or predominant volition.
According to Gary Kielhofner's "Model of Human Occupation", volition is one of the three sub-systems that act on human behavior. Within this model, volition refers to a person's values, interests and self-efficacy (personal causation) regarding personal performance. Kurt Lewin argues that motivation and volition are one and the same, in contrast to the nineteenth-century psychologist Narziß Ach. Ach proposed that there is a certain threshold of desire that distinguishes motivation from volition: when desire lies below this threshold, it is motivation, and when it crosses over, it becomes volition. In the book A Bias for Action, Heike Bruch and Sumantra Ghoshal also differentiate volition (willpower) from motivation. Using this model, they propose assessing individuals' differing levels of commitment to tasks by measuring it on a scale of intent from motivation (an emotion) to volition (a decision). Discussions of impulse control (e.g., Kuhl and Heckhausen) and education (e.g., Corno) also make the motivation-volition distinction. Corno's model ties volition to the processes of self-regulated learning.
See also
Appetition
Avolition
Executive functions
Free will
Motivational salience
Neuroscience of free will
Self-agency
Prohairesis
True Will
References
Bibliography
External links
Weakness of Will (Stanford Encyclopedia of Philosophy)
Modeling Willpower (Darcey Riley)
Narziß Kaspar Ach (1871-1946) (University of Konstanz)
http://www.sci.brooklyn.cuny.edu/~schopra/Persons/Frankfurt.pdf (Harry Frankfurt's Analysis of the Volition among other things)
Cognition
Motivation | Volition (psychology) | [
"Biology"
] | 632 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
317,549 | https://en.wikipedia.org/wiki/March%20%28territory%29 | In medieval Europe, a march or mark was, in broad terms, any kind of borderland, as opposed to a state's "heartland". More specifically, a march was a border between realms or a neutral buffer zone under joint control of two states in which different laws might apply. In both of these senses, marches served a political purpose, such as providing warning of military incursions or regulating cross-border trade.
Marches gave rise to the titles marquess (masculine) or marchioness (feminine).
Etymology
The word "march" derives ultimately from a Proto-Indo-European root *mereg-, meaning "edge, boundary". The root *mereg- produced Latin margo ("margin"), Old Irish mruig ("borderland"), Welsh bro ("region, border, valley") and Persian and Armenian marz ("borderland"). The Proto-Germanic *marko gave rise to the Old English word mearc and Frankish marka, as well as Old Norse mǫrk meaning "borderland, forest", and derived from merki "boundary, sign", denoting a borderland between two centres of power.
In Old English, "mark" meant "boundary" or "sign of a boundary", and the meaning only later evolved to encompass "sign" in general, "impression" and "trace".
The Anglo-Saxon kingdom of Mercia took its name from West Saxon mearc "marches", which in this instance referred explicitly to the territory's position on the Anglo-Saxon frontier with the Romano-British to the west.
During the Frankish Carolingian dynasty, usage of the word spread throughout Europe.
The name "Denmark" preserves the Old Norse cognates merki ("boundary") mǫrk ("wood", "forest") up to the present. Following the Anschluss, the Nazi German government revived the old name "Ostmark" for Austria.
Historical examples of marches and marks
Frankish Empire and successor states
Marca Hispanica
In the early ninth century, Charlemagne issued his new kind of land grant, the aprisio, which redisposed land belonging to the Imperial fisc in deserted areas, and included special rights and immunities that resulted in a range of independence of action. Historians interpret the aprisio both as the basis of feudalism and in economic and military terms as a mechanism to entice settlers to a depopulated border region. Such self-sufficient landholders would aid the counts in providing armed men in defense of the Frankish frontier. Aprisio grants (the first ones were in Septimania) emanated directly from the Carolingian king, and they reinforced central loyalties, to counterbalance the local power exercised by powerful marcher counts.
After some early setbacks, Emperor Louis the Pious ventured beyond the province of Septimania and eventually took Barcelona from the Moorish emir in 801. Thus he established a foothold in the borderland between the Franks and the Moors. The Carolingian "Hispanic Marches" (Marca Hispanica) became a buffer zone ruled by a number of feudal lords, among them the count of Barcelona. It had its own outlying territories, each ruled by a lesser miles with armed retainers, who theoretically owed allegiance through a count to the emperor or, with less fealty, to his Carolingian and Ottonian successors. Such territory had a catlá ("castellan" or lord of the castle) in an area largely defined by a day's ride, and the region became known, like Castile at a later date, as "Catalunya". Counties in the Pyrenees that appeared in the 9th century, in addition to the County of Barcelona, included Cerdanya, Girona and Urgell.
Communications were arduous, and the power centre was far away. Primitive feudal entities developed, self-sufficient and agrarian, each ruled by a small hereditary military elite. The sequence in the County of Barcelona exhibits a pattern that emerges similarly in marches everywhere: the count is appointed by the king (from 802), the appointment settles on the heirs of a strong count (Sunifred) and the appointment becomes a formality, until the position is declared hereditary (897) and then the count declares independence (by Borrell II in 985). At each stage the de facto situation precedes the de jure assertion, which merely regularizes an existing fact of life. This is feudalism in the larger landscape.
Some counts aspired to the characteristically Frankish (Germanic) title "Margrave of the Hispanic March", a "margrave" being a graf ("count") of the march.
The early history of Andorra provides a fairly typical career of another such march county, the only modern survivor in the Pyrenees of the Hispanic Marches.
Marches set up by Charlemagne
The Danish March (sometimes regarded as just a series of forts rather than a march) between the Eider and Schlei rivers, against the Danes;
the Saxon or Nordalbingen march between the Eider and Elbe rivers in modern Holstein, against the Obotrites;
the Thuringian or Sorbian march on the Saale river, against the Sorbs dwelling behind the limes sorabicus;
the March of Lusatia, March of Meissen, March of Merseburg and March of Zeitz;
the Franconian march in modern Upper Franconia, against the Czechs;
the Avar march between Enns river and Wienerwald (the later Eastern March that became the Margraviate of Austria);
the Pannonian march east of Vienna (divided into Upper and Lower);
the Carantanian march;
Steiermark (Styria), established under Charlemagne from a part of Carantania (Carinthia), erected as a border territory against the Avars and Slavs;
the March of Friuli;
the Marca Hispanica against the Muslims of Al-Andalus
France
The province of France called Marche, sometimes Marche Limousine, was originally a small border district between the Duchy of Aquitaine and the domains of the Frankish kings in central France, partly of Limousin and partly of Poitou.
Its area was increased during the 13th century and remained the same until the French Revolution. Marche was bounded on the north by Berry, on the east by Bourbonnais and Auvergne, on the south by Limousin itself, and on the west by Poitou. It embraced the greater part of the modern département of Creuse, a considerable part of the northern Haute-Vienne, and a fragment of Indre, up to Saint-Benoît-du-Sault. Its area was about ; its capital was Charroux and later Guéret, and among its other principal towns were Dorat, Bellac and Confolens.
Marche first appeared as a separate fief about the middle of the 10th century when William III, duke of Aquitaine, gave it to one of his vassals named Boso, who took the title of count. In the 12th century it passed to the family of Lusignan, sometimes also counts of Angoulême, until the death of the childless Count Hugh in 1303, when it was seized by King Philip IV. In 1316 it was made an appanage for his youngest son Charles and a few years later (1327) it passed into the hands of the family of Bourbon.
The family of Armagnac held it from 1435 to 1477, when it reverted to the Bourbons, and in 1527 it was seized by King Francis I and became part of the domains of the French crown. It was divided into Haute-Marche (i.e. "Upper Marche") and Basse-Marche (i.e. "Lower Marche"), the estates of the former being in existence until the 17th century. From 1470 until the Revolution the province was under the jurisdiction of the parlement of Paris.
Several communes of France are named similarly:
Marches, Drôme in the Drôme département
La Marche in the Nièvre département
Germany and Austria
The Germanic tribes that Romans called Marcomanni, who battled the Romans in the 1st and 2nd centuries, were simply the "men of the borderlands".
Marches were territorial organisations created as borderlands in the Carolingian Empire and had a long career as purely conventional designations under the Holy Roman Empire. In modern German, "Mark" denotes a piece of land that historically was a borderland, as in the following names:
Later medieval marches
Nordmark, the "Northern March", the Ottonian empire's territorial organisation on the conquered areas of the Wends. In 1134, in the wake of a German crusade against the Wends, the German magnate Albert the Bear was granted the Northern March by the Holy Roman Emperor Lothar II.
the March of the Billungs on the Baltic coast, stretching approximately from Stettin (Szczecin) to Schleswig;
Marca Geronis (march of Gero), a precursor of the Saxon Eastern March, later divided into smaller marches (the Northern March, which later was reestablished as Margraviate of Brandenburg; the Lusatian March and the Meißen March in modern Free state of Saxony; the March of Zeitz; the Merseburg March; the Milzener March around Bautzen);
March of Austria (marcha Orientalis, the "Eastern March" or "Bavarian Eastern March" () in modern lower Austria);
the Hungarian March
the Carantania march or March of Styria (Steiermark);
the Drau March (Marburg and Pettau);
the Sann March (Cilli);
the Krain or March of Carniola, also Windic march and White Carniola (White March), in modern Slovenia.
three marches were created in the Low Countries: Antwerp, Valenciennes, Ename.
Other
The Margraviate of Brandenburg, its ruler designated Markgraf (margrave, literally "march-count"). It was further divided into regions also designated "Mark":
Altmark ("Old March"), the western region of the former margraviate, between Hamburg and Magdeburg.
Mittelmark ("Central March"), the area surrounding Berlin. Today, this region makes up for the bulk of the German federal state of Brandenburg, and thus in modern usage is referred to as Mark Brandenburg.
Neumark ("New March") since the 1250s was Brandenburg's eastern extremity between Pomerania and Greater Poland. Since 1945, the area is a part of Poland.
Uckermark, the Brandenburg–Pomeranian borderland. The name is still in use for the region as well as for a Brandenburgian district.
Mark, a medieval territory that is recalled in the Märkischer Kreis district (formed in 1975) of today's North Rhine-Westphalia. The northern portion (north of the Lippe River) is still called Hohe Mark ("Higher Mark"). The former "Lower Mark" (between Ruhr and Lippe rivers) is the present Ruhr area and is no longer called "Mark". The title, in the form "Count of the Mark", survived the territory as a subsidiary title of the Dukes of Saxe-Coburg and Gotha
Ostmark ("Eastern March") is a modern rendition of the term marchia orientalis used in Carolingian documents referring to the area of Lower Austria that was later a markgraftum (margraviate or "county of the mark"). Ostmark has been variously used to denote Austria, the Saxon Eastern March, or, as Ostmarkenverein, the territories Prussia gained in the partitions of Poland.
Habsburg Empire
Italy
From the Carolingian period onwards the name marca begins to appear in Italy, first the Marca Fermana for the mountainous part of Picenum, the Marca Camerinese for the district farther north, including a part of Umbria, and the Marca Anconitana for the former Pentapolis (Ancona). In 1080, the marca Anconitana was given in investiture to Robert Guiscard by Pope Gregory VII, to whom the Countess Matilda ceded the marches of Camerino and Fermo.
In 1105, the Emperor Henry IV invested Werner with the whole territory of the three marches, under the name of the March of Ancona. It was afterwards once more recovered by the Church and governed by papal legates as part of the Papal States. The Marche became part of the Kingdom of Italy in 1860. After Italian unification in the 1860s, Austria-Hungary still controlled territory that Italian nationalists claimed as part of Italy. One of these territories was the Austrian Littoral, which Italian nationalists began to call the Julian March because of its position and as an act of defiance against the hated Austro-Hungarian empire.
Marches were repeated on a miniature level, fringing many of the small territorial states of pre-Risorgimento Italy with a ring of smaller dependencies on their borders, which represent territorial marches on a small scale. A map of the Duchy of Mantua in 1702 (Braudel 1984, fig. 26) reveals the independent, though socially and economically dependent, arc of small territories from the principality of Castiglione in the northwest across the south to the duchy of Mirandola southeast of Mantua: the lords of Bozolo, Sabioneta, Dosolo, Guastalla, and the count of Novellare.
Hungary
In medieval Hungary the system of gyepű and gyepűelve, effective until the mid-13th century, can be considered as marches even though in its organisation it shows major differences from Western European feudal marches. For one thing, the gyepű was not controlled by a Marquess.
The Gyepű was a strip of land that was specially fortified or made impassable, while gyepűelve was the mostly uninhabited or sparsely inhabited land beyond it. The gyepűelve is much more comparable to modern buffer zones than traditional European marches.
Portions of the gyepű were usually guarded by tribes who had joined the Hungarian nation and were granted special rights for their services at the borders, such as the Székelys, Pechenegs and Cumans. A ban on settlement north of Niš by the Byzantine Empire in the twelfth century helped to establish uninhabited marchland between the empire's territory and Hungary.
The Hungarian gyepű originates from the Turkish yapi meaning palisade. During the 17th and 18th centuries these borderlands were called Markland in the area of Transylvania that bordered with the Kingdom of Hungary and was controlled by a Count or Countess.
Iberia
In addition to the Carolingian Marca Hispanica, Iberia was home to several marches set up by the native states. The future kingdoms of Portugal and Castile were founded as marcher counties intended to protect the Kingdom of León from the Cordoban Emirate, to the south and east respectively.
Likewise, Córdoba set up its own marches as a buffer to the Christian states to the north. The Upper March (al-Tagr al-A'la), centered on Zaragoza, faced the eastern Marca Hispanica and the western Pyrenees, and included the Distant or Farthest March (al-Tagr al-Aqsa). The Middle March (al-Tagr al-Awsat), centred on Toledo and later Medinaceli, faced the western Pyrenees and Asturias. The Lower March (al-Tagr al-Adna), centred on Mérida and later Badajoz, facing León and Portugal. These too would give rise to Kingdoms, the Taifas of Zaragoza, Toledo, and Badajoz.
Scandinavia
Denmark means "the march of the Danes".
In Norse, "mark" meant "borderlands" and "forest"; in present-day Norwegian and Swedish it has acquired the meaning "ground", while in Danish it has come to mean "field" or "grassland".
Markland was the Norse name of an area in North America discovered by Norwegian Vikings.
The forests surrounding Norwegian cities are called "Marka" – the marches. For example, the forests surrounding Oslo are called Nordmarka, Østmarka and Vestmarka – i.e. the northern, eastern and western marches.
In Norway, there are – or have been – the counties:
Finnmark, "the borderlands of the Sámi" (known to the Norse as Finns)
Hedmark, "the borderlands of heath"
Telemark, "the borderlands of the Þela tribe"
In Finland, mark occurs in the following placenames in Satakunta:
Noormarkku (Swedish: Norrmark), a former municipality of Finland
Pomarkku (Swedish: Påmark), a municipality of Finland
(Swedish: Södermark), a village in Noormarkku, Finland
In Värmland in Sweden, Nordmark Hundred was the frontier area near the border to Norway. Almost all of it is now a part of Årjäng Municipality. In the Middle Ages the area was called Nordmarkerna and was a part of Dalsland and not of Värmland.
British Isles
The name of the Anglo-Saxon kingdom in the midlands of England was Mercia. The name "Mercia" comes from the Old English for "boundary folk", and the traditional interpretation was that the kingdom originated along the frontier between the Welsh and the Anglo-Saxon invaders, although P. Hunter Blair has argued an alternative interpretation that they emerged along the frontier between the Kingdom of Northumbria and the inhabitants of the River Trent valley.
Latinizing the Anglo-Saxon term mearc, the border areas between England and Wales were collectively known as the Welsh Marches (marchia Wallia), while the native Welsh lands to the west were considered Wales Proper (pura Wallia). The Norman lords in the Welsh Marches were to become the new Marcher Lords.
The title Earl of March is at least two distinct feudal titles: one in the northern marches, as an alternative title for the Earl of Dunbar (c. 1290 in the Peerage of Scotland); and one, that was held by the family of Mortimer (1328 in the Peerage of England), in the west Welsh Marches.
The Scottish Marches is a term for the border regions on both sides of the border between England and Scotland. From the Norman conquest of England until the reign of King James VI of Scotland, who also became King James I of England, border clashes were common and the monarchs of both countries relied on Marcher Lords to defend the frontier areas known as the Marches. They were hand-picked for their suitability for the challenges the responsibilities presented.
Patrick Dunbar, 8th Earl of Dunbar, a descendant of the Earls of Northumbria was recognized in the end of the 13th century to use the name March as his earldom in Scotland, otherwise known as Dunbar, Lothian, and Northumbrian border.
Roger Mortimer, 1st Earl of March, Regent of England together with Isabella of France during the minority of her son, Edward III, was a usurper who had deposed, and allegedly arranged the murder of, King Edward II. He was created an earl in September 1328 at the height of his de facto rule. His wife was Joan de Geneville, 2nd Baroness Geneville, whose mother, Jeanne of Lusignan was one of the heiresses of the French Counts of La Marche and Angouleme.
His family, Mortimer Lords of Wigmore, had been border lords and leaders of defenders of Welsh marches for centuries. He selected March as the name of his earldom for several reasons: Welsh marches referred to several counties, whereby the title signified superiority compared to usual single county-based earldoms. Mercia was an ancient kingdom. His wife's ancestors had been Counts of La Marche and Angouleme in France.
In Ireland, a hybrid system of marches existed which was condemned as barbaric at the time. The Irish marches constituted the territory between English and Irish-dominated lands, which appeared as soon as the English did and were called by King John to be fortified. By the 14th century, they had become defined as the land between The Pale and the rest of Ireland. Local Anglo-Irish and Gaelic chieftains who acted as powerful spokespeople were recognised by the Crown and given a degree of independence. Uniquely, the keepers of the marches were given the power to terminate indictments. In later years, wardens of the Irish marches took Irish tenants.
Titles
Marquis, marchese and margrave (Markgraf) all had their origins in feudal lords who held trusted positions in the borderlands. The English title was a foreign importation from France, tested out tentatively in 1385 by Richard II, but not naturalized until the mid-15th century, and now more often spelled "marquess".
Related concepts
Abbasid Caliphate
Armenia
The specific subdivisions of Armenia are each called marz, մարզ (pl. "marzer, մարզեր"), a loanword from Persian.
The Balkans
See Krajina and Military Frontier.
Byzantine Empire
China
The Chinese concept of March is called Fan (藩), referring to feudatory domains and petty kingdoms on the borderlands of the empire.
In their initial development during the later Zhou dynasty, the commanderies (jùn, 郡) functioned as marches, ranking below the dukes' and kings' original fiefs and below the more secure and populous counties (xiàn). As the commanderies formed the front lines between the major states, however, their military strength and strategic importance were typically much greater than the counties'. Over time, however, the commanderies were eventually developed into regular provinces and then discontinued entirely during the Tang dynasty reforms.
Japan
The European concept of marches applies just as well to the fief of Matsumae clan on the southern tip of Hokkaidō which was at Japan's northern border with the Ainu people of Hokkaidō, known as Ezo at the time. In 1590, this land was granted to the Kakizaki clan, who took the name Matsumae from then on. The Lords of Matsumae, as they are sometimes called, were exempt from owing rice to the shōgun in tribute, and from the sankin-kōtai system established by Tokugawa Ieyasu, under which most lords (daimyōs) had to spend half the year at court (in the capital of Edo).
By guarding the border, rather than conquering or colonizing Ezo, the Matsumae, in essence, made the majority of the island an Ainu reservation. This also meant that Ezo, and the Kurile Islands beyond, were left essentially open to Russian colonization. However, the Russians never did colonize Ezo, and the marches were officially eliminated during the Meiji Restoration in the late 19th century, when the Ainu came under Japanese control, and Ezo was renamed Hokkaidō, and annexed to Japan.
Persia (Sassanid Empire)
Roman Empire
Ukraine
Ukraine, from the Moscow-centric Russian viewpoint, functioned as a "borderland" or "march" and arguably could have gained its current name, which is derived from a Slavic term that can take on the same meaning (see above for similar in Slovenia, etc.), ultimately from this function. This, though, was merely a continuation of a semi-formal arrangement with the Poles, before escalating feuds, political infighting in Poland, and religious differences (mainly Eastern Orthodox vs. Roman Catholic) saw a loose coalition of Ukrainian lords and independent landowners collectively known as the Cossacks shift to ally with the Russian Empire.
The Cossacks became a significant part of Russian military history in their role as military border/buffer-troops in the Wild Fields of Ukraine. The Tatar slave raids in East Slavic lands brought considerable devastation and depopulation to this area prior to the rise of the Zaporozhian Cossacks. As settlement advanced and the borders moved, the Tsars transferred or formed Cossack units to perform similar functions on other borderlands/marches further south and east in (for example) the Kuban and in Siberia, forming (for example) the Black Sea Cossack Host, the Kuban Cossack Host and the Amur Cossack Host.
See also
American Frontier
Buffer state
List of marches
Marz (territorial entity)
No man's land
Notes
References
Attribution:
Endnote:
A. Thomas, Les États provinciaux de la France centrale (1879).
Defunct types of subdivision in the United Kingdom
Borders
Geopolitics | March (territory) | [
"Physics"
] | 5,079 | [
"Spacetime",
"Borders",
"Space"
] |
317,552 | https://en.wikipedia.org/wiki/Cover%20%28topology%29 | In mathematics, and more particularly in set theory, a cover (or covering) of a set $X$ is a family of subsets of $X$ whose union is all of $X$. More formally, if $C = \{U_\alpha : \alpha \in A\}$ is an indexed family of subsets of $X$ (indexed by the set $A$), then $C$ is a cover of $X$ if
$$\bigcup_{\alpha \in A} U_\alpha = X.$$
Thus the collection $C$ is a cover of $X$ if each element of $X$ belongs to at least one of the subsets $U_\alpha$.
Definition
Covers are commonly used in the context of topology. If the set $X$ is a topological space, then a cover $C$ of $X$ is a collection of subsets $\{U_\alpha\}_{\alpha \in A}$ of $X$ whose union is the whole space $X$. In this case $C$ is said to cover $X$, or the sets $U_\alpha$ are said to cover $X$.
If $Y$ is a (topological) subspace of $X$, then a cover of $Y$ is a collection of subsets $C = \{U_\alpha\}_{\alpha \in A}$ of $X$ whose union contains $Y$. That is, $C$ is a cover of $Y$ if
$$Y \subseteq \bigcup_{\alpha \in A} U_\alpha.$$
Here, $Y$ may be covered with either sets in $Y$ itself or sets in the parent space $X$.
A cover of $X$ is said to be locally finite if every point of $X$ has a neighborhood that intersects only finitely many sets in the cover. Formally, $C = \{U_\alpha : \alpha \in A\}$ is locally finite if, for any $x \in X$, there exists some neighborhood $N(x)$ of $x$ such that the set
$$\{\alpha \in A : U_\alpha \cap N(x) \neq \varnothing\}$$
is finite. A cover of $X$ is said to be point finite if every point of $X$ is contained in only finitely many sets in the cover. A cover is point finite if it is locally finite, though the converse is not necessarily true.
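For example (a standard illustration, added here), the cover of the real line by open unit-radius intervals centred at the integers is locally finite:

```latex
\[
  \mathcal{C} \;=\; \{\, (n-1,\; n+1) \;:\; n \in \mathbb{Z} \,\},
  \qquad
  \bigcup_{n \in \mathbb{Z}} (n-1,\; n+1) \;=\; \mathbb{R} .
\]
% Every x lies in the neighborhood (x - 1/2, x + 1/2), which meets at
% most three members of \mathcal{C}, so the cover is locally finite
% (and in particular point finite).
```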
Subcover
Let $C$ be a cover of a topological space $X$. A subcover of $C$ is a subset of $C$ that still covers $X$. The cover $C$ is said to be an open cover if each of its members is an open set (that is, each $U_\alpha$ is contained in $T$, where $T$ is the topology on $X$).
A simple way to get a subcover is to omit the sets contained in another set in the cover. Consider specifically open covers. Let $\mathcal{B}$ be a topological basis of $X$ and $\mathcal{O}$ be an open cover of $X$. First take $\mathcal{A} = \{A \in \mathcal{B} : A \subseteq U \text{ for some } U \in \mathcal{O}\}$. Then $\mathcal{A}$ is a refinement of $\mathcal{O}$. Next, for each $A \in \mathcal{A}$ one may select a $U_A \in \mathcal{O}$ containing $A$ (requiring the axiom of choice). Then $\{U_A : A \in \mathcal{A}\}$ is a subcover of $\mathcal{O}$. Hence the cardinality of a subcover of an open cover can be as small as that of any topological basis. Hence, in particular, second countability implies that a space is Lindelöf.
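The pruning step just mentioned (omitting any set contained in another member of the cover) has a direct finite analogue. Here is a minimal Python sketch, under the assumption that the cover is a finite list of finite sets; the names and data are illustrative only.

```python
def prune_to_subcover(cover):
    """Drop every set that is contained in some other set of the cover.

    What remains is still a cover: containment chains are strictly
    increasing and therefore end at a maximal set, which survives.
    """
    cover = [set(s) for s in cover]
    kept = []
    for i, s in enumerate(cover):
        # Drop s if some other set strictly contains it, or if an equal
        # set occurs earlier (so one copy of any duplicate is kept).
        dominated = any(
            (s < t) or (s == t and j < i)
            for j, t in enumerate(cover) if j != i
        )
        if not dominated:
            kept.append(s)
    return kept

C = [{1, 2, 3}, {1, 2}, {3, 4}, {4}]
print(prune_to_subcover(C))  # [{1, 2, 3}, {3, 4}]
```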
Refinement
A refinement of a cover $C$ of a topological space $X$ is a new cover $D$ of $X$ such that every set in $D$ is contained in some set in $C$. Formally,
$D = \{V_\beta : \beta \in B\}$ is a refinement of $C = \{U_\alpha : \alpha \in A\}$ if for all $\beta \in B$ there exists $\alpha \in A$ such that $V_\beta \subseteq U_\alpha$.
In other words, there is a refinement map $\phi : B \to A$ satisfying $V_\beta \subseteq U_{\phi(\beta)}$ for every $\beta \in B$. This map is used, for instance, in the Čech cohomology of $X$.
Every subcover is also a refinement, but the converse is not always true. A subcover is made from the sets that are in the cover, omitting some of them, whereas a refinement can be made from any sets that are subsets of the sets in the cover.
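A small worked example of the distinction (illustrative; not in the original text): let $X = [0,1]$ and take the one-set cover $C = \{[0,1]\}$. Then

```latex
D = \{\, [0, \tfrac{1}{2}],\ [\tfrac{1}{2}, 1] \,\}
```

is a refinement of $C$, since each member of $D$ is contained in $[0,1]$, but it is not a subcover, because neither member of $D$ belongs to $C$.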
The refinement relation on the set of covers of $X$ is transitive and reflexive, i.e. a preorder. It is never asymmetric for $X \neq \varnothing$.
Generally speaking, a refinement of a given structure is another that in some sense contains it. Examples are to be found when partitioning an interval (one refinement of the partition $a_0 < a_1 < \cdots < a_n$ being obtained by inserting additional subdivision points) and when considering topologies (the standard topology in Euclidean space being a refinement of the trivial topology). When subdividing simplicial complexes (the first barycentric subdivision of a simplicial complex is a refinement), the situation is slightly different: every simplex in the finer complex is a face of some simplex in the coarser one, and both have equal underlying polyhedra.
Yet another notion of refinement is that of star refinement.
Compactness
The language of covers is often used to define several topological properties related to compactness. A topological space is said to be:
compact if every open cover has a finite subcover (or, equivalently, if every open cover has a finite refinement);
Lindelöf if every open cover has a countable subcover (or, equivalently, if every open cover has a countable refinement);
metacompact if every open cover has a point-finite open refinement; and
paracompact if every open cover admits a locally finite open refinement.
For some more variations see the above articles.
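As a worked illustration of these definitions (a standard example, not part of the original text): the half-open interval $(0,1]$ is not compact, because the open cover

```latex
\mathcal{O} = \{\, (\tfrac{1}{n},\, 1] \;:\; n \in \mathbb{N},\ n \ge 1 \,\}
```

has no finite subcover: any finite subfamily has a largest index $n$, and the points of $(0, \tfrac{1}{n}]$ remain uncovered. By contrast, the closed interval $[0,1]$ is compact by the Heine–Borel theorem.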
Covering dimension
A topological space X is said to be of covering dimension n if every open cover of X has a point-finite open refinement such that no point of X is included in more than n+1 sets in the refinement and if n is the minimum value for which this is true. If no such minimal n exists, the space is said to be of infinite covering dimension.
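For illustration (a standard example, not part of the original text), the real line has covering dimension 1. For any $\varepsilon > 0$,

```latex
C_\varepsilon = \{\, \big( (n-1)\,\varepsilon,\ (n+1)\,\varepsilon \big) \;:\; n \in \mathbb{Z} \,\}
```

is an open cover of $\mathbb{R}$ in which each point lies in at most $2 = 1+1$ sets, and by shrinking the intervals locally, any open cover of $\mathbb{R}$ can be refined in this pattern. On the other hand, a refinement in which every point lay in at most one open set would split $\mathbb{R}$ into disjoint nonempty open sets, contradicting connectedness, so no value smaller than 1 works.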
See also
References
Theodore W. Gamelin and Robert Everist Greene, Introduction to Topology, Second Edition. Dover Publications, 1999.
External links
Topology
General topology
Families of sets | Cover (topology) | ["Physics", "Mathematics"] | 1,007 | ["General topology", "Combinatorics", "Basic concepts in set theory", "Topology", "Space", "Families of sets", "Geometry", "Spacetime"] |
317,558 | https://en.wikipedia.org/wiki/Instituto%20de%20Astrof%C3%ADsica%20de%20Canarias | The Instituto de Astrofísica de Canarias (IAC) is an astrophysical research institute located in the Canary Islands, Spain. It was founded in 1975 at the University of La Laguna. It operates two astronomical observatories in the Canary Islands: Roque de los Muchachos Observatory on La Palma, and Teide Observatory on Tenerife.
The current director of the IAC is Valentín Martínez Pillet, who succeeded Rafael Rebolo López on July 1, 2024. In 2016, English scientist Stephen Hawking was appointed Honorary Professor of the IAC, the first such appointment made by the institute.
See also
Instituto de Astrofísica de Andalucía
Centro de Estudios de Fisica del Cosmos de Aragon
Irene González Hernández
References
External links
IAC Homepage
European Northern Observatory
Buildings and structures in Tenerife
Research institutes in Spain
Astronomy institutes and departments
Astrophysics research institutes
Organisations based in the Canary Islands
San Cristóbal de La Laguna | Instituto de Astrofísica de Canarias | ["Physics", "Astronomy"] | 200 | ["Astronomy institutes and departments", "Astronomy stubs", "Astronomy organizations", "Astrophysics", "Astrophysics stubs", "Astrophysics research institutes"] |
317,575 | https://en.wikipedia.org/wiki/Sarah%20Baartman | Sarah Baartman (1789 – 29 December 1815), also spelled Sara, sometimes in the diminutive form Saartje or Saartjie, and Bartman or Bartmann, was a Xhosa-Khoekhoe woman who was exhibited as a freak show attraction in 19th-century Europe under the name Hottentot Venus, a name later applied to at least one other woman exhibited in a similar way. The women were exhibited for their steatopygic body type, uncommon in Western Europe, which was perceived as a curiosity at the time and became a subject of scientific interest as well as of erotic projection.
"Venus" is sometimes used to designate representations of the female body in arts and cultural anthropology, referring to the Roman goddess of love and fertility. "Hottentot" was a Dutch-colonial era term for the indigenous Khoekhoe people of southwestern Africa, which then became commonly used in English, but which is now usually considered an offensive term. Although it is still unclear how much she was a willing participant, the Sarah Baartman story is often portrayed as the epitome of racist colonial exploitation, and of the commodification and dehumanization of black people.
Life
Early life in the Cape Colony
Baartman was born to a Xhosa and Khoekhoe family in the vicinity of the Camdeboo in the Dutch Cape Colony, a British colony by the time she was an adult. Her birth name is unknown, but is thought by some to have been Ssehura, supposedly the closest to her given name. Saartjie is the diminutive form of Sarah; in Cape Dutch the use of the diminutive form commonly indicated familiarity, endearment or contempt. Her surname has also been spelt Bartman and Bartmann. She was an infant when her mother died, and her father was later killed by Bushmen (San people) while driving cattle.
Baartman spent her childhood and teenage years on Dutch European farms. She went through puberty rites, and kept a small tortoise shell necklace, most likely her mother's, until her death in France. In the 1790s, a free black (a designation for people of enslaved descent) trader named Peter Cesars (also recorded as Caesar) met her and encouraged her to move to Cape Town. She lived in Cape Town for at least two years working in households as a washerwoman and a nursemaid, first for Peter Cesars, then in the house of a Dutch man in Cape Town. She finally moved to be a wet-nurse in the household of Peter Cesars' brother, Hendrik Cesars, outside Cape Town in present day Woodstock. There is evidence that she had two children, though both died as babies. She had a relationship with a poor Dutch soldier, Hendrik van Jong, who lived in Hout Bay near Cape Town, but the relationship ended when his regiment left the Cape.
Hendrik Cesars began to show her, in exchange for cash, at the city hospital where surgeon Alexander Dunlop worked. Dunlop (sometimes wrongly cited as William Dunlop), a Scottish military surgeon in the Cape slave lodge, operated a side business in supplying showmen in Britain with animal specimens, and suggested she travel to Europe to make money by exhibiting herself. Baartman refused. Dunlop persisted, and Baartman said she would not go unless Hendrik Cesars came too. He agreed in 1810 to go to Britain to make money by putting Baartman on stage. It is unknown whether Baartman went willingly or was forced, although the acceptance of her earlier refusal might imply she did eventually agree to go of her own free will.
Dunlop was the frontman and driver of the plan to exhibit Baartman. According to a British legal report of 26 November 1810, an affidavit supplied to the Court of King's Bench from a "Mr. Bullock of Liverpool Museum" stated: "some months since a Mr. Alexander Dunlop, who, he believed, was a surgeon in the army, came to him to sell the skin of a Camelopard, which he had brought from the Cape of Good Hope.... Some time after, Mr. Dunlop again called on Mr. Bullock, and told him, that he had then on her way from the Cape, a female Hottentot, of very singular appearance; that she would make the fortune of any person who shewed her in London, and that he (Dunlop) was under an engagement to send her back in two years..." Lord Caledon, governor of the Cape, gave permission for the trip, but later said he regretted it after he fully learned the purpose of the trip.
On display in Europe
Hendrik Cesars and Alexander Dunlop brought Baartman to London in 1810. The group lived together in Duke Street, St. James, the most expensive part of London. In the household were Sarah Baartman, Hendrik Cesars, Alexander Dunlop, and two African boys, possibly brought illegally by Dunlop from the slave lodge in Cape Town.
Dunlop arranged for Baartman to be exhibited, and Cesars was the showman. Dunlop exhibited Baartman at the Egyptian Room at the London residence of Thomas Hope at No. 10 Duchess Street, Cavendish Square, London. Dunlop thought he could make money because of Londoners' lack of familiarity with Africans and because of Baartman's pejoratively perceived large buttocks. Crais and Scully allege that: "People came to see her because they saw her not as a person but as a pure example of this one part of the natural world". She became known as the "Hottentot Venus" (as was at least one other woman, in 1829). A handwritten note made on an exhibition flyer by someone who saw Baartman in London in January 1811 indicates curiosity about her origins and probably reproduced some of the language from the exhibition; thus the following origin story should be treated with skepticism: "Sartjee is 22 Years old is 4 feet 10 Inches high, and has (for a Hottentot) a good capacity. She lived in the occupation of a Cook at the Cape of Good Hope. Her Country is situated not less than 600 Miles from the Cape, the Inhabitants of which are rich in Cattle and sell them by barter for a mere trifle. A Bottle of Brandy, or small roll of Tobacco will purchase several Sheep – Their principal trade is in Cattle Skins or Tallow. – Beyond this Nation is an other, of small stature, very subtle & fierce; the Dutch could not bring them under subjection, and shot them whenever they found them. 9 Jany, 1811. [H.C.?]" The tradition of freak shows was well established in Europe at this time, and historians have argued that this is how Baartman was displayed at first. Baartman never allowed herself to be exhibited nude, and an account of her appearance in London in 1810 makes it clear that she was wearing a garment, albeit a tight-fitting one. She became a subject of scientific interest, albeit frequently with racist bias, as well as of erotic projection. It is alleged she was marketed as the "missing link between man and beast".
Her exhibition in London just a few years after the passing of the 1807 Slave Trade Act, which abolished the slave trade, created a scandal. A British abolitionist society, the African Association, conducted a newspaper campaign for her release. The British abolitionist Zachary Macaulay led the protest; Hendrik Cesars responded that Baartman was entitled to earn her living, stating: "has she not as good a right to exhibit herself as an Irish Giant or a Dwarf?" Cesars was comparing Baartman to the contemporary Irish giants Charles Byrne and Patrick Cotter O'Brien. Macaulay and the African Association took the matter to court and, on 24 November 1810 at the Court of King's Bench, the Attorney-General began the attempt "to give her liberty to say whether she was exhibited by her own consent." In support he produced two affidavits in court. The first, from William Bullock of Liverpool Museum, was intended to show that Baartman had been brought to Britain by people who referred to her as if she were property. The second, by the Secretary of the African Association, described the degrading conditions under which she was exhibited and also gave evidence of coercion. Baartman was then questioned before an attorney in Dutch, in which she was fluent, via interpreters.
Some historians have subsequently expressed doubts about the veracity and independence of the statement that Baartman then made, although there remains no direct evidence that she was lying. She stated that she was not under restraint, had not been sexually abused and had come to London of her own free will. She also did not wish to return to her family and understood perfectly that she was guaranteed half of the profits. The case was therefore dismissed. She was questioned for three hours. Her statement contradicts accounts of her exhibitions made by Zachary Macaulay of the African Institution and other eyewitnesses. A written contract was produced, which some modern commentators have suggested was a legal subterfuge.
The publicity given by the court case increased Baartman's popularity as an exhibit. She later toured other parts of England and was exhibited at a fair in Limerick, Ireland in 1812. She also was exhibited at a fair at Bury St Edmunds in Suffolk. On 1 December 1811 Baartman was baptised at Manchester Cathedral and there is evidence that she got married on the same day.
Later life
If indeed she had come to England of her own free will, her situation appears to have changed when she travelled to France. A man called Henry Taylor took Baartman there around September 1814. Taylor then sold her to a man sometimes reported as an animal trainer, S. Réaux, whose name was actually Jean Riaux and who was a ballet master deported from the Cape Colony for seditious behaviour. Riaux exhibited her under more pressured conditions for 15 months at the Palais Royal in Paris. In France she may have been in effect enslaved, although her exact position remains unclear. In Paris, her exhibition became more clearly entangled with scientific racism. French scientists were curious about whether she had the elongated labia which earlier naturalists such as François Levaillant had purportedly observed in other Khoekhoe women at the Cape. French naturalists, among them Georges Cuvier, head keeper of the menagerie at the Muséum national d'Histoire naturelle and founder of the discipline of comparative anatomy, visited her. She was the subject of several scientific paintings at the Jardin du Roi, where she was examined in March 1815.
She was brought out as an exhibit at wealthy people's parties and private salons. In Paris, Baartman's promoters did not need to concern themselves with slavery charges. Crais and Scully suggest: "By the time she got to Paris, her existence was really quite miserable and extraordinarily poor". At some points a collar was placed around her neck, although it is unclear whether that was just a prop for the performance. At the end of her life she was penniless, which was probably connected to the economic depression in France after Napoleon's defeat, resulting in a dearth of audiences able and willing to pay to see her. According to present-day accounts in the New York Times and The Independent, she was also working as a prostitute, but the biography by Crais and Scully only notes that as an uncertain possibility (since she was exhibited, among other places, at the brothel in Cours des Fontaines).
Death and aftermath
Baartman died on 29 December 1815 around age 26, of an undetermined inflammatory ailment, possibly smallpox, while other sources suggest she contracted syphilis, or pneumonia. Cuvier conducted a dissection but no autopsy to inquire into the reasons for Baartman's death.
The French anatomist Henri Marie Ducrotay de Blainville published notes on the dissection in 1816, which were republished by Georges Cuvier in the Memoires du Museum d'Histoire Naturelle in 1817. Cuvier, who had met Baartman, notes in his monograph that its subject was an intelligent woman with an excellent memory, particularly for faces. In addition to her native tongue, she spoke fluent Dutch, passable English, and a smattering of French. He describes her shoulders and back as "graceful", arms "slender", hands and feet as "charming" and "pretty". He adds she was adept at playing the Jew's harp, could dance according to the traditions of her country, and had a lively personality. Despite this, Cuvier interpreted her remains as evidencing ape-like traits. He thought her small ears were similar to those of an orangutan and also compared her vivacity, when alive, to the quickness of a monkey. He was part of a movement of scientists who sought to identify and study differences between human races, with the aim of theorising a racial hierarchy.
Display of remains
Saint-Hilaire applied on behalf of the Muséum d'Histoire Naturelle to retain her remains (Cuvier had preserved her brain, genitalia and skeleton), on the grounds that they were a singular specimen of humanity and therefore of special scientific interest. The application was approved and Baartman's skeleton and body cast were displayed in the Muséum d'histoire naturelle d'Angers. Her skull was stolen in 1827 but returned a few months later. The restored skeleton and skull continued to arouse the interest of visitors, first there and then at the Musée de l'Homme after the remains were moved there when it was founded in 1937, up until the late 1970s. Her body cast and skeleton stood side by side and faced away from the viewer, which emphasised her steatopygia (accumulation of fat on the buttocks) while reinforcing that aspect as the primary interest of her body. The Baartman exhibit proved popular until it elicited complaints for being a degrading representation of women. The skeleton was removed in 1974, and the body cast in 1976.
From the 1940s, there were sporadic calls for the return of her remains. A poem written in 1998 by South African poet Diana Ferrus, herself of Khoekhoe descent, entitled "I've come to take you home", played a pivotal role in spurring the movement to bring Baartman's remains back to her birth soil. The case gained worldwide prominence only after American paleontologist Stephen Jay Gould wrote The Mismeasure of Man in the 1980s. Mansell Upham, a researcher and jurist specializing in colonial South African history, also helped spur the movement to bring Baartman's remains back to South Africa. After the victory of the African National Congress (ANC) in the 1994 South African general election, President Nelson Mandela formally requested that France return the remains. After much legal wrangling and debates in the French National Assembly, France acceded to the request on 6 March 2002. Her remains were repatriated to her homeland, the Gamtoos Valley, on 6 May 2002, and they were buried on 9 August 2002 on Vergaderingskop, a hill in the town of Hankey, more than 200 years after her birth.
Symbolism
Sarah Baartman was not the only Khoekhoe to be taken from her homeland. Her story is sometimes used to illustrate social and political strains, and through this, some facts have been lost. Dr. Yvette Abrahams, professor of women and gender studies at the University of the Western Cape, writes, "we lack academic studies that view Sarah Baartman as anything other than a symbol. Her story becomes marginalized, as it is always used to illustrate some other topic." Baartman is used to represent African discrimination and suffering in the West although there were many other Khoekhoe people who were taken to Europe. Historian Neil Parsons writes of two Khoekhoe children 13 and six years old respectively, who were taken from South Africa and displayed at a holiday fair in Elberfeld, Prussia, in 1845. Bosjemans, a travelling show including two Khoekhoe men, women, and a baby, toured Britain, Ireland, and France from 1846 to 1855. P. T. Barnum's show "Little People" advertised a 16-year-old Khoekhoe girl named Flora as the "missing link" and acquired six more Khoekhoe children later.
Baartman's tale may be better known because she was the first Khoekhoe taken from her homeland, or because of the extensive exploitation and examination of her body by scientists such as Georges Cuvier, an anatomist, and the public as well as the mistreatment she received during and after her lifetime. She was brought to the West for her "exaggerated" female form, and the European public developed an obsession with her reproductive organs. Her body parts were on display at the Musée de l'Homme for 150 years, sparking awareness and sympathy in the public eye. Although Baartman was the first Khoekhoe to land in Europe, much of her story has been lost, and she is defined by her exploitation in the West.
Her body as a foundation for science
Julien-Joseph Virey used Sarah Baartman's published image to validate typologies. In his essay "Dictionnaire des sciences medicales" (Dictionary of medical sciences), he summarizes the true nature of the black female within the framework of accepted medical discourse. Virey focused on identifying her sexual organs as more developed and distinct in comparison to white female organs. All of his theories regarding sexual primitivism are influenced and supported by the anatomical studies and illustrations of Sarah Baartman which were created by Georges Cuvier.
It has been suggested by anthropologists that this body type was once more widespread in humans, based on carvings of female forms dating to the Paleolithic era which are collectively known as Venus figurines, also referred to as Steatopygian Venuses.
Colonialism
Much speculation and study about colonialist influence relates to Baartman's name, her social status, her illustrated and performed presentation as the "Hottentot Venus" (now considered an extremely offensive term), and the negotiation for her body's return to her homeland. These components and events in Baartman's life have been used by activists and theorists to determine the ways in which 19th-century European colonists exercised control and authority over Khoekhoe people and simultaneously crafted racist and sexist ideologies about their culture. In addition, recent scholars have begun to analyze the events surrounding Baartman's return to her homeland and conclude that it is an expression of contemporary postcolonial objectives.
In Janet Shibamoto's book review of Deborah Cameron's book Feminism and Linguistic Theory, Shibamoto discusses Cameron's study on the patriarchal context within language, which consequentially influences the way in which women continue to be contained by or subject to ideologies created by the patriarchy. Many scholars have presented information on how Baartman's life was heavily controlled and manipulated by colonialist and patriarchal language.
Baartman grew up on a farm. There is no historical documentation of her birth name. She was given the Dutch name "Saartjie" by Dutch colonists who occupied the land she lived on during her childhood. According to Clifton Crais and Pamela Scully:
Her first name is the Cape Dutch form for "Sarah" which marked her as a colonialist's servant. "Saartje" the diminutive, was also a sign of affection. Encoded in her first name were the tensions of affection and exploitation. Her surname literally means "bearded man" in Dutch. It also means uncivilized, uncouth, barbarous, savage. Saartjie Baartman – the savage servant.
Dutch colonisers also bestowed the term "Hottentot", which is derived from "hot" and "tot", Dutch approximations of common sounds in the Khoekhoe language. The Dutch used this word when referencing Khoekhoe people because of the clicking sounds and staccato pronunciations that characterise the Khoekhoe language; these components of the Khoekhoe language were considered strange and "bestial" to Dutch colonisers. The term was used until the late 20th century, at which point most people understood its effect as a derogatory term.
Travelogues that circulated in Europe would describe Africa as being "uncivilised" and lacking regard for religious virtue. Travelogues and imagery depicting Black women as "sexually primitive" and "savage" enforced the belief that it was in Africa's best interest to be colonised by European settlers. Cultural and religious conversion was considered to be an altruistic act with imperialist undertones; colonisers believed that they were reforming and correcting Khoekhoe culture in the name of Christianity and the empire. Scholarly arguments discuss how Baartman's body became a symbolic depiction of "all African women" as "fierce, savage, naked, and untamable" and became a crucial role in colonising parts of Africa and shaping narratives.
During the lengthy negotiation to have Baartman's body returned to her home country after her death, the assistant curator of the Musée de l'Homme, Philippe Mennecier, argued against her return, stating: "We never know what science will be able to tell us in the future. If she is buried, this chance will be lost ... for us she remains a very important treasure." According to Sadiah Qureshi, due to the continued treatment of Baartman's body as a cultural artifact, Philippe Mennecier's statement is contemporary evidence of the same type of ideology that surrounded Baartman's body while she was alive in the 18th century.
Feminist reception
Traditional iconography of Sarah Baartman and feminist contemporary art
Many African female diasporic artists have criticised the traditional iconography of Baartman. According to the studies of contemporary feminists, traditional iconography and historical illustrations of Baartman are effective in revealing the ideological representation of black women in art throughout history. Such studies assess how the traditional iconography of the black female body was institutionally and scientifically defined in the 19th century.
Renee Cox, Renée Green, Joyce Scott, Lorna Simpson, Carrie Mae Weems and Deborah Willis are artists who seek to investigate contemporary social and cultural issues that still surround the African female body. Sander Gilman, a cultural and literary historian states: "While many groups of African Blacks were known to Europeans in the 19th century, the Hottentot remained representative of the essence of the Black, especially the Black female. Both concepts fulfilled the iconographic function in the perception and representation of the world."
His article "Black Bodies, White Bodies: Toward an Iconography of Female Sexuality in the Late Nineteenth Century Art, Medicine and Literature" traces art historical records of black women in European art, and also proves that the association of black women with concupiscence within art history has been illustrated consistently since the beginning of the Middle Ages.
Lyle Ashton Harris and Renee Valerie Cox worked in collaboration to produce the photographic piece Hottentot Venus 2000. In this piece, Harris photographs Cox, who presents herself as Baartman while wearing large, sculptural, gilded metal breasts and buttocks attached to her body.
"Permitted" is an installation piece created by Renée Green inspired by Sarah Baartman. Green created a specific viewing arrangement to investigate the European perception of the black female body as "exotic", "bizarre" and "monstrous". Viewers were prompted to step onto the installed platform which was meant to evoke a stage, where Baartman may have been exhibited. Green recreates the basic setting of Baartman's exhibition. At the centre of the platform, which there is a large image of Baartman, and wooden rulers or slats with an engraved caption by Francis Galton encouraging viewers to measure Baartman's buttocks. In the installation there is also a peephole that allows viewers to see an image of Baartman standing on a crate. According to Willis, the implication of the peephole, demonstrates how ethnographic imagery of the black female form in the 19th century functioned as a form of pornography for Europeans present at Baartman's exhibit.
In her film Reassemblage: From the firelight to the screen, Trinh T. Minh-ha comments on the ethnocentric bias that the coloniser's eye applies to the naked female form, arguing that this bias causes the nude female body to be seen as inherently sexually provocative, promiscuous and pornographic within the context of European or western culture.
Feminist artists are interested in re-representing Baartman's image, and work to highlight the stereotypes and ethnocentric bias surrounding the black female body based on art historical representations and iconography that occurred before, after and during Baartman's lifetime.
Media representation and feminist criticism
In November 2014, Paper Magazine released a cover of Kim Kardashian in which she was illustrated as balancing a champagne glass on her extended rear. The cover received much criticism for endorsing "the exploitation and fetishism of the black female body". The similarities with the way in which Baartman was represented as the "Hottentot Venus" during the 19th century have prompted much criticism and commentary.
According to writer Geneva S. Thomas, anyone that is aware of black women's history under colonialist influence would consequentially be aware that Kardashian's photo easily elicits memory regarding the visual representation of Baartman.
The photographer and director of the photo, Jean-Paul Goude, based the photo on his previous work "Carolina Beaumont", taken of a nude model in 1976 and published in his book Jungle Fever.
A People Magazine article in 1979 about his relationship with model Grace Jones describes Goude in the following statement:
Jean-Paul has been fascinated with women like Grace since his youth. The son of a French engineer and an American-born dancer, he grew up in a Paris suburb. From the moment he saw West Side Story and the Alvin Ailey dance troupe, he found himself captivated by "ethnic minorities" — black girls, PRs. "I had jungle fever." He now says, "Blacks are the premise of my work."
Days before the shoot, Goude often worked with his models to find the best "hyperbolised" position to take his photos. His model and partner, Grace Jones, would also pose for days prior to finally acquiring the perfect form. "That's the basis of my entire work," Goude states, "creating a credible illusion." Similarly, Baartman and other black female slaves were illustrated and depicted in a specific form to identify features, which were seen as proof of ideologies regarding black female primitivism.
The professional background of Goude and the specific posture and presentation of Kardashian's image in the recreation on the cover of Paper Magazine have caused feminist critics to comment on how the objectification of Baartman's body and the ethnographic representation of her image in 19th-century society present a comparable and complementary parallel to how Kardashian is currently represented in the media.
In response to the November 2014 photograph of Kim Kardashian, Cleuci de Oliveira published an article on Jezebel titled "Saartjie Baartman: The Original Bootie Queen", which claims that Baartman was "always an agent in her own path." Oliveira goes on to assert that Baartman performed on her own terms and was unwilling to view herself as a tool for scientific advancement, an object of entertainment, or a pawn of the state.
Neelika Jayawardane, a literature professor and editor of the website Africa is a Country, published a response to Oliveira's article. Jayawardane criticises de Oliveira's work, stating that she "did untold damage to what the historical record shows about Baartman". Jayawardane's article is cautious about introducing what she considers false agency to historical figures such as Baartman.
An article entitled "Body Talk: Feminism, Sexuality and the Body in the Work of Six African Women Artists", curated by Cameroonian-born Koyo Kouoh, mentions Baartman's legacy and its impact on young female African artists. The work linked to Baartman is meant to reference the ethnographic exhibits of the 19th century that enslaved Baartman and displayed her naked body. Artist Valérie Oka's (Untitled, 2015) rendered a live performance of a black naked woman in a cage with the door swung open, walking around a sculpture of male genitalia, repeatedly. Her work was so impactful it led one audience member to proclaim, "Do we allow this to happen because we are in the white cube, or are we revolted by it?". Oka's work has been described as 'black feminist art' where the female body is a site for activism and expression. The article also mentions other African female icons and how artists are expressing themselves through performance and discussion by posing the question "How Does the White Man Represent the Black Woman?".
Social scientists James McKay and Helen Johnson cited Baartman to fit newspaper coverage of the African-American tennis players Venus and Serena Williams within racist trans-historical narratives of "pornographic eroticism" and "sexual grotesquerie." According to McKay and Johnson, white male reporters covering the Williams sisters have fixated upon their on-court fashions and their muscular bodies, while downplaying their on-court achievements, describing their bodies as mannish, animalistic, or hyper-sexual, rather than well-developed. Their victories have been attributed to their supposed natural physical superiorities, while their defeats have been blamed on their supposed lack of discipline. This analysis claims that commentary on the size of Serena's breasts and bottom, in particular, mirrors the spectacle made of Baartman's body.
Heather Radke's 2022 Butts: A Backstory heavily relied on Baartman's story to examine the cultural history of women's buttocks.
Reclaiming the story
In recent years, some black women have found her story to be a source of empowerment, one that protests the ideals of white mainstream beauty, as curvaceous bodies are increasingly lauded in popular culture and mass media.
Paramount Chief Glen Taaibosch, chair of the Gauteng Khoi and San Council, says that today "we call her our Hottentot Queen" and honour her.
Legacy and honours
Baartman became an icon in South Africa as representative of many aspects of the nation's history.
The Saartjie Baartman Centre for Women and Children, a refuge for survivors of domestic violence, opened in Cape Town in 1999.
South Africa's first offshore environmental protection vessel, the Sarah Baartman, is also named after her.
In 2015 South Africa's former Cacadu District Municipality was renamed Sarah Baartman District Municipality in her honor.
On 8 December 2018, the University of Cape Town made the decision to rename Memorial Hall, at the centre of the campus, to Sarah Baartman Hall. This follows the earlier removal of "Jameson" from the hall's name.
Cultural references
On 10 January 1811, at the New Theatre, London, a pantomime called "The Hottentot Venus" featured at the end of the evening's entertainment.
In William Makepeace Thackeray's 1847 novel Vanity Fair, George Osborne angrily refuses his father's instruction to marry a West Indian mulatto heiress by referring to Miss Swartz as "that Hottentot Venus".
In "Crinoliniana" (1863), a poem satirising Victorian fashion, the author compares a woman in a crinoline to a "Venus" from "the Cape".
In James Joyce's 1916 novel A Portrait of the Artist as a Young Man, the protagonist, Stephen Dedalus, refers to "the great flanks of Venus" after a reference to the Hottentot people, when discussing the discrepancies between cultural perceptions of female beauty.
Dame Edith Sitwell referred to her allusively in "Hornpipe", a poem in the satirical collection Façade.
In Jean Rhys' 1934 novel Voyage in the Dark, the Creole protagonist Anna Morgan is referred to as "the Hottentot".
Elizabeth Alexander explores her story in a 1987 poem and 1990 book, both entitled The Venus Hottentot.
Hebrew poet Mordechai Geldman wrote a poem titled "THE HOTTENTOT VENUS" exploring the subject in his 1993 book Eye.
Suzan-Lori Parks used the story of Baartman as the basis for her 1996 play Venus.
Zola Maseko directed a documentary on Baartman, The Life and Times of Sarah Baartman, in 1998.
Lyle Ashton Harris collaborated with the model Renee Valerie Cox to produce a photographic image, Hottentot Venus 2000.
Barbara Chase-Riboud wrote the novel Hottentot Venus: A Novel (2003), which humanizes Sarah Baartman.
Cathy Park Hong wrote a poem entitled "Hottentot Venus" in her 2007 book Translating Mo'um.
Lydia R. Diamond's 2008 play Voyeurs de Venus investigates Baartman's life from a postcolonial perspective.
A movie entitled Black Venus, directed by Abdellatif Kechiche and starring Yahima Torres as Sarah, was released in 2010.
Hendrik Hofmeyr composed a 20-minute opera entitled Saartjie, which was to be premiered by Cape Town Opera in November 2010.
Joanna Bator refers to a fictional descendant of Baartman in her novel.
Douglas Kearney published a poem titled "Drop It Like It's Hottentot Venus" in April 2012.
Diane Awerbuck has Baartman feature as a central thread in her novel Home Remedies. The work is critical of the "grandstanding" that so often surrounds Baartman: as Awerbuck has explained, "Saartjie Baartman is not a symbol. She is a dead woman who once suffered in a series of cruel systems. The best way we can remember her is by not letting it happen again."
Brett Bailey's Exhibit B (a human zoo) depicts Baartman.
Jamila Woods' song "Blk Girl Soldier" on her 2016 album Heavn references Baartman's story: "They put her body in a jar and forget her".
Nitty Scott makes reference to Baartman in her song "For Sarah Baartman" on her 2017 album CREATURE!.
The Carters, Jay-Z and Beyoncé, make mention of her in their song "Black Effect": "Stunt with your curls, your lips, Sarah Baartman hips", off their 2018 album Everything is Love.
The University of Cape Town made the historic decision to rename Memorial Hall to Sarah Baartman Hall (8 December 2018).
In 2019, Zodwa Nyoni debuted a new play at Summerhall, A Khoisan Woman, about the Hottentot Venus.
Royce 5'9 references Sarah Baartman in his song "Upside Down" in 2020.
Tessa McWatt discusses Baartman and the Hottentot in her 2019/20 book, "Shame on Me: An Anatomy of Race and Belonging".
Meghan Swaby explores the ideas of colonialism and culture as they relate to BIPOC and Saartjie Baartman in her book/play, "Venus' Daughter".
See also
Awoulaba
Body shape
Female body shape
Feminine beauty ideal
Feminism and racism
Human variability
Human zoo
Ota Benga
Racial fetishism
Racism in Europe
Scientific racism
Tono Maria
References
Bibliography
Willis, Deborah (ed.), Black Venus 2010: They Called Her 'Hottentot'. Philadelphia, PA: Temple University Press. Available at: https://doc.lagout.org/Others/Temple.University.-.Black.Venus.pdf
Further reading
Fausto-Sterling, Anne (1995). "Gender, Race, and Nation: The Comparative Anatomy of 'Hottentot' Women in Europe, 1815–1817". In Terry, Jennifer, and Jacqueline Urla (eds), Deviant Bodies: Critical Perspectives on Difference in Science and Popular Culture, 19–48. Bloomington: Indiana University Press.
Ritter, Sabine (2010). Facetten der Sarah Baartman: Repräsentationen und Rekonstruktionen der 'Hottentottenvenus'. Münster: Lit Verlag.
Films
Abdellatif Kechiche: Vénus noire (Black Venus). Paris: MK2, 2009
Zola Maseko: The Life and Times of Sara Baartman. Icarus, 1998
External links
South Africa government site about her, including Diana Ferrus's pivotal poem
A French print
Mara Verna's interactive audio and video piece including a bibliography
Guardian article on the return of her remains
A documentary film called The Life and Times of Sara Baartman by Zola Maseko
The Saartjie Baartman Story
1780s births
1815 deaths
Year of birth uncertain
Art and cultural repatriation
Khoekhoe
People from the Eastern Cape
Sideshow performers
Human zoo performers
Cape Colony women
Ethnological show business
South African emigrants to France
South African expatriates in the United Kingdom
Scientific racism | Sarah Baartman | ["Biology"] | 7,831 | ["Biology theories", "Obsolete biology theories", "Scientific racism"] |
317,625 | https://en.wikipedia.org/wiki/Image%20scanner | An image scanner (often abbreviated to just scanner) is a device that optically scans images, printed text, handwriting, or an object and converts it to a digital image. The most common type of scanner used in the home and the office is the flatbed scanner, where the document is placed on a glass bed. A sheetfed scanner, which moves the page across an image sensor using a series of rollers, may be used to scan one page of a document at a time or multiple pages, as in an automatic document feeder. A handheld scanner is a portable version of an image scanner that can be used on any flat surface. Scans are typically downloaded to the computer that the scanner is connected to, although some scanners are able to store scans on standalone flash media (e.g., memory cards and USB drives).
Modern scanners typically use a charge-coupled device (CCD) or a contact image sensor (CIS) as the image sensor, whereas drum scanners, developed earlier and still used for the highest possible image quality, use a photomultiplier tube (PMT) as the image sensor. Document cameras, which use commodity or specialized high-resolution cameras, photograph documents all at once.
History
Precursors
Image scanners are considered the successors of early facsimile (fax) machines. The earliest attempt at a fax machine was patented in 1843 by the Scottish clockmaker Alexander Bain but never put into production. In his design, a metal stylus linked to a pendulum scans across a copper plate with a raised image. When the stylus makes contact with a raised part of the plate, it sends a pulse across a pair of wires to a receiver containing an electrode linked to another pendulum. A piece of paper impregnated with an electrochemically sensitive solution resides underneath the electrode and changes color whenever a pulse reaches the electrode. A gear advances the copper plate and paper in tandem with each swing of the pendulum; over time, the result is a perfect reproduction of the copper plate. In Bain's system, it is critical that the pendulums of the transceiver and receiver are in perfect step, or else the reproduced image will be distorted.
In 1847, the English physicist Frederick Bakewell developed the first working fax machine. Bakewell's machine was similar to Bain's but used a revolving drum coated in tinfoil, with non-conductive ink painted on the foil and a stylus that scans across the drum and sends a pulse down a pair of wires when it contacts a conductive point on the foil. The receiver contains an electrode that touches a sheet of chemically treated paper, which changes color when the electrode receives a pulse; the result is a reverse contrast (white-on-blue) reproduction of the original image. Bakewell's fax machine was marginally more successful than Bain's but suffered from the same synchronization issues. In 1862, Giovanni Caselli solved this with the pantelegraph, the first fax machine put into regular service. Largely based on Bain's design, it ensured complete synchronization by flanking the pendulums of both the transceiver and receiver between two magnetic regulators, which become magnetized with each swing of the pendulum and become demagnetized when the pendulum reaches the maxima and minima of each oscillation.
In 1893, the American engineer Elisha Gray introduced the telautograph, the first widely commercially successful fax machine that used linkage bars translating x- and y-axis motion at the receiver to scan a pen across the paper and strike it only when actuated by the stylus moving across the transceiver drum. Because it could use commodity stationery paper, it became popular in business and hospitals. In 1902, the German engineer Arthur Korn introduced the phototelautograph, a fax machine that used a light-sensitive selenium cell to scan a paper to be copied, instead of relying on a metallic drum and stylus. It was even more commercially successful than Gray's machine and became the basis for telephotography machines used by newspapers around the world from the early 1900s onward.
Analog era
Alexander Murray and Richard Morse invented and patented the first analog color scanner at Eastman Kodak in 1937. Intended for color separation at printing presses, their machine was an analog drum scanner that imaged a color transparency mounted in the drum, with a light source placed underneath the film, and three photocells with red, green, and blue color filters reading each spot on the transparency to translate the image into three electronic signals. In Murray and Morse's initial design, the drum was connected to three lathes that etched cyan, magenta, and yellow (CMY) halftone dots onto three offset cylinders directly. The rights to the patent were sold to Printing Developments Incorporated (P.D.I.) in 1946, who improved on the design by using a photomultiplier tube to image the points on the negative, which produced an amplified signal that was then fed to a single-purpose computer that processed the RGB signals into color-corrected cyan, magenta, yellow, and black (CMYK) values. The processed signals are then sent to four lathes that etch CMYK halftone dots onto the offset cylinders.
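The actual Kodak and P.D.I. machines performed this separation with analog circuitry and elaborate color correction, but the basic subtractive relationship between the RGB readings and the CMY printing inks can be sketched with the classic complement formula. A minimal Python illustration follows; the function name and 8-bit scaling are assumptions for the example, not details of the historical hardware.

```python
def rgb_to_cmy(r, g, b, scale=255):
    """Naive color separation: each subtractive ink channel is the
    complement of the corresponding additive channel."""
    return scale - r, scale - g, scale - b

print(rgb_to_cmy(255, 0, 0))  # a pure red needs no cyan ink: (0, 255, 255)
```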
In 1948, Arthur Hardy of the Massachusetts Institute of Technology and F. L. Wurzburg of the Interchemical Corporation invented the first analog, color flatbed image scanner, intended for producing color-corrected lithographic plates from a color negative. In this system, three color-separated plates (of CMY values) are prepared from a color negative via dot etching and placed in the scanner bed. Above each plate are rigidly fixed, equidistant light beam projectors that focus a beam of light onto one corner of the plate. The entire bed with all three plates moves horizontally, back and forth, to reach the opposite corners of the plate; with each horizontal oscillation of the bed, the bed moves down one step to cover the entire vertical area of the plate. While this is happening, the beam of light focused on a given spot on the plate is reflected to a photocell adjacent to the projector. Each photocell connects to an analog image processor, which evaluates the reflectance of the combined CMY values using Neugebauer equations and outputs a signal to a light projector hovering over a fourth, unexposed lithographic plate. This plate receives a color-corrected, continuous-tone dot-etch of either the cyan, magenta, or yellow values. The fourth plate is replaced with another unexposed plate, and the process repeats until three color-corrected plates, of cyan, magenta and yellow, are produced. In the 1950s, the Radio Corporation of America (RCA) took Hardy and Wurzburg's patent and replaced the projector-and-photocell arrangement with a video camera tube focusing on one spot of the plate.
Digital era
The first digital imaging system was the Bartlane system in 1920. Named after the pair who invented it, Harry G. Bartholomew and Maynard D. McFarlane, the Bartlane system used zinc plates etched with an image from a film negative projected at five different exposure levels to correspond to five quantization levels. All five plates are affixed to a long, motor-driven rotating cylinder, with five equidistant contacts scanning over each plate at the same starting position. The Bartlane system was initially used exclusively by telegraph, with the five-bit Baudot code used to transmit the grayscale digital image. In 1921, the system was modified for offline use, with a five-bit paper tape punch punching holes depending on whether its connections to the contacts are bridged or not. The result was a stored digital image with five gray levels. Reproduction of the image was achieved with a lamp passing over the punched holes, exposing five different intensities of light onto a film negative.
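To make the five-level quantization concrete, here is a minimal Python sketch; the uniform thresholds and 8-bit input range are modern simplifications added for illustration, not a description of the original Bartlane hardware.

```python
def quantize_to_five_levels(pixel):
    """Map an 8-bit grayscale value (0-255) to one of five levels (0-4),
    analogous to the five exposure levels of the Bartlane system."""
    if not 0 <= pixel <= 255:
        raise ValueError("expected an 8-bit grayscale value")
    return pixel * 5 // 256  # uniform bins: 0..4

row = [0, 40, 100, 180, 255]
print([quantize_to_five_levels(p) for p in row])  # [0, 0, 1, 3, 4]
```

Each of the five levels could then be carried by one symbol of the five-bit Baudot code, one picture point per transmitted character.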
The first scanner to store its images digitally onto a computer was a drum scanner built in 1957 at the National Bureau of Standards (NBS, later NIST) by a team led by Russell A. Kirsch. It used a photomultiplier tube to detect light at a given point and produced an amplified signal that a computer could read and store into memory. The computer of choice at the time was the SEAC mainframe; the maximum horizontal resolution that the SEAC was capable of processing was 176 pixels. The first image ever scanned on this machine was a photograph of Kirsch's three-month-old son, Walden.
In 1969, Dacom introduced the 111 fax machine, which was the first digital fax machine to employ data compression using an on-board computer. It employed a flatbed design with a continuous feed capable of scanning documents up to letter size in 1-bit monochrome (black and white).
The first flatbed scanner used for digital image processing was the Autokon 8400, introduced by ECRM Inc., a subsidiary of AM International, in 1975. The Autokon 8400 used a laser beam to scan pages up to 11 by 14 inches at a maximum resolution of 1000 lines per inch. Although it was only capable of scanning in 1-bit monochrome, the on-board processor was capable of halftoning, unsharp masking, contrast adjustment, and anamorphic distortions, among other features. The Autokon 8400 could either be connected to a film recorder to create a negative for producing plates or connected to a mainframe or minicomputer for further image processing and digital storage. The Autokon 8400 enjoyed widespread use in newspapers—ECRM shipped 1,000 units to newspaper publishers by 1985—but its limited resolution and maximum scan size made it unsuitable for commercial printing. In 1982, ECRM introduced the Autokon 8500, capable of scanning up to 1200 lines per inch. Four of ECRM's competitors introduced commercial flatbed scanners that year, including Scitex, Agfa-Gevaert, and Linotype-Hell, all of which were capable of scanning larger prints at higher resolutions. ECRM introduced the Autokon 1000DE in 1985 to address the shortcomings of the Autokon 8400/8500. The 1000DE (for "digital enhancement") used a microprocessor to produce its sharpening effect, in contrast to the 8400, which used analog electronics and an optical method to create sharpening. The Autokon 1000DE had a touchpad rather than analog rotary controls. It had applications in both commercial and newspaper environments where only a single halftone—black and white—was required. While the Autokon 8400 was typically a standalone output device that scanned and then output to either photosensitive roll-format bromide paper or film, the Autokon 1000DE was often connected to Apple Macintoshes or PCs via a dedicated interface such as those from HighWater Designs. The last Autokon was a wider-format, online-only device that used both a red and a green laser to improve its response when scanning color photographs.
In 1977, Raymond Kurzweil, of his start-up company Kurzweil Computer Products, released the Kurzweil Reading Machine, which was the first flatbed scanner with a charge-coupled device (CCD) imaging element. The Kurzweil Reading Machine was invented to assist blind people in reading books that had not been translated to braille. It comprised the image scanner and a Data General Nova minicomputer—the latter performing the image processing, optical character recognition (OCR), and speech synthesis.
The first scanners for personal computers appeared in the mid-1980s, starting with ThunderScan for the Macintosh in December 1984. Designed by Andy Hertzfeld and released by Thunderware Inc., the ThunderScan contains a specialized image sensor built into a plastic housing the same shape as the ink ribbon cartridge of Apple's ImageWriter printer. The ThunderScan slots into the ImageWriter's ribbon carrier and connects to both the ImageWriter and the Macintosh simultaneously. The ImageWriter's carriage, controlled by the ThunderScan, moves left-to-right to scan one 200-dpi (dots per inch) line at a time, with the carriage return serving to advance the scanner down the print to be scanned. The ThunderScan was the Macintosh's first scanner and sold well but operated very slowly and was only capable of scanning prints at 1-bit monochrome. In 1999, Canon iterated on this idea with the IS-22, a cartridge that fit into their inkjet printers to convert them into sheetfed scanners.
In early 1985, the first flatbed scanner for the IBM PC, the Datacopy Model 700, was released. Based on a CCD imaging element, the Model 700 was capable of scanning letter-sized documents at a maximum resolution of 200 dpi at 1-bit monochrome. The Model 700 came with a special interface card for connecting to the PC, and an optional, aftermarket OCR software card and software package were sold for the Model 700. In April 1985, LaserFAX Inc. introduced the first CCD-based color flatbed scanner, the SpectraSCAN 200, for the IBM PC. The SpectraSCAN 200 worked by placing color filters over the CCD and taking four passes (three for each primary color and one for black) per scan to build up a color reproduction. The SpectraSCAN 200 took between two and three minutes to produce a scan of a letter-sized print at 200-dpi; its grayscale counterpart, the DS-200, took only 30 seconds to make a scan at the same size and resolution.
The first relatively affordable flatbed scanner for personal computers appeared in February 1987 with Hewlett-Packard's ScanJet, which was capable of scanning 4-bit (64-shade) grayscale images at a maximum resolution of 300 dpi. By the beginning of 1988, the ScanJet had accounted for 27 percent of all scanner sales in terms of dollar volume, per Gartner Dataquest. In February 1989, the company introduced the ScanJet Plus, which increased the bit depth to 8 bits (256 shades) while costing only US$200 more than the original ScanJet's $1990 (). This led to a massive price drop in grayscale scanners with equivalent or lesser features in the market. The number of third-party developers producing software and hardware supporting these scanners jumped dramatically in turn, effectively popularizing the scanner for the personal computer user. By 1999, the cost of the average color-capable scanner had dropped to $300 (). That year, Computer Shopper declared 1999 "the year that scanners finally became a mainstream commodity".
Types
Flatbed
A flatbed scanner is a type of scanner that provides a glass bed (platen) on which the object to be scanned lies motionless. The scanning element moves vertically from under the glass, scanning either the entirety of the platen or a predetermined portion. The driver software for most flatbed scanners allows users to prescan their documents—in essence, to take a quick, low-resolution pass at a document in order to judge what area of the document should be scanned (if not the entirety of it), before scanning it at a higher resolution. Some flatbed scanners incorporate sheet-feeding mechanisms called automatic document feeders (ADFs) that use the same scanning element as the flatbed portion.
This type of scanner is sometimes called a reflective scanner, because it works by shining white light onto the object to be scanned and reading the intensity and color of light that is reflected from it, usually a line at a time. They are designed for scanning prints or other flat, opaque materials, but some have available transparency adapters, which—for a number of reasons—in most cases, are not very well suited to scanning film.
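The prescan-then-scan workflow described above can be sketched in a few lines of Python. The `scanner` object, its `scan` method, the `Region` type, and the resolution values are all hypothetical illustrations, not a real driver API.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangular area of the platen, in millimetres (hypothetical)."""
    left_mm: float
    top_mm: float
    width_mm: float
    height_mm: float

def scan_with_preview(scanner, choose_region, preview_dpi=75, final_dpi=600):
    """Typical flatbed workflow: a quick low-resolution pass over the whole
    platen, a user-selected area, then a high-resolution scan of that area."""
    preview = scanner.scan(dpi=preview_dpi)   # fast full-platen prescan
    region = choose_region(preview)           # e.g. from a GUI selection
    return scanner.scan(dpi=final_dpi, region=region)
```

In practice the region would come from the user drawing a box over the preview image in the driver software.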
Sheetfed
A sheetfed scanner, also known as a document feeder, is a type of scanner that uses motor-driven rollers to move one single sheet of paper at a time past a stationary scanning element (two scanning elements, in the case of scanners with duplex functionality). Unlike flatbed scanners, sheetfed scanners are not equipped to scan bound material such as books or magazines, nor are they suitable for any material thicker than plain printer paper. Some sheetfed scanners, called automatic document feeders (ADFs), are capable of scanning several sheets in one session, although others only accept one page at a time. Some sheetfed scanners are portable, powered by batteries, and have their own storage, eventually transferring stored scans to a computer.
Handheld
A handheld scanner is a type of scanner that must be manually dragged or glided by hand across the surface of the object to be scanned. Scanning documents in this manner requires a steady hand, as an uneven scanning rate produces distorted images. Some handheld scanners have an indicator light for this purpose, actuating if the user moves the scanner too fast. They typically have at least one button that starts the scan when pressed; it is held by the user for the duration of the scan. Some handheld scanners have switches to set the optical resolution, as well as a roller that generates a clock pulse for synchronization with the computer. Older hand scanners were monochrome and produced light from an array of green LEDs to illuminate the image; later ones scan in monochrome or color, as desired. A hand scanner may also have a small window through which the document being scanned can be viewed. As hand scanners are much narrower than most normal document or book sizes, software (or the end user) must combine several narrow "strips" of scanned document to produce the finished article.
Inexpensive, portable, battery-powered or USB-powered wand scanners and pen scanners, typically capable of scanning an area as wide as a normal letter and much longer, remain available. Some computer mice can also scan documents.
Drum
A drum scanner is a type of scanner that uses a clear, motor-driven rotating cylinder (drum) onto which a print, a film negative, a transparency, or any other flat object is taped or otherwise secured. A beam of light either projects past, or reflects off, the material to be scanned onto a series of mirrors, which focus the beam onto the drum scanner's photomultiplier tube (PMT). After one revolution, the beam of light moves down a single step. When scanning transparent media, such as negatives, a light beam is directed from within the cylinder onto the media; when scanning opaque items, a light beam from above is reflected off the surface of the media. When only one PMT is present, three passes of the image are required for a full-color RGB scan. When three PMTs are present, only a single pass is required.
The photomultiplier tubes of drum scanners offer superior dynamic range to that of CCD sensors. For this reason, drum scanners can extract more detail from very dark shadow areas of a transparency than flatbed scanners using CCD sensors. The smaller dynamic range of the CCD sensors (versus photomultiplier tubes) can lead to loss of shadow detail, especially when scanning very dense transparency film. Drum scanners are also able to resolve true detail in excess of 10000 dpi, producing higher-resolution scans than any CCD scanner.
Overhead
An overhead scanner is a type of scanner that places the scanning element in a housing atop a vertical post, hovering above the document or object to be scanned, which lies stationary on an open-air bed. Chinon Industries patented a specific type of overhead scanner in 1987, which uses a rotating mirror to reflect the contents of the bed onto a linear CCD. Although very flexible—allowing users to scan not only two-dimensional prints and documents but any 3D object, of any size—the Chinon design required the user to provide uniform illumination of the object to be scanned and was cumbersome to set up.
A more modern type of overhead scanner is a document camera (also known as a video scanner), which uses a digital camera to capture a document all at once. Most document cameras output live video of the document and are usually reserved for displaying documents to a live audience, but they may also be used as replacements for image scanners, capturing a single frame of the output as an image file. Document cameras may even use the same APIs as scanners when connected to computers. A planetary scanner is a type of very-high-resolution document camera used for capturing certain fragile documents. A book scanner is another kind of document camera, pairing a digital camera with a scanning area defined by a mat to assist in scanning books. Some more advanced models of book scanners project a laser onto the page for calibration and software skew correction.
Film
A film scanner, also known as a slide scanner or a transparency scanner, is a specialized type of scanner for scanning film negatives and slides. A typical film scanner works by passing a narrowly focused beam of light through the film and reading the intensity and color of the light that emerges. The lowest-cost dedicated film scanners can be had for less than $50 and may be sufficient for modest needs; from there, prices climb through staggered levels of quality and advanced features to upward of five figures.
Portable
Image scanners are usually used in conjunction with a computer which controls the scanner and stores scans. Small portable scanners, either sheetfed or handheld and operated by batteries and with storage capability, are available for use away from a computer; stored scans can be transferred later. Many can scan both small documents such as business cards and till receipts, as well as letter-sized documents.
Software scanners
The higher-resolution cameras fitted to some smartphones can produce reasonable-quality document scans: the user takes a photograph with the phone's camera and post-processes it with a scanning app, a range of which are available for most phone operating systems. Such apps can whiten the background of a page, correct perspective distortion so that a rectangular document regains its shape, convert the image to black-and-white, and so on. Many such apps can scan multiple-page documents with successive camera exposures and output them either as a single file or as multiple-page files. Some smartphone scanning apps can save documents directly to online storage locations, such as Dropbox and Evernote, send them via email, or fax them via email-to-fax gateways.
Smartphone scanner apps can be broadly divided into three categories:
Document scanning apps, primarily designed to handle documents, which output PDF (and sometimes JPEG) files;
Photo scanning apps, which output JPEG files and have editing functions useful for photo rather than document editing;
Barcode-like QR code scanning apps, which search the internet for information associated with the code.
Scanning elements
Charge-coupled device (CCD)
Scanners equipped with charge-coupled device (CCD) scanning elements require a sophisticated series of mirrors and lenses to reproduce an image, but the result of this complexity is a much higher-quality scan. Because CCDs have a much greater depth of field than contact image sensors, they are more forgiving when it comes to scanning documents that are difficult to get perfectly flat against the platen (such as bound books).
Contact image sensor (CIS)
Scanners equipped with contact image sensor (CIS) scanning elements are designed to be in near-direct contact with the document to be scanned and thus do not require the complex optics of CCD scanners. However, their depth of field is much shallower, resulting in blurry scans if the scanned document is not perfectly flush against the platen. Because the sensors require far less power than those of CCD scanners, CIS scanners can be manufactured at low cost and are typically much lighter and slimmer than CCD scanners.
Photomultiplier tube (PMT)
Scanners equipped with photomultiplier tubes (PMT) are nearly exclusively drum scanners.
Scan quality
Color scanners typically read RGB (red-green-blue) color data from the array. This data is then processed with a proprietary algorithm to correct for different exposure conditions and sent to the computer via the device's input/output interface (usually USB; older units used SCSI or a bidirectional parallel port).
Color depth varies depending on the scanning array characteristics, but is usually at least 24 bits. High-quality models have 36-48 bits of color depth.
Another qualifying parameter for a scanner is its resolution, measured in pixels per inch (ppi), sometimes more accurately referred to as samples per inch (spi). Instead of quoting the scanner's true optical resolution, the only meaningful parameter, manufacturers like to cite the interpolated resolution, which is much higher thanks to software interpolation. A high-end flatbed scanner can scan up to 5400 ppi, and drum scanners have an optical resolution of between 3000 and 24000 ppi.
Effective resolution refers to the true resolution of a scanner and is determined using a resolution test chart. The effective resolution of almost all consumer flatbed scanners is considerably lower than the manufacturers' stated optical resolution.
Manufacturers often claim interpolated resolutions as high as 19200 ppi, but such numbers carry little meaningful value: the number of possible interpolated pixels is unlimited, and interpolation does not increase the level of captured detail.
The size of the file created increases with the square of the resolution; doubling the resolution quadruples the file size. A resolution must be chosen that is within the capabilities of the equipment, preserves sufficient detail, and does not produce a file of excessive size. The file size can be reduced for a given resolution by using "lossy" compression methods such as JPEG, at some cost in quality. If the best possible quality is required lossless compression should be used; reduced-quality files of smaller size can be produced from such an image when required (e.g., image designed to be printed on a full page, and a much smaller file to be displayed as part of a fast-loading web page).
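To make the square law above concrete, here is a minimal Python sketch; the page size and bit depth are illustrative assumptions, not values from any particular scanner:

```python
def uncompressed_size_bytes(width_in, height_in, dpi, bits_per_pixel=24):
    """Uncompressed raster size: the pixel count scales with dpi in both axes."""
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bits_per_pixel / 8

# Letter-sized page (8.5 x 11 in) scanned in 24-bit color:
for dpi in (150, 300, 600):
    mb = uncompressed_size_bytes(8.5, 11, dpi) / 1e6
    print(f"{dpi} dpi: about {mb:.0f} MB")
# 150 dpi: about 6 MB; 300 dpi: about 25 MB; 600 dpi: about 101 MB --
# each doubling of resolution quadruples the file size.
```

Lossy or lossless compression then reduces these figures by format-dependent amounts.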
Purity can be diminished by scanner noise, optical flare, poor analog to digital conversion, scratches, dust, Newton's rings, out-of-focus sensors, improper scanner operation, and poor software. Drum scanners are said to produce the purest digital representations of the film, followed by high-end film scanners that use the larger Kodak Tri-Linear sensors.
The third important parameter for a scanner is its dynamic range (also known as density range). A high density range means that the scanner is able to record both shadow details and brightness details in one scan. The density of film is measured on a base-10 log scale and varies between 0.0 (transparent) and 5.0, about 16 stops. Density range is the portion of the 0-to-5 scale that the film occupies, and Dmin and Dmax denote the least dense and most dense measurements on a negative or positive film. The density range of negative film is up to 3.6d, while slide film spans about 2.4d. Color negative density range after processing is 2.0d, thanks to the compression of 12 stops into a small density range. Dmax falls in the shadows on slide film and in the highlights on negative film. Some slide films can have a Dmax close to 4.0d with proper exposure, and so can black-and-white negative film.
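Because density is a base-10 logarithm while stops are powers of two, a density value converts to stops by dividing by log10(2) ≈ 0.301. A short sketch of the conversion used implicitly above:

```python
import math

def density_to_stops(d):
    """Convert a base-10 optical density to photographic stops (base 2)."""
    return d / math.log10(2)  # log10(2) is about 0.301

for d in (2.0, 2.4, 3.6, 5.0):
    print(f"density {d:.1f}d is about {density_to_stops(d):.1f} stops")
# 2.0d ~ 6.6 stops; 2.4d ~ 8.0 stops; 3.6d ~ 12.0 stops; 5.0d ~ 16.6 stops
```

This is why the full 0.0–5.0 density scale corresponds to roughly 16 stops.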
Consumer-level flatbed photo scanners have a dynamic range in the 2.0–3.0 range, which can be inadequate for scanning all types of photographic film, as Dmax can be, and often is, between 3.0d and 4.0d with traditional black-and-white film. Color film compresses its 12 stops of a possible 16 stops of film latitude into just 2.0d of space via the process of dye coupling and the removal of all silver from the emulsion (Kodak Vision3 stock has 18 stops). Consequently, color-negative film scans the easiest of all film types on the widest range of scanners. Because traditional black-and-white film retains the image-creating silver after processing, its density range can be almost twice that of color film. This makes scanning traditional black-and-white film more difficult, requiring a scanner with at least a 3.6d dynamic range and a Dmax between 4.0d and 5.0d. High-end (photo lab) flatbed scanners can reach a dynamic range of 3.7 and a Dmax around 4.0d. Dedicated film scanners have a dynamic range between 3.0d and 4.0d. Office document scanners can have a dynamic range of less than 2.0d. Drum scanners have a dynamic range of 3.6–4.5.
For scanning film, infrared cleaning is a technique used to remove the effects of dust and scratches on images scanned from film; many modern scanners incorporate this feature. It works by scanning the film with infrared light: the dyes in typical color film emulsions are transparent to infrared light, but dust and scratches are not, and block infrared; scanner software can use the visible and infrared information to detect scratches and process the image to greatly reduce their visibility, considering their position, size, shape, and surroundings. Scanner manufacturers usually attach their own names to this technique. For example, Epson, Minolta, Nikon, Konica Minolta, Microtek, and others use Digital ICE, while Canon uses its own system, FARE (Film Automatic Retouching and Enhancement), and Plustek uses LaserSoft Imaging's iSRD. Some independent software developers design infrared cleaning tools.
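A minimal sketch of the general idea (not any vendor's actual algorithm such as Digital ICE or FARE): pixels that appear dark in the infrared channel are treated as dust or scratches and filled in from clean neighboring pixels. The array layout, threshold, and window size are illustrative assumptions.

```python
import numpy as np

def infrared_clean(rgb, ir, threshold=0.5, window=5):
    """Fill defect pixels (dark in infrared) with the median of clean neighbors.

    rgb: float array of shape (H, W, 3); ir: float array of shape (H, W);
    both assumed scaled to the 0..1 range.
    """
    defects = ir < threshold            # dust and scratches block infrared
    cleaned = rgb.copy()
    h, w = ir.shape
    r = window // 2
    for y, x in zip(*np.nonzero(defects)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        patch = rgb[y0:y1, x0:x1]
        clean = ~defects[y0:y1, x0:x1]  # ignore other defect pixels
        if clean.any():
            cleaned[y, x] = np.median(patch[clean], axis=0)
    return cleaned
```

Production implementations additionally weight the fill by each defect's position, size, shape, and surroundings, as described above.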
By combining full-color imagery with 3D models, modern hand-held scanners are able to completely reproduce objects electronically. The addition of 3D color printers enables accurate miniaturization of these objects, with applications across many industries and professions.
For scanner apps, the scan quality is highly dependent on the quality of the phone camera and on the framing chosen by the user of the app.
Connectivity
Scans must virtually always be transferred from the scanner to a computer or information storage system for further processing or storage. There are two basic issues: (1) how the scanner is physically connected to the computer and (2) how the application retrieves the information from the scanner.
Direct connection
The file size of a scan can go up to about 100 MB for a 600 dpi, 23 × 28 cm (slightly larger than A4 paper) uncompressed 24-bit image. Scanned files must be transferred and stored. Scanners can generate this volume of data in a matter of seconds, making a fast connection desirable.
Scanners communicate with their host computer using one of the following physical interfaces, listed roughly from slow to fast (a rough transfer-time comparison follows the list):
Parallel port – Connecting through a parallel port is the slowest common transfer method. Early scanners had parallel port connections that could not transfer data faster than about 70 kilobytes per second. The primary advantages of the parallel port connection were economy and simplicity: it avoided adding an interface card to the computer.
GPIB – General Purpose Interface Bus. Certain drum scanners like the Howtek D4000 featured both a SCSI and GPIB interface. The latter conforms to the IEEE-488 standard, introduced in the mid-1970s. The GPIB interface has only been used by a few scanner manufacturers, mostly serving the DOS/Windows environment. For Apple Macintosh systems, National Instruments provided a NuBus GPIB interface card.
Small Computer System Interface (SCSI) – SCSI has rarely been used since the early 21st century, being supported only by computers with a SCSI interface, either on a card or built in. Speeds increased over the evolution of the SCSI standard, but the widely available and easily set up USB and FireWire interfaces largely supplanted SCSI.
Universal Serial Bus (USB) – USB scanners can transfer data quickly. The early USB 1.1 standard could transfer data at 1.5 megabytes per second (slower than SCSI), but the later USB 2.0/3.0 standards can transfer at more than 20/60 megabytes per second in practice.
FireWire – Also known as IEEE 1394, FireWire is an interface of comparable speed to USB 2.0. Possible FireWire speeds are 100, 200, 400, and 800 megabits per second, but devices may not support all speeds.
Proprietary interfaces – Bespoke interfaces were used on some early scanners that used a proprietary interface card rather than a standard interface.
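To put these interface speeds in perspective, a short sketch estimates how long the roughly 100 MB uncompressed scan mentioned above would take to transfer; the effective throughputs are approximate figures drawn from the descriptions above, not exact specifications:

```python
FILE_MB = 100  # roughly the uncompressed 600 dpi, 24-bit scan described above

# Approximate effective throughputs in MB/s, per the list above.
interfaces = {
    "Parallel port": 0.07,  # about 70 kB/s
    "USB 1.1": 1.5,
    "USB 2.0": 20,          # practical rate, not the bus maximum
    "USB 3.0": 60,          # practical rate, not the bus maximum
}

for name, mb_per_s in interfaces.items():
    print(f"{name}: about {FILE_MB / mb_per_s:,.0f} seconds")
# Parallel port: about 1,429 seconds (~24 minutes); USB 1.1: about 67 seconds;
# USB 2.0: about 5 seconds; USB 3.0: about 2 seconds.
```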
Indirect connection
During the early 1990s, professional flatbed scanners were available that could be shared over a local computer network. This proved useful to publishers, print shops, and similar businesses. The functionality largely fell out of use as the cost of flatbed scanners fell enough to make sharing unnecessary.
From around 2000, all-in-one multi-purpose devices became available that were suitable for both small offices and consumers, combining printing, scanning, copying, and fax capability in a single apparatus that can be made available to all members of a workgroup.
Battery-powered portable scanners store scans on internal memory; they can later be transferred to a computer either by direct connection, typically USB, or in some cases a memory card may be removed from the scanner and plugged into the computer.
Applications programming interface
A raster image editor must be able to communicate with a scanner. There are many different scanners, and many of those scanners use different protocols. In order to simplify applications programming, some application programming interfaces (APIs) were developed. The API presents a uniform interface to the scanner. This means that the application does not need to know the specific details of the scanner in order to access it directly. For example, Adobe Photoshop supports the TWAIN standard; therefore in theory Photoshop can acquire an image from any scanner that has a TWAIN driver.
In practice, there are often problems with an application communicating with a scanner. Either the application or the scanner manufacturer (or both) may have faults in their implementation of the API.
Typically, the API is implemented as a dynamically linked library. Each scanner manufacturer provides software that translates the API procedure calls into primitive commands that are issued to a hardware controller (such as the SCSI, USB, or FireWire controller). The manufacturer's part of the API is commonly called a device driver, but that designation is not strictly accurate: the API does not run in kernel mode and does not directly access the device. Rather the scanner API library translates application requests into hardware requests.
Common scanner software API include:
TWAIN – An API used by most scanners. Originally used for low-end and home-use equipment, it is now widely used for large-volume scanning.
SANE (Scanner Access Now Easy) – A free/open-source API for accessing scanners. Originally developed for Unix and Linux operating systems, it has been ported to OS/2, Mac OS X, and Microsoft Windows. Unlike TWAIN, SANE does not handle the user interface; this allows batch scans and transparent network access without any special support from the device driver (see the sketch after this list).
Windows Image Acquisition (WIA) – An API provided by Microsoft for use on Microsoft Windows.
Image and Scanner Interface Specification (ISIS) – Created by Pixel Translations, ISIS still uses SCSI-2 for performance reasons and is used by large, departmental-scale machines.
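As an illustration of API-mediated scanning, the following minimal Python sketch drives `scanimage`, the command-line front end that ships with SANE. The exact flags depend on the installed SANE version and backend—in particular, the availability of `--format=tiff` is an assumption to verify locally.

```python
import subprocess

# List the scanners SANE knows about (equivalent to running `scanimage -L`).
devices = subprocess.run(["scanimage", "-L"], capture_output=True, text=True)
print(devices.stdout)

# Acquire one 300 dpi color scan and write it to a TIFF file.
# "--format=tiff" is assumed available; older sane-utils may offer only PNM.
with open("scan.tiff", "wb") as out:
    subprocess.run(
        ["scanimage", "--resolution", "300", "--mode", "Color", "--format=tiff"],
        stdout=out,
        check=True,
    )
```

Because SANE leaves the user interface to the application, simple batch scripting like this requires no vendor-specific code.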
Bundled applications
Scanners themselves require no software beyond a basic scanning utility, but many come bundled with additional software. Typically, in addition to the scanning utility, some type of raster image editor (such as Photoshop or GIMP) and optical character recognition (OCR) software are supplied. OCR software converts graphical images of text into standard text that can be edited using common word-processing and text-editing software; accuracy is rarely perfect.
Output data
Some scanners, especially those designed for scanning printed documents, only work in black and white, but most modern scanners work in color. For the latter, the scanned result is a non-compressed RGB image, which can be transferred to a computer's memory. The color output of different scanners is not the same due to the spectral response of their sensing elements, the nature of their light source, and the correction applied by the scanning software. While most image sensors have a linear response, the output values are usually gamma-compressed. Some scanners compress and clean up the image using embedded firmware. Once on the computer, the image can be processed with a raster graphics editor (such as Photoshop) and saved on a storage device (such as a hard disk).
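A brief sketch of the gamma compression step mentioned above, using the common approximation of a pure power law with γ = 2.2 (real devices often use piecewise curves such as sRGB's, so the exponent here is an illustrative assumption):

```python
import numpy as np

GAMMA = 2.2  # common approximation; sRGB actually uses a piecewise curve

def gamma_encode(linear):
    """Map linear sensor values in 0..1 to gamma-compressed output values."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)

def gamma_decode(encoded):
    """Invert the encoding to recover approximately linear values."""
    return np.clip(encoded, 0.0, 1.0) ** GAMMA

# A mid-gray linear value of 0.18 encodes to roughly 0.46:
print(round(float(gamma_encode(0.18)), 2))
```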
Scans may be stored uncompressed in image file formats such as BMP; stored losslessly compressed in file formats such as TIFF and PNG; stored lossy-compressed in file formats such as JPEG; or stored as embedded images or converted to vector graphics within a PDF. Optical character recognition (OCR) software allows a scanned image of text to be converted into editable text with reasonable accuracy, so long as the text is cleanly printed and in a typeface and size that can be read by the software. OCR capability may be integrated into the scanning software, or the scanned image file can be processed with a separate OCR program.
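As a sketch of the OCR step, the widely used Tesseract engine can be driven from Python through the pytesseract wrapper; the input file name is an assumption, and both Tesseract and the wrapper must be installed:

```python
from PIL import Image
import pytesseract  # Python wrapper around the Tesseract OCR engine

# Convert a scanned page image into editable text. As noted above, accuracy
# depends on clean printing and a typeface and size the engine can read.
page = Image.open("scanned_page.png")  # assumed input file
text = pytesseract.image_to_string(page)
print(text)
```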
Specific uses
Document processing
Document processing requirements differ from those of image scanning. These requirements include scanning speed, automated paper feed, and the ability to automatically scan both the front and the back of a document. Image scanning, on the other hand, typically requires the ability to handle fragile and/or three-dimensional objects and to scan at much higher resolution.
Document scanners have document feeders, usually larger than those sometimes found on copiers or all-purpose scanners. Scans are made at high speed, from 20 up to 420 pages per minute, often in grayscale, although many scanners support color. Many scanners can scan both sides of double-sided originals (duplex operation). Sophisticated document scanners have firmware or software that cleans up scans of text as they are produced, eliminating accidental marks and sharpening type; this would be unacceptable for photographic work, where marks cannot reliably be distinguished from desired fine detail. Files created are compressed as they are made.
The resolution used is usually from 150 to 300 dpi, although the hardware may be capable of 600 or higher resolution; this produces images of text good enough to read and for OCR, without the higher demands on storage space required by higher-resolution images.
Document scans are often processed using OCR technology to create editable and searchable files. Most scanners use ISIS or TWAIN device drivers to scan documents into TIFF format so that the scanned pages can be fed into a document management system that handles the archiving and retrieval of the scanned pages. Lossy JPEG compression, which is very efficient for pictures, is undesirable for text documents, as slanted straight edges take on a jagged appearance; solid black (or other single-color) text on a light background instead compresses well with lossless compression formats.
While paper feeding and scanning can be done automatically and quickly, preparation and indexing are necessary and require much work by humans. Preparation involves manually inspecting the papers to be scanned and making sure that they are in order, unfolded, without staples or anything else that might jam the scanner. Additionally, some industries such as legal and medical may require documents to have Bates Numbering or some other mark giving a document identification number and date/time of the document scan.
Indexing involves associating relevant keywords to files so that they can be retrieved by content. This process can sometimes be automated to some extent, but it often requires manual labour performed by data-entry clerks. One common practice is the use of barcode-recognition technology: during preparation, barcode sheets with folder names or index information are inserted into the document files, folders, and document groups. Using automatic batch scanning, the documents are saved into appropriate folders, and an index is created for integration into document management systems.
A specialized form of document scanning is book scanning. Technical difficulties arise from the books usually being bound and sometimes fragile and irreplaceable, but some manufacturers have developed specialized machinery to deal with this. Often special robotic mechanisms are used to automate the page-turning and scanning process.
Other uses
Flatbed scanners have been used as digital backs for large-format cameras to create high-resolution digital images of static subjects. A modified flatbed scanner has been used for documentation and quantification of thin-layer chromatograms detected by fluorescence quenching on silica gel layers containing an ultraviolet (UV) indicator. The ChromImage is reportedly the first commercial flatbed-scanner densitometer; it enables acquisition of TLC plate images and quantification of chromatograms using Galaxie-TLC software. Besides being turned into densitometers, flatbed scanners have also been turned into colorimeters by various methods; the Trichromatic Color Analyser is reportedly the first distributable system using a flatbed scanner as a tristimulus colorimetric device.
Flatbed scanners may also be used to create artwork directly, in a practice known as scanography.
In the biomedical research field, detection devices for DNA microarrays are also referred to as scanners. These scanners are high-resolution systems (up to 1 μm/pixel), similar to microscopes. Detection is performed using CCDs or photomultiplier tubes.
In pathology, scanners are used to capture glass slides with tissue from biopsies and other kinds of sampling, allowing for various methods of digital pathology such as telepathology and the application of artificial intelligence for interpretation.
See also
3D scanner
Barcode reader
Display resolution
Photocopier
Telecine
References
External links
Information management
Office equipment
Records management
Records management technology
English inventions
German inventions
Italian inventions | Image scanner | ["Technology"] | 8,632 | ["Information systems", "Information management"] |
317,695 | https://en.wikipedia.org/wiki/Moulting | In biology, moulting (British English), or molting (American English), also known as sloughing, shedding, or in many invertebrates, ecdysis, is a process by which an animal casts off parts of its body to serve some beneficial purpose, either at specific times of the year, or at specific points in its life cycle.
In medieval times, it was also known as "mewing" (from the French verb "muer", to moult), a term that lives on in the name of Britain's Royal Mews, where the King's hawks used to be kept during moulting time before the buildings became horse stables after Tudor times.
Moulting can involve shedding the epidermis (skin), pelage (hair, feathers, fur, wool), or other external layer. In some groups, other body parts may be shed, for example, the entire exoskeleton in arthropods, including the wings in some insects.
Examples
In birds
In birds, moulting is the periodic replacement of feathers by shedding old feathers while producing new ones. Feathers are dead structures at maturity which are gradually abraded and need to be replaced. Adult birds moult at least once a year, although many moult twice and a few three times each year. It is generally a slow process: birds rarely shed all their feathers at any one time. The bird must retain sufficient feathers to regulate its body temperature and repel moisture. The number and area of feathers that are shed varies. In some moulting periods, a bird may renew only the feathers on the head and body, shedding the wing and tail feathers during a later moulting period.
Some species of bird become flightless during an annual "wing moult" and must seek a protected habitat with a reliable food supply during that time. While the plumage may appear thin or uneven during the moult, the bird's general shape is maintained despite the loss of apparently many feathers; bald spots are typically signs of unrelated illnesses, such as gross injuries, parasites, feather pecking (especially in commercial poultry), or (in pet birds) feather plucking. Some birds will drop feathers, especially tail feathers, in what is called a "fright moult".
The process of moulting in birds is as follows: First, the bird begins to shed some old feathers, then pin feathers grow in to replace the old feathers. As the pin feathers become full feathers, other feathers are shed. This is a cyclical process that occurs in many phases. It is usually symmetrical, with feather loss equal on each side of the body. Because feathers make up 4–12% of a bird's body weight, it takes a large amount of energy to replace them.
For this reason, moults often occur immediately after the breeding season, but while food is still abundant. The plumage produced during this time is called postnuptial plumage. Prenuptial moulting occurs in red-collared widowbirds where the males replace their nonbreeding plumage with breeding plumage. It is thought that large birds can advance the moult of severely damaged feathers.
Determining the process birds go through during moult can be useful in understanding breeding, migration, and foraging strategies. One non-invasive method of studying moult in birds is field photography. The evolutionary and ecological forces driving moult can also be investigated using intrinsic markers such as stable hydrogen isotope (δ2H) analysis. In some tropical birds, such as the common bulbul, breeding seasonality is weak at the population level; moult can instead show high seasonality, with individuals probably under strong selection to match moult to peak environmental conditions.
A 2023 paleontological analysis concluded that moulting probably evolved late in the evolutionary lineage of birds.
Forced moulting
In some countries, flocks of commercial layer hens are force-moulted to reinvigorate egg-laying. This usually involves complete withdrawal of their food and sometimes water for 7–14 days or up to 28 days under experimental conditions, which presumably reflect standard farming practice in some countries. This causes a body weight loss of 25 to 35%, which stimulates the hen to lose her feathers, but also reinvigorates egg-production.
Some flocks may be force-moulted several times. In 2003, more than 75% of all flocks in the US were force-moulted. Other methods of inducing a moult include low-density diets (e.g. grape pomace, cottonseed meal, alfalfa meal) or dietary manipulation to create an imbalance of particular nutrients. The most important among these involve manipulation of minerals, including sodium (Na), calcium (Ca), iodine (I) and zinc (Zn), with fully or partially reduced dietary intakes.
In reptiles and amphibians
Squamates periodically engage in moulting, as their skin is scaly. The most familiar example of moulting in such reptiles is when snakes "shed their skin". This is usually achieved by the snake rubbing its head against a hard object, such as a rock (or between two rocks) or piece of wood, causing the already stretched skin to split.
At this point, the snake continues to rub its skin on objects, causing the end nearest the head to peel back on itself, until the snake is able to crawl out of its skin, effectively turning the moulted skin inside-out. This is similar to how one might remove a sock from one's foot by grabbing the open end and pulling it over itself. The snake's skin is often left in one piece after the moulting process, including the discarded brille (ocular scale), so that the moult is vital for maintaining the animal's quality of vision. The skins of lizards, in contrast, generally fall off in pieces.
Both frogs and salamanders moult regularly and consume the skin, with some species moulting in pieces and others in one piece.
In arthropods
In arthropods, such as insects, arachnids and crustaceans, moulting is the shedding of the exoskeleton, which is often called its shell, typically to let the organism grow. This process is called ecdysis. Most Arthropoda with soft, flexible skins also undergo ecdysis. Ecdysis permits metamorphosis, the sometimes radical difference between the morphology of successive instars.
A new skin can replace structures, such as by providing new external lenses for eyes. The new exoskeleton is initially soft but hardens after the moulting of the old exoskeleton. The old, cast-off exoskeleton is called the exuviae. While moulting, insects cannot breathe. In the crustacean Ovalipes catharus, moulting must occur before mating.
In dogs
Most dogs moult twice each year, in the spring and autumn, depending on the breed, environment, and temperature. Dogs that shed much more than usual are said to be "blowing coat" or experiencing a "coat blow".
Gallery
See also
Abscission (Shedding, more general)
References
External links
Moulting in Pigeons
Moulting in Chicken and other fowl
Animal developmental biology
Skin
Ethology | Moulting | ["Biology"] | 1,524 | ["Behavioural sciences", "Ethology", "Behavior"] |
317,730 | https://en.wikipedia.org/wiki/Refrain | A refrain (from Vulgar Latin refringere, "to repeat", and later from Old French refraindre) is the line or lines that are repeated in music or in poetry—the "chorus" of a song. Poetic fixed forms that feature refrains include the villanelle, the virelay, and the sestina.
In popular music, the refrain or chorus may contrast with the verse melodically, rhythmically, and harmonically; it may assume a higher level of dynamics and activity, often with added instrumentation. Chorus form, or strophic form, is a sectional and/or additive way of structuring a piece of music based on the repetition of one formal section or block played repeatedly.
Usage in history
Although repeats of refrains may use different words, refrains are made recognizable by reusing the same melody (when sung as music) and by preserving any rhymes. For example, "The Star-Spangled Banner" contains a refrain which is introduced by a different phrase in each verse, but which always ends:
O'er the land of the free, and the home of the brave.
A similar refrain is found in the "Battle Hymn of the Republic", which affirms in successive verses that "Our God", or "His Truth", is "marching on."
Refrains usually, but not always, come at the end of the verse. Some songs, especially ballads, incorporate refrains (or burdens) into each verse. For example, one version of the traditional ballad "The Cruel Sister" includes a refrain mid-verse:
There lived a lady by the North Sea shore,
Lay the bent to the bonny broom
Two daughters were the babes she bore.
Fa la la la la la la la la.
As one grew bright as is the sun,
Lay the bent to the bonny broom
So coal black grew the other one.
Fa la la la la la la la.
. . .
(Note: the refrain of "Lay the bent to the bonny broom" is not traditionally associated with the ballad of "The Cruel Sister" (Child #10). This was the work of 'pop-folk' group Pentangle on their 1970 LP Cruel Sister which has subsequently been picked up by many folk singers as being traditional. Both the melody and the refrain come from the ballad known as "Riddles Wisely Expounded" (Child #1).)
Here, the refrain is syntactically independent of the narrative poem in the song, has no obvious relationship to its subject, and indeed has little inherent meaning at all. The device can also convey material that relates to the subject of the poem; such a refrain is found in Dante Gabriel Rossetti's "Troy Town".
Phrases of apparent nonsense in refrains (Lay the bent to the bonny broom?), and syllables such as fa la la, familiar from the Christmas carol "Deck the Halls with Boughs of Holly", have given rise to much speculation. Some believe that the traditional refrain Hob a derry down O encountered in some English folksongs is in fact an ancient Celtic phrase meaning "dance around the oak tree." These suggestions remain controversial.
In popular music
There are two distinct uses of the word "chorus". In the thirty-two-bar song form that was most common in earlier twentieth-century popular music (especially the Tin Pan Alley tradition), "chorus" referred to the entire main section of the song (which was in a thirty-two-bar AABA form). Beginning with the rock music of the 1950s, another form became more common in commercial pop music, based on an open-ended cycle of verses instead of a fixed 32-bar form. In this form (which is more common than thirty-two-bar form in later twentieth-century pop music), "choruses" with fixed lyrics are alternated with "verses" in which the lyrics differ with each repetition. In this use of the word, the chorus contrasts with the verse, which usually has a sense of leading up to the chorus. "Many popular songs, particularly from early in this century, are in a verse and a chorus (refrain) form. Most popular songs from the middle of the century consist only of a chorus."
While the terms 'refrain' and 'chorus' are often used synonymously, it has been suggested that 'refrain' be used exclusively for a recurring line of identical text and melody that is part of a formal section—an A section in an AABA form (as in "I Got Rhythm": "...who could ask for anything more?") or a verse (as in "Blowin' in the Wind": "...the answer my friend is blowing in the wind")—whereas 'chorus' refers to a discrete formal section (as in "Yellow Submarine": "We all live in a..."). According to the musicologists Ralf von Appen and Markus Frei-Hauenschild:
In German, the term, "Refrain," is used synonymously with "chorus" when referring to a chorus within the verse/chorus form. At least one English-language author, Richard Middleton, uses the term in the same way.
In English usage, however, the term, »refrain« typically refers to what in German is more precisely called the »Refrainzeile« (refrain line): a lyric at the beginning or end of a section that is repeated in every iteration. In this usage, the refrain does not constitute a discrete, independent section within the form.
In jazz
Many Tin-Pan Alley songs using thirty-two bar form are central to the traditional jazz repertoire. In jazz arrangements the word "chorus" refers to the same unit of music as in the Tin Pan Alley tradition, but unlike the Tin Pan Alley tradition a single song can have more than one chorus. Von Appen and Frei-Hauenschild explain, "The term, 'chorus' can also refer to a single iteration of the entire 32 bars of the AABA form, especially among jazz musicians, who improvise over multiple repetitions of such choruses."
Arranger's chorus
In jazz, an arranger's chorus is where the arranger uses particularly elaborate techniques to exhibit their skill and to impress the listener. This may include use of counterpoint, reharmonization, tone color, or any other arranging device. The arranger's chorus is generally not the first or the last chorus of a jazz performance.
Shout chorus
In jazz, a shout chorus (occasionally: out chorus) is usually the last chorus of a big band arrangement, and is characterized by being the most energetic, lively, and exciting and by containing the musical climax of the piece. A shout chorus characteristically employs extreme ranges, loud dynamics, and a re-arrangement of melodic motives into short, accented riffs. Shout choruses often feature tutti or concerted writing, but may also use contrapuntal writing or call and response between the brass and saxophones, or between the ensemble and the drummer. Additionally, brass players frequently use extended techniques such as falls, doits, turns, and shakes to add excitement.
See also
Bridge (music)
Hook (music)
Pallavi, a refrain in carnatic music
Ritornello
References
Formal sections in music analysis
Jazz terminology
Musical terminology
Song forms | Refrain | ["Technology"] | 1,512 | ["Components", "Formal sections in music analysis"] |
317,794 | https://en.wikipedia.org/wiki/Lake%20Lyndon%20B.%20Johnson | Lake Lyndon B. Johnson (more commonly referred to as Lake LBJ and originally named Lake Granite Shoals) is a reservoir on the Colorado River in the Texas Hill Country about 45 miles northwest of Austin. The reservoir was formed in 1950 by the construction of Granite Shoals Dam by the Lower Colorado River Authority (LCRA). The Colorado River and the Llano River meet in the northern portion of the lake at Kingsland.
Location and history
The towns of Granite Shoals, Kingsland, Horseshoe Bay, Highland Haven, and Sunrise Beach are located on the lake. The boundary line separating Burnet County and Llano County runs down the center of the lake.
The lake was originally called Lake Granite Shoals. The dam was renamed Wirtz Dam in 1952 for Alvin J. Wirtz, the first general counsel of the LCRA, and the lake was renamed Lake Lyndon B. Johnson in 1965 in honor of US President Lyndon Baines Johnson. In addition to his work to enact the Rural Electrification Act, which formed the basis for building the Texas Highland Lakes, President Johnson owned a ranch on the lake (separate and apart from the LBJ Ranch in Stonewall, Texas). He and Mrs. Johnson entertained national and foreign dignitaries on the lake during his vice presidency and presidency.
The other reservoirs on the Colorado River are Lake Buchanan, Inks Lake, Lake Marble Falls, Lake Travis, Lake Austin, and Lady Bird Lake. Lake LBJ along with Inks Lake and Lake Marble Falls are pass-through lakes for Lake Buchanan and Lake Travis. There is no room in Lake LBJ for additional water storage, and water that comes in must go out. Therefore, Lake LBJ is at a near constant level, but the level can fluctuate, especially during a flood. The LCRA lowers the lake periodically for maintenance on Wirtz Dam and to allow landowners to remove sediment around their docks.
Fish and wildlife populations
Lake LBJ has been stocked with several species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Lake LBJ include largemouth bass, white bass, catfish, and crappie. Lake LBJ is one of the Highland Lakes infested with hydrilla, a non-native aquatic plant species, and the LCRA is conducting treatment to eradicate it.
Recreational uses
Most of the property bordering Lake LBJ is privately owned. The Nightengale Archaeological Center at Kingsland is an educational park operated by the Lower Colorado River Authority adjacent to Lake LBJ. The lake is also home to Camp Champions, the only summer camp with property on the lake. The popularity of Lake LBJ is largely due to its normally constant water level, which provides ideal conditions for boating, water skiing, riding personal watercraft, and other water sports. Swimming in summer months is inadvisable due to the presence of the rare but deadly Naegleria fowleri.
Cooling water
The lake provides cooling water for the Thomas C. Ferguson Power Plant that is located on its shores.
See also
List of memorials to Lyndon B. Johnson
References
External links
Official LCRA Wirtz Dam and Lake LBJ website
Lake LBJ
Lake LBJ - Texas Parks & Wildlife
Nightengale Archaeological Center
City of Granite Shoals web site
Camp Champions
Lyndon B. Johnson
Bodies of water of Burnet County, Texas
Bodies of water of Llano County, Texas
1950 establishments in Texas
Cooling ponds
Lower Colorado River Authority | Lake Lyndon B. Johnson | ["Chemistry", "Environmental_science"] | 720 | ["Cooling ponds", "Water pollution"] |
317,844 | https://en.wikipedia.org/wiki/Cooling%20pond | A cooling pond is a man-made body of water primarily formed for the purpose of cooling heated water or to store and supply cooling water to a nearby power plant or industrial facility such as a petroleum refinery, pulp and paper mill, chemical plant, steel mill or smelter.
Overview
Cooling ponds are used where sufficient land is available, as an alternative to cooling towers or discharging of heated water to a nearby river or coastal bay, a process known as “once-through cooling.” The latter process can cause thermal pollution of the receiving waters. Cooling ponds are also sometimes used with air conditioning systems in large buildings as an alternative to cooling towers.
The pond receives thermal energy in the water from the plant's condensers during the process of energy production and the thermal energy is then dissipated mainly through evaporation and convection. Once the water has cooled in the pond, it is reused by the plant. New water is added to the system (“make-up” water) to replace the water lost through evaporation.
A 1970 research study published by the U.S. Environmental Protection Agency reported that cooling ponds have a lower overall electrical cost than cooling towers while providing the same benefits. The study concluded that a cooling pond will work optimally within 5 degrees Fahrenheit of natural water temperature with an area encompassing approximately 4 acres per megawatt of dissipated thermal energy.
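As a rough worked example of the study's sizing rule (the waste-heat figure below is an illustrative assumption, not a value from the study):

```python
# Rule of thumb from the 1970 EPA study cited above:
# roughly 4 acres of pond surface per megawatt of dissipated thermal energy.
ACRES_PER_MW = 4
ACRES_TO_HECTARES = 0.4047

def pond_area_acres(thermal_mw):
    """Approximate pond surface area needed to dissipate the given heat load."""
    return thermal_mw * ACRES_PER_MW

thermal_mw = 500  # illustrative waste-heat load for a mid-sized plant
acres = pond_area_acres(thermal_mw)
print(f"{thermal_mw} MW thermal -> about {acres:,} acres "
      f"({acres * ACRES_TO_HECTARES:,.0f} hectares)")
# 500 MW thermal -> about 2,000 acres (809 hectares)
```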
Examples
Lake Anna is a cooling pond in Virginia, which provides cooling water for the North Anna Nuclear Generating Station. This pond has recreational uses such as fishing, swimming, boating, camping, and picnicking as well as being a cooling pond for the nuclear plant.
The cooling pond at the Chernobyl Nuclear Power Plant (Pripyat, Ukraine) has abundant wildlife, despite the radiation present in the area. There are some accounts of wels catfish (Silurus glanis) growing up to 350 pounds and having a lifespan of up to 50 years in the area.
The Columbia Energy Center in Pacific, Wisconsin is a coal fired power plant with a capacity of 1000 MW. A dual cooling system is used for heat rejection that consists of a cooling pond and two cooling towers. The pond and towers are connected in a parallel arrangement to help dissipate thermal energy at expedited rates.
In 1994 the reactor at Yongbyon Nuclear Scientific Research Center, North Korea, was under U.S scrutiny and its nuclear fuel rods were taken out of the reactor and placed in the facility's cooling pond. The fuel rods have since been removed.
At the 2.05 MW Ashford A power station in Kent, UK, cooling water for the oil-fired engines was obtained from, and returned to, cooling water ponds. The principal cooling mechanism in the ponds was convection from the water surface.
At the 89 MW Back o' th' Bank power station in Bolton, UK, the cooling water was cooled in four spray ponds. The small size of the spray droplets improved heat transfer, increased evaporation, and led to more effective cooling. Each cooling pond had a capacity of 0.75 million gallons per hour (0.95 m3/s). Make-up water was abstracted from the nearby River Tonge. In about 1950, a hyperbolic reinforced-concrete cooling tower was built with a capacity of 2.5 million gallons per hour (3.15 m3/s) and a cooling range of 15 °F (8.3 °C). However, there were complaints that operation of the cooling tower led to problems with ice in cold weather, as water vapour from the tower froze into fine particles.
In 1963 the UK's Central Electricity Generating Board (CEGB) was researching the possibility of using warmed cooling water from power stations to support fish-farming both for recreational use and for food. At Grove Road power station in London water was cooled in wooden natural draft cooling towers and fell into cooling water ponds. The CEGB introduced carp (Cyprinus carpio), grass carp, silver carp and Tilapia into the cooling water ponds; the fish grew rapidly in the warm water (up to 27 °C).
Zaporizhzhia Nuclear Power Plant, Ukraine, has massive cooling ponds with additional water spray.
See also
Pond
Solar pond (thermal energy collector)
Deep lake water cooling
References
Cooling technology
Ponds
Water pollution | Cooling pond | ["Chemistry", "Environmental_science"] | 869 | ["Cooling ponds", "Water pollution"] |
317,900 | https://en.wikipedia.org/wiki/Clothes%20dryer | A clothes dryer (tumble dryer, drying machine, or simply dryer) is a powered household appliance that is used to remove moisture from a load of clothing, bedding and other textiles, usually after they are washed in the washing machine.
Many dryers consist of a rotating drum called a "tumbler" through which heated air is circulated to evaporate moisture while the tumbler is rotated to maintain air space between the articles. Using such a machine may cause clothes to shrink or become less soft (due to loss of short soft fibers). A simpler non-rotating machine called a "drying cabinet" may be used for delicate fabrics and other items not suitable for a tumble dryer. Other machines include steam to de-shrink clothes and avoid ironing.
Tumble dryers
Tumble dryers continuously draw in the ambient air around them and heat it before passing it through the tumbler. The resulting hot, humid air is usually vented outside to make room for more air to continue the drying process.
Tumble dryers are sometimes integrated with a washing machine, in the form of washer-dryer combos, which are essentially a front-loading washing machine with an integrated dryer, or (in the US) a laundry center, which stacks the dryer on top of the washer and integrates the controls for both machines into a single control panel. Often the washer and dryer functions have different capacities, with the dryer usually having a lower capacity than the washer. Tumble dryers can also be top-loading, in which the drum is loaded from the top of the machine and the drum's end supports are on the left and right sides instead of the more conventional front and rear. They can be notably thin in width, and may include detachable stationary racks for drying items like plush toys and footwear.
Ventless dryers
Spin dryers
These centrifuge machines simply spin their drums much faster than a typical washer could, in order to extract additional water from the load. They may remove more water in two minutes than a heated tumble dryer can in twenty, thus saving significant amounts of time and energy. Although spinning alone will not completely dry clothing, this additional step saves a worthwhile amount of time and energy for large laundry operations such as those of hospitals.
Condenser dryers
Just as in a tumble dryer, condenser or condensation dryers pass heated air through the load. However, instead of exhausting this air, the dryer uses a heat exchanger to cool the air and condense the water vapor into either a drain pipe or a collection tank. The drier air is run through the loop again. The heat exchanger typically uses ambient air as its coolant, therefore the heat produced by the dryer will go into the immediate surroundings instead of the outside, increasing the room temperature. In some designs, cold water is used in the heat exchanger, eliminating this heating, but requiring increased water usage.
In terms of energy use, condenser dryers typically require around 2 kilowatt hours (kW⋅h) of energy per average load.
Because the heat exchange process simply cools the internal air using ambient air (or cold water in some cases), it will not dry the air in the internal loop to as low a level of humidity as typical fresh, ambient air. As a consequence of the increased humidity of the air used to dry the load, this type of dryer requires somewhat more time than a tumble dryer. Condenser dryers are a particularly attractive option where long, intricate ducting would be required to vent the dryer.
Heat pump dryers
A closed-cycle heat pump clothes dryer uses a heat pump to dehumidify the processing air. Such dryers typically use under half the energy per load of a condenser dryer.
Whereas condensation dryers use a passive heat exchanger cooled by ambient air, these dryers use a heat pump. The hot, humid air from the tumbler is passed through the heat pump, where the cold side condenses the water vapor into either a drain pipe or a collection tank and the hot side reheats the air for reuse. In this way, not only does the dryer avoid the need for ducting, but it also conserves much of its heat within the dryer instead of exhausting it into the surroundings. Heat pump dryers can therefore use up to 50% less energy than either condensation or conventional electric dryers. Heat pump dryers use about 1 kW⋅h of energy to dry an average load, instead of 2 kW⋅h for a condenser dryer or 3 to 9 kW⋅h for a conventional electric dryer. Domestic heat pump dryers are designed to work in typical indoor ambient temperatures; below a minimum ambient temperature, drying times significantly increase.
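A short sketch comparing annual energy use from the per-load figures above; the number of loads per year and the electricity price are illustrative assumptions:

```python
# Per-load energy figures quoted in the text above (kWh per average load).
KWH_PER_LOAD = {
    "Heat pump dryer": 1.0,
    "Condenser dryer": 2.0,
    "Conventional electric dryer": 6.0,  # midpoint of the 3-9 kWh range
}

LOADS_PER_YEAR = 200   # illustrative household assumption
PRICE_PER_KWH = 0.30   # illustrative electricity price

for dryer, kwh in KWH_PER_LOAD.items():
    annual_kwh = kwh * LOADS_PER_YEAR
    cost = annual_kwh * PRICE_PER_KWH
    print(f"{dryer}: {annual_kwh:,.0f} kWh/year, about {cost:,.0f} per year")
# Heat pump: 200 kWh; condenser: 400 kWh; conventional: 1,200 kWh per year.
```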
As with condensation dryers, the heat exchanger will not dry the internal air to as low a level of humidity as the typical ambient air. With respect to ambient air, the higher humidity of the air used to dry the clothes has the effect of increasing drying times; however, because heat pump dryers conserve much of the heat of the air they use, the already-hot air can be cycled more quickly, possibly leading to shorter drying times than tumble dryers, depending on the model.
Mechanical steam compression dryers
A new type of dryer in development, these machines are a more advanced version of heat pump dryers. Instead of using hot air to dry the clothing, mechanical steam compression dryers use water recovered from the clothing in the form of steam. First, the tumbler and its contents are heated until the water in the clothing boils. The resulting wet steam purges the system of air and is the only remaining atmosphere in the tumbler.
As wet steam exits the tumbler, it is mechanically compressed (hence the name) to extract water vapor and transfer the heat of vaporization to the remaining gaseous steam. This pressurized, gaseous steam is then allowed to expand, and is superheated before being injected back into the tumbler where its heat causes more water to vaporize from the clothing, creating more wet steam and restarting the cycle.
Like heat pump dryers, mechanical steam compression dryers recycle much of the heat used to dry the clothes, and they operate in a very similar range of efficiency as heat pump dryers. Both types can be over twice as efficient as conventional tumble dryers. The considerably higher temperatures used in mechanical steam compression dryers result in drying times on the order of half as long as those of heat pump dryers.
Convectant drying
Marketed by some manufacturers as a "static clothes drying technique", convectant dryers simply consist of a heating unit at the bottom, a vertical chamber, and a vent at top. The unit heats air at the bottom, reducing its relative humidity, and the natural tendency of hot air to rise brings this low-humidity air into contact with the clothes. This design is slower than conventional tumble dryers, but relatively energy-efficient if well-implemented. It works particularly well in cold and humid environments, where it dries clothes substantially faster than line-drying. In hot and dry weather, the performance delta over line-drying is negligible.
Given that this is a relatively simple and cheap technique to implement, most consumer products showcase the added benefit of portability and/or modularity. Newer designs implement a fan heater at the bottom to pump hot air into the vertical drying-rack chamber. High temperatures can be reached inside these "hot air balloons", yet lint, static cling, and shrinkage are minimal. Upfront cost is significantly lower than for tumble, condenser, and heat pump designs.
If used in combination with washing machines featuring fast spin cycles (800+ rpm) or spin dryers, the cost-effectiveness of this technique has the potential to render tumble dryer-like designs obsolete in single-person and small family households. One disadvantage is that the moisture from the clothes is released into the immediate surroundings. Proper ventilation or a complementary dehumidifier is recommended for indoor use. It also cannot compete with the tumble dryer's capacity to dry multiple loads of wet clothing in a single day.
Solar clothes dryer
The solar dryer is a box-shaped stationary construction which encloses a second compartment where the clothes are held. It uses the sun's heat without direct sunlight reaching the clothes. Alternatively, a solar heating box may be used to heat air that is driven through a conventional tumbler dryer.
Microwave dryers
Japanese manufacturers have developed highly efficient clothes dryers that use microwave radiation to dry the clothes (though the vast majority of Japanese households air-dry their laundry). Most of the drying is done using microwaves to evaporate the water, but the final drying is done by convection heating to avoid problems of arcing with metal pieces in the laundry. There are a number of advantages: shorter drying times (25% less), energy savings (17–25% less), and lower drying temperatures. Some analysts think that arcing and fabric damage are factors preventing microwave dryers from being developed for the US market.
Ultrasonic dryers
Ultrasonic dryers use high-frequency signals to drive piezoelectric actuators in order to mechanically shake the clothes, releasing water in the form of a mist which is then removed from the drum. They have the potential to significantly cut energy consumption while needing only one-third of the time needed by a conventional electric dryer for a given load. They also do not have the same issues related with lint in most other types of dryers.
Hybrid dryers
Some manufacturers, like LG Electronics and Whirlpool, have introduced hybrid dryers that offer the user the option of drying with either a heat pump or a traditional electric heating element. Hybrid dryers can also use the heat pump and the heating element at the same time to dry clothes faster.
Static electricity
Clothes dryers can cause static cling through the triboelectric effect. This can be a minor nuisance and is often a symptom of over-drying textiles to below their equilibrium moisture level, particularly when using synthetic materials. Fabric conditioning products such as dryer sheets are marketed to dissipate this static charge, depositing surfactants onto the fabric load by mechanical abrasion during tumbling. Modern dryers often have improved temperature and humidity sensors and electronic controls which aim to stop the drying cycle once textiles are sufficiently dry, avoiding over-drying and the static charge and energy wastage this causes.
Pest control use
Drying at sufficiently high heat for thirty minutes kills many parasites, including house dust mites, bed bugs, and scabies mites and their eggs; a little more than ten minutes kills ticks. Simply washing drowns dust mites, and exposure to direct sunlight for three hours kills their eggs.
Lint build-up (tumble dryers)
Moisture and lint are byproducts of the tumble drying process and are pulled from the drum by a fan motor and then pushed through the remaining exhaust conduit to the exterior termination fitting. Typical exhaust conduit comprises flex transition hose found immediately behind the dryer, the rigid galvanized pipe and elbow fittings found within the wall framing, and the vent duct hood found outside the house.
A clean, unobstructed dryer vent improves both the efficiency and safety of the dryer. As the dryer duct pipe becomes partially obstructed and filled with lint, drying time markedly increases and causes the dryer to waste energy. A blocked vent increases the internal temperature and may result in a fire. Clothes dryers are one of the more costly home appliances to operate.
Several factors can contribute to or accelerate rapid lint build-up. These include long or restrictive ducts, bird or rodent nests in the termination, crushed or kinked flex transition hose, terminations with screen-like features, and condensation within the duct due to un-insulated ducts traveling through cold spaces such as a crawl space or attic. If plastic flaps are fitted at the outside end of the duct, one may be able to flex, bend, and temporarily remove them in order to clean their inside surfaces and the last foot or so of the duct before reattaching them. The flaps keep insects, birds, and snakes out of the dryer vent pipe. During cold weather, warm wet air condenses on the flaps, and trace amounts of lint stick to their wet inside surfaces at the outside of the building.
Ventless dryers include multi-stage lint filtration systems, and some even include automatic evaporator and condenser cleaning functions that can operate while the dryer is running; the evaporator and condenser are usually cleaned with running water. These systems are necessary to prevent lint from building up inside the dryer and on the evaporator and condenser coils.
Aftermarket add-on lint and moisture traps can be attached to the dryer duct pipe, on machines originally manufactured as outside-venting, to facilitate installation where an outside vent is not available. Increased humidity at the location of installation is a drawback to this method.
Safety
Dryers expose flammable materials to heat. Underwriters Laboratories recommends cleaning the lint filter after every cycle for safety and energy efficiency, provision of adequate ventilation, and cleaning of the duct at regular intervals. UL also recommends that dryers not be used for glass fiber, rubber, foam or plastic items, or any item that has had a flammable substance spilled on it.
In the United States, a 2012 report from the US Fire Administration estimated that from 2008 to 2010, fire departments responded to roughly 2,900 clothes dryer fires in residential buildings each year across the nation. These fires resulted in an annual average loss of 5 deaths, 100 injuries, and $35 million in property loss. The Fire Administration identifies "failure to clean" (34%) as the leading factor contributing to clothes dryer fires in residential buildings, and observed that new home construction trends place clothes dryers and washing machines in more hazardous locations away from outside walls, such as in bedrooms, second-floor hallways, bathrooms, and kitchens.
To address the problem of clothes dryer fires, a fire suppression system can be used with sensors to detect the change in temperature when a blaze starts in a dryer drum. These sensors then activate a water vapor mechanism to put out the fire.
Environmental impact
The environmental impact of clothes dryers is especially severe in the US and Canada, where over 80% of all homes have a clothes dryer. According to the US Environmental Protection Agency, if all residential clothes dryers sold in the US were energy efficient, "the utility cost savings would grow to more than $1.5 billion each year and more than 10 billion kilograms (22 billion pounds) of annual greenhouse gas emissions would be prevented".
Clothes dryers are second only to refrigerators and freezers as the largest residential electrical energy consumers in America.
In the European Union, the EU energy labeling system is applied to dryers; dryers are classified with a label from A+++ (best) to G (worst) according to the amount of energy used per kilogram of clothes (kW⋅h/kg). Sensor dryers can automatically sense when clothes are dry and switch off, making over-drying less frequent. Sensor models now make up most of the European market and are normally available in both condenser and vented designs.
History
A hand-cranked clothes dryer was created in 1800 by M. Pochon from France. Henry W. Altorfer invented and patented an electric clothes dryer in 1937. J. Ross Moore, an inventor from North Dakota, developed designs for automatic clothes dryers and published his design for an electrically operated dryer in 1938. Industrial designer Brooks Stevens developed an electric dryer with a glass window in the early 1940s.
See also
Laundry-folding machine
List of home appliances
Sheila Maid
Shoe dryer
Surge protector
References
External links
"What You Should Know About Clothes Dryers." Popular Mechanics, December 1954, pp. 170–175, basic principles of dryers even today.
19th-century inventions
Dryers
Home appliances
Laundry drying equipment
Products introduced in 1937 | Clothes dryer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 3,374 | [
"Machines",
"Chemical equipment",
"Dryers",
"Physical systems",
"Home appliances"
] |
317,921 | https://en.wikipedia.org/wiki/Hydromorphone | Hydromorphone, also known as dihydromorphinone, and sold under the brand name Dilaudid among others, is a morphinan opioid used to treat moderate to severe pain. Typically, long-term use is only recommended for pain due to cancer. It may be used by mouth or by injection into a vein, muscle, or under the skin. Effects generally begin within half an hour and last for up to five hours. A 2016 Cochrane review (updated in 2021) found little difference in benefit between hydromorphone and other opioids for cancer pain.
Common side effects include dizziness, sleepiness, nausea, itchiness, and constipation. Serious side effects may include abuse, low blood pressure, seizures, respiratory depression, and serotonin syndrome. Rapidly decreasing the dose may result in opioid withdrawal. Generally, use during pregnancy or breastfeeding is not recommended. Hydromorphone is believed to work by activating opioid receptors, mainly in the brain and spinal cord. Hydromorphone 2 mg IV is equivalent to approximately 10 mg morphine IV.
Hydromorphone was patented in 1923. Hydromorphone is made from morphine. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 233rd most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Side effects
Adverse effects of hydromorphone are similar to those of other potent opioid analgesics such as morphine and heroin. The major hazards of hydromorphone include dose-related respiratory depression, urinary retention, bronchospasm, and sometimes, circulatory depression. More common side effects include lightheadedness, dizziness, sedation, itching, constipation, nausea, vomiting, headache, perspiration, and hallucinations. These symptoms are common in ambulatory patients and in those not experiencing severe pain.
Simultaneous use of hydromorphone with other opioids, muscle relaxants, tranquilizers, sedatives, and general anesthetics may cause a significant increase in respiratory depression, progressing to coma or death. Taking benzodiazepines (e.g., diazepam) in conjunction with hydromorphone may increase side effects such as dizziness and difficulty concentrating. If simultaneous use of these drugs is required, dose adjustment may be made.
A particular problem that may occur with hydromorphone is accidental administration in place of morphine due to a mix-up between the similar names, either at the time the prescription is written or when the drug is dispensed. This has led to several deaths and calls for hydromorphone to be distributed in distinctly different packaging from morphine to avoid confusion.
Massive overdoses are rarely observed in opioid-tolerant individuals, but when they occur, they may lead to circulatory system collapse. Symptoms of overdose include respiratory depression, drowsiness leading to coma and sometimes to death, drooping of skeletal muscles, low heart rate, and decreasing blood pressure. At the hospital, individuals with hydromorphone overdose are provided supportive care, such as assisted ventilation to provide oxygen and gut decontamination using activated charcoal through a nasogastric tube. Opioid antagonists, such as naloxone, also may be administered concurrently with oxygen supplementation. Naloxone works by reversing the effects of hydromorphone, and only is administered in the presence of significant respiratory depression and circulatory depression.
Sugar cravings associated with hydromorphone use are the result of a glucose crash after transient hyperglycemia following injection, or a less profound lowering of blood sugar over a period of hours, in common with morphine, heroin, codeine, and other opioids.
Hormone imbalance
As with other opioids, hydromorphone (particularly during heavy chronic use) often causes temporary hypogonadism or hormone imbalance.
Neurotoxicity
In the setting of prolonged use, high dosage, and/or kidney dysfunction, hydromorphone has been associated with neuroexcitatory symptoms such as tremor, myoclonus, agitation, and cognitive dysfunction. This toxicity is less than that associated with other classes of opioids, the pethidine class of synthetics in particular.
Withdrawal
Users of hydromorphone may experience painful symptoms if the drug is suspended. Some people cannot tolerate the symptoms, which results in continuous drug use. Symptoms of opioid withdrawal are not easy to decipher, as there are differences between drug-seeking behaviors and true withdrawal effects. Symptoms associated with hydromorphone withdrawal include:
Abdominal pain
Anxiety
Panic attacks
Depression
Piloerection (goose bumps)
Inability to enjoy daily activities
Muscle and joint pain
Nausea
Vomiting
Runny nose and excessive secretion of tears
Sweating
In the clinical setting, excessive secretion of tears, yawning, and dilation of pupils are helpful presentations in diagnosing opioid withdrawal. Hydromorphone is a rapid-acting painkiller; however, some formulations may last up to several hours. Patients who stop taking this drug abruptly may experience withdrawal symptoms, which may start within hours of taking the last dose of hydromorphone, and last up to several weeks. Withdrawal symptoms in people who stopped taking the opioid may be managed by using opioids or non-opioid adjuncts. Methadone is an opioid commonly used for this kind of therapy. However, the selection of therapy should be tailored to each specific person. Methadone also is used for detoxification in people who have opiate addiction, such as heroin or drugs similar to morphine. It may be given orally or intramuscularly. There is controversy regarding whether any opioid (such as methadone) should be included in the treatment of opioid withdrawal symptoms, since these agents also may cause relapse when therapy is suspended. Clonidine is a non-opioid adjunct which may be used in situations where opioid use is not desired, such as in patients with high blood pressure.
Interactions
CNS depressants may enhance the depressant effects of hydromorphone, such as other opioids, anesthetics, sedatives, hypnotics, barbiturates, benzodiazepines, phenothiazines, chloral hydrate, dimenhydrinate, and glutethimide. The depressant effect of hydromorphone also may be enhanced by monoamine oxidase inhibitors (MAO inhibitors), first-generation antihistamines (e.g., brompheniramine, promethazine, diphenhydramine, chlorphenamine), beta blockers, and alcohol. When combined therapy is contemplated, the dose of one or both agents should be reduced.
Pharmacology
Hydromorphone is a semi-synthetic μ-opioid agonist. As a hydrogenated ketone of morphine, it shares the pharmacologic properties typical of opioid analgesics. Hydromorphone and related opioids produce their major effects on the central nervous system and gastrointestinal tract. These include analgesia, drowsiness, mental clouding, changes in mood, euphoria or dysphoria, respiratory depression, cough suppression, decreased gastrointestinal motility, nausea, vomiting, increased cerebrospinal fluid pressure, increased biliary pressure, and increased pinpoint constriction of the pupils.
Formulations
Hydromorphone is available in parenteral, rectal, subcutaneous, and oral formulations, and also can be administered via epidural or intrathecal injection. Hydromorphone also has been administered via nebulization to treat shortness of breath, but it is not used as a route for pain control due to low bioavailability. Transdermal delivery systems are also under consideration to induce local skin analgesia.
Concentrated aqueous solutions of hydromorphone hydrochloride have a visibly different refractive index from pure water, isotonic 9‰ (0.9 per cent) saline, and the like. Solutions stored in clear ampoules and phials may acquire a slight amber discolouration upon exposure to light; this reportedly has no effect on the potency of the solution, but 14-dihydromorphinones such as hydromorphone, oxymorphone, and their relatives come with instructions to protect them from light. Ampoules of solution which have developed a precipitate should be discarded.
Battery-powered intrathecal drug delivery systems are implanted for chronic pain when other options are ruled out, such as surgery and traditional pharmacotherapy, provided that the patient is considered a suitable fit in terms of any contraindications, both physiological and psychological.
An extended-release (once-daily) version of hydromorphone is available in the United States. Previously, an extended-release version of hydromorphone, Palladone, was available before being voluntarily withdrawn from the market after a July 2005 FDA advisory warned of a high overdose potential when taken with alcohol. As of March 2010, it is still available in the United Kingdom under the brand name Palladone SR, in Nepal under the brand name Opidol, and in most other European countries. In Canada, prescription continuous-release hydromorphone is available as both a brand name (Hydromorph Contin) and generic formulations (Apo-Hydromorphone CR).
Pharmacokinetics
The chemical modification of the morphine molecule to hydromorphone results in higher lipid solubility and greater ability to cross the blood–brain barrier to produce more rapid and complete central nervous system penetration. On a per-milligram basis, hydromorphone is considered to be five times as potent as morphine; although the conversion ratio may vary from 4 to 8 times, five times is typical in clinical usage.
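As a worked example of this conversion, the short Python sketch below applies the article's typical 5:1 ratio; it is illustrative arithmetic only, not dosing guidance, and the function name is hypothetical.

MORPHINE_TO_HYDROMORPHONE_RATIO = 5.0  # the typical clinical ratio cited above

def hydromorphone_equivalent_mg(morphine_mg, ratio=MORPHINE_TO_HYDROMORPHONE_RATIO):
    # Equianalgesic estimate; the article notes the ratio may vary from 4 to 8.
    return morphine_mg / ratio

print(hydromorphone_equivalent_mg(10.0))  # 10 mg morphine ~ 2.0 mg hydromorphone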
Patients with renal abnormalities must exercise caution when dosing hydromorphone. In those with renal impairment, the half-life of hydromorphone may increase to as much as 40 hours. The typical half-life of intravenous hydromorphone is 2.3 hours. Peak plasma levels usually occur between 30 and 60 minutes after oral dosing.
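The practical effect of this difference in half-life follows from first-order elimination, under which the remaining fraction of a dose is 0.5 raised to the power of elapsed time divided by half-life. A minimal sketch, assuming simple one-compartment kinetics:

def remaining_fraction(hours, half_life_hours):
    # First-order (exponential) elimination.
    return 0.5 ** (hours / half_life_hours)

# Fraction of an IV dose remaining 12 hours after administration, using the
# half-lives given above (2.3 h typical; up to 40 h in renal impairment):
print(f"typical:  {remaining_fraction(12, 2.3):.3f}")   # about 0.027
print(f"impaired: {remaining_fraction(12, 40.0):.3f}")  # about 0.812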
The onset of action for hydromorphone administered intravenously is less than 5 minutes and within 30 minutes of oral administration (immediate release).
Metabolism
While other opioids in its class, such as codeine or oxycodone, are metabolized via CYP450 enzymes, hydromorphone is not. Hydromorphone is extensively metabolized in the liver to hydromorphone-3-glucuronide, which has no analgesic effects. As with the morphine metabolite morphine-3-glucuronide, a build-up in levels of hydromorphone-3-glucuronide may produce excitatory neurotoxic effects such as restlessness, myoclonus and hyperalgesia. Patients with compromised kidney function and older patients are at higher risk for metabolite accumulation.
Chemistry
With a formula of C17H19NO3 and a molecular weight of 285.343, both identical to morphine, hydromorphone can be considered a structural isomer of morphine and is a hydrogenated ketone thereof.
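The stated molecular weight can be checked directly from the formula using standard atomic masses, as in this short sketch:

# IUPAC standard atomic masses, rounded.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molecular_weight(counts):
    return sum(ATOMIC_MASS[element] * n for element, n in counts.items())

# C17H19NO3, the formula shared by hydromorphone and morphine:
print(round(molecular_weight({"C": 17, "H": 19, "N": 1, "O": 3}), 3))  # 285.343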
Hydromorphone is made from morphine either by direct rearrangement (reflux heating of an alcoholic or acidic aqueous solution of morphine in the presence of a platinum or palladium catalyst) or by reduction to dihydromorphine (usually via catalytic hydrogenation), followed by oxidation with benzophenone in the presence of potassium tert-butoxide or aluminium tert-butoxide (Oppenauer oxidation). The 6-ketone group may be replaced with a methylene group via the Wittig reaction to produce 6-methylenedihydrodesoxymorphine, which is 80× stronger than morphine.
Hydromorphone is more soluble in water than morphine; therefore, hydromorphone solutions may be produced to deliver the drug in a smaller volume of water. The hydrochloride salt is soluble in three parts of water, whereas a gram of morphine hydrochloride dissolves in 16 ml of water; for all common purposes, the pure powder for hospital use can be used to produce solutions of virtually arbitrary concentration. When the powder has appeared on the street, the very small volume of powder needed for a dose means that overdoses are likely for those who mistake it for heroin or other powdered narcotics, especially ones that have been diluted prior to consumption.
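The injection-volume consequence of these solubilities can be illustrated with a little arithmetic, reading "soluble in three parts of water" as roughly 1 g per 3 ml (about 333 mg/ml) and morphine hydrochloride's solubility as 1 g per 16 ml (about 62.5 mg/ml); the dose used below is arbitrary and purely illustrative.

def minimum_volume_ml(dose_mg, solubility_mg_per_ml):
    # Smallest volume of water that can hold the dose in solution.
    return dose_mg / solubility_mg_per_ml

for name, solubility in (("hydromorphone HCl", 1000 / 3), ("morphine HCl", 1000 / 16)):
    print(f"{name}: {minimum_volume_ml(100, solubility):.2f} ml per 100 mg")

# hydromorphone HCl: 0.30 ml per 100 mg
# morphine HCl: 1.60 ml per 100 mg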
Bacteria
Some bacteria have been shown to be able to turn morphine into closely related drugs, including hydromorphone and dihydromorphine among others. The bacterium Pseudomonas putida serotype M10 produces a naturally occurring NADH-dependent morphinone reductase that can work on unsaturated 7,8 bonds, with the result that, when these bacteria are living in an aqueous solution containing morphine, significant amounts of hydromorphone form, as it is an intermediary metabolite in this process; the same goes for codeine being turned into hydrocodone.
History
Hydromorphone was patented in 1923. It was introduced to the mass market in 1926 under the brand name Dilaudid, indicating its derivation and degree of similarity to morphine (by way of laudanum).
Society and culture
Names
Hydromorphone is known in various countries around the world by the brand names Hydal, Dimorphone, Exalgo, Sophidone LP, Dilaudid, Hydrostat, Hydromorfan, Hydromorphan, Hymorphan, Laudicon, Opidol, Palladone, Hydromorph Contin, and others. An extended-release version of hydromorphone, called Palladone, was available for a short time in the United States before being voluntarily withdrawn from the market after a July 2005 FDA advisory warned of a high overdose potential when taken with alcohol. As of March 2010, it is still available in Nepal under the brand name Opidol, in the United Kingdom under the brand name Palladone SR, and in most other European countries.
There has also been a once-daily prolonged release version of hydromorphone available in Australia under the brand name Jurnista as of May 2009.
Legal status
In the United States, the main drug control agency, the Drug Enforcement Administration, reports a large increase in annual aggregate production quotas of hydromorphone between 1998 and 2006, and an increase in prescriptions in this time of 289%, from about 470,000 to 1,830,000.
Like all opioids used for analgesia, hydromorphone is potentially habit-forming and is listed in Schedule II of the United States Controlled Substances Act of 1970, as well as at similar levels under the drug laws of practically all other countries, and it is listed in the Single Convention on Narcotic Drugs. The DEA ACSCN for hydromorphone is 9150.
Hydromorphone is listed under the German Betäubungsmittelgesetz as a Betäubungsmittel in the most restricted schedule for medicinal drugs; it is controlled similarly in Austria (Suchtgift) under the SMG and the Swiss BetmG. The Misuse of Drugs Act 1971 (United Kingdom) and comparable French, Canadian, Australian, Italian, Czech, Croatian, Slovenian, Swedish, Polish, Spanish, Greek, Russian, and other laws similarly control it, as do regulations in virtually all other countries.
Use in executions
In 2009, Ohio approved the use of an intramuscular injection of 500 mg of hydromorphone and a supratherapeutic dose of midazolam as a backup means of carrying out executions by lethal injection when a suitable vein cannot be found for intravenous injection.
Hydromorphone and midazolam was injected intravenously to execute double-murderer Joseph Wood in Arizona on 24 July 2014. Wood was heavily sedated (surgical anesthesia) within four minutes from start, but took almost two hours to transition to stage 4 (cessation of respiration) and death.
References
External links
Dihydromorphinones from morphine & analogues
"When is a pain doctor a drug pusher?", The New York Times, 17 June 2007
4,5-Epoxymorphinans
Euphoriants
German inventions
Ketones
Mu-opioid receptor agonists
Hydroxyarenes
Semisynthetic opioids
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Hydromorphone | [
"Chemistry"
] | 3,549 | [
"Ketones",
"Functional groups"
] |
317,938 | https://en.wikipedia.org/wiki/Lawn | A lawn is an area of soil-covered land planted with grasses and other durable plants such as clover which are maintained at a short height with a lawn mower (or sometimes grazing animals) and used for aesthetic and recreational purposes—it is also commonly referred to as part of a garden. Lawns are usually composed only of grass species, subject to weed and pest control, maintained in a green color (e.g., by watering), and are regularly mowed to ensure an acceptable length. Lawns are used around houses, apartments, commercial buildings and offices. Many city parks also have large lawn areas. In recreational contexts, the specialised names turf, parade, pitch, field or green may be used, depending on the sport and the continent.
The term "lawn", referring to a managed grass space, dates to at least the 16th century. With suburban expansion, the lawn has become culturally ingrained in some areas of the world as part of the desired household aesthetic. However, awareness of the negative environmental impact of this ideal is growing. In some jurisdictions where there are water shortages, local government authorities are encouraging alternatives to lawns to reduce water use. Researchers in the United States have noted that suburban lawns are "biological deserts" that are contributing to a "continental-scale ecological homogenization." Lawn maintenance practices also cause biodiversity loss in surrounding areas. Some forms of lawn, such as tapestry lawns, are designed partly for biodiversity and pollinator support.
Etymology
Lawn is a cognate of Welsh llan, which is derived from the Common Brittonic word landa that originally meant heath, barren land, or clearing.
History
Origins
Areas of grass grazed regularly by rabbits, horses or sheep over a long period often form a very low, tight sward similar to a modern lawn. This was the original meaning of the word "lawn", and the term can still be found in place names. Some forest areas where extensive grazing is practiced still have these seminatural lawns. For example, in the New Forest, England, such grazed areas are common, and are known as lawns, for example Balmer Lawn.
Lawns may have originated as grassed enclosures within early medieval settlements used for communal grazing of livestock, as distinct from fields reserved for agriculture. Low, mown-meadow areas may also have been valued because they allowed those inside an enclosed fence or castle to view those approaching. The early lawns were not always distinguishable from pasture fields. The damp climate of maritime Western Europe in the north made lawns possible to grow and manage. They were not a part of gardens in most other regions and cultures of the world until contemporary influence.
In 1100s Britain, low-growing areas of grasses and meadow flowers were grazed or scythed to keep them short, and used for sport. Lawn bowling, which began in the 12th or 13th century, required short turf.
Establishing grass using sod instead of seed was first documented in a Japanese text of 1159.
Lawns became popular with the aristocracy in northern Europe from the Middle Ages onward. In the 1400s, open expanses of low grasses appear in paintings of public and private areas; by the 1500s, such areas were found in the gardens of the wealthy across northern and central Europe. Public meadow areas, kept short by sheep, were used for new sports such as cricket, soccer, and golf. The word "laune" is first attested in 1540, from the Old French lande "heath, moor, barren land; clearing". It initially described a natural opening in a woodland. In the 1600s, "lawn" came to mean a grassy stretch of untilled land, and by mid-century there were publications on seeding and transplanting sod. In the 1700s, "lawn" came to mean specifically a mown stretch of meadow. Lawns similar to those of today first appeared in France and England in the 1700s, when André Le Nôtre designed the gardens of the Palace of Versailles to include a small area of grass called the tapis vert, or "green carpet", which became a common feature of French gardens. Large, mown open spaces became popular in Europe and North America. The lawn was also influenced by later 18th-century trends replicating the romantic aestheticism of grassy pastoralism found in Italian landscape paintings.
Before the invention of mowing machines in 1830, lawns were managed very differently. They were an element of wealthy estates and manor houses, and in some places were maintained by labor-intensive scything and shearing (for hay or silage). They were also pasture land maintained through grazing by sheep or other livestock.
The English lawn
It was not until the 17th and 18th centuries that the garden and the lawn became places created first as walkways and social areas. They were made up of meadow plants, such as camomile, a particular favourite (see camomile lawn). In the early 17th century, the Jacobean epoch of gardening began; during this period, the closely cut "English" lawn was born. By the end of this period, the English lawn was a symbol of the status of the aristocracy and gentry.
In the early 18th century, landscape gardening for the aristocracy entered a golden age, under the direction of William Kent and Lancelot "Capability" Brown. They refined the English landscape garden style with the design of natural, or "romantic", estate settings for wealthy Englishmen. Brown, remembered as "England's greatest gardener", designed over 170 parks, many of which still endure. His influence was so great that the contributions to the English garden made by his predecessors Charles Bridgeman and William Kent are often overlooked.
His work still endures at Croome Court (where he also designed the house), Blenheim Palace, Warwick Castle, Harewood House, Bowood House, Milton Abbey (and nearby Milton Abbas village), in traces at Kew Gardens and many other locations. His style of smooth undulating lawns which ran seamlessly to the house and meadow, clumps, belts and scattering of trees and his serpentine lakes formed by invisibly damming small rivers, were a new style within the English landscape, a "gardenless" form of landscape gardening, which swept away almost all the remnants of previous formally patterned styles. His landscapes were fundamentally different from what they replaced, the well-known formal gardens of England which were criticised by Alexander Pope and others from the 1710s.
The open "English style" of parkland first spread across Britain and Ireland, and then across Europe, such as the garden à la française being replaced by the French landscape garden. By this time, the word "lawn" in England had semantically shifted to describe a piece of a garden covered with grass and closely mown.
In North America
Wealthy families in America during the late 18th century also began mimicking English landscaping styles. British settlers in North America imported an affinity for landscapes in the style of the English lawn. However, early in the colonization of the continent, environments with thick, low-growing, grass-dominated vegetation were rare in the eastern part of the continent, enough so that settlers were warned that it would be difficult to find land suitable for grazing cattle. In 1780, the Shaker community began the first industrial production of high-quality grass seed in North America, and a number of seed companies and nurseries were founded in Philadelphia. The increased availability of these grasses meant they were in plentiful supply for parks and residential areas, not just livestock.
Thomas Jefferson has long been given credit for being the first person to attempt an English-style lawn at his estate, Monticello, in 1806, but many others had tried to emulate English landscaping before he did. Over time, an increasing number of towns in New England began to emphasize grass spaces. Many scholars link this development to the romantic and transcendentalist movements of the 19th century. These green commons were also heavily associated with the success of the Revolutionary War and often became the homes of patriotic war memorials after the Civil War ended in 1865.
Middle class pursuit
Before the mechanical lawn mower, the upkeep of lawns was possible only for the extremely wealthy estates and manor houses of the aristocracy. Labor-intensive methods of scything and shearing the grass were required to maintain the lawn in its correct state, and most of the land in England was required for more functional, agricultural purposes.
This all changed with the invention of the lawn mower by Edwin Beard Budding in 1830. Budding had the idea for a lawn mower after seeing a machine in a local cloth mill which used a cutting cylinder (or bladed reel) mounted on a bench to trim the irregular nap from the surface of woolen cloth and give a smooth finish. Budding realised that a similar device could be used to cut grass if the mechanism was mounted in a wheeled frame to make the blades rotate close to the lawn's surface. His mower design was to be used primarily to cut the lawn on sports grounds and extensive gardens, as a superior alternative to the scythe, and he was granted a British patent on 31 August 1830.
Budding went into partnership with a local engineer, John Ferrabee, who paid the costs of development and acquired rights to manufacture and sell lawn mowers and to license other manufacturers. Together they made mowers in a factory at Thrupp near Stroud. Among the other companies manufacturing under license the most successful was Ransomes, Sims & Jefferies of Ipswich which began mower production as early as 1832.
However, his model had two crucial drawbacks. It was immensely heavy (it was made of cast iron) and difficult to manoeuvre in the garden, and did not cut the grass very well. The blade would often spin above the grass uselessly. It took ten more years and further innovations, including the advent of the Bessemer process for the production of the much lighter alloy steel and advances in motorization such as the drive chain, for the lawn mower to become a practical proposition. Middle-class families across the country, in imitation of aristocratic landscape gardens, began to grow finely trimmed lawns in their back gardens.
In the 1850s, Thomas Green of Leeds introduced a revolutionary mower design called the Silens Messor (meaning silent cutter), which used a chain to transmit power from the rear roller to the cutting cylinder. The machine was much lighter and quieter than the gear-driven machines that preceded it, and won first prize at the first lawn mower trial at the London Horticultural Gardens. Thus began a great expansion in lawn mower production in the 1860s. James Sumner of Lancashire patented the first steam-powered lawn mower in 1893. Around 1900, Ransomes' Automaton, available in chain- or gear-driven models, dominated the British market. In 1902, Ransomes produced the first commercially available mower powered by an internal combustion gasoline engine. JP Engineering of Leicester, founded after World War I, invented the first riding mowers.
This went hand-in-hand with a booming consumer market for lawns from the 1860s onward. With the increasing popularity of sports in the mid-Victorian period, the lawn mower was used to craft modern-style sporting ovals, playing fields, pitches and grass courts for the nascent sports of football, lawn bowls, lawn tennis and others. The rise of suburbanisation in the interwar period was heavily influenced by the garden city movement of Ebenezer Howard and the creation of the first garden suburbs at the turn of the 20th century. The garden suburb, developed through the efforts of social reformer Henrietta Barnett and her husband, exemplified the incorporation of the well-manicured lawn into suburban life. Suburbs dramatically increased in size: Harrow Weald went from just 1,500 residents to over 10,000, while Pinner jumped from 3,000 to over 20,000. During the 1930s, over 4 million new suburban houses were built, and the 'suburban revolution' had made England the most heavily suburbanized country in the world by a considerable margin.
Lawns began to proliferate in America from the 1870s onwards. As more plants were introduced from Europe, lawns became smaller as they were filled with flower beds, perennials, sculptures, and water features. Eventually the wealthy began to move away from the cities into new suburban communities. In 1856, an architectural book published to accompany the development of the new suburbia placed importance on the availability of a grassy space for children to play on and a space to grow fruits and vegetables, further imbuing the lawn with cultural importance. Lawns began making more appearances in development plans, magazine articles, and catalogs. The lawn became less associated with being a status symbol, instead giving way to a landscape aesthetic. Improvements in the lawn mower and water supply enabled the spread of lawn culture from the Northeast to the South, where grass grew more poorly. This, in combination with setback rules requiring all homes to have a 30-foot gap between the structure and the sidewalk, meant that the lawn had found a specific place in suburbia. In 1901, the United States Congress allotted $17,000 to the study of the best grasses for lawns, creating the spark for lawn care to become an industry.
The chemical boom
After World War II, a surplus of synthetic nitrogen in the United States led to chemical firms such as DuPont seeking to expand the market for fertilizers. The suburban lawn offered an opportunity to market fertilizers, previously only used by farmers, to homeowners. In 1955, DuPont released Uramite, a slow-release nitrogen fertilizer specifically marketed for lawns. The trend continued throughout the 1960s, with chemical firms such as DuPont and Monsanto utilizing television advertising and other forms of advertisement to market pesticides, fertilizers, and herbicides. The environmental impacts of this widespread chemical use were noticed as early as the 1960s, but suburban lawns as a source of pollution were largely ignored.
Organic lawns
Due to the harmful effects of excessive pesticide use, fertilizer use, climate change and pollution, a movement developed in the late 20th century to require organic lawn management. By the first decade of the 21st century, American homeowners were using ten times more pesticides per acre than farmers, poisoning an estimated 60 to 70 million birds yearly. Lawn mowers are a significant contributor to pollution released into Earth's atmosphere, with a riding lawn mower producing the same amount of pollution in one hour of use as 34 cars.
In recent years, some municipalities have banned synthetic pesticides and fertilizers and required organic land care techniques be used. There are many locations with organic lawns that require organic landscaping.
United States
Prior to European colonization, the grasses on the East Coast of North America were mostly broom straw, wild rye, and marsh grass. As Europeans moved into the region, colonists in New England, more than others, noted that the grasses of the New World were inferior to those of England and that their livestock seemed to receive less nutrition from them. In fact, once livestock brought overseas from Europe spread throughout the colonies, much of the native grass of New England disappeared, and an inventory list from the 17th century noted supplies of clover and grass seed from England. New colonists were even urged by their country and companies to bring grass seed with them to North America. By the late 17th century, a new market in imported grass seed had begun in New England.
Much of the new grasses brought by Europeans spread quickly and effectively, often ahead of the colonists. One such species, Bermuda grass (Cynodon dactylon), became the most important pasture grass for the southern colonies.
Kentucky bluegrass (Poa pratensis) is a grass native to Europe or the Middle East. It was likely carried to Midwestern United States in the early 1600s by French missionaries and spread via the waterways to the region around Kentucky. However, it may also have spread across the Appalachian Mountains after an introduction on the east coast.
Farmers at first continued to harvest meadows and marshes composed of indigenous grasses until they became overgrazed. These areas quickly fell to erosion and were overrun with less favorable plant life. Soon, farmers began to purposefully plant new species of grass in these areas, hoping to improve the quality and quantity of hay for their livestock, as native species had a lower nutritive value. While Middle Eastern and European species of grass did extremely well on the East Coast of North America, it was a number of grasses from the Mediterranean that dominated the Western seaboard. As cultivated grasses became valued for their nutritional benefits to livestock, farmers relied less and less on natural meadows in the more colonized areas of the country. Eventually even the grasses of the Great Plains were overrun with European species that were more durable to the grazing patterns of imported livestock.
A pivotal factor in the spread of the lawn in America was the passage in 1938 of legislation establishing the 40-hour work week. Until then, Americans had typically worked half days on Saturdays, leaving little time to focus on their lawns. With this legislation and the housing boom following the Second World War, managed grass spaces became more commonplace. The creation in the early 20th century of country clubs and golf courses completed the rise of lawn culture.
According to estimates from a study based on satellite observations by Cristina Milesi of NASA Earth System Science, "More surface area in the United States is devoted to lawns than to individual irrigated crops such as corn or wheat", with lawns covering about 128,000 square kilometers in all.
Lawn monoculture was a reflection of more than an interest in offsetting depreciation; it propagated the homogeneity of the suburb itself. Although lawns had been a recognizable feature in English residences since the 19th century, a revolution in industrialization and the monoculture of the lawn since the Second World War fundamentally changed the ecology of the lawn. Money and ideas flowed back from Europe after the U.S. entered WWI, changing the way Americans interacted with themselves and nature, and the industrialization of war hastened the industrialization of pest control. Intensive suburbanization both concentrated and expanded the spread of lawn maintenance, which meant increased inputs not only of petrochemicals, fertilizers, and pesticides, but also of natural resources like water.
Lawns became a means of performing class values for the urban middle class, in which the condition of the lawn becomes representative of moral character and social reliability. The social values associated with lawns are promoted and upheld by social pressure, laws, and chemical producers. Social pressure comes from neighbors or homeowner associations who think that the unkempt lawns of neighbors may affect their own property values or create eyesores. Pressures to maintain a lawn are also legal; there are often local or state laws against letting weeds get too tall or letting a lawn space be especially unkempt, punishable by fees or litigation. Chemical producers unwilling to lose business propagate the ideal of a lawn, making it seem unattainable without chemical aid.
Front lawns became standardized in the 1930s when, over time, specific aspects such as grass type and maintenance methods became popular. The lawn-care industry boomed, but the Great Depression of the 1930s and the period prior to World War II made it difficult to maintain the cultural standards that had become heavily associated with the lawn, due to grass seed shortages in Europe, America's main supplier. Still, seed distributors such as the Scotts Miracle-Gro Company in the United States encouraged families to continue to maintain their lawns, promoting it as a stress-relieving hobby. During the war itself, homeowners were asked to maintain the appearances of the home front, likely as a show of strength, morale, and solidarity. After World War II, the lawn aesthetic once again became a standard feature of North America, bouncing back with a vengeance from its minor decline in the preceding decades, particularly as a result of the post-war housing and population boom.
The VA loan in the United States let American ex-servicemen buy homes without providing a down payment, while the Federal Housing Administration offered lender inducements that helped reduce down payments for the average American from 30% to as little as 10%. These developments made owning a home cheaper than renting, further enabling the spread of suburbia and its lawns.
Levittown, New York, was the beginning of the industrial suburb in the 20th century, and by proxy the industrial lawn. Between 1947 and 1951, Abraham Levitt and his sons built more than seventeen thousand homes, each with its own lawn. Abraham Levitt wrote "No single feature of a suburban residential community contributes as much to the charm and beauty of the individual home and the locality as well-kept lawns". Landscaping was one of the most important factors in Levittown's success – and no feature was more prominent than the lawn. The Levitts understood that landscaping could add to the appeal of their developments and claimed that, "increase in values are most often found in neighborhoods where lawns show as green carpets" and that, over the years, "lawns trees and shrubs become more valuable both aesthetically and monetarily". During 1948, the first spring that Levittown had enjoyed, Levitt and Sons fertilized and reseeded all of the lawns free of charge.
The economic recession that began in 2008 has led many communities worldwide to dig up their lawns and plant fruit and vegetable gardens. This has the potential to greatly change the cultural values attached to the lawn, as lawns are increasingly viewed as environmentally and economically unviable in the modern context.
Australia
The appearance of the lawn in Australia followed closely after its establishment in North America and parts of Europe. Lawn was established on the so-called "nature strip" (a uniquely Australian term) by the 1920s and was common throughout the developing suburbs of Australia. By the 1950s, the Australian-designed Victa lawn mower was being used by the many people who had turned pastures into lawn and was also being exported to dozens of countries. Prior to the 1970s, all brush and native species were stripped from a development site and replaced with lawns that utilized imported plant species. Since the 1970s there has been an interest in using indigenous species for lawns, especially considering their lower water requirements. Lawns are also established in garden areas as well as used for the surface of sporting fields.
Over time, with consideration to the frequency of droughts in Australia, the movement towards "naturalism", or the use of indigenous plant species in yards, was beneficial. These grasses were more drought resistant than their European counterparts, and many who wished to keep their lawns switched to these alternatives or allowed their green carpets to revert to the indigenous scrub in an effort to reduce the strain on water supplies. However, lawns remain a popular surface and their practical and aesthetically pleasing appearance reduces the use of water-impervious surfaces such as concrete. The growing use of rainwater storage tanks has improved the ability to maintain them.
Following recent droughts, Australia has seen a change to predominantly warm-season turfgrasses, particularly in southern states like New South Wales and Victoria, whose urban regions have predominantly temperate climates. The more drought-tolerant grasses have been chosen by councils and homeowners in order to use less water compared to cool-season turfgrasses like fescue and ryegrass. Mild dormancy seems to be of little concern when high-profile areas can be oversown for short periods; nowadays, turf colourants ("fake green") are also very popular.
Uses
Lawns are a common feature of private gardens, public landscapes and parks in many parts of the world. They are created for aesthetic pleasure, as well as for sports or other outdoor recreational use. Lawns are useful as a playing surface both because they mitigate erosion and dust generated by intensive foot traffic and because they provide a cushion for players in sports such as rugby, football, soccer, cricket, baseball, golf, tennis, field hockey, and lawn bocce.
Lawn clippings can be used as an ingredient in making compost and are also viewed as fodder, used in the production of lawn clipping silage, which is fed to livestock as a sustainable feed source.
Types of lawn plants
Lawns need not be, and have not always been, made up of grasses alone. There exist, for instance, moss lawns, clover lawns, thyme lawns, and tapestry lawns (made from diverse forbs). Sedges, low herbs and wildflowers, and other ground covers that can be walked upon are also used.
Thousands of varieties of grasses and grasslike plants are used for lawns, each adapted to specific conditions of precipitation and irrigation, seasonal temperatures, and sun/shade tolerances. Plant hybridizers and botanists are constantly creating and finding improved varieties of the basic species and new ones, often more economical and environmentally sustainable by needing less water, fertilizer, pest and disease treatments, and maintenance. The three basic categories are cool season grasses, warm season grasses, and grass alternatives.
Grasses
Many different species of grass are currently used, depending on the intended use and the climate. Coarse grasses are used where active sports are played, and finer grasses are used for ornamental lawns for their visual effects. Some grasses are adapted to oceanic climates with cooler summers, and others to tropical and continental climates with hotter summers. Often, a mixture of grass or low plant types is used to form a stronger lawn when one type does better in the warmer seasons and the other in the colder ones. This mixing is taken further by a form of grass breeding which produces what are known as cultivars. A cultivar is a cross-breed of two different varieties of grass and aims to combine certain traits taken from each individual breed. This creates a new strain which can be very specialised, suited to a particular environment, such as low water, low light or low nutrient.
Cool season grasses
Cool season grasses grow best in climates with relatively mild or cool summers, with two periods of rapid growth in the spring and autumn. They retain their color well in extreme cold and typically grow very dense, carpetlike lawns with relatively little thatch.
Bluegrass (Poa spp.)
Bentgrass (Agrostis spp.)
Ryegrasses (Lolium spp.)
Fescues (Festuca spp.)
Feather reed grass (Calamagrostis spp.)
Tufted hair grass (Deschampsia spp.)
Warm season grasses
Warm season grasses grow fastest in warm conditions, with one long growth period over the spring and summer (Huxley 1992). They often go dormant in cooler months, turning shades of tan or brown. Many warm season grasses are quite drought tolerant and can handle very high summer temperatures, although cold winters can kill most southern ecotype warm season grasses. The northern varieties, such as buffalograss and blue grama, are considerably more cold-hardy.
Zoysiagrass (Zoysia spp.)
Bermudagrass (Cynodon spp.)
St. Augustine grass (Stenotaphrum secundatum)
Bahiagrass (Paspalum spp.)
Centipedegrass (Eremochloa ophiuroides)
Carpet grass (Axonopus spp.)
Buffalograss (Bouteloua dactyloides)
Grama grass (Bouteloua spp.)
Kikuyu grass (Pennisetum clandestinum)
Grass seed for shade
Grass seed mixes have been developed to include only species that grow well in low-sunlight conditions. These mixes are designed to deal with light shade caused by trees, which can create patchiness, or slightly heavier shade that prevents the full growth of grass. Most lawns will experience shade in some shape or form due to surrounding fences, furniture, trees or hedges, and these grass species are especially useful in the Northern Hemisphere and Northwestern Europe.
Festuca rubra subsp. commutata (Chewings Fescue)
Poa pratensis (Smooth Stalked Meadow Grass)
Festuca ovina (Sheeps Fescue)
Festuca trachyphylla (hard fescue)
Festuca rubra (Strong Creeping Red Fescue)
Sedges
Carex species and cultivars are well represented in the horticulture industry as 'sedge' alternatives to 'grass' in mowed lawns and garden meadows. Both low-growing and spreading ornamental cultivars and native species are used in sustainable landscaping as low-maintenance and drought-tolerant grass replacements for lawns and garden meadows. Wildland habitat restoration projects and natural landscaping and gardens also use them for 'user-friendly' areas. The J. Paul Getty Museum has used Carex pansa (meadow sedge) and Carex praegracilis (dune sedge) expansively in the Sculpture Gardens in Los Angeles.
Some lower sedges used are:
Carex caryophyllea (cultivar 'The Beatles')
C. divulsa (Berkeley sedge)
C. glauca (blue sedge) (syn. C. flacca)
C. pansa (meadow sedge)
C. praegracilis (dune sedge)
C. subfusca (mountain sedge)
C. tumulicola (foothill sedge) (cultivar 'Santa Cruz Mnts. selection')
C. uncifolia (ruby sedge)
Other ground-cover plants
Moss lawns do well in shaded areas under trees, and require only about 1% of the water of a traditional grass lawn once established. Clover lawns do especially well in damp, alkaline soils. Yarrow lawns are drought resistant and can be mowed to form a soft, comfortable turf; common yarrow is native throughout Europe, North America, and parts of Asia, and spreads vegetatively to cover the ground. Camomile lawns and thyme lawns are fragrant (and native to Europe and North Africa). Soleirolia soleirolii favours shaded, damp spaces (and is often used in tsubo-niwas); it is native to the European side of the Mediterranean, and can be invasive elsewhere.
Other low ground covers suitable for lawns include Corsican mint (native to three Mediterranean islands, and invasive elsewhere), Ophiopogon planiscapus (native to Japan), Lippia and lawnleaf (native to Central America and southern North America), purple-flowering Mazus (native to East Asia), grey Dymondia (native to South Africa), creeping sedums (various species native to various continents), Cotula species (likewise), and creeping jenny (native to Europe).
Eastern North America
Some plants native to Eastern North America that can be used as alternatives to grass lawns or incorporated into lawns are:
Common yarrow
Virginia springbeauty
Wild strawberry
Dwarf cinquefoil
Moss phlox
Creeping phlox
Sensitive fern
Canadian wild ginger
Cinnamon fern
Lyreleaf sage
Allegheny pachysandra
Woodland stonecrop
Green-and-gold
Beetleweed
Blue-eyed grass
Common blue violet
Dwarf crested iris
Wild pink
Purple wood sorrel
Spotted cranesbill
Alternatives to lawns
Alternatives to lawns include meadows, drought-tolerant xeriscape gardens, natural landscapes, native plant habitat gardens, paved Spanish courtyard and patio gardens, butterfly gardens, rain gardens, and kitchen gardens. Trees and shrubs in close proximity to lawns provide habitat for birds in traditional, cottage and wildlife gardens.
Lawn care and maintenance
Seasonal lawn establishment and care varies depending on the climate zone and type of lawn grown.
Planting and seeding
Early autumn, spring, and early summer are the primary seasons to seed, lay sod (turf), plant 'liners', or 'sprig' new lawns, when the soil is warmer and air cooler. Seeding is the least expensive, but may take longer for the lawn to be established. Aerating just before planting/seeding may promote deeper root growth and thicker turf.
Sodding (American English), or turfing (British English), provides an almost instant lawn, and can be undertaken in most temperate climates in any season, but is more expensive and more vulnerable to drought until established. Hydroseeding is a quick, less expensive method of planting large, sloped or hillside landscapes. Some grasses and sedges are available and planted from 'liners' and containers, or from 'flats', 'plugs' or 'sprigs', and are planted spaced apart to grow together.
Fertilizers and chemicals
Various organic and inorganic or synthetic fertilizers are available, with instant or time-release applications. Pesticides, including biological and chemical herbicides, insecticides, and fungicides for treating diseases like gray leaf spot, are also available. Consideration of their effects on the lawn and garden ecosystem, and, via runoff and dispersion, on the surrounding environment, informs laws constraining their use. For example, the Canadian province of Quebec and over 130 municipalities prohibit the use of synthetic lawn pesticides. The Ontario provincial government promised in September 2007 to implement a province-wide ban on the cosmetic use of lawn pesticides to protect the public. Medical and environmental groups supported such a ban.
On 22 April 2008, the Provincial Government of Ontario announced that it would pass legislation that would prohibit, province-wide, the cosmetic use and sale of lawn and garden pesticides. The Ontario legislation would also echo Massachusetts law requiring pesticide manufacturers to reduce the toxins they use in production. Experts advise that a healthy lawn contains at least some "weeds" and insects, discouraging indiscriminate use of potentially harmful chemicals.
Sustainable gardening uses organic horticulture methods, such as organic fertilizers, biological pest control, beneficial insects, and companion planting, among others, to sustain an attractive lawn in a safe garden. An example of an organic herbicide is corn gluten meal, which releases an 'organic dipeptide' into the soil to inhibit root formation in germinating weed seeds. An example of an organic alternative to insecticide use is applying beneficial nematodes to combat soil-dwelling grubs, such as the larvae of chafer beetles. Integrated Pest Management is a coordinated, low-impact approach.
Mowing and other maintenance practices
Maintaining a rough lawn requires only occasional cutting with a suitable machine, or grazing by animals. Maintaining a smooth and closely cut lawn, be it for aesthetic or practical reasons or because social pressure from neighbors and local municipal ordinances requires it, necessitates more organized and regular treatments. Mowing once a week is usually adequate for maintaining a lawn in most climates; however, in the hot and rainy seasons of regions in hardiness zones greater than 8, lawns may need mowing up to twice a week.
Social impacts
The prevalence of lawns in films such as Pleasantville (1998) and Edward Scissorhands (1990) alludes to the importance of the lawn as a social mechanism, central to the visual representation of the American suburb and to the culture practised there. It is implied that a neighbor whose lawn is not in pristine condition is morally corrupt, emphasizing the role a well-kept lawn plays in neighborly and community relationships. In both of these films, green space surrounding a house in the suburbs becomes an indicator of moral integrity as well as of social and gender norms (lawn care has long been associated with men). These lawns also reinforce class and societal norms by subtly excluding those who may not have been able to afford a house with a lawn.
The lawn as a reflection of someone's character and the neighborhood at large is not restricted to films; the same theme appears in The Great Gatsby (1925), by American novelist F. Scott Fitzgerald. Character Nick Carraway rents the house next to Gatsby's and fails to maintain his lawn according to West Egg standards. The rift between the two lawns troubles Gatsby to the point that he dispatches his gardener to mow Carraway's grass and thereby establish uniformity.
Most lawn-care equipment over the decades has been advertised to men, and companies have long associated good lawn-care with good citizenship in their marketing campaigns. The appearance of a healthy lawn was meant to imply the health of the man taking care of it; controlled weeds and strict boundaries became a practical application of the desire to control nature, as well as an expression of control over personal lives once working full-time became central to suburban success. Women were encultured over time to view the lawn as part of the household, as an essential furnishing, and to encourage their husbands to maintain a lawn for the family and community reputation.
During World War II (1939–1945), women became the focus of lawn-care companies in the absence of their husbands and sons. These companies promoted lawn care as a necessary means by which women could help support their male family-members and American patriotism as a whole. The image of the lawn changed from focusing on technology and manhood to emphasizing aesthetic pleasure and the health benefits derived from its maintenance; advertisers at lawn care companies assumed that women would not respond positively to images of efficiency and power. The language of these marketing campaigns still intended to imbue the female population with notions of family, motherhood, and the duties of a wife; it has been argued that this was done so that it would be easier for men returning from war to resume the roles which their wives had taken over in their absence. This was especially apparent in the 1950s and 1960s, when lawn-care rhetoric emphasized the lawn as a husband's responsibility and as a pleasurable hobby when he retired.
There are regional and individual differences in the particulars of lawn maintenance and appearance, such as the length at which the grass is kept, the species grown (and therefore the lawn's color), and mowing practices.
Environmental concerns
On average, greater amounts of chemical fertilizer, herbicide and pesticide are used to maintain a given area of lawn than on an equivalent area of cultivated farmland. The use of these products causes environmental pollution, disturbance in the lawn ecosystem, and health risks to humans and wildlife.
In response to environmental concerns, organic landscaping and organic lawn management systems have been developed and are mandated in some municipalities and properties. In the United Kingdom, the environmental group Plantlife has encouraged gardeners to refrain from mowing in the month of May to encourage plant diversity and provide nectar for insects.
Other concerns, criticisms, and ordinances regarding lawns arise from wider environmental consequences:
Lawns can reduce biodiversity, especially when the lawn covers a large area. Traditional lawns often replace plant species that feed pollinators, requiring bees and butterflies to cross "wastelands" to reach food and host plants. Lawns promote homogenization and are normally cleared of unwanted plant and animal species, typically with synthetic pesticides, which can also kill unintended target species. They may be composed of introduced species not native to the area, particularly in the United States. This can produce a habitat that supports a reduced number of wildlife species.
Lawn maintenance commonly involves use of fertilizers and synthetic pesticides, which can cause great harm. Some are carcinogens and endocrine disruptors. They may permanently linger in the environment and negatively affect the health of potentially all nearby organisms. The United States Environmental Protection Agency estimated in 2012 that nearly of active pesticide ingredients are used on suburban lawns each year in the United States. There are indications of an emerging regulatory response to this issue. For example, Sweden, Denmark, Norway, Kuwait, and Belize have placed restrictions on the use of the herbicide 2,4-D.
It has been estimated that nearly of gasoline are spilled each summer while re-fueling garden and lawn-care equipment in the United States: approximately 50% more than that spilled during the Exxon Valdez incident.
The use of pesticides and fertilizers, requiring fossil fuels for manufacturing, distribution, and application, has been shown to contribute to global warming. (Sustainable organic techniques have been shown to help reduce global warming.) A hectare of lawn in Nashville, Tennessee, produces greenhouse gases equivalent to 697 to 2,443 kg of carbon dioxide a year. The higher figure is equivalent to a flight more than halfway around the world. Lawn mowing is one element of lawn culture that causes a great amount of emissions (which can be mitigated by replacing lawn mowers with grazing livestock).
Water conservation
Maintaining a green lawn sometimes requires large amounts of water. While natural rainfall is usually sufficient to maintain a lawn's health in the temperate British Isles, the birthplace of the concept of the lawn, hosepipe bans may be implemented by the water suppliers in times of drought. Conversely, exportation of the lawn ideal to more arid regions (e.g. the U.S. Southwest and Australia) strains water supply systems when water supplies are already scarce, necessitating upgrades to larger, more environmentally invasive equipment to deal with the increased demand of lawn watering. Grass typically goes dormant during periods of cold or heat outside of its preferred temperature range; dormancy reduces the grass's water demand. Most grasses recover quite well from a drought, but many property owners become concerned about the brown appearance and increase watering during the summer months. In Australia, 1995 data indicated that up to 90% of the water used in Canberra during summer drought periods was used for watering lawns.
In the United States, 50 to 70% of residential water is used for landscaping, most of it to water lawns. A 2005 NASA study estimated conservatively of irrigated lawn in the US, three times the area of irrigated corn. That translates to about of drinking-quality fresh water per person per day required to keep up the United States' lawn surface area.
In 2022, the state of Nevada passed a bill that not only banned the installation of new lawns in the state, but also mandated the removal of any lawn deemed "nonfunctional." This was in response to a years-long drought in the state.
Chemicals
An increased concern among the general public over pesticide and fertilizer use and their associated health risks, combined with the implementation of legislation such as the US Food Quality Protection Act, resulted in a reduced presence of synthetic chemicals, namely pesticides, in urban landscapes such as lawns in the late 20th century. Many of these concerns over the safety and environmental impact of some synthetic fertilizers and pesticides have led to their ban by the United States Environmental Protection Agency and many local governments. The use of pesticides and other chemicals to care for lawns has also led to the death of nearly 7 million birds each year, a topic central to the book Silent Spring by the conservationist Rachel Carson.
The use of lawn chemicals first appeared in the 18th century with the introduction of "English garden" fads; these lawns put precise hedging, clean-cut grass, and extravagant plants on display. Since their initial introduction, lawn chemicals have remained in continual use throughout North America. Because many of the turf-grass species grown in North America are not native to its ecosystems, they require extensive maintenance. According to the United States Geological Survey, 99% of the urban water samples that were tested contained one or more types of pesticides. In addition to water contamination, chemicals are making their way into houses, which can lead to chronic exposure. Currently, standards for pesticide management practices have been put in place through the Food Quality Protection Act.
Environmental impact
In the United States, lawn heights are generally maintained by gasoline-powered lawn mowers, which contribute to urban smog during the summer months. The EPA found, in some urban areas, up to 5% of smog was due to small gasoline engines made before 1997, such as are typically used on lawn mowers. Since 1997, the EPA has mandated emissions controls on newer engines in an effort to reduce smog.
A 2010 study seemed to show that lawn care inputs were balanced by the carbon sequestration benefits of lawns, so that lawns may not be net contributors to anthropogenic global warming. However, lawns with high maintenance (mowing, irrigation, and leaf blowing) and high fertilization rates have a net emission of carbon dioxide and nitrous oxide, gases with large global warming potential. Lawns that are fertilized, irrigated, and mowed weekly also have lower species diversity.
Replacing turf grass with low-maintenance groundcovers, or employing a variety of low-maintenance perennials, trees, and shrubs, can be a good alternative to traditional lawn spaces, especially in hard-to-grow or hard-to-mow areas, as it can reduce maintenance requirements and associated pollution while offering higher aesthetic and wildlife value. Growing a mixed variety of flowering plants instead of turfgrass is sometimes referred to as meadowscaping.
Non-productive space
Lawns take up space that could otherwise be used more productively, such as for urban agriculture or home gardening. This is the case in many cities and suburbs in the United States, where open or unused spaces are "not generally a result of a positive decision to leave room for some use, but rather is an expression of a pastoral aesthetic norm that prizes spacious lawns and the zoning restrictions and neighborhood covenants that give these norms the force of law."
In urban and suburban spaces, growing food in front yards and parking strips can not only provide fresh produce but also be a source of neighborhood pride. While converting lawn space into strictly utilitarian farms is not common, incorporating edible plants into front yards with sustainable and aesthetically pleasing design is of growing interest in the United States.
See also
Bacterial lawn
Moss lawn
Tapestry lawn
Organic lawn management
Gardening
List of organic gardening and farming topics
References
Further reading
Bormann, F. Herbert, et al. (1993) Redesigning the American Lawn.
Hessayon, D. G. (1997). The Lawn Expert. Expert.
Huxley, A., ed. (1992). New RHS Dictionary of Gardening. Lawns: Ch. 3: pp. 26–33. Macmillan.
Jenkins, V. S. (1994). The Lawn: A History of an American Obsession. Smithsonian Books.
Steinberg, T. (2006). American Green: The Obsessive Quest for the Perfect Lawn. W.W. Norton & Co.
Wasowski, Sally and Andy (2004). Requiem for a Lawnmower.
External links
"Planting and care of Lawns" from the UNT Govt. Documents Dept.
Integrated Pest Management Program: website & search-engine
How to look after your Lawn
Lawn Care University at Michigan State University
"EPA Management of Polluted Runoff: Nonpoint Source Pollution" (includes mismanagement of lawns problems.)
Garden features
Grasslands
Groundcovers
Hydrology and urban planning | Lawn | [
"Biology",
"Environmental_science"
] | 9,619 | [
"Hydrology and urban planning",
"Hydrology",
"Grasslands",
"Ecosystems"
] |
318,051 | https://en.wikipedia.org/wiki/Law%20of%20mass%20action | In chemistry, the law of mass action is the proposition that the rate of a chemical reaction is directly proportional to the product of the activities or concentrations of the reactants. It explains and predicts behaviors of solutions in dynamic equilibrium. Specifically, it implies that for a chemical reaction mixture that is in equilibrium, the ratio between the concentration of reactants and products is constant.
Two aspects are involved in the initial formulation of the law: 1) the equilibrium aspect, concerning the composition of a reaction mixture at equilibrium, and 2) the kinetic aspect, concerning the rate equations for elementary reactions. Both aspects stem from the research performed by Cato M. Guldberg and Peter Waage between 1864 and 1879, in which equilibrium constants were derived by using kinetic data and the rate equation which they had proposed. Guldberg and Waage also recognized that chemical equilibrium is a dynamic process in which the rates of the forward and backward reactions must be equal at chemical equilibrium. To derive the expression of the equilibrium constant by appeal to kinetics, the expression of the rate equation must be used. The expression of the rate equations was rediscovered independently by Jacobus Henricus van 't Hoff.
The law is a statement about equilibrium and gives an expression for the equilibrium constant, a quantity characterizing chemical equilibrium. In modern chemistry this is derived using equilibrium thermodynamics. It can also be derived with the concept of chemical potential.
History
To describe the equilibrium state, the two chemists expressed the composition of a mixture in terms of numerical values relating the amounts of reactants and product.
Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas about reversible chemical reactions, proposed the law of mass action in 1864. These papers, in Danish, went largely unnoticed, as did the later publication (in French) of 1867 which contained a modified law and the experimental data on which that law was based.
In 1877 van 't Hoff independently came to similar conclusions, but was unaware of the earlier work, which prompted Guldberg and Waage to give a fuller and further developed account of their work, in German, in 1879. Van 't Hoff then accepted their priority.
1864
The equilibrium state (composition)
In their first paper, Guldberg and Waage suggested that in a reaction such as
A + B <=> A' + B'
the "chemical affinity" or "reaction force" between A and B did not just depend on the chemical nature of the reactants, as had previously been supposed, but also depended on the amount of each reactant in a reaction mixture. Thus the law of mass action was first stated as follows:
When two reactants, A and B, react together at a given temperature in a "substitution reaction," the affinity, or chemical force between them, is proportional to the active masses, [A] and [B], each raised to a particular power:

$$\text{affinity} = \alpha[\mathrm{A}]^a[\mathrm{B}]^b$$

In this context a substitution reaction was one such as alcohol + acid <=> ester + water. Active mass was defined in the 1879 paper as "the amount of substance in the sphere of action". For species in solution active mass is equal to concentration. For solids, active mass is taken as a constant. $\alpha$, a and b were regarded as empirical constants, to be determined by experiment.
At equilibrium, the chemical force driving the forward reaction must be equal to the chemical force driving the reverse reaction. Writing the initial active masses of A, B, A' and B' as p, q, p' and q' and the dissociated active mass at equilibrium as $\xi$, this equality is represented by

$$\alpha(p-\xi)^a(q-\xi)^b = \alpha'(p'+\xi)^{a'}(q'+\xi)^{b'}$$

$\xi$ represents the amount of reagents A and B that has been converted into A' and B'. Calculations based on this equation are reported in the second paper.
Dynamic approach to the equilibrium state
The third paper of 1864 was concerned with the kinetics of the same equilibrium system. Writing the dissociated active mass at some point in time as x, the rate of reaction was given as

$$\text{rate} = \varphi(p-x)^a(q-x)^b$$

Likewise the reverse reaction of A' with B' proceeded at a rate given by

$$\text{rate} = \varphi'(p'+x)^{a'}(q'+x)^{b'}$$

The overall rate of conversion is the difference between these rates, so at equilibrium (when the composition stops changing) the two rates of reaction must be equal. Hence

$$\varphi(p-\xi)^a(q-\xi)^b = \varphi'(p'+\xi)^{a'}(q'+\xi)^{b'}$$
1867
The rate expressions given in Guldberg and Waage's 1864 paper could not be differentiated, so they were simplified as follows. The chemical force was assumed to be directly proportional to the product of the active masses of the reactants:

$$\text{affinity} = k[\mathrm{A}][\mathrm{B}]$$

This is equivalent to setting the exponents a and b of the earlier theory to one. The proportionality constant was called an affinity constant, k. The equilibrium condition for an "ideal" reaction was thus given the simplified form

$$k[\mathrm{A}]_{eq}[\mathrm{B}]_{eq} = k'[\mathrm{A'}]_{eq}[\mathrm{B'}]_{eq}$$

[A]eq, [B]eq etc. are the active masses at equilibrium. In terms of the initial amounts of reagents p, q etc. this becomes

$$k(p-\xi)(q-\xi) = k'(p'+\xi)(q'+\xi)$$
The ratio of the affinity coefficients, k'/k, can be recognized as an equilibrium constant. Turning to the kinetic aspect, it was suggested that the velocity of reaction, v, is proportional to the sum of chemical affinities (forces). In its simplest form this results in the expression

$$v = \psi\left(k[\mathrm{A}][\mathrm{B}] - k'[\mathrm{A'}][\mathrm{B'}]\right)$$

where $\psi$ is the proportionality constant. Actually, Guldberg and Waage used a more complicated expression which allowed for interaction between A and A', etc. By making certain simplifying approximations to those more complicated expressions, the rate equation could be integrated and hence the equilibrium quantity $\xi$ could be calculated. The extensive calculations in the 1867 paper gave support to the simplified concept, namely,
The rate of a reaction is proportional to the product of the active masses of the reagents involved.
This is an alternative statement of the law of mass action.
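To make this kinetic picture concrete, here is a minimal sketch in plain Python (the rate constants and initial active masses are invented for illustration and are not values from Guldberg and Waage's papers). It integrates the net rate for A + B <=> A' + B' under the simplified rate law and checks that the composition settles where the forward and reverse mass-action rates balance, so that the equilibrium ratio [A'][B']/([A][B]) reproduces the ratio of the affinity constants.

# Hypothetical illustration of simplified (1867) mass-action kinetics
# for A + B <=> A' + B'. All numbers are invented for demonstration.
k_f, k_r = 2.0, 0.5      # forward and reverse affinity (rate) constants
p, q = 1.0, 1.0          # initial active masses of A and B
p2, q2 = 0.0, 0.0        # initial active masses of A' and B'

x, dt = 0.0, 1e-4        # amount converted, Euler time step
for _ in range(200_000):
    forward = k_f * (p - x) * (q - x)
    reverse = k_r * (p2 + x) * (q2 + x)
    x += (forward - reverse) * dt

A, B = p - x, q - x      # equilibrium active masses
A2, B2 = p2 + x, q2 + x
print("converted amount xi ~", round(x, 4))   # ~0.6667 for these numbers
print("forward rate:", k_f * A * B)           # the two rates now agree
print("reverse rate:", k_r * A2 * B2)
print("[A'][B']/([A][B]) =", (A2 * B2) / (A * B), "vs k_f/k_r =", k_f / k_r)

For these values the converted amount converges to 2/3, and the printed equilibrium ratio equals k_f/k_r = 4, the equilibrium constant of the simplified theory.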
1879
In the 1879 paper the assumption that reaction rate was proportional to the product of concentrations was justified microscopically in terms of the frequency of independent collisions, as had been developed for gas kinetics by Boltzmann in 1872 (Boltzmann equation). It was also proposed that the original theory of the equilibrium condition could be generalised to apply to any arbitrary chemical equilibrium:

$$k_+[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}\cdots = k_-[\mathrm{A'}]^{\alpha'}[\mathrm{B'}]^{\beta'}\cdots$$

The exponents α, β etc. are explicitly identified for the first time as the stoichiometric coefficients for the reaction.
Modern statement of the law
The affinity constants, k+ and k−, of the 1879 paper can now be recognised as rate constants. The equilibrium constant, K, was derived by setting the rates of forward and backward reactions to be equal. This also meant that the chemical affinities for the forward and backward reactions are equal. The resultant expression

$$K = \frac{k_+}{k_-} = \frac{[\mathrm{A'}]^{\alpha'}[\mathrm{B'}]^{\beta'}\cdots}{[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}\cdots}$$
is correct even from the modern perspective, apart from the use of concentrations instead of activities (the concept of chemical activity was developed by Josiah Willard Gibbs, in the 1870s, but was not widely known in Europe until the 1890s). The derivation from the reaction rate expressions is no longer considered to be valid. Nevertheless, Guldberg and Waage were on the right track when they suggested that the driving force for both forward and backward reactions is equal when the mixture is at equilibrium. The term they used for this force was chemical affinity. Today the expression for the equilibrium constant is derived by setting the chemical potential of forward and backward reactions to be equal. The generalisation of the law of mass action, in terms of affinity, to equilibria of arbitrary stoichiometry was a bold and correct conjecture.
The hypothesis that reaction rate is proportional to reactant concentrations is, strictly speaking, only true for elementary reactions (reactions with a single mechanistic step), but the empirical rate expression

$$\text{rate} = k[\mathrm{A}][\mathrm{B}]$$
is also applicable to second order reactions that may not be concerted reactions. Guldberg and Waage were fortunate in that reactions such as ester formation and hydrolysis, on which they originally based their theory, do indeed follow this rate expression.
In general many reactions occur with the formation of reactive intermediates, and/or through parallel reaction pathways. However, all reactions can be represented as a series of elementary reactions and, if the mechanism is known in detail, the rate equation for each individual step is given by the corresponding mass-action expression, so that the overall rate equation can be derived from the individual steps. When this is done the equilibrium constant is obtained correctly from the rate equations for forward and backward reaction rates.
In biochemistry, there has been significant interest in the appropriate mathematical model for chemical reactions occurring in the intracellular medium. This is in contrast to the initial work done on chemical kinetics, which was in simplified systems where reactants were in a relatively dilute, pH-buffered, aqueous solution. In more complex environments, where bound particles may be prevented from dissociating by their surroundings, or diffusion is slow or anomalous, the model of mass action does not always describe the behavior of the reaction kinetics accurately. Several attempts have been made to modify the mass action model, but consensus has yet to be reached. Popular modifications replace the rate constants with functions of time and concentration. As an alternative to these mathematical constructs, one school of thought is that the mass action model can be valid in intracellular environments under certain conditions, but with different rates than would be found in a dilute, simple environment.
The fact that Guldberg and Waage developed their concepts in steps from 1864 to 1867 and 1879 has resulted in much confusion in the literature as to which equation the law of mass action refers to. It has been a source of some textbook errors. Thus, today the "law of mass action" sometimes refers to the (correct) equilibrium constant formula

$$K = \frac{[\mathrm{A'}]^{\alpha'}[\mathrm{B'}]^{\beta'}\cdots}{[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}\cdots}$$

and at other times to the (usually incorrect) rate formula

$$\text{rate} = k[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}\cdots$$
Applications to other fields
In semiconductor physics
The law of mass action also has implications in semiconductor physics. Regardless of doping, the product of electron and hole densities is a constant at equilibrium. This constant depends on the thermal energy of the system (i.e. the product of the Boltzmann constant, $k_B$, and temperature, $T$), as well as the band gap (the energy separation between conduction and valence bands, $E_g$) and the effective densities of states in the valence and conduction bands, $N_v$ and $N_c$. When the equilibrium electron and hole densities are equal, their density is called the intrinsic carrier density, $n_i$, as this would be the value of $n$ and $p$ in a perfect crystal. Note that the final product is independent of the Fermi level, $E_F$:

$$n p = N_c N_v \exp\left(-\frac{E_g}{k_B T}\right) = n_i^2$$
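A small numeric sketch of this relation follows; the silicon values are common textbook approximations and the donor concentration is a hypothetical choice, none of them taken from this article.

import math

k_B = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0        # temperature in K
E_g = 1.12       # approximate band gap of silicon in eV
N_c = 2.8e19     # effective density of states, conduction band (cm^-3)
N_v = 1.04e19    # effective density of states, valence band (cm^-3)

# Law of mass action: n * p = N_c * N_v * exp(-E_g / (k_B * T)) = n_i^2
ni_sq = N_c * N_v * math.exp(-E_g / (k_B * T))
print("intrinsic carrier density n_i ~ %.1e cm^-3" % math.sqrt(ni_sq))

# For an n-type sample with (assumed) full donor ionization, n ~ N_D,
# and p follows from the same constant product, independent of doping:
N_D = 1e16       # hypothetical donor concentration (cm^-3)
n = N_D
p = ni_sq / n
print("n = %.1e, p = %.1e, n*p = %.1e = n_i^2" % (n, p, n * p))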
Diffusion in condensed matter
Yakov Frenkel represented the diffusion process in condensed matter as an ensemble of elementary jumps and quasichemical interactions of particles and defects. Henry Eyring applied his theory of absolute reaction rates to this quasichemical representation of diffusion. The mass action law for diffusion leads to various nonlinear versions of Fick's law.
In mathematical ecology
The Lotka–Volterra equations describe the dynamics of predator-prey systems. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this rate is evaluated as xy, where x is the number of prey and y is the number of predators. This is a typical example of the law of mass action.
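A minimal sketch in plain Python (parameter values and initial populations are arbitrary illustrations) integrates dx/dt = αx − βxy and dy/dt = δxy − γy, where the xy products are the mass-action encounter terms:

# Lotka-Volterra predator-prey dynamics; the x*y terms are the
# mass-action encounter rates. All parameters are illustrative only.
alpha, beta = 1.0, 0.1      # prey growth rate, predation rate
delta, gamma = 0.075, 1.5   # predator reproduction rate, death rate

x, y = 10.0, 5.0            # initial prey and predator populations
dt = 0.001
for step in range(20_000):  # simple Euler integration
    dx = alpha * x - beta * x * y
    dy = delta * x * y - gamma * y
    x += dx * dt
    y += dy * dt
    if step % 5_000 == 0:
        print(f"t={step * dt:5.1f}  prey={x:7.2f}  predators={y:7.2f}")

The populations oscillate out of phase, the classic predator-prey cycle driven by the mass-action interaction term.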
In mathematical epidemiology
The law of mass action forms the basis of the compartmental model of disease spread in mathematical epidemiology, in which a population of humans, animals or other individuals is divided into categories of susceptible, infected, and recovered (immune). The principle of mass action is at the heart of the transmission term of compartmental models in epidemiology, which provide a useful abstraction of disease dynamics. The law of mass action formulation of the SIR model corresponds to the following "quasichemical" system of elementary reactions (a short numeric sketch follows the list):
The list of components is S (susceptible individuals), I (infected individuals), and R (removed individuals, or just recovered ones if we neglect lethality);
The list of elementary reactions is
S + I -> 2I
I -> R.
If the immunity is unstable then the transition from R to S should be added that closes the cycle (SIRS model):
R -> S.
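As a minimal illustration of the system above (plain Python; the rate constants beta and gamma are hypothetical, not fitted to any disease), the basic SIR reactions can be integrated by treating S + I -> 2I as a mass-action step with rate beta*S*I and I -> R as a first-order step with rate gamma*I:

# SIR epidemic as a "quasichemical" mass-action system.
# beta and gamma are illustrative rate constants only.
beta, gamma = 0.3, 0.1
S, I, R = 0.99, 0.01, 0.0       # fractions of the population

dt = 0.1
for _ in range(1_500):          # simple Euler integration
    infection = beta * S * I    # mass-action term for S + I -> 2I
    recovery = gamma * I        # first-order term for I -> R
    S -= infection * dt
    I += (infection - recovery) * dt
    R += recovery * dt

print(f"final: S={S:.3f}, I={I:.4f}, R={R:.3f}")   # S + I + R stays 1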
A rich system of law of mass action models was developed in mathematical epidemiology by adding components and elementary reactions.
Individuals in human or animal populations, unlike molecules in an ideal solution, do not mix homogeneously. There are some disease examples in which this non-homogeneity is great enough that the outputs of the classical SIR model, and its simple generalizations like SIS or SEIR, are invalid. For these situations, more sophisticated compartmental models or distributed reaction-diffusion models may be useful.
See also
Chemical equilibrium
Chemical potential
Disequilibrium ratio
Equilibrium constant
Reaction quotient
References
Further reading
Studies Concerning Affinity. P. Waage and C.M. Guldberg; Henry I. Abrash, Translator.
"Guldberg and Waage and the Law of Mass Action", E.W. Lund, J. Chem. Ed., (1965), 42, 548-550.
A simple explanation of the mass action law. H. Motulsky.
The Thermodynamic Equilibrium Constant
History of chemistry
Equilibrium chemistry
Chemical kinetics
Jacobus Henricus van 't Hoff | Law of mass action | [
"Chemistry"
] | 2,674 | [
"Equilibrium chemistry",
"Chemical kinetics",
"Chemical reaction engineering"
] |
318,052 | https://en.wikipedia.org/wiki/Gaussian%20rational | In mathematics, a Gaussian rational number is a complex number of the form p + qi, where p and q are both rational numbers.
The set of all Gaussian rationals forms the Gaussian rational field, denoted Q(i), obtained by adjoining the imaginary number i to the field of rationals Q.
Properties of the field
The field of Gaussian rationals provides an example of an algebraic number field that is both a quadratic field and a cyclotomic field (since i is a 4th root of unity). Like all quadratic fields it is a Galois extension of Q with Galois group cyclic of order two, in this case generated by complex conjugation, and is thus an abelian extension of Q, with conductor 4.
As with cyclotomic fields more generally, the field of Gaussian rationals is neither ordered nor complete (as a metric space). The Gaussian integers Z[i] form the ring of integers of Q(i). The set of all Gaussian rationals is countably infinite.
The field of Gaussian rationals is also a two-dimensional vector space over Q with natural basis $\{1, i\}$.
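As a small illustration of field arithmetic in Q(i) (a sketch, not part of the article; the helper gdiv and the sample values are ad hoc), a Gaussian rational can be represented as a pair of exact Python Fractions, and division carried out by multiplying through by the conjugate:

from fractions import Fraction

def gdiv(a, b):
    """Divide Gaussian rationals a and b, each a (re, im) pair of
    Fractions, by multiplying through by the conjugate of b."""
    (ar, ai), (br, bi) = a, b
    d = br * br + bi * bi                 # |b|^2, a positive rational
    return ((ar * br + ai * bi) / d, (ai * br - ar * bi) / d)

a = (Fraction(1, 2), Fraction(3, 4))      # 1/2 + (3/4)i
b = (Fraction(2), Fraction(-1))           # 2 - i
print(gdiv(a, b))                         # (1/20, 2/5): still in Q(i)

The result is again a pair of rationals, reflecting the closure of Q(i) under division by nonzero elements.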
Ford spheres
The concept of Ford circles can be generalized from the rational numbers to the Gaussian rationals, giving Ford spheres. In this construction, the complex numbers are embedded as a plane in a three-dimensional Euclidean space, and for each Gaussian rational point in this plane one constructs a sphere tangent to the plane at that point. For a Gaussian rational represented in lowest terms as $p/q$ (i.e. $p$ and $q$ are relatively prime Gaussian integers), the radius of this sphere should be $\frac{1}{2q\bar{q}}$, where $q\bar{q} = |q|^2$ is the squared modulus and $\bar{q}$ is the complex conjugate of $q$. The resulting spheres are tangent for pairs of Gaussian rationals $P/Q$ and $p/q$ with $|Pq - pQ| = 1$, and otherwise they do not intersect each other.
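The tangency criterion can be checked numerically. In the sketch below (illustrative only; the helper names and the sample points are ad hoc choices), each sphere is centered at height equal to its radius above the complex plane, and two spheres are tangent exactly when the distance between their centers equals the sum of their radii:

import math

def sphere(p, q):
    """Ford sphere for the Gaussian rational p/q, with p and q Gaussian
    integers in lowest terms, given as Python complex numbers."""
    z = p / q
    r = 1 / (2 * abs(q) ** 2)       # radius 1/(2 q qbar)
    return (z.real, z.imag, r)      # tangent to the plane at z, center at height r

def gap(s1, s2):
    """Center distance minus sum of radii: ~0 means tangent, >0 disjoint."""
    (x1, y1, r1), (x2, y2, r2) = s1, s2
    d = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (r1 - r2) ** 2)
    return d - (r1 + r2)

# 0/1 and 1/(1+i): |Pq - pQ| = |0*(1+i) - 1*1| = 1, so tangent (~0.0):
print(gap(sphere(0 + 0j, 1 + 0j), sphere(1 + 0j, 1 + 1j)))
# 0/1 and (1+2i)/2: |0*2 - (1+2i)*1| = sqrt(5) != 1, so disjoint (> 0):
print(gap(sphere(0 + 0j, 1 + 0j), sphere(1 + 2j, 2 + 0j)))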
References
Cyclotomic fields
| Gaussian rational | [
"Mathematics"
] | 404 | [
"Number theory stubs",
"Number theory"
] |
318,280 | https://en.wikipedia.org/wiki/Mairead%20Maguire | Mairead Maguire (born 27 January 1944), also known as Mairead Corrigan Maguire and formerly as Mairéad Corrigan, is a peace activist from Northern Ireland. She co-founded, with Betty Williams and Ciaran McKeown, the Women for Peace, which later became the Community for Peace People, an organization dedicated to encouraging a peaceful resolution of the Troubles in Northern Ireland. Maguire and Williams were awarded the 1976 Nobel Peace Prize.
Early life (1944–1976)
Maguire was born into a Roman Catholic community in Belfast, Northern Ireland, the second of eight children – five sisters and two brothers. Her parents were Andrew and Margaret Corrigan. She attended St. Vincent's Primary School, a private Catholic school, until the age of 14, at which time her family could no longer pay for her schooling. After working for a time as a babysitter at a Catholic community centre, she was able to save enough money to enroll in a year of business classes at Miss Gordon's Commercial College, which led her at the age of 16 to a job as an accounting clerk with a local factory. She volunteered regularly with the Legion of Mary, spending her evenings and weekends working with children and visiting inmates at Long Kesh prison. When she was 21 she began working as a secretary for the Guinness brewery, where she remained employed until December 1976.
Maguire told The Progressive in 2013 that her early Catholic heroes included Dorothy Day and the Berrigan brothers.
Northern Ireland peace movement (1976–1980)
Maguire became active with the Northern Ireland peace movement after three children of her sister, Anne Maguire, were run over and killed by a car driven by Danny Lennon, a Provisional Irish Republican Army (IRA) fugitive who had been fatally shot by British troops while trying to make a getaway. Danny Lennon had been released from prison in April 1976 after serving three years for suspected involvement in the IRA. On 10 August, Lennon and accomplice John Chillingworth were transporting an Armalite rifle through Andersonstown, Belfast, when British troops, claiming to have seen a rifle pointed at them, opened fire on the vehicle, instantly killing Lennon and critically wounding Chillingworth. The car Lennon drove went out of control and mounted a pavement on Finaghy Road North, colliding with Anne Maguire and three of her children who were out shopping. Joanne (8) and Andrew (6 weeks) died at the scene; John Maguire (2) succumbed to his injuries at a hospital the following day.
Betty Williams, a resident of Andersonstown who happened to be driving by, witnessed the tragedy and accused the IRA of firing at the British patrol and provoking the incident. In the days that followed she began gathering signatures for a peace petition from Protestants and Catholics and was able to assemble some 200 women to march for peace in Belfast. The march passed near the home of Maguire (then Mairead Corrigan), who joined it. She and Williams thus became "the joint leaders of a virtually spontaneous mass movement", a movement founded on the presumption of a false narrative that the IRA had fired upon the British patrol.
The next march, to the burial sites of the three Maguire children, brought 10,000 Protestant and Catholic women together. The marchers, including Maguire and Williams, were physically attacked by IRA members. By the end of the month Maguire and Williams had brought 35,000 people onto the streets of Belfast petitioning for peace between the republican and loyalist factions. Initially adopting the name "Women for Peace," the movement changed its name to the gender-neutral "Community of Peace People," or simply "Peace People," when Irish Press correspondent Ciaran McKeown joined. In contrast with the prevailing climate at the time, Maguire was convinced that the most effective way to end the violence was not through violence but through re-education. The organization published a biweekly paper, Peace by Peace, and provided for families of prisoners a bus service to and from Belfast's jails. In 1977, she and Betty Williams received the 1976 Nobel Peace Prize for their efforts. Aged 32 at the time, she was the youngest Nobel Peace Prize laureate until Malala Yousafzai was awarded the Nobel Peace Prize in 2014.
After the Nobel Prize (since 1980)
Though Betty Williams resigned from the Peace People in 1980, Maguire has continued her involvement in the organization to this day and has served as the group's honorary president. It has since taken on a more global agenda, addressing an array of social and political issues from around the world.
In January 1980, after a prolonged battle with depression over the loss of her children in the 1976 Finaghy Road incident, Maguire's sister Anne committed suicide. A year and a half later, in September 1981, Mairead married Jackie Maguire, who was her late sister's widower. She has three stepchildren and two children of her own, John Francis (b. 1982) and Luke (b. 1984).
In 1981 Maguire co-founded the Committee on the Administration of Justice, a nonsectarian organisation dedicated to defending human rights.
She is a member of the group Consistent Life Ethic, which opposes abortion, capital punishment, and euthanasia.
Maguire has been involved in a number of campaigns on behalf of political prisoners around the world. In 1993 she and six other Nobel Peace Prize laureates tried unsuccessfully to enter Myanmar from Thailand to protest the protracted detention of opposition leader Aung San Suu Kyi. She was a first signatory on a 2008 petition calling on Turkey to end its torture of Kurdish leader Abdullah Öcalan. In October 2010, she signed a petition calling on China to release Nobel Peace Prize laureate Liu Xiaobo from house arrest.
Maguire was selected in 2003 to serve on the honorary board of the International Coalition for the Decade, a coalition of national and international groups, presided over by Christian Renoux, whose aim was to promote the United Nations' 1998 vision of the first decade of the twenty-first century as the International Decade for the Promotion of a Culture of Peace and Non-Violence for the Children of the World.
In 2006, Maguire was one of the founders of the Nobel Women's Initiative along with fellow Peace Prize laureates Betty Williams, Shirin Ebadi, Wangari Maathai, Jody Williams, and Rigoberta Menchú Tum. The Initiative describes itself as six women representing North and South America, Europe, the Middle East, and Africa who decided to bring together their "extraordinary experiences in a united effort for peace with justice and equality" and "to help strengthen work being done in support of women's rights around the world".
Maguire supported the Occupy movement and has described WikiLeaks founder Julian Assange as "very courageous". She has also praised Chelsea Manning. "I think they've been tremendously courageous in telling the truth", she has said, adding that "the American government and NATO have destroyed Iraq and Afghanistan. Their next targets will be Syria and Iran".
Together with Desmond Tutu and Adolfo Pérez Esquivel, Maguire published a letter in support of Chelsea Manning, saying: "The words attributed to Manning reveal that he went through a profound moral struggle between the time he enlisted and when he became a whistleblower. Through his experience in Iraq, he became disturbed by top-level policy that undervalued human life and caused the suffering of innocent civilians and soldiers. Like other courageous whistleblowers, he was driven foremost by a desire to reveal the truth".
Maguire has also earned a degree from the Irish School of Ecumenics at Trinity College Dublin. She works with various interchurch and interfaith organizations and is a councilor with the International Peace Council. She is also a Patron of the Methodist Theological College, and of the Northern Ireland Council for Integrated Education.
In April 2019 Maguire collected the 2019 GUE/NGL Award for Journalists, Whistleblowers & Defenders of the Right to Information on behalf of Julian Assange who was at the time imprisoned by the United Kingdom.
United States
Maguire is an outspoken critic of U.S. and British policy in the Middle East, particularly in Iraq and Afghanistan. She has also been personally critical of U.S. President Barack Obama's leadership. Her activism in the U.S. has occasionally brought her into confrontations with the law.
After initially accepting an invitation to a 2012 Nobel summit in Chicago, she changed her mind because the event was hosted by the U.S. State Department, "and to me the Nobel Peace laureates should not be hosted by a State Department that is continuing with war, removing basic civil liberties and human rights and international law and then talking about peace to young people. That's a double standard".
Maguire said in a 2013 interview that ever since her 40-day fast and arrest outside the White House in 2003 (see below), "whenever I now come into America, I'm always questioned as to what my background is".
In 2015, Maguire spoke with Democracy Now in a sit-down interview titled, "No to Violence, Yes to Dialogue", which included two other Nobel Peace Prize Laureates, Jody Williams and Leymah Gbowee. Maguire discussed the desire "to end militarism and war, and to build peace and international law and human rights and democracy".
Iraq and Afghanistan
Maguire voiced strong opposition to the U.N. sanctions against Iraq, which are alleged by some to have resulted in hundreds of thousands of civilian deaths, calling them "unjust and inhuman", "a new kind of bomb", and "even more cruel than weapons". During a visit to Baghdad with Argentinian colleague Adolfo Pérez Esquivel in March 1999, Maguire urged then-U.S. President Bill Clinton and British Prime Minister Tony Blair to end the bombing of Iraq and to permit the lifting of U.N. sanctions. "I have seen children dying with their mothers next to them and not being able to do anything", Maguire said. "They are not soldiers".
In the aftermath of al-Qaeda's attacks on the U.S. in September 2001, as it became clear that the U.S. would retaliate and deploy troops in Afghanistan, Maguire campaigned against the impending war. In India she claimed to have marched with "hundreds of thousands of Indian people walking for peace". In New York, Maguire was reported to have marched with 10,000 protesters, purportedly including families of 9/11 victims, as U.S. war planes were already en route to strike Taliban targets in Afghanistan.
In the period leading up to the March 2003 invasion of Iraq, Maguire campaigned vigorously against the anticipated hostilities. Speaking at the 23rd War Resisters' International Conference in Dublin, Ireland in August 2002, Maguire called on the Irish government to oppose the Iraq War "in every European and world forum of which they are a part". On 17 March 2003, St. Patrick's Day, Maguire protested the war outside the United Nations Headquarters with, among other activists, Frida Berrigan. On 19 March, Maguire addressed an audience of 300 people in a chapel at Le Moyne College in Syracuse, N.Y. "Armies with all their advanced weapons of mass destruction are facing the Iraqi people who have nothing", she told the crowd. "In anybody's language, it's not fair". Around this time, Maguire held a 30-day vigil and began a 40-day liquid fast outside the White House, joined by members of Pax Christi USA and Christian church leaders. As the war got under way in the days that followed, Maguire described the invasion as an "ongoing and shameful slaughter". "Daily we sit, facing Mecca in solidarity with our Muslim brothers and sisters in Iraq, and we ask Allah for forgiveness", she said in a statement to the press on 31 March. Maguire would later remark that the media in the U.S. distorted news from Iraq and that the Iraq War was carried out in pursuit of American "economic and military interests". In February 2006 she expressed her belief that George W. Bush and Tony Blair "should be made accountable for illegally taking the world to war and for war crimes against humanity".
Criticism of President Barack Obama
Maguire expressed disappointment with the selection of U.S. President Barack Obama as winner of the 2009 Nobel Peace Prize. "They say this is for his extraordinary efforts to strengthen international diplomacy and co-operation between peoples", she said, "and yet he continues the policy of militarism and occupation of Afghanistan, instead of dialogue and negotiations with all the parties to the conflict. [...] Giving this award to the leader of the most militarised country in the world, which has taken the human family against its will to war, will rightly be seen by many people around the world as a reward for his country's aggression and domination".
After declining to meet with the Dalai Lama during his visit to the U.S. in 2008, citing conflicting travel schedules, Obama declined to meet with the exiled Tibetan spiritual leader again in 2009. Maguire condemned what she considered Obama's deliberate refusal to meet with the Dalai Lama, calling it "horrifying".
Speaking at the Carl von Ossietzky Medal Award Ceremony in Berlin in December 2010, Maguire imputed criminal accountability to President Obama for violation of international law. "When President Obama says he wants to see a world without nuclear weapons and says, in respect of Iran and their alleged nuclear weapons ambitions, that 'all option are on the table,' this is clearly a threat to use nuclear weapons, clearly a criminal threat against Iran, under the world court advisory opinion. The Nuremberg Charter of 8 August 1945 says the threat or use of nuclear weapons is criminal, so officials in all nine nuclear weapons states who maintain and use nuclear deterrence as a threat are committing crimes and breaking international law".
Confrontations with the law
Maguire was twice arrested in the United States. On 17 March 2003, she was arrested outside the United Nations headquarters in New York City during a protest against the Iraq War. Later that month, on 27 March, she was one of 65 anti-war protesters briefly taken into custody by police after penetrating a security barricade near the White House.
In May 2009, following a visit to Guatemala, immigration authorities at the Houston Airport in Texas detained Maguire for a number of hours, during which time she was questioned, fingerprinted and photographed, and consequently missed her connecting flight to Northern Ireland. "They insisted I must tick the box in the Immigration form admitting to criminal activities," she explained. In late July that same year, Maguire was again detained by immigration authorities, this time at the Dulles International Airport in Virginia, on her way from Ireland to New Mexico to meet with colleague Jody Williams. As in May, the delay resulted in Maguire missing her connecting flight.
Israel
Maguire first visited Israel at age 40 in 1984. She came then as part of an interfaith initiative seeking forgiveness from Jews for years of persecution by Christians in Jesus' name. Her second visit was in June 2000, this time in response to invitations from Rabbis for Human Rights and the Israeli Committee Against House Demolitions. The two groups had taken upon themselves to defend Ahmed Shamasneh in an Israeli military court against charges of illegally constructing his home in the West Bank town of Qatanna, and Maguire traveled to Israel to observe the court proceedings and support the Shamasneh family.
In a 2013 interview, she omitted any mention of her 1984 trip to Israel, saying that "I first went to Israel/Palestine at the invitation of Rabbis for Human Rights and the Israeli Committee against House Demolitions" and "was absolutely horrified" at Palestinians' living conditions. It was after that visit that she "started going on a regular basis" because she was "very hopeful that there is a solution to the Israeli/Palestinian injustice. In Northern Ireland, people said there would never be a solution. But once people begin to have the political will and force their governments to sit down, it can happen".
Maguire has at times been fiercely critical of the State of Israel, even calling for its membership in the United Nations to be revoked or suspended. She has accused the Israeli government of "carrying out a policy of ethnic cleansing against Palestinians...in east Jerusalem" and supports boycott and divestment initiatives against Israel. Concomitantly, Maguire has also said that she loves Israel and that "to live in Israel for Jewish people, is to live in fear of suicide bombs and Kassam rockets".
A 2013 profile of Maguire in The Progressive noted that "she hasn't lost her passion yet" and that "it is the Israeli occupation of Palestine that has occupied much of her attention in recent years".
Mordechai Vanunu advocacy
Maguire has been a vocal supporter of Mordechai Vanunu, a former Israeli nuclear technician who revealed details of Israel's nuclear defence program to the British press in 1986 and subsequently served 18 years in prison for treason. Maguire flew to Israel in April 2004 to greet Vanunu upon his release and has since flown to meet with him in Israel on several occasions.
In an open letter addressed to the Israeli people in July 2010, after Vanunu was returned to prison for violating the terms of his parole, Maguire urged Jews in Israel to petition their government for Vanunu's release and freedom. She praised Vanunu as "a man of peace", "a great visionary", "a true Gandhian spirit" and compared his actions to those of Alfred Nobel.
References to the Holocaust
At a joint press conference with Mordechai Vanunu in Jerusalem in December 2004, Maguire compared Israel's nuclear weapons to the Nazi gas chambers in Auschwitz. "When I think about nuclear weapons, I've been to Auschwitz concentration camp". She added, "Nuclear weapons are only gas chambers perfected ... and for a people who already know what gas chambers are, how can you even think of building perfect gas chambers".
In January 2006, close to Holocaust Memorial Day, Maguire asked that Mordechai Vanunu be remembered together with the Jews that perished in the Holocaust. "As we, with sorrow and sadness, remember the Holocaust Victims, we remember too those individuals of conscience who refused to be silenced in the face of danger and paid with their freedom and lives in defending their Jewish brothers and sisters, and we remember our brother Mordechai Vanunu – the lonely Israeli prisoner in his own country, who refused to be silent".
In a speech delivered in February 2006 to the Nuclear Age Peace Foundation in Santa Barbara, California, Maguire again made a comparison between nuclear weapons and the Nazis. "Last April some of us protested at Dimona Nuclear Plant, in Israel, calling for it to be open to UN Inspection, and bombs to be destroyed. Israeli Jets flew overhead, and a train passed into the Dimona nuclear site. This brought back to me vivid memories of my visit to Auschwitz concentration camp, with its rail tracks, trains, destruction and death".
Maguire firmly denied comparing Israel to Nazi Germany in an interview with Tal Schneider of Lady Globes in November 2010. "I have for years been speaking out against nuclear weapons. I am actively opposed to nuclear weapons in Britain, in the United States, in Israel, in any country, because nuclear weapons are the ultimate destruction of humankind. But I have never said that Israel is like Nazi Germany, and I don't know why I am quoted like that in Israel. I also never compared Gaza to an extermination camp. I visited the death camps in Austria, with Nobel Prize laureate Elie Wiesel, and I think it is terrible that people did not try to stop the genocide of the Jewish people".
Palestinian activism
Maguire said in a 2007 speech that Israel's separation wall "is a monument to fear and failed politics" and that "for many Palestinians daily living is so hard, it is indeed an act of resistance." She praised the "inspirational work of the International Solidarity Movement" and paid tribute to the memory of "Rachel Corrie, who gave her life protesting the demolition of Palestinian homes by Israeli military", saying that "it is the Rachels of this world who reminds us that we are responsible for each other, and we are interconnected in a mysteriously spiritual and beautiful way".
On 20 April 2007, Maguire participated in a protest against the construction of Israel's separation barrier outside the Palestinian village of Bil'in. The protest was held in a no-access military zone. Israeli forces used tear-gas grenades and rubber-coated bullets in an effort to disperse the protesters, while the protesters hurled rocks at the Israeli troops, injuring two Border Guard policemen. One rubber bullet hit Maguire in the leg, whereupon she was transferred to an Israeli hospital for treatment. She was also reported to have inhaled large quantities of tear gas.
In October 2008, Maguire arrived in Gaza aboard the SS Dignity. Although Israel had insisted that the yacht would not be permitted to approach Gaza, then-Prime Minister Ehud Olmert ultimately capitulated and allowed the ship to sail to its destination without incident. During her stay in Gaza, Maguire met with Hamas leader Ismail Haniyeh. She was photographed accepting an honorary golden plate depicting the Palestinian flag draped over all of Israel and the occupied territories.
In March 2009, Maguire joined a campaign for the immediate and unconditional removal of Hamas from the European Union list of proscribed terrorist organisations.
On 30 June 2009, Maguire was taken into custody by the Israeli military along with twenty others, including former U.S. Congresswoman Cynthia McKinney. She was on board a small ferry, the MV Spirit of Humanity (formerly the Arion), said to be carrying humanitarian aid to the Gaza Strip, when Israel intercepted the vessel off the coast of Gaza. From an Israeli prison, she gave a lengthy interview with Democracy Now! via cell phone on 2 July 2009, and was deported to Dublin on 7 July 2009. In the interview, she rejected Israeli authorities' claim that aid can pass freely into Gaza, charging that "Gaza is like a huge prison...a huge occupied territory of one-and-a-half million people who have been subjected to collective punishment by the Israeli government". She further said that "the tragedy is that the American government, the UN and Europe, they remain silent in the face of the abuse of Palestinian human rights, like freedom, and it's really tragic". In addition, she claimed that when her boat was approached by Israeli naval vessels in international waters, "we were in grave danger of actually being killed at that point....really we were in a very, very dangerous position. So we were literally hijacked, taken at gunpoint by the Israeli military".
In May–June 2010, Maguire was a passenger on board the MV Rachel Corrie, one of seven vessels that were part of the Gaza Freedom Flotilla, a flotilla of pro-Palestinian activists that attempted to bust the Israeli-Egyptian blockade of the Gaza Strip. In an interview with BBC Radio Ulster while still at sea, Maguire called the blockade an "inhumane, illegal siege." Having been delayed due to mechanical problems, the Rachel Corrie did not actually sail with the flotilla and only approached the Gazan coast several days after the main flotilla did. In contrast with the violence that characterised the arrival of the first six ships, Israel's takeover of the Rachel Corrie was met only with passive resistance. Israeli naval forces were even lowered a ladder by the passengers to assist their ascent onto the deck. After the incident, Maguire said she did not feel her life was in danger as the ship's captain, Derek Graham, had been in touch with the Israeli navy to assure them that there would be no violent resistance.
On 28 September 2010, Maguire landed in Israel as part of a delegation of the Nobel Women's Initiative. She was refused an entry visa by Israeli authorities on the grounds that she had twice in the past tried to run Israel's naval embargo of the Gaza Strip and that a 10-year exclusion order was in effect against her. She fought her deportation with the help of Adalah, an NGO devoted to the rights of Palestinians in Israel. Fatmeh El-Ajou, an attorney for Adalah, noted, "We believe that the decision to refuse entry to Ms. Maguire is based on illegitimate, irrelevant, and arbitrary political considerations". Her legal team filed a petition against the order with the Central District Court on Maguire's behalf, but the court ruled that the deportation order was valid. Maguire then appealed to Israel's Supreme Court. Initially, the Court proposed that Maguire be allowed to remain in the country for a few days on bail despite the deportation order; however, the state rejected the proposal, arguing that Maguire had known prior to her arrival that she was barred from entering Israel and that her conduct amounted to taking the law into her own hands. A three-judge panel accepted the state's position and upheld the ruling of the Central District Court. At one point during the hearing, Maguire reportedly spoke up, saying that Israel must stop "its apartheid policy and the siege on Gaza". One of the judges scolded her, saying, "This is no place for propaganda". Corrigan-Maguire was deported on a flight to the UK the following morning, 5 October 2010.
As preparations for a second Gaza flotilla got underway in the summer of 2011, with the Irish MV Saoirse expected to take part, Maguire expressed her support for the campaign and called on Israel to grant the flotilla passengers safe passage to Gaza.
Maguire said in a December 2011 interview that "Hamas is an elected party and should be recognized as such by all. It has the democratic vote and should be recognised". She pointed out that on her 2008 Gaza trip she had been invited to speak "to the Hamas parliament".
In March 2014, Maguire tried to arrive to Gaza through Egypt, as a part of activist delegation which also included the American anti-war activist Medea Benjamin. The members of the delegation were arrested in Cairo, questioned and deported.
In 2016, Maguire attempted to break Israel's naval blockade of the Gaza Strip along with 13 other activists on board the Women's Boat to Gaza, until they were stopped by the Israeli Navy approximately offshore. The boat was escorted to the port of Ashdod. The Israeli military said the interception was brief and without injuries. Maguire complained that she and the activists were "arrested, kidnapped, illegally, in international waters and taken against our wishes to Israel".
Comparison of Palestinians and Israelis
Maguire has more than once suggested that Palestinians are more interested in peace than the Israeli government. She said in a 2011 interview that when she and some colleagues left Gaza in 2008, "we were very hopeful because there is a passionate desire among the Palestinian people for peace, and then Operation Cast Lead started the following week. That was horrific". Israel, she said, had killed Palestinian farmers and fishermen who were just "trying to fish for their families", thus proving "that the Israeli government does not want peace". In a 2013 interview, she repeated the same point, saying that in Gaza in 2008, she had been told by Hamas and Fatah leaders "that they want dialogue and peace", yet a week later "Israel bombed Gaza, committing war crimes", showing "that there is no political will for peace in the Israeli government".
Russell Tribunal
In October 2012, Maguire traveled to New York City to serve on the Russell Tribunal on Israel/Palestine alongside writer Alice Walker, activist Angela Davis, former Congresswoman Cynthia McKinney, and Pink Floyd's Roger Waters. During her participation in the Russell Tribunal, Maguire, according to one report, "asked the question that seems to be taboo in the U.S.: Why does President Barack Obama allow Israel to threaten Iran with war when Iran has signed the NPT and Israel has at least 200 nuclear weapons? Why does the president not demand that Israel sign the NPT?"
After her work on the Russell Tribunal was completed Maguire said that the experience had "opened the mind, and deepened the understanding of all those present to the facts of the ongoing injustice which the Palestinians are daily suffering under Israeli siege and occupation. The RToP's findings and conclusions challenge Governments and civil society to have courage and act by implementing sanctions, BDS etc., thereby refusing to be silent and complicit in the face of Israel's violation of International Laws. The RToP was brilliant, informative and decisive, reminding us that all our Governments, and we the people, have a moral and legal responsibility to act to protect Human Rights and International Law and we cannot be silent when injustice is being done to anyone, anywhere".
Rohingya issue
In March 2018, Maguire and two other Nobel Peace Prize laureates, Shirin Ebadi and Tawakkol Karman, visited Rohingya camps in Cox's Bazar and shared their views on the crisis. After returning to Dhaka, they discussed the Rohingya crisis with members of Bangladeshi civil society.
Personal philosophy and vision
Mairead Maguire is a proponent of the belief that violence is a disease that humans develop but are not born with. She believes humankind is moving away from a mindset of violence and war and evolving to a higher consciousness of nonviolence and love. Among the figures she considers spiritual prophets in this regard are Jesus, Francis of Assisi, Gandhi, Khan Abdul Ghaffar Khan, Fr. John L. McKenzie, and Martin Luther King Jr.
Maguire professes to reject violence in all its forms. "As a pacifist I believe that violence is never justified, and there are always alternatives to force and threat of force. We must challenge the society that tells us there is no such alternative. In all areas of our lives we should adopt nonviolence, in our lifestyles, our education, our commerce, our defence, and our governance." Maguire has called for the abolition of all armies and the establishment of a multi-national community of unarmed peacekeepers in their stead.
The Vision of Peace: Faith and Hope in Northern Ireland
Maguire has written a book, The Vision of Peace: Faith and Hope in Northern Ireland. Published in 2010, it is a collection of essays and letters, in many of which she discusses the connections between her political activities and her faith. Most of the book is about Northern Ireland, but Maguire also discusses the Holocaust, India, East Timor, and Yugoslavia. Maguire writes that "hope for the future depends on each of us taking non-violence into our hearts and minds and developing new and imaginative structures which are non-violent and life-giving for all.... Some people will argue that this is too idealistic. I believe it is very realistic.... We can rejoice and celebrate today because we are living in a miraculous time. Everything is changing and everything is possible."
Awards and honours
Maguire has received numerous awards and honours in recognition of her work. Yale University awarded Maguire an honorary Doctor of Laws degree in 1977. In the same year, she received the Golden Plate Award of the American Academy of Achievement. The College of New Rochelle awarded her an honorary degree in 1978 as well. In 1998 Maguire received an honorary degree from Regis University, a Jesuit institution in Denver, Colorado. The University of Rhode Island awarded her an honorary degree in 2000. She was presented with the Science and Peace Gold Medal by the Albert Schweitzer International University in 2006, for meaningfully contributing to the spread of culture and the defence of world peace.
In 1990 she was awarded the Pacem in Terris Award, named after a 1963 encyclical letter by Pope John XXIII that called upon all people of good will to secure peace among all nations. The Davenport Catholic Interracial Council extolled Maguire for her peace efforts in Northern Ireland and for being "a global force against violence in the name of religion." Pacem in terris is Latin for "Peace on Earth".
The Nuclear Age Peace Foundation honoured Maguire with the Distinguished Peace Leadership Award in 1992, "for her moral leadership and steadfast commitment to social justice and nonviolence."
Criticism
Nobel Prize decision and Peace People movement
Referring to the decision to award Maguire and Betty Williams the 1976 Nobel Peace Prize, journalist Michael Binyon of The Times commented, "The Nobel committee has made controversial awards before. Some have appeared to reward hope rather than achievement." He described as sadly "negligible" the two women's contribution to bringing peace to Northern Ireland.
Alex Maskey of Sinn Féin charged that at the time of the Troubles, the Peace People movement was hijacked by the British government and used to turn public opinion against Irish republicanism. "For me and others, the Peace People and their good intentions were quickly exploited and absorbed into British state policy," Maskey opined.
Derek Brown, the Belfast correspondent for The Guardian, wrote that Maguire and Betty Williams were "both formidably articulate and, in the best possible sense, utterly naive." He described their call for an end to violence in response to the will of the people as an "awesomely impractical demand."
In his extensive study of the Peace People movement, Rob Fairmichael found that the Peace People were seen by some as being "more anti-IRA than anti-UDA," i.e. more opposed to republican factions than to loyalist ones. As examples of the forms that some of the extreme negative reactions took, Fairmichael noted that "Betty Williams and Mairead Corrigan were beaten up numerous times and at times the leaders were threatened by a hostile crowd."
Prize money controversy
While most Nobel Prize laureates keep their prize money, it is not uncommon for prize winners to donate prize money to scientific, cultural or humanitarian causes. Maguire and Williams' decision to keep their prize funds created controversy in Northern Ireland. The move angered many people, including members of the Peace People, and fuelled unpleasant rumours about the two women. Rob Fairmichael writes of "gossip of fur coats" and concludes that the prize money controversy was perceived by the public, in the context of the Peace People's eventual decline, as specifically problematic.
Israeli and Pro-Israeli reactions
In the wake of the 2009 Gaza flotilla, Ben-Dror Yemini, a popular columnist for the Israeli daily Ma'ariv, wrote that Maguire was obsessed with Israel. "There is a lunatic coalition that does not concern itself with the slaughtered in Sri Lanka or with the oppressed Tibetans. They see only the struggle against the Israeli Satan." He further charged that Maguire chose to identify with a population that elected an openly antisemitic movement to lead it – one whose raison d'être is the destruction of the Jewish state.
Eliaz Luf, the deputy head of the Israeli foreign mission to Canada, has argued that Maguire's activism plays into the hands of Hamas and other terrorist organisations in Gaza.
Michael Elterman, Chairman of the Canada-Israel Committee Pacific Region, warned that Maguire's actions, though probably well-intentioned, have promoted a hateful, antisemitic agenda.
In a 4 October 2010 editorial entitled "The disingenuous Nobel laureate," the Jerusalem Post called Maguire's comparison of Israel's nuclear weapons to the gas chambers of Auschwitz "outrageous" and maintained that "Israel can and must use its sovereignty to stop people like Maguire who are essentially seeking to endanger the lives of Israeli citizens." The Post applauded Maguire's expulsion from Israel, "not because of Maguire's outrageous comparison in 2004 of Israel's purported nuclear capability to Auschwitz's gas chambers, nor because of her absurd, reprehensible accusation made in court Monday that Israel is an 'apartheid state' perpetrating 'ethnic cleansing against Palestinians'," but because she had taken "actions that undermine Israel's ability to protect itself".
The Post argued that if Maguire and others "truly desire to improve the lives of Gazans, they should send their humanitarian aid in coordination with Israel," pressure Hamas "and the other radical Islamists who control the Gaza Strip to stop senseless ballistic attacks on Israeli towns and villages, kibbutzim and moshavim," and "insist that Hamas provide Gaza's citizens with a stable, responsible leadership that respects human rights and religious freedom, as well as that it accept the UN-recognized right of the Jewish people to self-determination and political sovereignty in their historical homeland." But Maguire "seems more intent on enabling Israel's terrorist enemies," exploiting "charges of a 'humanitarian crisis' in Gaza in order to empower Hamas terrorists."

Jewish and Israeli opinion is not all negative. Following the June 2010 Gaza flotilla raid, Israeli Prime Minister Benjamin Netanyahu was careful to distinguish between Maguire's nonviolent resistance aboard the Rachel Corrie, which he referred to as "a flotilla of peace activists – with whom we disagree, but whose right to a different opinion we respect," and the conduct of the activists aboard the other six vessels, which he described as "a flotilla of hate, organized by violent, terrorism-supporting extremists." Gideon Levy strongly defended Maguire in the Israeli newspaper Haaretz in October 2010, calling her "the victim of state terror" after Israel refused to allow her to enter the country and kept her detained for several days.
See also
List of female Nobel laureates
List of peace activists
International Fellowship of Reconciliation
PeaceJam
References
Bibliography
External links
Peace People
Mairead Corrigan Maguire Peace People
International Fellowship of Reconciliation
Nobel Peace Prize Laureate Mairead Corrigan Peace Heroes
Irish Nobel Prize winners The Best Question
Nobel Women's Initiative
1944 births
Living people
Alumni of Trinity College Dublin
Nobel Peace Prize laureates
Nobel laureates from Northern Ireland
British Nobel laureates
Nonviolence advocates
Pacifists from Northern Ireland
People deported from Israel
Activists from Belfast
People of The Troubles (Northern Ireland)
Roman Catholic activists
Roman Catholics from Northern Ireland
Women activists from Northern Ireland
Women from Northern Ireland in politics
Women Nobel laureates
1976 in Northern Ireland
21st-century politicians from Northern Ireland
20th-century politicians from Northern Ireland | Mairead Maguire | [
"Technology"
] | 7,814 | [
"Women Nobel laureates",
"Women in science and technology"
] |
318,352 | https://en.wikipedia.org/wiki/Corona%20discharge | A corona discharge is an electrical discharge caused by the ionization of a fluid such as air surrounding a conductor carrying a high voltage. It represents a local region where the air (or other fluid) has undergone electrical breakdown and become conductive, allowing charge to continuously leak off the conductor into the air. A corona discharge occurs at locations where the strength of the electric field (potential gradient) around a conductor exceeds the dielectric strength of the air. It is often seen as a bluish glow in the air adjacent to pointed metal conductors carrying high voltages, and emits light by the same mechanism as a gas discharge lamp. Corona discharges can also happen in weather, such as thunderstorms, where objects like ship masts or airplane wings have a charge significantly different from the air around them (St. Elmo's fire).
In many high-voltage applications, corona is an unwanted side effect. Corona discharge from high-voltage electric power transmission lines constitutes an economically significant waste of energy for utilities. In high-voltage equipment like cathode-ray-tube televisions, radio transmitters, X-ray machines, and particle accelerators, the current leakage caused by coronas can constitute an unwanted load on the circuit. In the air, coronas generate gases such as ozone (O3) and nitric oxide (NO), and in turn, nitrogen dioxide (NO2), and thus nitric acid (HNO3) if water vapor is present. These gases are corrosive and can degrade and embrittle nearby materials, and are also toxic to humans and the environment.
Corona discharges can often be suppressed by improved insulation, corona rings, and making high-voltage electrodes in smooth rounded shapes. However, controlled corona discharges are used in a variety of processes such as air filtration, photocopiers, and ozone generators.
Introduction
A corona discharge is a process by which a current flows from an electrode with a high potential into a neutral fluid, usually air, by ionizing that fluid so as to create a region of plasma around the electrode. The ions generated eventually pass the charge to nearby areas of lower potential, or recombine to form neutral gas molecules.
When the potential gradient (electric field) is large enough at a point in the fluid, the fluid at that point ionizes and it becomes conductive. If a charged object has a sharp point, the electric field strength around that point will be much higher than elsewhere. Air near the electrode can become ionized (partially conductive), while regions more distant do not. When the air near the point becomes conductive, it has the effect of increasing the apparent size of the conductor. Since the new conductive region is less sharp, the ionization may not extend past this local region. Outside this region of ionization and conductivity, the charged particles slowly find their way to an oppositely charged object and are neutralized.
Along with the similar brush discharge, the corona is often called a "single-electrode discharge", as opposed to a "two-electrode discharge"—an electric arc. A corona forms only when the conductor is widely enough separated from conductors at the opposite potential that an arc cannot jump between them. If the geometry and gradient are such that the ionized region continues to grow until it reaches another conductor at a lower potential, a low resistance conductive path between the two will be formed, resulting in an electric spark or electric arc, depending upon the source of the electric field. If the source continues to supply current, a spark will evolve into a continuous discharge called an arc.
Corona discharge forms only when the electric field (potential gradient) at the surface of the conductor exceeds a critical value, the dielectric strength or disruptive potential gradient of the fluid. In air at a sea-level pressure of 101 kPa, the critical value is roughly 30 kV/cm, but this value falls as the air pressure decreases, so corona discharge is more of a problem at high altitudes. Corona discharge usually forms at highly curved regions on electrodes, such as sharp corners, projecting points, edges of metal surfaces, or small-diameter wires. The high curvature causes a high potential gradient at these locations, so that the air breaks down and forms plasma there first. On sharp points in air, corona can start at potentials of 2–6 kV. In order to suppress corona formation, terminals on high-voltage equipment are frequently designed with smooth large-diameter rounded shapes like balls or toruses, and corona rings are often added to insulators of high-voltage transmission lines.
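To make the curvature dependence concrete: for an isolated conducting sphere, the surface field is E = V/r, so the corona onset voltage scales roughly with the tip radius. The Python sketch below illustrates that scaling; the idealized-sphere model and the chosen radii are assumptions for illustration, not figures from this article.

```python
# Minimal sketch, assuming an isolated conducting sphere: the surface field
# is E = V / r, so small radii of curvature reach the breakdown field of air
# at modest voltages. Real onset voltages are higher than this idealized
# model suggests (Peek-type corrections apply; see the Mechanism section).

BREAKDOWN_KV_PER_CM = 30.0  # approximate dielectric strength of sea-level air

def onset_voltage_kv(radius_cm: float) -> float:
    """Voltage at which the idealized surface field E = V/r reaches breakdown."""
    return BREAKDOWN_KV_PER_CM * radius_cm

for radius_cm in (1.0, 0.1, 0.01):  # blunt ball, small tip, sharp needle
    print(f"r = {radius_cm:5.2f} cm -> corona onset near {onset_voltage_kv(radius_cm):5.1f} kV")
```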
Coronas may be positive or negative. This is determined by the polarity of the voltage on the highly curved electrode. If the curved electrode is positive with respect to the flat electrode, it has a positive corona; if it is negative, it has a negative corona. (See below for more details.) The physics of positive and negative coronas are strikingly different. This asymmetry is a result of the great difference in mass between electrons and positively charged ions, with only the electron having the ability to undergo a significant degree of ionizing inelastic collision at common temperatures and pressures.
An important reason for considering coronas is the production of ozone around conductors undergoing corona processes in air. A negative corona generates much more ozone than the corresponding positive corona.
Applications
Corona discharge has a number of commercial and industrial applications:
Removal of unwanted electric charges from the surface of aircraft in flight and thus avoiding the detrimental effect of uncontrolled electrical discharge pulses on the performance of avionic systems
Manufacture of ozone
Sanitization of pool water
In an electrostatic precipitator, removal of solid pollutants from a waste gas stream, or scrubbing particles from the air in air-conditioning systems
Photocopying
Air ionisers
Production of photons for Kirlian photography to expose photographic film
EHD thrusters, lifters, and other ionic wind devices
Nitrogen laser
Ionization of a gaseous sample for subsequent analysis in a mass spectrometer or an ion mobility spectrometer
Static charge neutralization, as applied through antistatic devices like ionizing bars
Refrigeration of electronic devices by forced convection
Coronas can be used to generate charged surfaces, which is an effect used in electrostatic copying (photocopying). They can also be used to remove particulate matter from air streams by first charging the air, and then passing the charged stream through a comb of alternating polarity, to deposit the charged particles onto oppositely charged plates.
The free radicals and ions generated in corona reactions can be used to scrub the air of certain noxious products, through chemical reactions, and can be used to produce ozone.
Problems
Coronas can generate audible and radio-frequency noise, particularly near electric power transmission lines. Therefore, power transmission equipment is designed to minimize the formation of corona discharge.
Corona discharge is generally undesirable in:
Electric power transmission, where it causes:
Power loss
Audible noise
Electromagnetic interference
Purple glow
Ozone production
Insulation damage
Possible distress in animals that are sensitive to ultraviolet light
Electrical components such as transformers, capacitors, electric motors, and generators:
Corona can progressively damage the insulation inside these devices, leading to equipment failure.
Elastomer items such as O-rings can suffer ozone cracking.
Plastic film capacitors operating at mains voltage can suffer progressive loss of capacitance as corona discharges cause local vaporization of the metallization.
In many cases, coronas can be suppressed by corona rings, toroidal devices that serve to spread the electric field over larger areas and decrease the field gradient to below the corona threshold.
Mechanism
Corona discharge occurs when the electric field is strong enough to create a chain reaction; electrons in the air collide with atoms hard enough to ionize them, creating more free electrons that ionize more atoms. The diagrams below illustrate at a microscopic scale the process which creates a corona in the air next to a pointed electrode carrying a high negative voltage with respect to ground. The process is:
A neutral atom or molecule, in a region of the strong electric field (such as the high potential gradient near the curved electrode), is ionized by a natural environmental event (for example, being struck by an ultraviolet photon or cosmic ray particle), to create a positive ion and a free electron.
The electric field accelerates these oppositely charged particles in opposite directions, separating them, preventing their recombination, and imparting kinetic energy to each of them.
The electron has a much higher charge/mass ratio and so is accelerated to a higher velocity than the positive ion. It gains enough energy from the field that when it strikes another atom it ionizes it, knocking out another electron, and creating another positive ion. These electrons are accelerated and collide with other atoms, creating further electron/positive-ion pairs, and these electrons collide with more atoms, in a chain reaction process called an electron avalanche. Both positive and negative coronas rely on electron avalanches. In a positive corona, all the electrons are attracted inward toward the nearby positive electrode and the ions are repelled outwards. In a negative corona, the ions are attracted inward and the electrons are repelled outwards.
The glow of the corona is caused by electrons recombining with positive ions to form neutral atoms. When the electron falls back to its original energy level, it releases a photon of light. The photons serve to ionize other atoms, maintaining the creation of electron avalanches.
At a certain distance from the electrode, the electric field becomes low enough that it no longer imparts enough energy to the electrons to ionize atoms when they collide. This is the outer edge of the corona. Outside this, the ions move through the air without creating new ions. The outward moving ions are attracted to the opposite electrode and eventually reach it and combine with electrons from the electrode to become neutral atoms again, completing the circuit.
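The exponential growth of the avalanche in step 3 above is commonly modeled with Townsend's first ionization coefficient α, with a seed population n0 growing as n(d) = n0·exp(α·d) over a distance d. The sketch below is a minimal illustration of that model; the α value is an assumed, purely illustrative number (in reality α depends strongly on the ratio of field strength to gas pressure).

```python
import math

# Minimal sketch of Townsend's first-ionization model for an electron
# avalanche: each electron causes alpha ionizations per unit length, so a
# seed population grows as n(d) = n0 * exp(alpha * d). The alpha below is
# illustrative only; it really varies steeply with E/p (field over pressure).

def avalanche_size(n0: float, alpha_per_cm: float, d_cm: float) -> float:
    """Electrons in the avalanche after travelling d_cm through the gas."""
    return n0 * math.exp(alpha_per_cm * d_cm)

for d_cm in (0.1, 0.5, 1.0):
    n = avalanche_size(n0=1.0, alpha_per_cm=15.0, d_cm=d_cm)
    print(f"after {d_cm:.1f} cm: ~{n:,.0f} electrons")
```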
Thermodynamically, a corona is a very nonequilibrium process, creating a non-thermal plasma. The avalanche mechanism does not release enough energy to heat and ionize the gas in the corona region as a whole, as occurs in an electric arc or spark. Only a small number of gas molecules take part in the electron avalanches and are ionized; these carry energies of only 1–3 eV, and the rest of the surrounding gas remains close to ambient temperature.
The onset voltage of corona or corona inception voltage (CIV) can be found with Peek's law (1929), formulated from empirical observations. Later papers derived more accurate formulas.
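One widely quoted form of Peek's formula, for a smooth round wire in air, gives the visual corona onset gradient as g_v = g0·m·δ·(1 + 0.301/√(δr)), where r is the wire radius in cm, δ the relative air density, and m a surface-roughness factor. The sketch below implements that form; the constants vary between sources and electrode geometries, so treat it as illustrative rather than as the exact published law.

```python
import math

# Hedged sketch of one common form of Peek's empirical onset-gradient formula
# for a smooth cylindrical wire in air:
#     g_v = g0 * m * delta * (1 + 0.301 / sqrt(delta * r))   [kV/cm]
# r: wire radius in cm; delta: relative air density; m: roughness factor.
# g0 ~ 30 kV/cm and the 0.301 constant differ between sources and geometries.

def peek_onset_gradient_kv_per_cm(radius_cm: float, delta: float = 1.0,
                                  m: float = 1.0, g0: float = 30.0) -> float:
    return g0 * m * delta * (1 + 0.301 / math.sqrt(delta * radius_cm))

for r in (0.1, 0.5, 1.0):  # wire radii in cm
    print(f"r = {r:.1f} cm -> onset gradient ~ {peek_onset_gradient_kv_per_cm(r):.1f} kV/cm")
```

As expected, thinner wires need a higher surface gradient before corona starts, which is why the onset field exceeds the bulk 30 kV/cm breakdown figure for small conductors.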
Positive coronas
Properties
A positive corona is manifested as a uniform plasma across the length of a conductor. It can often be seen glowing blue/white, though many of the emissions are in the ultraviolet. The uniformity of the plasma is caused by the homogeneous source of secondary avalanche electrons described in the mechanism section, below. With the same geometry and voltages, it appears a little smaller than the corresponding negative corona, owing to the lack of a non-ionising plasma region between the inner and outer regions.
A positive corona has a much lower density of free electrons compared to a negative corona; perhaps a thousandth of the electron density, and a hundredth of the total number of electrons.
However, the electrons in a positive corona are concentrated close to the surface of the curved conductor, in a region of the high potential gradient (and therefore the electrons have high energy), whereas in a negative corona many of the electrons are in the outer, lower-field areas. Therefore, if electrons are to be used in an application which requires high activation energy, positive coronas may support a greater reaction constant than corresponding negative coronas; though the total number of electrons may be lower, the number of very high energy electrons may be higher.
Coronas are efficient producers of ozone in the air. A positive corona generates much less ozone than the corresponding negative corona, as the reactions which produce ozone are relatively low-energy. Therefore, the greater number of electrons of a negative corona leads to increased production.
Beyond the plasma, in the unipolar region, the flow is of low-energy positive ions toward the flat electrode.
Mechanism
As with a negative corona, a positive corona is initiated by an exogenous ionization event in a region of a high potential gradient. The electrons resulting from the ionization are attracted toward the curved electrode, and the positive ions repelled from it. By undergoing inelastic collisions closer and closer to the curved electrode, further molecules are ionized in an electron avalanche.
In a positive corona, secondary electrons, for further avalanches, are generated predominantly in the fluid itself, in the region outside the plasma or avalanche region. They are created by ionization caused by the photons emitted from that plasma in the various de-excitation processes occurring within the plasma after electron collisions, the thermal energy liberated in those collisions creating photons which are radiated into the gas. The electrons resulting from the ionization of a neutral gas molecule are then electrically attracted back toward the curved electrode, attracted into the plasma, and so begins the process of creating further avalanches inside the plasma.
Negative coronas
Properties
A negative corona is manifested in a non-uniform corona, varying according to the surface features and irregularities of the curved conductor. It often appears as tufts of the corona at sharp edges, the number of tufts altering with the strength of the field. The form of negative coronas is a result of its source of secondary avalanche electrons (see below). It appears a little larger than the corresponding positive corona, as electrons are allowed to drift out of the ionizing region, and so the plasma continues some distance beyond it. The total number of electrons and electron density is much greater than in the corresponding positive corona. However, they are of predominantly lower energy, owing to being in a region of lower potential gradient. Therefore, whilst for many reactions, the increased electron density will increase the reaction rate, the lower energy of the electrons will mean that reactions which require higher electron energy may take place at a lower rate.
Mechanism
Negative coronas are more complex than positive coronas in construction. As with positive coronas, the establishing of a corona begins with an exogenous ionization event generating a primary electron, followed by an electron avalanche.
Electrons ionized from the neutral gas are not useful in sustaining the negative corona process by generating secondary electrons for further avalanches, as the general movement of electrons in a negative corona is outward from the curved electrode. For negative corona, instead, the dominant process generating secondary electrons is the photoelectric effect, from the surface of the electrode itself. The work function of the electrons (the energy required to liberate the electrons from the surface) is considerably lower than the ionization energy of air at standard temperatures and pressures, making it a more liberal source of secondary electrons under these conditions. Again, the source of energy for the electron-liberation is a high-energy photon from an atom within the plasma body relaxing after excitation from an earlier collision. The use of ionized neutral gas as a source of ionization is further diminished in a negative corona by the high-concentration of positive ions clustering around the curved electrode.
Under other conditions, the collision of the positive species with the curved electrode can also cause electron liberation.
The difference, then, between positive and negative coronas, in the matter of the generation of secondary electron avalanches, is that in a positive corona they are generated by the gas surrounding the plasma region, the new secondary electrons travelling inward, whereas in a negative corona they are generated by the curved electrode itself, the new secondary electrons travelling outward.
A further feature of the structure of negative coronas is that as the electrons drift outwards, they encounter neutral molecules and, with electronegative molecules (such as oxygen and water vapor), combine to produce negative ions. These negative ions are then attracted to the positive uncurved electrode, completing the 'circuit'.
Electrical wind
Ionized gases produced in a corona discharge are accelerated by the electric field, producing a movement of gas or electrical wind. The air movement associated with a discharge current of a few hundred microamperes can blow out a small candle flame within about 1 cm of a discharge point. A pinwheel, with radial metal spokes and pointed tips bent to point along the circumference of a circle, can be made to rotate if energized by a corona discharge; the rotation is due to the differential electric attraction between the metal spokes and the space charge shield region that surrounds the tips.
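A standard one-dimensional drift estimate from the electrohydrodynamics literature (an assumption here, not a formula given in this article) relates the wind's thrust to the corona current I, the electrode gap d, and the ion mobility μ as F = I·d/μ:

```python
# Back-of-envelope estimate of corona-wind thrust in a one-dimensional drift
# model, F = I * d / mu. The mobility value and the geometry are assumptions
# chosen to match the "few hundred microamperes" scale mentioned in the text.

I_amps = 200e-6   # corona current: a few hundred microamperes
d_m = 0.01        # electrode gap: 1 cm
mu = 2e-4         # small-ion mobility in air, m^2/(V*s), typical order

thrust_newtons = I_amps * d_m / mu
print(f"estimated thrust: {thrust_newtons * 1e3:.0f} mN")  # ~10 mN
```

A force of this order, delivered as a jet of moving air, is plausibly enough to disturb or blow out a nearby candle flame, consistent with the observation above.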
See also
Alternating current
Atmospheric pressure chemical ionization
Crookes tube
Dielectric barrier discharge
Kirlian photography
St. Elmo's fire
References
Further reading
External links
Additional information about corona, its effects, characteristics and preventative measures
Electrical breakdown
Plasma phenomena
| Corona discharge | [
"Physics"
] | 3,442 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
318,370 | https://en.wikipedia.org/wiki/RuBisCO | Ribulose-1,5-bisphosphate carboxylase/oxygenase, commonly known by the abbreviations RuBisCo, rubisco, RuBPCase, or RuBPco, is an enzyme (EC 4.1.1.39) involved in the light-independent (or "dark") part of photosynthesis, including the carbon fixation by which atmospheric carbon dioxide is converted by plants and other photosynthetic organisms to energy-rich molecules such as glucose. It emerged approximately four billion years ago in primordial metabolism prior to the presence of oxygen on Earth. It is probably the most abundant enzyme on Earth. In chemical terms, it catalyzes the carboxylation of ribulose-1,5-bisphosphate (also known as RuBP).
Alternative carbon fixation pathways
RuBisCO is important biologically because it catalyzes the primary chemical reaction by which inorganic carbon enters the biosphere. While many autotrophic bacteria and archaea fix carbon via the reductive acetyl CoA pathway, the 3-hydroxypropionate cycle, or the reverse Krebs cycle, these pathways are relatively small contributors to global carbon fixation compared to that catalyzed by RuBisCO. Phosphoenolpyruvate carboxylase, unlike RuBisCO, only temporarily fixes carbon. Reflecting its importance, RuBisCO is the most abundant protein in leaves, accounting for 50% of soluble leaf protein in C3 plants (20–30% of total leaf nitrogen) and 30% of soluble leaf protein in C4 plants (5–9% of total leaf nitrogen). Given its important role in the biosphere, the genetic engineering of RuBisCO in crops is of continuing interest (see below).
Structure
In plants, algae, cyanobacteria, and phototrophic and chemoautotrophic Pseudomonadota (formerly proteobacteria), the enzyme usually consists of two types of protein subunit, called the large chain (L, about 55,000 Da) and the small chain (S, about 13,000 Da). The large-chain gene (rbcL) is encoded by the chloroplast DNA in plants. There are typically several related small-chain genes in the nucleus of plant cells, and the small chains are imported to the stromal compartment of chloroplasts from the cytosol by crossing the outer chloroplast membrane. The enzymatically active substrate (ribulose 1,5-bisphosphate) binding sites are located in the large chains that form dimers in which amino acids from each large chain contribute to the binding sites. A total of eight large chains (= four dimers) and eight small chains assemble into a larger complex of about 540,000 Da. In some Pseudomonadota and dinoflagellates, enzymes consisting of only large subunits have been found.
Magnesium ions (Mg2+) are needed for enzymatic activity. Correct positioning of Mg2+ in the active site of the enzyme involves addition of an "activating" carbon dioxide molecule (CO2) to a lysine in the active site (forming a carbamate). Mg2+ operates by driving deprotonation of the Lys210 residue, causing the Lys residue to rotate by 120 degrees to the trans conformer, decreasing the distance between the nitrogen of Lys and the carbon of CO2. The close proximity allows for the formation of a covalent bond, resulting in the carbamate. Mg2+ is first enabled to bind to the active site by the rotation of His335 to an alternate conformation. Mg2+ is then coordinated by the His residues of the active site (His300, His302, His335), and is partially neutralized by the coordination of three water molecules and their conversion to −OH. This coordination results in an unstable complex, but produces a favorable environment for the binding of Mg2+. Formation of the carbamate is favored by an alkaline pH. The pH and the concentration of magnesium ions in the fluid compartment (in plants, the stroma of the chloroplast) increase in the light. The role of changing pH and magnesium ion levels in the regulation of RuBisCO enzyme activity is discussed below. Once the carbamate is formed, His335 finalizes the activation by returning to its initial position through thermal fluctuation.
Enzymatic activity
RuBisCO is one of many enzymes in the Calvin cycle. When RuBisCO facilitates the attack of CO2 at the C2 carbon of RuBP and subsequent bond cleavage between the C3 and C2 carbons, two molecules of glycerate-3-phosphate are formed. The conversion involves these steps: enolisation, carboxylation, hydration, C-C bond cleavage, and protonation.
Substrates
Substrates for RuBisCO are ribulose-1,5-bisphosphate and carbon dioxide (distinct from the "activating" carbon dioxide). RuBisCO also catalyses a reaction of ribulose-1,5-bisphosphate and molecular oxygen (O2) instead of carbon dioxide (CO2).
Discriminating between the substrates CO2 and O2 is attributed to the differing interactions of the substrates' quadrupole moments and a high electrostatic field gradient. This gradient is established by the dimer form of the minimally active RuBisCO, which with its two components provides a combination of oppositely charged domains required for the enzyme's interaction with O2 and CO2. These conditions help explain the low turnover rate found in RuBisCO: in order to increase the strength of the electric field necessary for sufficient interaction with the substrates' quadrupole moments, the C- and N-terminal segments of the enzyme must be closed off, allowing the active site to be isolated from the solvent and lowering the dielectric constant. This isolation has a significant entropic cost and results in the poor turnover rate.
Binding RuBP
Carbamylation of the ε-amino group of Lys210 is stabilized by coordination with the Mg2+. This reaction involves binding of the carboxylate termini of Asp203 and Glu204 to the Mg2+ ion. The substrate RuBP binds, displacing two of the three aquo ligands.
Enolisation
Enolisation of RuBP is the conversion of the keto tautomer of RuBP to an enediol(ate). Enolisation is initiated by deprotonation at C3. The enzyme base in this step has been debated, but the steric constraints observed in crystal structures have made Lys210 the most likely candidate. Specifically, the carbamate oxygen on Lys210 that is not coordinated with the Mg ion deprotonates the C3 carbon of RuBP to form a 2,3-enediolate.
Carboxylation
Carboxylation of the 2,3-enediolate results in the intermediate 3-keto-2-carboxyarabinitol-1,5-bisphosphate, and Lys334 is positioned to facilitate the addition of the CO2 substrate as it replaces the third Mg2+-coordinated water molecule and adds directly to the enediol. No Michaelis complex is formed in this process. Hydration of this ketone results in an additional hydroxy group on C3, forming a gem-diol intermediate. Carboxylation and hydration have been proposed as either a single concerted step or as two sequential steps. A concerted mechanism is supported by the proximity of the water molecule to C3 of RuBP in multiple crystal structures. Within the spinach structure, other residues are well placed to aid in the hydration step, as they are within hydrogen-bonding distance of the water molecule.
C-C bond cleavage
The gem-diol intermediate cleaves at the C2-C3 bond to form one molecule of glycerate-3-phosphate and a negatively charged carboxylate. Stereospecific protonation of C2 of this carbanion results in another molecule of glycerate-3-phosphate. This step is thought to be facilitated by Lys175 or potentially the carbamylated Lys210.
Products
When carbon dioxide is the substrate, the product of the carboxylase reaction is an unstable six-carbon phosphorylated intermediate known as 3-keto-2-carboxyarabinitol-1,5-bisphosphate, which decays rapidly into two molecules of glycerate-3-phosphate. This product, also known as 3-phosphoglycerate, can be used to produce larger molecules such as glucose.
When molecular oxygen is the substrate, the products of the oxygenase reaction are phosphoglycolate and 3-phosphoglycerate. Phosphoglycolate is recycled through a sequence of reactions called photorespiration, which involves enzymes and cytochromes located in the mitochondria and peroxisomes (this is a case of metabolite repair). In this process, two molecules of phosphoglycolate are converted to one molecule of carbon dioxide and one molecule of 3-phosphoglycerate, which can reenter the Calvin cycle. Some of the phosphoglycolate entering this pathway can be retained by plants to produce other molecules such as glycine. At ambient levels of carbon dioxide and oxygen, the ratio of carboxylation to oxygenation is about 4 to 1, which results in a net fixation of only 3.5 molecules of carbon dioxide per four carboxylations. Thus, the inability of the enzyme to prevent the reaction with oxygen greatly reduces the photosynthetic capacity of many plants. Some plants, many algae, and photosynthetic bacteria have overcome this limitation by devising means to increase the concentration of carbon dioxide around the enzyme, including C4 carbon fixation, crassulacean acid metabolism, and the use of pyrenoids.
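The net figure of 3.5 follows directly from this stoichiometry: each oxygenation yields one phosphoglycolate, and recycling two phosphoglycolates releases one CO2, so each oxygenation costs half a fixed carbon. A minimal arithmetic check in Python:

```python
# Arithmetic check of the "net fixation of 3.5" figure. Per the text:
# each carboxylation fixes 1 CO2; each oxygenation yields 1 phosphoglycolate,
# and 2 phosphoglycolates are recycled to 1 released CO2 + 1 3-phosphoglycerate,
# i.e. each oxygenation effectively releases 0.5 CO2.

carboxylations = 4          # ~4:1 carboxylation:oxygenation at ambient gases
oxygenations = 1
co2_lost_per_oxygenation = 0.5

net_co2_fixed = carboxylations - oxygenations * co2_lost_per_oxygenation
print(net_co2_fixed)  # -> 3.5
```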
Rubisco side activities can lead to useless or inhibitory by-products. Important inhibitory by-products include xylulose 1,5-bisphosphate and glycero-2,3-pentodiulose 1,5-bisphosphate, both caused by "misfires" halfway through the enolisation-carboxylation reaction. In higher plants, this process causes RuBisCO self-inhibition, which can be triggered by saturating CO2 and RuBP concentrations and is relieved by Rubisco activase (see below).
Rate of enzymatic activity
Some enzymes can carry out thousands of chemical reactions each second. However, RuBisCO is slow, fixing only 3–10 carbon dioxide molecules each second per molecule of enzyme. The reaction catalyzed by RuBisCO is, thus, the primary rate-limiting factor of the Calvin cycle during the day. Nevertheless, under most conditions, and when light is not otherwise limiting photosynthesis, the speed of RuBisCO responds positively to increasing carbon dioxide concentration.
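A rough order-of-magnitude estimate shows how sheer abundance offsets this slow turnover. The ~540 kDa holoenzyme mass comes from the Structure section and the 3 per-second rate from this paragraph; the per-milligram framing is an illustrative assumption.

```python
# Rough estimate: CO2 flux through 1 mg of RuBisCO holoenzyme.
AVOGADRO = 6.022e23
KCAT_PER_S = 3.0              # low end of the 3-10 CO2/s per enzyme quoted above
HOLOENZYME_DALTONS = 540_000  # ~540 kDa complex of 8 large + 8 small chains

grams = 1e-3                                   # 1 mg of enzyme
molecules = grams / HOLOENZYME_DALTONS * AVOGADRO
co2_per_second = molecules * KCAT_PER_S
print(f"~{co2_per_second:.1e} CO2 molecules fixed per second per mg")  # ~3e15
```

Even at only a few catalytic events per second, the enormous copy number of the enzyme (up to half of soluble leaf protein) sustains a large total carbon flux.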
RuBisCO is usually only active during the day, as ribulose 1,5-bisphosphate is not regenerated in the dark. This is due to the regulation of several other enzymes in the Calvin cycle. In addition, the activity of RuBisCO is coordinated with that of the other enzymes of the Calvin cycle in several other ways:
By ions
Upon illumination of the chloroplasts, the pH of the stroma rises from 7.0 to 8.0 because of the proton (hydrogen ion, H+) gradient created across the thylakoid membrane. The movement of protons into thylakoids is driven by light and is fundamental to ATP synthesis in chloroplasts (further reading: Photosynthetic reaction centre; Light-dependent reactions). To balance the ion potential across the membrane, magnesium ions (Mg2+) move out of the thylakoids in response, increasing the concentration of magnesium in the stroma of the chloroplasts. RuBisCO has a high optimal pH (can be >9.0, depending on the magnesium ion concentration) and thus becomes "activated" by the introduction of carbon dioxide and magnesium to the active sites as described above.
By RuBisCO activase
In plants and some algae, another enzyme, RuBisCO activase (Rca), is required to allow the rapid formation of the critical carbamate in the active site of RuBisCO. This is required because ribulose 1,5-bisphosphate (RuBP) binds more strongly to the active sites of RuBisCO when excess carbamate is present, preventing processes from moving forward. In the light, RuBisCO activase promotes the release of the inhibitory (or, in some views, storage) RuBP from the catalytic sites of RuBisCO. Activase is also required in some plants (e.g., tobacco and many beans) because, in darkness, RuBisCO is inhibited (or protected from hydrolysis) by a competitive inhibitor synthesized by these plants, a substrate analog 2-carboxy-D-arabitinol 1-phosphate (CA1P). CA1P binds tightly to the active site of carbamylated RuBisCO and inhibits catalytic activity to an even greater extent. CA1P has also been shown to keep RuBisCO in a conformation that is protected from proteolysis. In the light, RuBisCO activase also promotes the release of CA1P from the catalytic sites. After the CA1P is released from RuBisCO, it is rapidly converted to a non-inhibitory form by a light-activated CA1P-phosphatase. Even without these strong inhibitors, once every several hundred reactions, the normal reactions with carbon dioxide or oxygen are not completed; other inhibitory substrate analogs are still formed in the active site. Once again, RuBisCO activase can promote the release of these analogs from the catalytic sites and maintain the enzyme in a catalytically active form. However, at high temperatures, RuBisCO activase aggregates and can no longer activate RuBisCO. This contributes to the decreased carboxylating capacity observed during heat stress.
By activase
The removal of the inhibitory RuBP, CA1P, and the other inhibitory substrate analogs by activase requires the consumption of ATP. This reaction is inhibited by the presence of ADP, and, thus, activase activity depends on the ratio of these compounds in the chloroplast stroma. Furthermore, in most plants, the sensitivity of activase to the ratio of ATP/ADP is modified by the stromal reduction/oxidation (redox) state through another small regulatory protein, thioredoxin. In this manner, the activity of activase and the activation state of RuBisCO can be modulated in response to light intensity and, thus, the rate of formation of the ribulose 1,5-bisphosphate substrate.
By phosphate
In cyanobacteria, inorganic phosphate (Pi) also participates in the co-ordinated regulation of photosynthesis: Pi binds to the RuBisCO active site and to another site on the large chain where it can influence transitions between activated and less active conformations of the enzyme. In this way, activation of bacterial RuBisCO might be particularly sensitive to Pi levels, which might cause it to act in a similar way to how RuBisCO activase functions in higher plants.
By carbon dioxide
Since carbon dioxide and oxygen compete at the active site of RuBisCO, carbon fixation by RuBisCO can be enhanced by increasing the carbon dioxide level in the compartment containing RuBisCO (the chloroplast stroma). Several times during the evolution of plants, mechanisms have evolved for increasing the level of carbon dioxide in the stroma (see carbon fixation). The use of oxygen as a substrate appears to be a puzzling process, since it seems to throw away captured energy. However, it may be a mechanism for preventing carbohydrate overload during periods of high light flux. This weakness in the enzyme is the cause of photorespiration, such that healthy leaves in bright light may have zero net carbon fixation when the ratio of O2 to CO2 available to RuBisCO shifts too far towards oxygen. This phenomenon is primarily temperature-dependent: high temperatures can decrease the concentration of CO2 dissolved in the moisture of leaf tissues. It is also related to water stress: since plant leaves are evaporatively cooled, limited water causes high leaf temperatures. C4 plants use the enzyme PEP carboxylase initially, which has a higher affinity for CO2. The process first makes a four-carbon intermediate compound (hence the name C4 plants), which is shuttled to a site of photosynthesis and then decarboxylated, releasing CO2 to boost the local CO2 concentration.
Crassulacean acid metabolism (CAM) plants keep their stomata closed during the day, which conserves water but prevents the light-independent reactions (a.k.a. the Calvin cycle) from taking place, since these reactions require CO2 to pass by gas exchange through these openings. Evaporation through the upper side of a leaf is prevented by a layer of wax.
Genetic engineering
Since RuBisCO is often rate-limiting for photosynthesis in plants, it may be possible to improve photosynthetic efficiency by modifying RuBisCO genes in plants to increase catalytic activity and/or decrease oxygenation rates. This could improve sequestration of CO2 and be a strategy to increase crop yields. Approaches under investigation include transferring RuBisCO genes from one organism into another organism, engineering Rubisco activase from thermophilic cyanobacteria into temperature-sensitive plants, increasing the level of expression of RuBisCO subunits, expressing RuBisCO small chains from the chloroplast DNA, and altering RuBisCO genes to increase specificity for carbon dioxide or otherwise increase the rate of carbon fixation.
Mutagenesis in plants
In general, site-directed mutagenesis of RuBisCO has been mostly unsuccessful, though mutated forms of the protein have been achieved in tobacco plants with subunit C4 species, and a RuBisCO with more C4-like kinetic characteristics has been attained in rice via nuclear transformation. Robust and reliable engineering for yield of RuBisCO and other enzymes in the C3 cycle was shown to be possible, and it was first achieved in 2019 through a synthetic biology approach.
One avenue is to introduce RuBisCO variants with naturally high specificity values, such as the ones from the red alga Galdieria partita, into plants. This may improve the photosynthetic efficiency of crop plants, although possible negative impacts have yet to be studied. Advances in this area include the replacement of the tobacco enzyme with that of the purple photosynthetic bacterium Rhodospirillum rubrum. In 2014, two transplastomic tobacco lines with functional RuBisCO from the cyanobacterium Synechococcus elongatus PCC7942 (Se7942) were created by replacing the RuBisCO with the large and small subunit genes of the Se7942 enzyme, in combination with either the corresponding Se7942 assembly chaperone, RbcX, or an internal carboxysomal protein, CcmM35. Both mutants had increased CO2 fixation rates when measured as carbon molecules per RuBisCO. However, the mutant plants grew more slowly than wild-type plants.
A recent theory explores the trade-off between the relative specificity (i.e., the ability to favour CO2 fixation over O2 incorporation, which leads to the energy-wasteful process of photorespiration) and the rate at which product is formed. The authors conclude that RuBisCO may actually have evolved to reach a point of 'near-perfection' in many plants (with widely varying substrate availabilities and environmental conditions), reaching a compromise between specificity and reaction rate. It has also been suggested that the oxygenase reaction of RuBisCO prevents CO2 depletion near its active sites and provides for the maintenance of the chloroplast redox state.
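The specificity in this trade-off is conventionally quantified by the specificity factor S = (Vc/Kc)/(Vo/Ko), and the instantaneous carboxylation-to-oxygenation ratio is then S·[CO2]/[O2]. The sketch below uses illustrative values in the range reported for C3 plants; the numbers are assumptions, not figures from this article.

```python
# Sketch of the standard specificity-factor relation:
#     vc / vo = S * [CO2] / [O2]
# S and the dissolved gas concentrations below are illustrative assumptions
# of the order reported for C3 plants, not values taken from this article.

S = 90.0        # specificity factor (dimensionless)
co2_uM = 8.0    # dissolved CO2 in the stroma, micromolar (assumed)
o2_uM = 250.0   # dissolved O2, micromolar (assumed)

carbox_to_oxygenation = S * co2_uM / o2_uM
print(f"carboxylation : oxygenation ~ {carbox_to_oxygenation:.1f} : 1")  # ~2.9 : 1
```

This is the same order as the roughly 4:1 ratio quoted earlier for ambient conditions.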
Since photosynthesis is the single most effective natural regulator of carbon dioxide in the Earth's atmosphere, a biochemical model of RuBisCO reaction is used as the core module of climate change models. Thus, a correct model of this reaction is essential to the basic understanding of the relations and interactions of environmental models.
Expression in bacterial hosts
There currently are very few effective methods for expressing functional plant Rubisco in bacterial hosts for genetic manipulation studies. This is largely due to Rubisco's requirement of complex cellular machinery for its biogenesis and metabolic maintenance including the nuclear-encoded RbcS subunits, which are typically imported into chloroplasts as unfolded proteins. Furthermore, sufficient expression and interaction with Rubisco activase are major challenges as well. One successful method for expression of Rubisco in E. coli involves the co-expression of multiple chloroplast chaperones, though this has only been shown for Arabidopsis thaliana Rubisco.
Depletion in proteomic studies
Due to its high abundance in plants (generally 40% of the total protein content), RuBisCO often impedes analysis of important signaling proteins such as transcription factors, kinases, and regulatory proteins found in lower abundance (10-100 molecules per cell) within plants. For example, using mass spectrometry on plant protein mixtures would result in multiple intense RuBisCO subunit peaks that interfere and hide those of other proteins.
Recently, one efficient method for precipitating out RuBisCO involves the usage of protamine sulfate solution. Other existing methods for depleting RuBisCO and studying lower abundance proteins include fractionation techniques with calcium and phytate, gel electrophoresis with polyethylene glycol, affinity chromatography, and aggregation using DTT, though these methods are more time-consuming and less efficient when compared to protamine sulfate precipitation.
Evolution of RuBisCO
Phylogenetic studies
The chloroplast gene rbcL, which codes for the large subunit of RuBisCO has been widely used as an appropriate locus for analysis of phylogenetics in plant taxonomy.
Origin
Non-carbon-fixing proteins similar to RuBisCO, termed RuBisCO-like proteins (RLPs), are also found in the wild in organisms as common as Bacillus subtilis. This bacterium has an rbcL-like protein with a 2,3-diketo-5-methylthiopentyl-1-phosphate enolase function, part of the methionine salvage pathway. Later identifications found functionally divergent examples dispersed all over bacteria and archaea, as well as transitional enzymes performing both RLP-type enolase and RuBisCO functions. It is now believed that the current RuBisCO evolved from a dimeric RLP ancestor, acquiring its carboxylase function first before further oligomerizing and then recruiting the small subunit to form the familiar modern enzyme. The small subunit probably first evolved in anaerobic and thermophilic organisms, where it enabled RuBisCO to catalyze its reaction at higher temperatures. In addition to its effect on stabilizing catalysis, it enabled the evolution of higher specificities for CO2 over O2 by modulating the effect that substitutions within RuBisCO have on enzymatic function. Substitutions that have no effect without the small subunit suddenly become beneficial when it is bound. Furthermore, the small subunit enabled the accumulation of substitutions that are only tolerated in its presence. Accumulation of such substitutions leads to a strict dependence on the small subunit, which is observed in extant RuBisCOs that bind a small subunit.
C4
With the mass convergent evolution of the C4 fixation pathway in a diversity of plant lineages, ancestral C3-type RuBisCO evolved to have faster turnover of CO2 in exchange for lower specificity, as a result of the greater localization of CO2 from the mesophyll cells into the bundle sheath cells. This was achieved through enhancement of the conformational flexibility of the "open-closed" transition in the Calvin cycle. Laboratory-based phylogenetic studies have shown that this evolution was constrained by the trade-off between stability and activity brought about by the series of mutations necessary for C4 RuBisCO. Moreover, in order to sustain the destabilizing mutations, the evolution to C4 RuBisCO was preceded by a period in which mutations granted the enzyme increased stability, establishing a buffer to sustain and maintain the mutations required for C4 RuBisCO. To assist with this buffering process, the newly evolved enzyme was found to have further developed a series of stabilizing mutations. While RuBisCO has always been accumulating new mutations, most of the mutations that have survived have had no significant effects on protein stability. The destabilizing C4 mutations on RuBisCO have been sustained by environmental pressures such as low CO2 concentrations, requiring a sacrifice of stability for new adaptive functions.
History of the term
The term "RuBisCO" was coined humorously in 1979, by David Eisenberg at a seminar honouring the retirement of the early, prominent RuBisCO researcher, Sam Wildman, and also alluded to the snack food trade name "Nabisco" in reference to Wildman's attempts to create an edible protein supplement from tobacco leaves.
The capitalization of the name has long been debated. It can be capitalized for each letter of the full name (Ribulose-1,5-bisphosphate carboxylase/oxygenase), but it has also been argued that it should all be in lower case (rubisco), similar to other terms like scuba or laser.
See also
Carbon cycle
Photorespiration
Pyrenoid
C3 carbon fixation
C4 carbon fixation
Crassulacean acid metabolism/CAM photosynthesis
Carboxysome
References
Further reading
External links
Photosynthesis
EC 4.1.1 | RuBisCO | [
"Chemistry",
"Biology"
] | 5,409 | [
"Biochemistry",
"Photosynthesis"
] |
318,374 | https://en.wikipedia.org/wiki/List%20of%20experiments | The following is a list of historically important scientific experiments and observations demonstrating something of great scientific interest, typically in an elegant or clever manner.
Astronomy
Ole Rømer makes the first quantitative estimate of the speed of light in 1676 by timing the motions of Jupiter's satellite Io with a telescope
Arno Penzias and Robert Wilson detect the cosmic microwave background radiation, giving support to the theory of the Big Bang (1964)
Kerim Kerimov launches Kosmos 186 and Kosmos 188 as experiments on automatic docking eventually leading to the development of space stations (1967)
The Supernova Cosmology Project and the High-Z Supernova Search Team discover, by observing Type Ia supernovae, that the expansion of the Universe is accelerating (1998)
Galileo Galilei uses a telescope to observe that the moons of Jupiter appear to circle Jupiter. This evidence supports the heliocentric model, and weakens the geocentric model of the cosmos (1609)
Biology
Robert Hooke, using a microscope, observes cells (1665).
Anton van Leeuwenhoek discovers microorganisms (1674–1676).
James Lind publishes 'A Treatise of the Scurvy', which describes a controlled shipboard experiment using two identical populations but with only one variable, the consumption of citrus fruit (1753).
Edward Jenner tests his hypothesis for the protective action of mild cowpox infection for smallpox, the first vaccine (1796).
Gregor Mendel's experiments with the garden pea led him to surmise many of the fundamental laws of genetics (dominant vs recessive genes, the 1–2–1 ratio, see Mendelian inheritance) (1856–1863).
Charles Darwin demonstrates evolution by natural selection using many examples (1859).
Louis Pasteur uses S-shaped flasks to prevent spores from contaminating broth. This disproves the theory of spontaneous generation (1861), extending the rancid meat experiment of Francesco Redi (1668) to the microbial scale.
Charles Darwin and his son Francis, using dark-grown oat seedlings, discover the stimulus for phototropism is detected at the tip of the shoot (the coleoptile tip), but the bending takes place in the region below the tip (1880).
Emil von Behring and Kitasato Shibasaburō demonstrate passive immunity, protection of animals from infection by injection of immune serum (1890).
Thomas Hunt Morgan identifies a sex chromosome linked gene in Drosophila melanogaster (1910) and his student Alfred Sturtevant develops the first genetic map (1913).
Alexander Fleming demonstrates that the zone of inhibition around a growth of penicillin mould on a culture dish of bacteria is caused by a diffusible substance secreted by the mould (1928).
Frederick Griffith demonstrates (Griffith's experiment) that living cells can be transformed via a transforming principle, later discovered to be DNA (1928).
Karl von Frisch decodes the waggle dance honey bees use to communicate the location of flowers (1940).
George Wells Beadle and Edward Lawrie Tatum moot the "one gene-one enzyme hypothesis" based on induced mutations in bread mold Neurospora crassa (1941).
Luria–Delbrück experiment demonstrates that in bacteria, beneficial mutations arise in the absence of selection, rather than being a response to selection (1943).
Barbara McClintock breeds maize plants for color, which leads to the discovery of transposable elements or jumping genes (1944).
Linus Pauling and colleagues show in "Sickle Cell Anemia, a Molecular Disease" that a human genetic disease, sickle cell anemia, is caused by a molecular change in a specific protein, hemoglobin (1949).
Hershey–Chase experiment (by Alfred Hershey and Martha Chase) uses bacteriophage to prove that DNA is the hereditary material (1952).
Meselson–Stahl experiment proves that DNA replication is semiconservative (1958).
Crick, Brenner et al. experiment using frameshift mutations to support the triplet nature of the genetic code (1961).
Nirenberg and Matthaei experiment demonstrating in vitro protein synthesis using synthetic RNA as a substitute for messenger RNA (1961).
John Gurdon clones an animal, a frog tadpole, from an egg cell using the nucleus from an intestinal cell (1962).
Roger W. Sperry shows the potential independence of the two sides of the human brain using split-brain patients (1962–1965).
Nirenberg and Leder experiment, binding tRNA to ribosomes with synthetic RNA to decipher the genetic code (1964).
Howard Temin and David Baltimore independently demonstrate the role of reverse transcriptase in tumor viruses (1970).
Herbert Boyer and Stanley Cohen selectively clone genes in bacteria, using bacterial plasmids cut by specific endonucleases (1973).
Mary-Dell Chilton shows that crown gall tumors of plants are caused by the transfer of a small piece of DNA from the bacterium Agrobacterium tumefaciens into the host plant, where it becomes part of its genome (1977).
Napoli, Lemieux and Jorgensen discover the principle of RNA interference (1990).
Chemistry
Robert Boyle uses an air pump to determine the inverse relationship between the pressure and volume of a gas. This relationship came to be known as Boyle's law (1660–1662).
Joseph Priestley suspends a bowl of water above a beer vat at a brewery and synthesizes carbonated water (1767).
Antoine Lavoisier determines that oxygen combines with materials upon combustion, thus disproving phlogiston theory (1783).
Antoine Lavoisier determines that chemical reactions in a closed container do not alter total mass. From these observations he establishes the law of conservation of mass (1789).
Benjamin Thompson, Count Rumford demonstrates that the heat developed by the friction of boring cannon is nearly inexhaustible. This result was presented in opposition to caloric theory (1798).
Humphry Davy uses electrolysis to isolate elemental potassium, sodium, calcium, strontium, barium, magnesium, and chlorine (1807–1810).
Joseph Louis Gay-Lussac studies reactions among gases and determines that their volumes combine chemically in simple integer ratios (1809).
Robert Brown studies very small particles in water under the microscope and observes Brownian motion which was later named in his honor (1827).
Friedrich Wöhler synthesizes the organic compound urea using inorganic reactants, disproving the application of vitalism to chemical processes (1828).
Thomas Graham measures the rates of effusion for different gases and establishes Graham's law of effusion and diffusion (1833).
Julius Robert von Mayer and James Prescott Joule measure the heat generated by mechanical work. This establishes the principle of conservation of energy and the kinetic theory of heat (1842–1843).
Louis Pasteur separates a racemic mixture of two enantiomers by sorting individual crystals, and demonstrates their impact on the polarization of light (1849).
Anders Jonas Ångström observes the presence of hydrogen and other elements in the spectrum of the sun (1862).
François-Marie Raoult demonstrates that the decrease in the vapor pressure and freezing point of liquids caused by the addition of solutes is proportional to the number of solute molecules present. This establishes the concept of colligative properties (1878).
Svante Arrhenius studies the conductivity of salt solutions and determines that salts dissociate into ions in water (1884).
Svante Arrhenius determines the impact of temperature on reaction rates and formulates the concept of activation energy (1889).
William Ramsay and Lord Rayleigh (John Strutt) isolate the noble gases (1894–1898).
Henri Becquerel, Marie Curie, and Pierre Curie discover radioactivity and describe its properties (1896).
Mikhail Tsvet (Mikhail Semyonovich Tsvet) separates chlorophyll from other plant pigments using chromatography (1901).
Frederick Soddy and William Ramsay observe the production of helium from alpha particles during radioactive decay (1903).
Ernest Rutherford discovers that atoms have a very small positively charged nucleus in the gold-foil experiment, also known as the Geiger–Marsden experiment (1909).
Otto Hahn discovers nuclear isomerism (1921).
Albert Szent-Györgyi and Hans Adolf Krebs discover the citric acid cycle of oxidative metabolism (1935–1937).
Otto Hahn and Fritz Strassmann discover the nuclear fission of uranium (1938).
Glenn Theodore Seaborg and colleagues create and isolate five transuranium elements. They reorganize the periodic table to its current form. (1941–1950).
Miller–Urey experiment demonstrates that organic compounds can arise spontaneously from inorganic ones (1953).
Melvin Calvin and Andrew Benson delineate the path of carbon in photosynthesis using Chlorella and carbon dioxide labeled with carbon-14 (14CO2) (1945–1954).
Erwin Chargaff disproves the "tetranucleotide theory" of DNA structure and determines that the composition of double-stranded DNA follows the rule %A = %T and %G = %C (Chargaff's rule). This discovery was critical to the formulation of the Watson–Crick model of DNA structure (1950).
Neil Bartlett mixes xenon and platinum hexafluoride leading to the first synthesis of a noble gas compound, xenon hexafluoroplatinate (1962).
Robert Burns Woodward announces the total synthesis of vitamin B12 by a team he led (1973). Insights from this work led him and Roald Hoffmann to formulate the Woodward–Hoffmann rules for elucidating the stereochemistry of the products of organic reactions.
Frederick Sanger demonstrates the dideoxy, or chain-termination, method for determining DNA sequences (1977).
Kary Mullis demonstrates the polymerase chain reaction, a method for amplifying specific bits of DNA (1983).
Economics and political science
The experiments of Muhammad Yunus on the applications of microcredit and microfinance in rural Bangladesh (beginning 1976)
Robert Axelrod's prisoner's dilemma computer tournaments, later documented in The Evolution of Cooperation (1984)
Geology
Nevil Maskelyne conducts an experiment at the Scottish mountain of Schiehallion that attempts to measure the mean density of the Earth for the first time. Known as the Schiehallion experiment (1774)
Physics
Inclined plane experiment (1602–07): Galileo Galilei uses rolling balls to disprove the Aristotelian theory of motion.
Atmospheric pressure vs. altitude experiment (1648): Blaise Pascal carries a barometer up a church tower and a mountain to determine that atmospheric pressure is due to a column of air.
Magdeburg hemispheres (1654): Otto von Guericke demonstrates atmospheric pressure using a pair of hollow copper hemispheres.
Spring of air experiment (1660): Robert Boyle shows that the volume of a given amount of gas is inversely related to the pressure upon it.
Kite experiment (1700s): Benjamin Franklin, beginning in 1747, describes experiments demonstrating electrical principles in letters to Peter Collinson, later published in the book Experiments and Observations on Electricity.
Voltaic pile (1800): Alessandro Volta constructs a new source of electricity, the electrical battery.
Cavendish experiment (1798): Henry Cavendish's torsion bar experiment measures the force of gravity in a laboratory.
Double-slit experiment (c.1805): Thomas Young shows that light is a wave in his double-slit experiment.
Arago spot (1819): François Arago's observation of circular diffraction validates Augustin-Jean Fresnel's new wave theory of light, answering skeptics such as Siméon Denis Poisson.
Ørsted experiment (1820): Hans Christian Ørsted demonstrates the connection of electricity and magnetism by experiments involving a compass and electric circuits.
Discovery of electromagnetic induction (1831): Michael Faraday discovers magnetic induction in an experiment with a closed ring of soft iron, with two windings of wire.
Joule's experiment (1843): James Prescott Joule demonstrates the mechanical equivalent of heat, an important step in the development of thermodynamics.
Doppler experiment (1845): Christoph Buys Ballot has trumpeters play from a passing train; the pitch heard from the ground is higher as the train approaches and lower as it recedes, demonstrating the Doppler effect predicted by Christian Doppler.
Foucault pendulum (1851): Léon Foucault creates a pendulum to demonstrate the Coriolis effect and the rotation of the Earth.
Michelson–Morley experiment (1887): exposes weaknesses of the prevailing variant of the theory of luminiferous aether.
Hertz wireless experiments (1887): Heinrich Hertz demonstrates free space electromagnetic waves, predicted by Maxwell's equations, with a simple dipole antenna and spark gap oscillator.
Thomson's experiments with cathode rays (1897): J. J. Thomson's cathode ray tube experiments (discovers the electron and its negative charge).
Eötvös experiment (1909): Loránd Eötvös publishes the result of the second series of experiments, clearly demonstrating that inertial and gravitational mass are one and the same.
Oil-drop experiment (1909): Robert Millikan demonstrates that electric charge occurs as quanta (whole units).
Geiger–Marsden experiments (1911): Ernest Rutherford's gold foil experiment demonstrated that the positive charge and mass of an atom is concentrated in a small, central atomic nucleus, disproving the then-popular plum pudding model of the atom.
Eddington experiment (1919): Arthur Eddington leads an expedition to the island of Principe to observe a total solar eclipse (gravitational lensing). This allows for an observation of the bending of starlight under gravity, a prediction of Albert Einstein's theory of relativity. It was confirmed (although it was later shown that the margin of error was as great as the observed bending).
Stern–Gerlach experiment (1922): Otto Stern and Walther Gerlach demonstrate the quantization of particle spin.
Chicago Pile-1 (1942): Enrico Fermi and Leó Szilárd build the first critical nuclear reactor.
Wu experiment (1956): Chien-Shiung Wu leads the team that disproves the conservation of parity in particle physics.
Cowan–Reines neutrino experiment (1955): Clyde L. Cowan and Frederick Reines confirm the existence of the neutrino.
Hafele–Keating experiment (1971): Joseph C. Hafele and Richard E. Keating show that atomic clocks flown around the world exhibit differences which are consistent with the predictions of special and general relativity.
Scout rocket experiment (1976): Gravity Probe A, launched on a Scout rocket, confirms gravitational time dilation.
Aspect's experiment (1980s): Alain Aspect demonstrates the violation of Bell inequalities in quantum entanglement.
Psychology
Ivan Pavlov's experiments with dogs and classical conditioning (1900s)
John B. Watson and Rosalie Rayner conduct the Little Albert experiment showing evidence of classical conditioning (1920)
The Asch conformity experiments show how group pressure can persuade an individual to conform to an obviously wrong opinion (1951)
B. F. Skinner's demonstrations of operant conditioning (1930s–1960s)
Harry Harlow's experiments with baby monkeys and wire and cloth surrogate mothers (1957–1974)
Stanley Milgram's experiments on human obedience (1963)
Walter Mischel's marshmallow experiment showing the importance to life outcomes of the ability to delay gratification (beginning late 1960s)
Philip Zimbardo's Stanford prison experiment (1971)
Allen and Beatrix Gardner's attempts to teach American Sign Language to the chimpanzee Washoe (1970s)
Martin Seligman studies learned helplessness in dogs (1970s)
Rosenhan experiment (1972). It involved the use of healthy associates or "pseudopatients", who briefly simulated auditory hallucinations in an attempt to gain admission to 12 different psychiatric hospitals. The hospital staff failed to detect a single pseudopatient. The study is considered an important and influential criticism of psychiatric diagnosis.
Kansas City preventive patrol experiment (1972–1973). It was designed to test the assumption that the presence (or potential presence) of police officers in marked cars reduced the likelihood of a crime being committed. No such relationship was found.
Elizabeth Loftus and John C. Palmer's car crash experiment shows that leading questions can produce false memories (1974)
Benjamin Libet's experiment on free will shows that a readiness potential appears before the intention to act enters conscious experience, reigniting debate about whether free will is illusory (1983)
Vilayanur S. Ramachandran's experiments on phantom limbs with the mirror box throw light on the nature of 'learned paralysis' (1998)
See also
List of thought experiments
Timeline of scientific experiments
References
Science-related lists
History of science | List of experiments | [ "Technology" ] | 3,551 | [ "History of science", "History of science and technology" ] |
318,378 | https://en.wikipedia.org/wiki/Crane%20%28machine%29 | A crane is a machine used to move materials both vertically and horizontally, utilizing a system of a boom, hoist, wire ropes or chains, and sheaves for lifting and relocating heavy objects within the swing of its boom. The device uses one or more simple machines, such as the lever and pulley, to create mechanical advantage to do its work. Cranes are commonly employed in transportation for the loading and unloading of freight, in construction for the movement of materials, and in manufacturing for the assembling of heavy equipment.
The first known crane machine was the shaduf, a water-lifting device that was invented in ancient Mesopotamia (modern Iraq) and then appeared in ancient Egyptian technology. Construction cranes later appeared in ancient Greece, where they were powered by men or animals (such as donkeys), and used for the construction of buildings. Larger cranes were later developed in the Roman Empire, employing the use of human treadwheels, permitting the lifting of heavier weights. In the High Middle Ages, harbour cranes were introduced to load and unload ships and assist with their construction—some were built into stone towers for extra strength and stability. The earliest cranes were constructed from wood, but cast iron, iron and steel took over with the coming of the Industrial Revolution.
For many centuries, power was supplied by the physical exertion of men or animals, although hoists in watermills and windmills could be driven by the harnessed natural power. The first mechanical power was provided by steam engines, the earliest steam crane being introduced in the 18th or 19th century, with many remaining in use well into the late 20th century. Modern cranes usually use internal combustion engines or electric motors and hydraulic systems to provide a much greater lifting capability than was previously possible, although manual cranes are still utilized where the provision of power would be uneconomic.
There are many different types of cranes, each tailored to a specific use. Sizes range from the smallest jib cranes, used inside workshops, to the tallest tower cranes, used for constructing high buildings. Mini-cranes are also used for constructing high buildings, to facilitate constructions by reaching tight spaces. Large floating cranes are generally used to build oil rigs and salvage sunken ships.
Some lifting machines do not strictly fit the above definition of a crane, but are generally known as cranes, such as stacker cranes and loader cranes.
Etymology
Cranes were so called from their resemblance to the long neck of the bird; cf. French grue.
History
Ancient Near East
The first type of crane machine was the shadouf, which had a lever mechanism and was used to lift water for irrigation. It was invented in Mesopotamia (modern Iraq) circa 3000 BC. The shadouf subsequently appeared in ancient Egyptian technology circa 2000 BC.
Ancient Greece
A crane for lifting heavy loads was developed by the Ancient Greeks in the late 6th century BC. The archaeological record shows that no later than c. 515 BC distinctive cuttings for both lifting tongs and lewis irons begin to appear on stone blocks of Greek temples. Since these holes point at the use of a lifting device, and since they are to be found either above the center of gravity of the block, or in pairs equidistant from a point over the center of gravity, they are regarded by archaeologists as the positive evidence required for the existence of the crane.
The introduction of the winch and pulley hoist soon led to a widespread replacement of ramps as the main means of vertical motion. For the next 200 years, Greek building sites witnessed a sharp reduction in the weights handled, as the new lifting technique made the use of several smaller stones more practical than fewer larger ones. In contrast to the archaic period with its pattern of ever-increasing block sizes, Greek temples of the classical age like the Parthenon invariably featured stone blocks weighing less than 15–20 metric tons. Also, the practice of erecting large monolithic columns was practically abandoned in favour of using several column drums.
Although the exact circumstances of the shift from the ramp to the crane technology remain unclear, it has been argued that the volatile social and political conditions of Greece were more suitable to the employment of small, professional construction teams than of large bodies of unskilled labour, making the crane preferable to the Greek polis over the more labour-intensive ramp which had been the norm in the autocratic societies of Egypt or Assyria.
The first unequivocal literary evidence for the existence of the compound pulley system appears in the Mechanical Problems (Mech. 18, 853a32–853b13) attributed to Aristotle (384–322 BC), but perhaps composed at a slightly later date. Around the same time, block sizes at Greek temples began to match their archaic predecessors again, indicating that the more sophisticated compound pulley must have found its way to Greek construction sites by then.
Roman Empire
The heyday of the crane in ancient times came during the Roman Empire, when construction activity soared and buildings reached enormous dimensions. The Romans adopted the Greek crane and developed it further. There is much available information about their lifting techniques, thanks to rather lengthy accounts by the engineers Vitruvius (De Architectura 10.2, 1–10) and Heron of Alexandria (Mechanica 3.2–5). There are also two surviving reliefs of Roman treadwheel cranes, with the Haterii tombstone from the late first century AD being particularly detailed.
The simplest Roman crane, the trispastos, consisted of a single-beam jib, a winch, a rope, and a block containing three pulleys. Having thus a mechanical advantage of 3:1, it has been calculated that a single man working the winch could raise 150 kg (3 pulleys × 50 kg = 150 kg), assuming that 50 kg represents the maximum effort a man can exert over a longer time period. Heavier crane types featured five pulleys (pentaspastos) or, in case of the largest one, a set of three by five pulleys (polyspastos) and came with two, three or four masts, depending on the maximum load. The polyspastos, when worked by four men at both sides of the winch, could readily lift 3,000 kg (3 ropes × 5 pulleys × 4 men × 50 kg = 3,000 kg). If the winch was replaced by a treadwheel, the maximum load could be doubled to 6,000 kg at only half the crew, since the treadwheel possesses a much bigger mechanical advantage due to its larger diameter. This meant that, in comparison to the construction of the ancient Egyptian pyramids, where about 50 men were needed to move a 2.5-ton stone block up the ramp (50 kg per person), the lifting capability of the Roman polyspastos proved to be 60 times higher (3,000 kg per person).
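The arithmetic above reduces to multiplying the mechanical advantage by the crew's sustained effort. A minimal sketch in Python, assuming the 50 kg per-man figure implied by the calculations in the text:

```python
# Lifting capacity of Roman pulley cranes: mechanical advantage x effort.
# The 50 kg sustained effort per man is the figure used in the text above.

EFFORT_KG = 50  # assumed maximum effort one man can sustain at the winch

def capacity_kg(ropes: int, pulleys: int, men: int) -> int:
    """Approximate load (kg) the crane can raise."""
    return ropes * pulleys * men * EFFORT_KG

print(capacity_kg(ropes=1, pulleys=3, men=1))   # trispastos: 150 kg
print(capacity_kg(ropes=3, pulleys=5, men=4))   # polyspastos: 3000 kg
```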
However, numerous extant Roman buildings which feature much heavier stone blocks than those handled by the polyspastos indicate that the overall lifting capability of the Romans went far beyond that of any single crane. At the temple of Jupiter at Baalbek, for instance, the architrave blocks weigh up to 60 tons each, and one corner cornice block even over 100 tons, all of them raised high above ground level. In Rome, the capital block of Trajan's Column weighs 53.3 tons and had to be lifted to the top of the column (see construction of Trajan's Column).
It is assumed that Roman engineers lifted these extraordinary weights by two measures (compare the Renaissance technique described below): First, as suggested by Heron, a lifting tower was set up, whose four masts were arranged in the shape of a quadrangle with parallel sides, not unlike a siege tower, but with the column in the middle of the structure (Mechanica 3.5). Second, a multitude of capstans were placed on the ground around the tower, for, although having a lower leverage ratio than treadwheels, capstans could be set up in higher numbers and run by more men (and, moreover, by draught animals). This use of multiple capstans is also described by Ammianus Marcellinus (17.4.15) in connection with the lifting of the Lateranense obelisk in the Circus Maximus (c. 357 AD). The maximum lifting capability of a single capstan can be established by the number of lewis iron holes bored into the monolith. In the case of the Baalbek architrave blocks, which weigh between 55 and 60 tons, eight extant holes suggest an allowance of 7.5 tons per lewis iron, that is, per capstan. Lifting such heavy weights in a concerted action required a great amount of coordination between the work groups applying the force to the capstans.
Middle Ages
During the High Middle Ages, the treadwheel crane was reintroduced on a large scale after the technology had fallen into disuse in western Europe with the demise of the Western Roman Empire. The earliest reference to a treadwheel (magna rota) reappears in archival literature in France about 1225, followed by an illuminated depiction in a manuscript of probably also French origin dating to 1240. In navigation, the earliest uses of harbor cranes are documented for Utrecht in 1244, Antwerp in 1263, Bruges in 1288 and Hamburg in 1291, while in England the treadwheel is not recorded before 1331.
Generally, vertical transport could be done more safely and inexpensively by cranes than by customary methods. Typical areas of application were harbors, mines, and, in particular, building sites where the treadwheel crane played a pivotal role in the construction of the lofty Gothic cathedrals. Nevertheless, both archival and pictorial sources of the time suggest that newly introduced machines like treadwheels or wheelbarrows did not completely replace more labor-intensive methods like ladders, hods and handbarrows. Rather, old and new machinery continued to coexist on medieval construction sites and harbors.
Apart from treadwheels, medieval depictions also show cranes to be powered manually by windlasses with radiating spokes, cranks and, by the 15th century, also by windlasses shaped like a ship's wheel. To smooth out irregularities of impulse and get over 'dead-spots' in the lifting process, flywheels are known to have been in use as early as 1123.
The exact process by which the treadwheel crane was reintroduced is not recorded, although its return to construction sites has undoubtedly to be viewed in close connection with the simultaneous rise of Gothic architecture. The reappearance of the treadwheel crane may have resulted from a technological development of the windlass from which the treadwheel structurally and mechanically evolved. Alternatively, the medieval treadwheel may represent a deliberate reinvention of its Roman counterpart drawn from Vitruvius' De architectura which was available in many monastic libraries. Its reintroduction may have been inspired, as well, by the observation of the labor-saving qualities of the waterwheel with which early treadwheels shared many structural similarities.
Structure and placement
The medieval treadwheel was a large wooden wheel turning around a central shaft with a treadway wide enough for two workers walking side by side. While the earlier 'compass-arm' wheel had spokes directly driven into the central shaft, the more advanced "clasp-arm" type featured arms arranged as chords to the wheel rim, giving the possibility of using a thinner shaft and providing thus a greater mechanical advantage.
Contrary to a popularly held belief, cranes on medieval building sites were neither placed on the extremely lightweight scaffolding used at the time nor on the thin walls of the Gothic churches which were incapable of supporting the weight of both hoisting machine and load. Rather, cranes were placed in the initial stages of construction on the ground, often within the building. When a new floor was completed, and massive tie beams of the roof connected the walls, the crane was dismantled and reassembled on the roof beams from where it was moved from bay to bay during construction of the vaults. Thus, the crane "grew" and "wandered" with the building with the result that today all extant construction cranes in England are found in church towers above the vaulting and below the roof, where they remained after building construction for bringing material for repairs aloft.
Less frequently, medieval illuminations also show cranes mounted on the outside of walls with the stand of the machine secured to putlogs.
Mechanics and operation
In contrast to modern cranes, medieval cranes and hoists — much like their counterparts in Greece and Rome — were primarily capable of a vertical lift, and not used to move loads for a considerable distance horizontally as well. Accordingly, lifting work was organized at the workplace in a different way than today. In building construction, for example, it is assumed that the crane lifted the stone blocks either from the bottom directly into place, or from a place opposite the centre of the wall from where it could deliver the blocks for two teams working at each end of the wall. Additionally, the crane master, who usually gave orders to the treadwheel workers from outside the crane, was able to manipulate the movement laterally by a small rope attached to the load. Slewing cranes, which allowed a rotation of the load and were thus particularly suited for dockside work, appeared as early as 1340. While ashlar blocks were directly lifted by sling, lewis or devil's clamp (German Teufelskralle), other objects were first placed in containers like pallets, baskets, wooden boxes or barrels.
It is noteworthy that medieval cranes rarely featured ratchets or brakes to prevent the load from running backward. This curious absence is explained by the high friction force exerted by medieval treadwheels, which normally prevented the wheel from accelerating beyond control.
Harbour usage
According to the "present state of knowledge" unknown in antiquity, stationary harbor cranes are considered a new development of the Middle Ages. The typical harbor crane was a pivoting structure equipped with double treadwheels. These cranes were placed docksides for the loading and unloading of cargo where they replaced or complemented older lifting methods like see-saws, winches and yards.
Two different types of harbor cranes can be identified with a varying geographical distribution: While gantry cranes, which pivoted on a central vertical axle, were commonly found at the Flemish and Dutch coastside, German sea and inland harbors typically featured tower cranes where the windlass and treadwheels were situated in a solid tower with only jib arm and roof rotating. Dockside cranes were not adopted in the Mediterranean region and the highly developed Italian ports where authorities continued to rely on the more labor-intensive method of unloading goods by ramps beyond the Middle Ages.
Unlike construction cranes where the work speed was determined by the relatively slow progress of the masons, harbor cranes usually featured double treadwheels to speed up loading. The two treadwheels, whose diameter is estimated to be 4 m or larger, were attached to each side of the axle and rotated together. Their capacity was 2–3 tons, which apparently corresponded to the customary size of marine cargo. Today, according to one survey, fifteen treadwheel harbor cranes from pre-industrial times are still extant throughout Europe. Some harbour cranes were specialised in mounting masts to newly built sailing ships, such as in Gdańsk, Cologne and Bremen. Beside these stationary cranes, floating cranes, which could be flexibly deployed in the whole port basin, came into use by the 14th century.
A sheer hulk (or shear hulk) was used in shipbuilding and repair as a floating crane in the days of sailing ships, primarily to place the lower masts of a ship under construction or repair. Booms known as sheers were attached to the base of a hulk's lower masts or beam, supported from the top of those masts. Blocks and tackle were then used in such tasks as placing or removing the lower masts of the vessel under construction or repair. These lower masts were the largest and most massive single timbers aboard a ship, and erecting them without the assistance of either a sheer hulk or land-based masting sheer was extremely difficult.
The concept of sheer hulks originated with the Royal Navy in the 1690s, and persisted in Britain until the early nineteenth century. Most sheer hulks were decommissioned warships; Chatham, built in 1694, was the first of only three purpose-built vessels. There were at least six sheer hulks in service in Britain at any time throughout the 1700s. The concept spread to France in the 1740s with the commissioning of a sheer hulk at the port of Rochefort.
Early modern age
A lifting tower similar to that of the ancient Romans was used to great effect by the Renaissance architect Domenico Fontana in 1586 to relocate the 361 t Vatican obelisk in Rome. From his report, it becomes obvious that the coordination of the lift between the various pulling teams required a considerable amount of concentration and discipline, since, if the force was not applied evenly, the excessive stress on the ropes would make them rupture.
Cranes were also used domestically during this period. The chimney or fireplace crane was used to swing pots and kettles over the fire and the height was adjusted by a trammel.
Industrial revolution
With the onset of the Industrial Revolution the first modern cranes were installed at harbours for loading cargo. In 1838, the industrialist and businessman William Armstrong designed a water-powered hydraulic crane. His design used a ram in a closed cylinder that was forced down by a pressurized fluid entering the cylinder and a valve regulated the amount of fluid intake relative to the load on the crane. This mechanism, the hydraulic jigger, then pulled on a chain to lift the load.
In 1845 a scheme was set in motion to provide piped water from distant reservoirs to the households of Newcastle. Armstrong was involved in this scheme and he proposed to Newcastle Corporation that the excess water pressure in the lower part of town could be used to power one of his hydraulic cranes for the loading of coal onto barges at the Quayside. He claimed that his invention would do the job faster and more cheaply than conventional cranes. The corporation agreed to his suggestion, and the experiment proved so successful that three more hydraulic cranes were installed on the Quayside.
The success of his hydraulic crane led Armstrong to establish the Elswick works at Newcastle, to produce his hydraulic machinery for cranes and bridges in 1847. His company soon received orders for hydraulic cranes from Edinburgh and Northern Railways and from Liverpool Docks, as well as for hydraulic machinery for dock gates in Grimsby. The company expanded from a workforce of 300 and an annual production of 45 cranes in 1850, to almost 4,000 workers producing over 100 cranes per year by the early 1860s.
Armstrong spent the next few decades constantly improving his crane design; his most significant innovation was the hydraulic accumulator. Where water pressure was not available on site for the use of hydraulic cranes, Armstrong often built high water towers to provide a supply of water at pressure. However, when supplying cranes for use at New Holland on the Humber Estuary, he was unable to do this, because the foundations consisted of sand. He eventually produced the hydraulic accumulator, a cast-iron cylinder fitted with a plunger supporting a very heavy weight. The plunger would slowly be raised, drawing in water, until the downward force of the weight was sufficient to force the water below it into pipes at great pressure. This invention allowed much larger quantities of water to be forced through pipes at a constant pressure, thus increasing the crane's load capacity considerably.
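The accumulator's operating principle is plain hydrostatics: the dead weight on the plunger fixes the supply pressure at P = F/A regardless of how much water is drawn off. A minimal sketch with illustrative numbers, not Armstrong's actual dimensions:

```python
# Hydraulic accumulator: a weighted plunger of weight F on a cylinder of
# cross-sectional area A holds the water beneath it at pressure P = F / A,
# independent of how much water is drawn off. Values below are illustrative.

import math

plunger_mass_kg = 100_000       # hypothetical dead weight on the plunger
cylinder_diameter_m = 0.45      # hypothetical cylinder bore
g = 9.81                        # gravitational acceleration, m/s^2

area_m2 = math.pi * (cylinder_diameter_m / 2) ** 2
pressure_bar = plunger_mass_kg * g / area_m2 / 1e5
print(f"Supply pressure: {pressure_bar:.0f} bar")   # ~62 bar
```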
One of his cranes, commissioned by the Italian Navy in 1883 and in use until the mid-1950s, is still standing in Venice, where it is now in a state of disrepair.
Mechanical principles
There are three major considerations in the design of cranes. First, the crane must be able to lift the weight of the load; second, the crane must not topple; third, the crane must not fail structurally.
Stability
For stability, the sum of all moments about the base of the crane must be close to zero so that the crane does not overturn. In practice, the magnitude of load that is permitted to be lifted (called the "rated load" in the US) is some value less than the load that will cause the crane to tip, thus providing a safety margin.
Under United States standards for mobile cranes, the stability-limited rated load for a crawler crane is 75% of the tipping load. The stability-limited rated load for a mobile crane supported on outriggers is 85% of the tipping load. These requirements, along with additional safety-related aspects of crane design, are established by the American Society of Mechanical Engineers in the volume ASME B30.5-2018 Mobile and Locomotive Cranes.
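A minimal sketch of how these margins convert a measured tipping load into a stability-limited rated load; the function and table names here are our own, not terms from ASME B30.5:

```python
# Stability-limited rated load as a fixed fraction of the tipping load,
# using the US percentages quoted above.

STABILITY_FRACTION = {
    "crawler": 0.75,        # crawler crane: 75% of tipping load
    "outriggers": 0.85,     # mobile crane on outriggers: 85% of tipping load
}

def rated_load(tipping_load_t: float, support: str) -> float:
    """Stability-limited rated load in tonnes for a given support condition."""
    return tipping_load_t * STABILITY_FRACTION[support]

print(rated_load(100.0, "crawler"))      # 75.0
print(rated_load(100.0, "outriggers"))   # 85.0
```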
Standards for cranes mounted on ships or offshore platforms are somewhat stricter because of the dynamic load on the crane due to vessel motion. Additionally, the stability of the vessel or platform must be considered.
For stationary pedestal or kingpost mounted cranes, the moment produced by the boom, jib, and load is resisted by the pedestal base or kingpost. Stress within the base must be less than the yield stress of the material or the crane will fail.
Dynamic Lift Factor
Overview
The dynamic lift factor (DLF), also known as the design dynamic factor, is a critical parameter in crane design and operation. It accounts for the dynamic effects that can increase the load on a crane's structure and components during lifting operations. These effects include:
Hoisting acceleration and deceleration of the load, which is a significant factor;
Crane movement such as slewing or luffing;
Load swinging;
Wind forces acting on the crane, the load and the rigging; and
Operator error or other unexpected events.
The DLF for a new crane design can be determined with analytical calculations and mathematical models following the relevant design specifications. If available, data from previous tests of similar crane types can be used to estimate the DLF. More sophisticated methods, such as finite element analysis or other simulation techniques, may also be used to model the crane's behavior under various loading conditions, as deemed appropriate by the designer or certifying authority. To verify the actual DLF, control load tests can be conducted on the completed crane using instrumentation such as load cells, accelerometers, and strain gauges. This process is usually part of the crane's type approval.
In offshore lifting, where the crane and/or lifted object are on a floating vessel, the DLF is higher compared to onshore lifts because of the additional movement caused by wave action. This motion introduces additional acceleration forces and necessitates increased hoisting and lowering speeds to minimize the risk of repeated collisions when the load is near the deck. Additionally, the DLF increases further when lifting objects that are underwater or going through the splash zone. The wind speeds tend to be higher than onshore as well.
Though actual DLF values are determined through crane tests under representative operational conditions, design specifications can be used for guidance. The values vary according to the specification, which reflects the type of crane and its usage. Some typical example values:
Jib cranes typically have a lower DLF than traveling gantry cranes because they are stiffer;
For grab cranes, the DLF can increase by 20% to 30%, reflecting the shock loads caused by the release of the lifted material; and
The DLF generally decreases as the mass of the lifted object increases, as cranes tend to operate at lower velocities with heavier loads to ensure safety and stability. For offshore lifts, the DLF typically decreases from 1.3 at 100 tonnes to 1.1 at 2500 tonnes (a trend illustrated in the sketch after this list).
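As an illustration of the last point, the two quoted offshore values can be joined by a simple linear interpolation. Real specifications tabulate these factors rather than interpolating, so this is only a sketch:

```python
# Illustrative offshore DLF versus lifted mass, interpolating linearly
# between the two values quoted above (1.3 at 100 t, 1.1 at 2500 t).
# Linear interpolation is an assumption; specifications use tables.

def offshore_dlf(mass_t: float) -> float:
    lo_mass, lo_dlf = 100.0, 1.3
    hi_mass, hi_dlf = 2500.0, 1.1
    if mass_t <= lo_mass:
        return lo_dlf
    if mass_t >= hi_mass:
        return hi_dlf
    frac = (mass_t - lo_mass) / (hi_mass - lo_mass)
    return lo_dlf + frac * (hi_dlf - lo_dlf)

print(round(offshore_dlf(500.0), 3))   # 1.267
```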
Formulas
The methods for determining the DLF vary in the different crane specifications. The following formulas are examples from one specification.
The working load (suspended load) is the total weight that a crane is designed to safely lift under normal operating conditions. It is

$W = g\,(M + m)$

where
$W$ is the working load,
$g$ is the acceleration of gravity,
$M$ is the maximum lifted mass, which is also called the crane working load limit (WLL) or safe working load (SWL), and
$m$ is the mass of lifting appliances or parts of the crane that move with the lifted mass.
The DLF is then used as a multiplier to determine the force applied to the crane structure and components:

$F_d = \mathrm{DLF} \cdot W$

where
$F_d$ is the design force, and
$\mathrm{DLF}$ is the dynamic lift factor.
The DLF can then be calculated using

$\mathrm{DLF} = 1 + v_r \sqrt{\frac{K}{g\,W}}$

where
$v_r$ is the relative velocity between lifted object and hook at the time of pick-up, and
$K$ is the stiffness of the crane system at the hook.
The relative velocity is dependent on the crane's operational requirements and the system stiffness at the hook can be determined by calculation or load deflection tests.
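A minimal numerical sketch of the formulas above (the symbols follow the reconstruction given here, and all input values are illustrative assumptions, not figures from any specification):

```python
import math

g = 9.81  # acceleration of gravity, m/s^2

def working_load_n(lifted_mass_kg: float, appliance_mass_kg: float) -> float:
    """Working load W = g * (M + m), in newtons."""
    return g * (lifted_mass_kg + appliance_mass_kg)

def dlf(v_rel_ms: float, stiffness_n_per_m: float, w_newtons: float) -> float:
    """Dynamic lift factor DLF = 1 + v_r * sqrt(K / (g * W))."""
    return 1.0 + v_rel_ms * math.sqrt(stiffness_n_per_m / (g * w_newtons))

# Illustrative 50 t lift with 2 t of rigging, 0.5 m/s relative pick-up
# velocity, and a 5 MN/m hook-point stiffness (all assumed values).
W = working_load_n(50_000, 2_000)
factor = dlf(0.5, 5e6, W)
print(f"W = {W / 1e3:.0f} kN, DLF = {factor:.2f}, F_d = {factor * W / 1e3:.0f} kN")
```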
Types
The crane types outlined in this section are categorized based on their primary area of application:
Construction
Truck-mounted
Loader
Telescopic
Rough terrain
All terrain
Crawler
Pick and carry
Carry deck
Telescopic handler
Block setting
Tower
Climbing crane
Cargo Handling
Reach stacker
Sidelifter
Straddle carrier
Industrial
Ring
Hammerhead
Level luffing
Overhead
Gantry
Jib
Bulk handling
Stacker
Wind turbine installation vessel
Marine
Floating
Deck
Other Types
Railroad
Aerial
Construction
Truck-mounted
The most basic truck-mounted crane configuration is a "boom truck" or "lorry loader", which features a rear-mounted rotating telescopic-boom crane mounted on a commercial truck chassis.
Larger, heavier duty, purpose-built "truck-mounted" cranes are constructed in two parts: the carrier, often called the lower, and the lifting component, which includes the boom, called the upper. These are mated together through a turntable, allowing the upper to swing from side to side. These modern hydraulic truck cranes are usually single-engine machines, with the same engine powering the undercarriage and the crane. The upper is usually powered via hydraulics run through the turntable from the pump mounted on the lower. In older model designs of hydraulic truck cranes, there were two engines: one in the lower pulled the crane down the road and ran a hydraulic pump for the outriggers and jacks; the one in the upper ran the upper through a hydraulic pump of its own. Many older operators favor the two-engine system because the turntable seals that carry hydraulic power to the upper tend to leak as single-engine cranes age. Hiab invented the world's first hydraulic truck-mounted crane in 1947. The name Hiab comes from the commonly used abbreviation of Hydrauliska Industri AB, a company founded in Hudiksvall, Sweden, in 1944 by Eric Sundin, a ski manufacturer who saw a way to utilize a truck's engine to power loader cranes through the use of hydraulics.
Generally, these cranes are able to travel on highways, eliminating the need for special equipment to transport the crane, unless weight or other size restrictions are in place, such as under local laws. If this is the case, most larger cranes are equipped with either special trailers to help spread the load over more axles or are able to disassemble to meet requirements. An example is counterweights: often a crane will be followed by another truck hauling the counterweights that are removed for travel. In addition, some cranes are able to remove the entire upper; however, this is usually only an issue with a large crane and is mostly done with a conventional crane such as a Link-Belt HC-238. When working on the job site, outriggers are extended horizontally from the chassis and then vertically to level and stabilize the crane while stationary and hoisting. Many truck cranes have slow-travelling capability (a few miles per hour) while suspending a load. Great care must be taken not to swing the load sideways from the direction of travel, as most anti-tipping stability then lies in the stiffness of the chassis suspension. Most cranes of this type also have moving counterweights for stabilization beyond that provided by the outriggers. Loads suspended directly aft are the most stable, since most of the weight of the crane acts as a counterweight. Factory-calculated charts (or electronic safeguards) are used by crane operators to determine the maximum safe loads for stationary (outriggered) work as well as (on-rubber) loads and travelling speeds.
Truck cranes span a wide range of lifting capacities. Although most only rotate about 180 degrees, the more expensive truck-mounted cranes can turn a full 360 degrees.
Loader
A loader crane (also called a knuckle-boom crane or articulating crane) is a hydraulically powered articulated arm fitted to a truck or trailer, used for loading and unloading the vehicle's cargo. The numerous jointed sections can be folded into a small space when the crane is not in use. One or more of the sections may be telescopic. Often the crane will have a degree of automation and be able to unload or stow itself without an operator's instruction.
Unlike most cranes, the operator must move around the vehicle to be able to view his load; hence modern cranes may be fitted with a portable cabled or radio-linked control system to supplement the crane-mounted hydraulic control levers.
In the United Kingdom and Canada, this type of crane is often known colloquially as a "Hiab", partly because this manufacturer invented the loader crane and was first into the UK market, and partly because the distinctive name was displayed prominently on the boom arm.
A rolloader crane is a loader crane mounted on a wheeled chassis that can ride along the trailer. Because the crane can move on the trailer, it can be relatively light, leaving more of the trailer's capacity for goods.
Telescopic
A telescopic crane has a boom that consists of a number of tubes fitted one inside the other. A hydraulic cylinder or other powered mechanism extends or retracts the tubes to increase or decrease the total length of the boom. These types of booms are often used for short term construction projects, rescue jobs, lifting boats in and out of the water, etc. The relative compactness of telescopic booms makes them adaptable for many mobile applications.
Though not all telescopic cranes are mobile cranes, many of them are truck-mounted.
A telescopic tower crane has a telescopic mast and often a superstructure (jib) on top so that it functions as a tower crane. Some telescopic tower cranes also have a telescopic jib.
Rough terrain
A rough terrain crane has a boom mounted on an undercarriage atop four rubber tires that is designed for off-road pick-and-carry operations. Outriggers are used to level and stabilize the crane for hoisting.
These telescopic cranes are single-engine machines, with the same engine powering the undercarriage and the crane, similar to a crawler crane. However, the engine is usually mounted in the undercarriage rather than in the upper, as it is in a crawler crane. Most have four-wheel drive and four-wheel steering for traversing tighter and slicker terrain than a standard truck crane, with less site preparation.
All-terrain
An all-terrain crane is a hybrid combining the roadability of a truck-mounted crane with the on-site maneuverability of a rough-terrain crane. It can both travel at speed on public roads and maneuver on rough terrain at the job site using all-wheel and crab steering.
All-terrain cranes (ATs) have 2–12 axles and are designed for lifting very heavy loads.
Crawler
Main article: Lattice boom crawler crane
A crawler crane has its boom mounted on an undercarriage fitted with a set of crawler tracks that provide both stability and mobility. Crawler cranes range in lifting capacity from small machines up to the very largest, such as the XGC88000 crawler crane.
The main advantage of a crawler crane is its ready mobility and use, since the crane is able to operate on sites with minimal improvement and stable on its tracks without outriggers. Wide tracks spread the weight out over a great area and are far better than wheels at traversing soft ground without sinking in. A crawler crane is also capable of traveling with a load. Its main disadvantage is its weight, making it difficult and expensive to transport. Typically a large crawler must be disassembled at least into boom and cab and moved by trucks, rail cars or ships to its next location.
Pick and carry
A pick and carry crane is similar to a mobile crane in that it is designed to travel on public roads; however, pick and carry cranes have no stabiliser legs or outriggers and are designed to lift the load and carry it to its destination, within a small radius, then be able to drive to the next job. Pick and carry cranes are popular in Australia, where large distances are encountered between job sites. One popular manufacturer in Australia was Franna, who have since been bought by Terex, and now all pick and carry cranes are commonly called "Frannas", even though they may be made by other manufacturers. Nearly every medium- and large-sized crane company in Australia has at least one, and many companies have fleets of these cranes. Maximum lift capacity is modest, and becomes much less as the load gets further from the front of the crane. Pick and carry cranes have displaced the work usually completed by smaller truck cranes, as the set-up time is much quicker. Many steel fabrication yards also use pick and carry cranes, as they can "walk" with fabricated steel sections and place these where required with relative ease.
Smaller pick and carry cranes may be based on an articulated tractor chassis, with the boom mounted over the front wheels. In Australia these are popularly known as "wobbly cranes".
Carry deck
A carry deck crane is a small four-wheel crane with a 360-degree rotating boom placed right in the centre and an operator's cab located at one end under this boom. The rear section houses the engine and the area above the wheels is a flat deck. Very much an American invention, the carry deck can hoist a load in a confined space, load it on the deck space around the cab or engine, and then move to another site. The carry deck principle is the American version of the pick and carry crane, and both allow the load to be moved by the crane over short distances.
Telescopic handler
Telescopic handlers are forklift-like trucks that have a set of forks mounted on a telescoping extendable boom like a crane. Early telescopic handlers only lifted in one direction and did not rotate; however, several manufacturers have designed telescopic handlers that rotate 360 degrees through a turntable, and these machines look almost identical to the rough terrain crane. These new 360-degree telescopic handler/crane models have outriggers or stabiliser legs that must be lowered before lifting; however, their design has been simplified so that they can be more quickly deployed. These machines are often used to handle pallets of bricks and install frame trusses on many new building sites, and they have eroded much of the work for small telescopic truck cranes. Many of the world's armed forces have purchased telescopic handlers, and some of these are the much more expensive fully rotating types. Their off-road capability and their on-site versatility, unloading pallets with forks or lifting like a crane, make them a valuable piece of machinery.
Block-setting crane
A block-setting crane is a form of heavy-lift crane used for installing the large stone blocks of breakwaters, moles and stone piers.
Tower
Tower cranes are a modern form of balance crane that consist of the same basic parts. Fixed to the ground on a concrete slab (and sometimes attached to the sides of structures), tower cranes often give the best combination of height and lifting capacity and are used in the construction of tall buildings. The base is then attached to the mast which gives the crane its height. Further, the mast is attached to the slewing unit (gear and motor) that allows the crane to rotate. On top of the slewing unit there are three main parts which are: the long horizontal jib (working arm), shorter counter-jib, and the operator's cab.
Optimization of tower crane location in the construction sites has an important effect on material transportation costs of a project, but site operators need to ensure they assess where the jib will oversail the property of other landowners and tenants as it rotates over the site. Under English law a landowner also owns the airspace above their property and developers will need to agree terms with adjacent property owners before oversailing their land.
The long horizontal jib is the part of the crane that carries the load. The counter-jib carries a counterweight, usually of concrete blocks, while the jib suspends the load to and from the center of the crane. The crane operator either sits in a cab at the top of the tower or controls the crane by radio remote control from the ground. In the first case the operator's cab is most usually located at the top of the tower attached to the turntable, but can be mounted on the jib, or partway down the tower. The lifting hook is operated by the crane operator using electric motors to manipulate wire rope cables through a system of sheaves. The hook hangs from the long horizontal jib, which also carries its hoisting motor.
In order to hook and unhook the loads, the operator usually works in conjunction with a signaller (known as a "dogger", "rigger" or "swamper"). They are most often in radio contact, and always use hand signals. The rigger or dogger directs the schedule of lifts for the crane, and is responsible for the safety of the rigging and loads.
Tower cranes can achieve a height under hook of over 100 metres.
Components
Tower cranes are used extensively in construction and other industry to hoist and move materials. There are many types of tower cranes. Although they are different in type, the main parts are the same, as follows:
Mast: the main supporting tower of the crane. It is made of steel trussed sections that are connected together during installation.
Slewing unit: the slewing unit sits at the top of the mast. This is the engine that enables the crane to rotate.
Operating cabin: on most tower cranes the operating cabin sits just above the slewing unit. It contains the operating controls, load moment indicator (LMI) system, scale, anemometer, etc.
Jib: the jib, or operating arm, extends horizontally from the crane. A "luffing" jib is able to move up and down; a fixed jib has a rolling trolley car that runs along the underside to move loads horizontally.
Counter jib: holds counterweights, hoist motor, hoist drum and the electronics.
Hoist winch: the hoist winch assembly consists of the hoist winch (motor, gearbox, hoist drum, hoist rope, and brakes), the hoist motor controller, and supporting components, such as the platform. Many tower cranes have transmissions with two or more speeds.
Hook: the hook is used to connect the material to the crane, suspended from the hoist rope either at the tip (on luffing jib cranes) or routed through the trolley (on hammerhead cranes).
Weights: Large, moveable concrete counterweights are mounted toward the rear of the counterdeck, to compensate for the weight of the goods lifted and keep the center of gravity over the supporting tower.
Assembly
A tower crane is usually assembled by a telescopic jib (mobile) crane of greater reach (also see "self-erecting crane" below). In the case of tower cranes that have risen while constructing very tall skyscrapers, a smaller crane (or derrick) will often be lifted to the roof of the completed tower to dismantle the tower crane afterwards, which may be more difficult than the installation.
Tower cranes can be operated by remote control, removing the need for the crane operator to sit in a cab atop the crane.
Operation
Each model and distinctive style of tower crane has a predetermined lifting chart that can be applied to any radii available, depending on its configuration. Similar to a mobile crane, a tower crane may lift an object of far greater mass closer to its center of rotation than at its maximum radius. An operator manipulates several levers and pedals to control each function of the crane.
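The radius dependence in such a chart is, to first order, a constant-moment limit: capacity falls off roughly as the rated load moment divided by the working radius, capped by the maximum hook load near the mast. A minimal sketch with assumed, illustrative values; real charts also encode structural and stability limits for each configuration:

```python
# Idealized tower-crane load chart: capacity ~ rated_moment / radius,
# capped near the mast by the maximum hook load. Both constants are
# assumed, illustrative values, not any particular crane's rating.

RATED_MOMENT_TM = 300.0   # tonne-metres
MAX_HOOK_LOAD_T = 20.0    # tonnes, limit close to the tower

def capacity_t(radius_m: float) -> float:
    """Approximate allowable load (tonnes) at a given working radius."""
    return min(MAX_HOOK_LOAD_T, RATED_MOMENT_TM / radius_m)

for r in (10, 20, 40, 60):
    print(f"{r:>3} m radius: {capacity_t(r):5.1f} t")
```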
Safety
When a tower crane is used in proximity to buildings, roads, power lines, or other tower cranes, a tower crane anti-collision system is used. This operator support system reduces the risk of a dangerous interaction occurring between a tower crane and another structure.
In some countries, such as France, tower crane anti-collision systems are mandatory.
Self-erecting tower cranes
Generally a type of pedestrian operated tower crane, self-erecting tower cranes are transported as a single unit and can be assembled by a qualified technician without the assistance of a larger mobile crane. They are bottom slewing cranes that stand on outriggers, have no counter jib, have their counterweights and ballast at the base of the mast, cannot climb themselves, have a reduced capacity compared to standard tower cranes, and seldom have an operator's cabin.
In some cases, smaller self-erecting tower cranes may have axles permanently fitted to the tower section to make maneuvering the crane onsite easier.
Tower cranes can also use a hydraulic-powered jack frame to raise themselves, adding new tower sections without any other crane assisting beyond the initial assembly stage. Tied to the building as it rises, such a crane can grow to nearly any height needed to build the tallest skyscrapers. The maximum unsupported height of a tower crane is around 265 ft. For a video of a crane getting taller, see "Crane Building Itself" on YouTube.
For another animation of such a crane in use, see "SAS Tower Construction Simulation" on YouTube. Here, the crane is used to erect a scaffold, which, in turn, contains a gantry to lift sections of a bridge spire.
Climbing crane
Many tower cranes are designed to "jump" in stages, effectively lifting themselves to the next level. A specialty example of a climbing crane was introduced by Lagerwey Wind and Enercon to construct a wind turbine tower: instead of erecting a large crane, a smaller climbing crane raises itself along with the tower under construction, lifts the generator housing to the top, adds the rotor blades, and then climbs down.
Cargo Handling
Rubber tyred gantry crane
Reach stacker
A reach stacker is a vehicle used for handling intermodal cargo containers in small terminals or medium-sized ports. Reach stackers are able to transport a container short distances very quickly and stack them in various rows depending on the access required.
Sidelifter
A sidelifter crane is a road-going truck or semi-trailer, able to hoist and transport ISO standard containers. Container lift is done with parallel crane-like hoists, which can lift a container from the ground or from a railway vehicle.
Travel lift
A travel lift (also called a boat gantry crane, or boat crane) is a crane with two rectangular side panels joined by a single spanning beam at the top of one end. The crane is mobile with four groups of steerable wheels, one on each corner. These cranes allow boats with masts or tall superstructures to be removed from the water and transported around docks or marinas. Not to be confused with the mechanical device used for transferring a vessel between two levels of water, which is also called a boat lift.
Straddle carrier
A Straddle carrier moves and stacks intermodal containers.
Industrial
Ring
Ring cranes are some of the largest and heaviest land-based cranes ever designed. A ring-shaped track supports the main superstructure, allowing for extremely heavy loads (up to thousands of tonnes).
Hammerhead
The "hammerhead", or giant cantilever, crane is a fixed-jib crane consisting of a steel-braced tower on which revolves a large, horizontal, double cantilever; the forward part of this cantilever or jib carries the lifting trolley, the jib is extended backwards in order to form a support for the machinery and counterbalancing weight. In addition to the motions of lifting and revolving, there is provided a so-called "racking" motion, by which the lifting trolley, with the load suspended, can be moved in and out along the jib without altering the level of the load. Such horizontal movement of the load is a marked feature of later crane design. These cranes are generally constructed in large sizes and can lift up to 350 tons.
The hammerhead design (the German Hammerkran) evolved first in Germany around the turn of the 19th century and was adopted and developed for use in British shipyards to support the battleship construction program from 1904 to 1914. The ability of the hammerhead crane to lift heavy weights was useful for installing large pieces of battleships such as armour plate and gun barrels. Giant cantilever cranes were also installed in naval shipyards in Japan and in the United States. The British government also installed a giant cantilever crane at the Singapore Naval Base (1938), and later a copy of the crane was installed at Garden Island Naval Dockyard in Sydney (1951). These cranes provided repair support for the battle fleet operating far from Great Britain.
In the British Empire, the engineering firm Sir William Arrol & Co. was the principal manufacturer of giant cantilever cranes; the company built a total of fourteen. Of the roughly sixty built worldwide, only about fifteen remain, seven of them in England and Scotland.
The Titan Clydebank is one of the four Scottish cranes on the River Clyde and is preserved as a tourist attraction.
Level luffing
Normally a crane with a hinged jib will tend to have its hook also move up and down as the jib moves (or luffs). A level luffing crane is a crane of this common design, but with an extra mechanism to keep the hook at the same level when the jib is pivoted in or out.
Overhead
An overhead crane, also known as a bridge crane, is a type of crane where the hook-and-line mechanism runs along a horizontal beam that itself runs along two widely separated rails. Often it is in a long factory building and runs along rails along the building's two long walls. It is similar to a gantry crane. Overhead cranes typically consist of either a single-beam or a double-beam construction, built using typical steel beams or a more complex box girder type. A single-bridge box girder crane, for example, may have its hoist operated with a control pendant. Double-girder bridges are more typical for heavier-capacity systems of 10 tons and above. The advantage of the box girder configuration is a system with a lower deadweight yet stronger overall structural integrity. A complete system also includes a hoist to lift the items, the bridge, which spans the area covered by the crane, and a trolley to move along the bridge.
The most common overhead crane use is in the steel industry. At every step of the manufacturing process, until it leaves a factory as a finished product, steel is handled by an overhead crane. Raw materials are poured into a furnace by crane, hot steel is stored for cooling by an overhead crane, the finished coils are lifted and loaded onto trucks and trains by overhead crane, and the fabricator or stamper uses an overhead crane to handle the steel in its factory. The automobile industry uses overhead cranes for handling raw materials. Smaller workstation cranes handle lighter loads in a work area, such as a CNC mill or saw.
Almost all paper mills use bridge cranes for regular maintenance requiring removal of heavy press rolls and other equipment. The bridge cranes are used in the initial construction of paper machines because they facilitate installation of the heavy cast iron paper drying drums and other massive equipment, some weighing as much as 70 tons.
In many instances the cost of a bridge crane can be largely offset with savings from not renting mobile cranes in the construction of a facility that uses a lot of heavy process equipment.
The electric overhead traveling crane is the most common type of overhead crane, found in many factories. These cranes are electrically operated by a control pendant, radio/IR remote pendant, or from an operator cabin attached to the crane.
Gantry
A gantry crane has a hoist in a fixed machinery house or on a trolley that runs horizontally along rails, usually fitted on a single beam (mono-girder) or two beams (twin-girder). The crane frame is supported on a gantry system with equalized beams and wheels that run on the gantry rail, usually perpendicular to the trolley travel direction. These cranes come in all sizes, and some can move very heavy loads, particularly the extremely large examples used in shipyards or industrial installations. A special version is the container crane (or "Portainer" crane, named by the first manufacturer), designed for loading and unloading ship-borne containers at a port.
Most container cranes are of this type.
Jib
A jib crane is a type of crane (not to be confused with a crane rigged with a jib to extend its main boom) where a horizontal member (jib or boom), supporting a moveable hoist, is fixed to a wall or to a floor-mounted pillar. Jib cranes are used in industrial premises and on military vehicles. The jib may swing through an arc, to give additional lateral movement, or be fixed. Similar cranes, often known simply as hoists, were fitted on the top floor of warehouse buildings to enable goods to be lifted to all floors.
Bulk-handling
Bulk-handling cranes are designed from the outset to carry a shell grab or bucket, rather than using a hook and a sling. They are used for bulk cargoes, such as coal, minerals, scrap metal etc.
Stacker
A crane with a forklift type mechanism used in automated (computer-controlled) warehouses (known as an automated storage and retrieval system (AS/RS)). The crane moves on a track in an aisle of the warehouse. The fork can be raised or lowered to any of the levels of a storage rack and can be extended into the rack to store and retrieve the product. The product can in some cases be as large as an automobile. Stacker cranes are often used in the large freezer warehouses of frozen food manufacturers. This automation avoids requiring forklift drivers to work in below-freezing temperatures every day.
Marine
Floating
Floating cranes are used mainly in bridge building and port construction, but they are also used for occasional loading and unloading of especially heavy or awkward loads on and off ships. Some floating cranes are mounted on pontoons; others are specialized crane barges with very large lifting capacities that have been used to transport entire bridge sections. Floating cranes have also been used to salvage sunken ships.
Crane vessels are often used in offshore construction.
The largest revolving cranes can be found on SSCV Sleipnir, which carries two such cranes of equal capacity. For 50 years, the largest such crane was "Herman the German" at the Long Beach Naval Shipyard, one of three constructed by Nazi Germany and captured in the war. The crane was sold to the Panama Canal in 1996, where it is now known as Titan.
Deck
Deck cranes, also known as shipboard or cargo cranes, are located on ships and boats, used for cargo operations where no shore unloading facilities are available, raising and lowering loads (such as shellfish dredges and fish nets) into the water, and small boat unloading and retrieval. Most are diesel-hydraulic or electric-hydraulic, supporting an increasingly automated control interface.
Other types
Railroad
A railroad crane has flanged wheels for use on railroads.
The simplest form is a crane mounted on a flatcar. More capable devices are purpose-built. Different types of crane are used for maintenance work, recovery operations and freight loading in goods yards and scrap handling facilities.
Aerial
Aerial cranes or "sky cranes" are usually helicopters designed to lift large loads. Helicopters are able to travel to and lift in areas that are difficult to reach by conventional cranes. Helicopter cranes are most commonly used to lift loads onto shopping centers and high-rise buildings. They can lift anything within their lifting capacity, such as air conditioning units, cars, boats, and swimming pools. They also perform clean-up work after natural disasters, and during wildfires they can carry huge buckets of water to extinguish fires.
Some aerial cranes, mostly concepts, have also used lighter-than-air aircraft, such as airships.
Efficiency increase of cranes
The lifetime of existing cranes made of welded metal structures can often be extended by many years through post-weld treatment. During the development of new cranes, the load level (lifting load) can be significantly increased by taking into account the IIW (International Institute of Welding) recommendations, in most cases leading to an increase in the permissible lifting load and thus to an efficiency increase.
Similar machines
The generally accepted definition of a crane is a machine for lifting and moving heavy objects by means of ropes or cables suspended from a movable arm. As such, a lifting machine that does not use cables, or else provides only vertical and not horizontal movement, cannot strictly be called a 'crane'.
Types of crane-like lifting machine include:
Gin pole
Block and tackle
Capstan (nautical)
Hoist (device)
Winch
Windlass
Cherry picker
More technically advanced types of such lifting machines are often known as "cranes", regardless of the official definition of the term.
Special examples
Finnieston Crane, a.k.a. the Stobcross Crane
Category A-listed example of a "hammerhead" (cantilever) crane in Glasgow's former docks, built by the William Arrol company.
tall, capacity, built 1926
Taisun
double bridge crane at Yantai, China.
capacity, World Record Holder
tall, span, lift-height
Kockums Crane
shipyard crane formerly at Kockums, Sweden.
tall, capacity, since moved to Ulsan, South Korea
Samson and Goliath (cranes)
two gantry cranes at the Harland & Wolff shipyard in Belfast built by Krupp
Goliath is tall, Samson is
span , lift-height , capacity each, combined
Breakwater Crane Railway
self-propelled steam crane that formerly ran the length of the breakwater at Douglas.
ran on gauge track, the broadest in the British Isles
Liebherr TCC 78000
Heavy-duty gantry crane used for heavy lifting operated in Rostock, Germany.
capacity, lift-height
Crane operators
Crane operators are skilled workers and heavy equipment operators.
Key skills that are needed for a crane operator include:
An understanding of how to use and maintain machines and tools
Good team working skills
Attention to detail
Good spatial awareness
Patience and the ability to stay calm in stressful situations
Terminology
The ISO 4306 series of specifications establishes the vocabulary for cranes:
Part 1: General
Part 2: Mobile cranes
Part 3: Tower cranes
Part 4: Jib cranes
Part 5: Bridge and gantry cranes
Luffing
Slewing
Hoisting
See also
Accredited Crane Operator Certification
Banksman
Cherry picker
Davit
Floating sheerleg
Gantry crane
Lifting devices with one, two, and three legs:
derrick
sheers
gyn
Overhead crane
Pallet
Patient lift
Sidelifter
Steam shovel
Taisun
Telescopic handler
References
Sources
History of cranes
Construction equipment
Heavy equipment
Lifting equipment
Vertical transport devices
Ancient Egyptian technology
Ancient Greek technology
Ancient inventions
Articles containing video clips | Crane (machine) | [
"Physics",
"Technology",
"Engineering"
] | 11,348 | [
"Machines",
"Transport systems",
"Construction equipment",
"Lifting equipment",
"Physical systems",
"Construction",
"Vertical transport devices",
"Cranes (machines)",
"Engineering vehicles",
"Industrial machinery"
] |
318,379 | https://en.wikipedia.org/wiki/Sud%20Aviation%20Super-Caravelle | The Sud Aviation Super-Caravelle was an early design for a supersonic transport. Unlike most competing designs which envisioned larger trans-Atlantic aircraft and led to the likes of the Boeing 2707, the Super-Caravelle was a much smaller, shorter range design intended to replace Sud Aviation's earlier and successful Caravelle. Design work started in 1960 and was announced in 1961 at the Paris Air Show, but was later merged with similar work at the British Aircraft Corporation (originally the Bristol 223) to create the Concorde project in November 1962. After work had begun on designing Concorde, the Super Caravelle name was instead used on a lengthened version of the original Caravelle design, the SE-210B.
Design
The Super-Caravelle looks very much like a smaller version of Concorde. It used Concorde's unique ogive wing planform, and was otherwise similar in shape and layout with the exception of the nose area, which was more conventional, with only the outermost section over the radar "drooping" for visibility on takeoff and landing. In normal use it was designed to carry up to 109 passengers at about Mach 2. The size and range requirements were set to make the Super-Caravelle "perfect" for Air France's European and African routes.
Concorde was originally to be delivered in two versions, a longer-range transatlantic version similar to the Bristol 223 that was eventually delivered as Concorde, and a smaller version for shorter range routes similar to the Super-Caravelle. After consultations with prospective customers, the smaller design was dropped.
Further reading
Operators' reference drawing
John Wegg, Caravelle - The Complete Story, 2005, Airways International Inc.
References
Abandoned civil aircraft projects
Concorde
1960s French airliners
Quadjets
Super Caravelle
Supersonic transports
Tailless delta-wing aircraft | Sud Aviation Super-Caravelle | [
"Physics"
] | 383 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
318,407 | https://en.wikipedia.org/wiki/Bristol%20Type%20223 | The Bristol Type 223 was an early design for a supersonic transport. In the late 1950s and early 1960s the Bristol Aeroplane Company studied a number of models as part of a large British inter-company effort funded by the government. These models eventually culminated in the Type 223, a transatlantic transport for about 100 passengers at a speed around Mach 2. At about the same time Sud Aviation in France was developing the similar Super-Caravelle design, and in November 1962 the efforts were merged to create the Concorde project.
Development
Background
In the UK, as elsewhere in the 1950s, the aero industry had been producing a series of supersonic test aircraft and had extensively studied the problems of sustained high-speed flight. By the mid-1950s, two designs had been shown to have a lift-to-drag ratio suitable for supersonic cruise, a sharply swept M-wing pioneered at Armstrong-Whitworth for slightly supersonic flight and very slender delta wings suitable for a wide range of speeds. Higher speeds up to Mach 3 had been considered and found to be possible, but it appeared that a practical upper limit was Mach 2.2; above this speed the duralumin used for most aircraft construction would start to soften due to the heat of friction, and some new material would have to be used instead. Stainless steel was considered, but the Bristol 188 proved this to be difficult and expensive.
STAC
By 1956 there was enough official interest in this research for the Supersonic Transport Aircraft Committee, or STAC, to be formed under Sir Morien Morgan to investigate the creation of a supersonic transport. Its first report, in 1959, recommended two designs. One was an M-wing Mach 1.2 medium range airliner and the other a straight wing, Mach 1.8 design with six wingtip engines. Soon after, however, studies at the Royal Aircraft Establishment began to favour the gothic delta and design contracts using this planform went to Hawker Siddeley and Bristol in late 1959. Both were asked to look at both Mach 2.2 aluminium alloy and Mach 2.7 stainless steel structures. Bristol's Mach 2.7 design was labelled the Type 213. Their designer, Archibald Russell, was influenced by the constructional problems and expense encountered with the Bristol 188 and favoured the lower speed alloy aircraft.
The thin wing design of the Type 213 was preferred by the STAC, and a 1961 contract encouraged a detailed series of studies of a 130-seat, Mach 2.2 aircraft powered by six Bristol Olympus engines under the generic Type 198 label. Aware of the great expense of the project, STAC required Bristol to share the cost with an overseas partner. In 1961, Sud Aviation revealed their plans for the Super-Caravelle at the Paris Air Show, a smaller aircraft than the Type 198. Bristol proposed a design which came between the Super Caravelle and the Type 198, which they called the Type 223; the French were looking at a slightly larger version of the Super Caravelle, and the two companies converged on a specification close enough to agree to build an aircraft jointly. Throughout 1962 they and their respective governments negotiated the formation of a consortium to share development and production costs, estimated at £15m-£170m. On 29 November 1962 an agreement was jointly signed by the UK Minister for Aviation, Julian Amery, and the French ambassador, Geoffrey de Courcel, and the Concorde project was underway.
References
Abandoned civil aircraft projects of the United Kingdom
Concorde
Type 223
1960s British airliners
Supersonic transports
Quadjets
Tailless delta-wing aircraft | Bristol Type 223 | [
"Physics"
] | 719 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
318,413 | https://en.wikipedia.org/wiki/French%20Institute%20for%20Research%20in%20Computer%20Science%20and%20Automation | The National Institute for Research in Digital Science and Technology (Inria) () is a French national research institution focusing on computer science and applied mathematics.
It was created under the name French Institute for Research in Computer Science and Automation (IRIA) () in 1967 at Rocquencourt near Paris, part of Plan Calcul. Its first site was the historical premises of SHAPE (central command of NATO military forces), which is still used as Inria's main headquarters. In 1980, IRIA became INRIA. Since 2011, it has been styled Inria.
Inria is a Public Scientific and Technical Research Establishment (EPST) under the double supervision of the French Ministry of National Education, Higher Education and Research and the Ministry of Economy, Finance and Industry.
Administrative status
Inria has nine research centers distributed across France (in Bordeaux, Grenoble-Inovallée, Lille, Lyon, Nancy, Paris-Rocquencourt, Rennes, Saclay, and Sophia Antipolis) and one center abroad in Santiago de Chile, Chile. It also contributes to academic research teams outside of those centers.
Inria Rennes is part of the joint Institut de recherche en informatique et systèmes aléatoires (IRISA) with several other entities.
Before December 2007, the three centers of Bordeaux, Lille and Saclay formed a single research center called INRIA Futurs.
In October 2010, Inria, with Pierre and Marie Curie University (now Sorbonne University) and Paris Diderot University, started IRILL, a center for innovation and research on free software.
Inria employs 3,800 people. Among them are 1,300 researchers, 1,000 Ph.D. students and 500 postdoctoral researchers.
Research
Inria does both theoretical and applied research in computer science. In the process, it has produced many widely used programs, such as
Bigloo, a Scheme implementation
CADP, a tool box for the verification of asynchronous concurrent systems
Caml, a language from the ML family
Caml Light and OCaml implementations
Chorus, microkernel-based distributed operating system
CompCert, verified C compiler for PowerPC, ARM and x86_32
Contrail
Coq, a proof assistant
CYCLADES, a network that pioneered the use of datagrams, functional layering, and the end-to-end strategy.
Eigen (C++ library)
Esterel, a programming language for State Automata
Geneauto — code-generation from model
Graphite, a research platform for computer graphics, 3D modeling and numerical geometry
Gudhi — A C++ library with Python interface for computational topology and topological data analysis
Le Lisp, a portable Lisp implementation
medInria, a medical image processing software, popularly used for MRI images.
GNU MPFR, an arbitrary-precision floating-point library
OpenViBE, a software platform dedicated to designing, testing and using brain–computer interfaces.
Pharo, an open-source Smalltalk derived from Squeak.
scikit-learn, a machine learning software package
Scilab, a numerical computation software package
SimGrid
SmartEiffel, a free Eiffel compiler
SOFA, an open source framework for multi-physics simulation with an emphasis on medical simulation.
TOM, a pattern matching language
ViSP, an open source visual servoing platform library
XtreemFS
XtreemOS, a grid distributed operating system
Zenon, an extensible automated theorem prover producing checkable proofs
Inria furthermore leads French AI Research, ranking 12th worldwide in 2019, based on accepted publications at the prestigious Conference on Neural Information Processing Systems.
History
During the summer of 1988, the INRIA connected its Sophia-Antipolis unit to the NSFNet via Princeton using a satellite link leased to France Telecom and MCI. The link became operational on 8 August 1988, and allowed INRIA researchers to access the US network and allowed NASA researchers access to an astronomical database based in Strasbourg. This was the first international connection to NSFNET and the first time that French networks were connected directly to a network using TCP/IP, the Internet protocol. The Internet in France was limited to research and education for some years to come.
References
See also
Stratégie nationale pour l'intelligence artificielle
Computer science research organizations
History of computing in France
Scientific agencies of the government of France
Theoretical computer science
Computer science institutes in France
Members of the European Research Consortium for Informatics and Mathematics
Information technology research institutes
Carnot label | French Institute for Research in Computer Science and Automation | [
"Mathematics",
"Technology"
] | 916 | [
"Theoretical computer science",
"Applied mathematics",
"History of computing",
"History of computing in France"
] |
318,439 | https://en.wikipedia.org/wiki/Text%20mining | Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005), there are three perspectives of text mining: information extraction, data mining, and knowledge discovery in databases (KDD). Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via the application of natural language processing (NLP), different types of algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information.
A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted. The document is the basic element when starting with text mining. Here, we define a document as a unit of textual data, which normally exists in many types of collections.
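To make the idea of "turning text into data" concrete, here is a minimal sketch, assuming the scikit-learn library mentioned later in this article: a handful of toy documents (invented for illustration) are converted into a TF-IDF term-document matrix, the structured form on which pattern-deriving methods operate.

```python
# Minimal sketch: turning raw text into data for analysis via a TF-IDF
# term-document matrix (assumes scikit-learn; the documents are invented).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Text mining derives high-quality information from text.",
    "Text analytics structures textual sources for business intelligence.",
    "Sentiment analysis estimates how favorable a review is.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs)  # sparse documents-by-terms matrix

# Each document is now a weighted term vector, ready for clustering,
# classification, or similarity queries.
print(matrix.shape)
print(vectorizer.get_feature_names_out()[:5])
```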
Text analytics
Text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation. The term is roughly synonymous with text mining; indeed, Ronen Feldman modified a 2000 description of "text mining" in 2004 to describe "text analytics". The latter term is now used more frequently in business settings while "text mining" is used in some of the earliest application areas, dating to the 1980s, notably life-sciences research and government intelligence.
The term text analytics also describes that application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80% of business-relevant information originates in unstructured form, primarily text. These techniques and processes discover and present knowledge – facts, business rules, and relationships – that is otherwise locked in textual form, impenetrable to automated processing.
Text analysis processes
Subtasks—components of a larger text-analytics effort—typically include:
Dimensionality reduction is an important technique for pre-processing data. It is used to identify the root word for actual words and reduce the size of the text data.
Information retrieval or identification of a corpus is a preparatory step: collecting or identifying a set of textual materials, on the Web or held in a file system, database, or content corpus manager, for analysis.
Although some text analytics systems apply exclusively advanced statistical methods, many others apply more extensive natural language processing, such as part of speech tagging, syntactic parsing, and other types of linguistic analysis.
Named entity recognition is the use of gazetteers or statistical techniques to identify named text features: people, organizations, place names, stock ticker symbols, certain abbreviations, and so on.
Disambiguation—the use of contextual clues—may be required to decide where, for instance, "Ford" can refer to a former U.S. president, a vehicle manufacturer, a movie star, a river crossing, or some other entity.
Recognition of pattern-identified entities: Features such as telephone numbers, e-mail addresses, and quantities (with units) can be discerned via regular expression or other pattern matches (as in the sketch after this list).
Document clustering: identification of sets of similar text documents.
Coreference resolution: identification of noun phrases and other terms that refer to the same object.
Extraction of relationships, facts and events: identification of associations among entities and other information in texts.
Sentiment analysis: discerning of subjective material and extracting information about attitudes: sentiment, opinion, mood, and emotion. This is done at the entity, concept, or topic level and aims to distinguish opinion holders and objects.
Quantitative text analysis: a set of techniques stemming from the social sciences where either a human judge or a computer extracts semantic or grammatical relationships between words in order to find out the meaning or stylistic patterns of, usually, a casual personal text for the purpose of psychological profiling etc.
Pre-processing usually involves tasks such as tokenization, filtering and stemming.
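As a concrete illustration of two of these subtasks, the sketch below shows regex-based recognition of pattern-identified entities and simple pre-processing (tokenization, stop-word filtering, stemming). The regular expressions and the tiny stop-word list are illustrative assumptions rather than production rules, and NLTK's PorterStemmer is assumed to be available.

```python
# Hedged sketch of pattern-identified entity recognition and pre-processing.
# The patterns and stop-word list below are toy assumptions for illustration.
import re
from nltk.stem import PorterStemmer

text = "Contact support@example.com or +1-555-0100 about the 25 kg shipment."

# Pattern-identified entities via regular expressions.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
phones = re.findall(r"\+?\d[\d\- ]{7,}\d", text)
quantities = re.findall(r"\d+\s?(?:kg|g|lb|m|cm)\b", text)

# Pre-processing: tokenize, filter stop words, stem.
stop_words = {"the", "or", "about"}          # toy list for illustration
tokens = re.findall(r"[a-z]+", text.lower())
stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in tokens if t not in stop_words]

print(emails, phones, quantities)
print(stems)
```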
Applications
Text mining technology is now broadly applied to a wide variety of government, research, and business needs. All these groups may use text mining for records management and searching documents relevant to their daily activities. Legal professionals may use text mining for e-discovery, for example. Governments and military groups use text mining for national security and intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data (i.e., addressing the problem of unstructured data), to determine ideas communicated through text (e.g., sentiment analysis in social media) and to support scientific discovery in fields such as the life sciences and bioinformatics. In business, applications are used to support competitive intelligence and automated ad placement, among numerous other activities.
Security applications
Many text mining software packages are marketed for security applications, especially monitoring and analysis of online plain text sources such as Internet news, blogs, etc. for national security purposes. It is also involved in the study of text encryption/decryption.
Biomedical applications
A range of text mining applications in the biomedical literature has been described, including computational approaches to assist with studies in protein docking, protein interactions, and protein-disease associations. In addition, with large patient textual datasets in the clinical field, datasets of demographic information in population studies and adverse event reports, text mining can facilitate clinical studies and precision medicine. Text mining algorithms can facilitate the stratification and indexing of specific clinical events in large patient textual datasets of symptoms, side effects, and comorbidities from electronic health records, event reports, and reports from specific diagnostic tests. One online text mining application in the biomedical literature is PubGene, a publicly accessible search engine that combines biomedical text mining with network visualization. GoPubMed is a knowledge-based search engine for biomedical texts. Text mining techniques also enable us to extract unknown knowledge from unstructured documents in the clinical domain.
Software applications
Text mining methods and software are also being researched and developed by major firms, including IBM and Microsoft, to further automate the mining and analysis processes, and by different firms working in the area of search and indexing in general as a way to improve their results. Within the public sector, much effort has been concentrated on creating software for tracking and monitoring terrorist activities. For study purposes, Weka software is one of the most popular options in the scientific world, acting as an excellent entry point for beginners. For Python programmers, there is an excellent toolkit called NLTK for more general purposes. For more advanced programmers, there is also the Gensim library, which focuses on word embedding-based text representations, as sketched below.
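For instance, a minimal sketch of the word-embedding-based representation Gensim focuses on might look like this (assuming Gensim 4.x; the toy corpus is invented and far too small to produce meaningful vectors):

```python
# Hedged sketch of word-embedding-based text representation with Gensim.
# The corpus is an invented toy; real training needs far more text.
from gensim.models import Word2Vec

corpus = [
    ["text", "mining", "extracts", "information", "from", "text"],
    ["text", "analytics", "structures", "textual", "information"],
    ["mining", "derives", "patterns", "from", "documents"],
]

model = Word2Vec(sentences=corpus, vector_size=32, window=2, min_count=1)

# Each word now has a dense vector; similar words end up close together
# (only approximately, given this toy corpus).
print(model.wv["mining"].shape)
print(model.wv.most_similar("mining", topn=2))
```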
Online media applications
Text mining is being used by large media companies, such as the Tribune Company, to clarify information and to provide readers with greater search experiences, which in turn increases site "stickiness" and revenue. Additionally, on the back end, editors are benefiting by being able to share, associate and package news across properties, significantly increasing opportunities to monetize content.
Business and marketing applications
Text analytics is being used in business, particularly, in marketing, such as in customer relationship management. Coussement and Van den Poel (2008) apply it to improve predictive analytics models for customer churn (customer attrition). Text mining is also being applied in stock returns prediction.
Sentiment analysis
Sentiment analysis may involve analysis of products such as movies, books, or hotel reviews for estimating how favorable a review is for the product.
Such an analysis may need a labeled data set or labeling of the affectivity of words.
Resources for affectivity of words and concepts have been made for WordNet and ConceptNet, respectively.
Text has been used to detect emotions in the related area of affective computing. Text-based approaches to affective computing have been used on multiple corpora, such as student evaluations, children's stories and news stories.
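A minimal sketch of this kind of review-polarity estimation, assuming scikit-learn and an invented, toy-sized labeled data set, could look like the following:

```python
# Hedged sketch: estimating how favorable a review is from a small labeled
# data set (assumes scikit-learn; the training corpus is invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great movie, loved it", "terrible hotel, awful stay",
           "wonderful book", "boring and bad film"]
labels = [1, 0, 1, 0]  # 1 = favorable, 0 = unfavorable

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# predict_proba gives an estimate of how favorable a new review is.
print(model.predict_proba(["an awful, boring movie"])[0][1])
```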
Scientific literature mining and academic applications
The issue of text mining is of importance to publishers who hold large databases of information needing indexing for retrieval. This is especially true in scientific disciplines, in which highly specific information is often contained within the written text. Therefore, initiatives have been taken such as Nature's proposal for an Open Text Mining Interface (OTMI) and the National Institutes of Health's common Journal Publishing Document Type Definition (DTD) that would provide semantic cues to machines to answer specific queries contained within the text without removing publisher barriers to public access.
Academic institutions have also become involved in the text mining initiative:
The National Centre for Text Mining (NaCTeM), is the first publicly funded text mining centre in the world. NaCTeM is operated by the University of Manchester in close collaboration with the Tsujii Lab, University of Tokyo. NaCTeM provides customised tools, research facilities and offers advice to the academic community. They are funded by the Joint Information Systems Committee (JISC) and two of the UK research councils (EPSRC & BBSRC). With an initial focus on text mining in the biological and biomedical sciences, research has since expanded into the areas of social sciences.
In the United States, the School of Information at University of California, Berkeley is developing a program called BioText to assist biology researchers in text mining and analysis.
The Text Analysis Portal for Research (TAPoR), currently housed at the University of Alberta, is a scholarly project to catalogue text analysis applications and create a gateway for researchers new to the practice.
Methods for scientific literature mining
Computational methods have been developed to assist with information retrieval from scientific literature. Published approaches include methods for searching, determining novelty, and clarifying homonyms among technical reports.
Digital humanities and computational sociology
The automatic analysis of vast textual corpora has created the possibility for scholars to analyze millions of documents in multiple languages with very limited manual intervention. Key enabling technologies have been parsing, machine translation, topic categorization, and machine learning.
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by quantitative narrative analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.
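A hedged sketch of this pipeline's final step, assuming the networkx library: hand-written subject-verb-object triplets (which a parser would produce in practice) are turned into a directed actor network, and key actors are identified with a centrality measure from network theory.

```python
# Hedged sketch: subject-verb-object triplets become a directed actor
# network analyzed with a centrality measure (assumes networkx; the
# triplets are invented stand-ins for parser output).
import networkx as nx

triplets = [
    ("parliament", "passed", "bill"),
    ("president", "vetoed", "bill"),
    ("court", "upheld", "veto"),
    ("president", "praised", "court"),
]

G = nx.DiGraph()
for subj, verb, obj in triplets:
    G.add_edge(subj, obj, action=verb)  # pairs of actors linked by an action

# Identify the key actors by degree centrality.
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(node, round(score, 2))
```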
Content analysis has been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items. Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents. The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al. showing how different topics have different gender biases and levels of readability; the possibility to detect mood patterns in a vast population by analyzing Twitter content was demonstrated as well.
Software
Text mining computer programs are available from many commercial and open source companies and sources.
Intellectual property law
Situation in Europe
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is illegal. In the UK in 2014, on the recommendation of the Hargreaves review, the government amended copyright law to allow text mining as a limitation and exception. It was the second country in the world to do so, following Japan, which introduced a mining-specific exception in 2009. However, owing to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law does not allow this provision to be overridden by contractual terms and conditions.
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licenses for Europe. The fact that the focus on the solution to this legal issue was licenses, and not limitations and exceptions to copyright law, led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.
Situation in the United States
US copyright law, and in particular its fair use provisions, means that text mining in America, as well as other fair use countries such as Israel, Taiwan and South Korea, is viewed as being legal. As text mining is transformative, meaning that it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one such use being text and data mining.
Situation in Australia
There is no exception in copyright law of Australia for text or data mining within the Copyright Act 1968. The Australian Law Reform Commission has noted that it is unlikely that the "research and study" fair dealing exception would extend to cover such a topic either, given it would be beyond the "reasonable portion" requirement.
Implications
Until recently, websites most often used text-based searches, which only found documents containing specific user-defined words or phrases. Now, through use of a semantic web, text mining can find content based on meaning and context (rather than just by a specific word). Additionally, text mining software can be used to build large dossiers of information about specific people and events. For example, large datasets based on data extracted from news reports can be built to facilitate social networks analysis or counter-intelligence. In effect, the text mining software may act in a capacity similar to an intelligence analyst or research librarian, albeit with a more limited scope of analysis. Text mining is also used in some email spam filters as a way of determining the characteristics of messages that are likely to be advertisements or other unwanted material. Text mining plays an important role in determining financial market sentiment.
See also
Concept mining
Document processing
Full text search
List of text mining software
Market sentiment
Name resolution (semantics and text extraction)
Named entity recognition
News analytics
Ontology learning
Record linkage
Sequential pattern mining (string and sequence mining)
w-shingling
Web mining, a task that may involve text mining (e.g. first find appropriate web pages by classifying crawled web pages, then extract the desired information from the text content of these pages considered relevant)
References
Citations
Sources
Ananiadou, S. and McNaught, J. (Editors) (2006). Text Mining for Biology and Biomedicine. Artech House Books.
Bilisoly, R. (2008). Practical Text Mining with Perl. New York: John Wiley & Sons.
Feldman, R., and Sanger, J. (2006). The Text Mining Handbook. New York: Cambridge University Press.
Hotho, A., Nürnberger, A. and Paaß, G. (2005). "A brief survey of text mining". In Ldv Forum, Vol. 20(1), p. 19-62
Indurkhya, N., and Damerau, F. (2010). Handbook of Natural Language Processing, 2nd Edition. Boca Raton, FL: CRC Press.
Kao, A., and Poteet, S. (Editors). Natural Language Processing and Text Mining. Springer.
Konchady, M. Text Mining Application Programming (Programming Series). Charles River Media.
Manning, C., and Schutze, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
Miner, G., Elder, J., Hill. T, Nisbet, R., Delen, D. and Fast, A. (2012). Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications. Elsevier Academic Press.
McKnight, W. (2005). "Building business intelligence: Text data mining in business intelligence". DM Review, 21–22.
Srivastava, A., and Sahami. M. (2009). Text Mining: Classification, Clustering, and Applications. Boca Raton, FL: CRC Press.
Zanasi, A. (Editor) (2007). Text Mining and its Applications to Intelligence, CRM and Knowledge Management. WIT Press.
External links
Marti Hearst: What Is Text Mining? (October 2003)
Automatic Content Extraction, Linguistic Data Consortium
Automatic Content Extraction, NIST
Applied data mining
Computational linguistics
Natural language processing
Statistical natural language processing
Text | Text mining | [
"Technology"
] | 3,655 | [
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
318,466 | https://en.wikipedia.org/wiki/Agent%20Smith | Agent Smith (later simply Smith) is a fictional character and the main antagonist of The Matrix franchise. The character was primarily portrayed by Hugo Weaving in the first trilogy of films and voiced by Christopher Corey Smith in The Matrix: Path of Neo (2005), with Ian Bliss and Gideon Emery playing his human form, Bane, in the films and Path of Neo respectively. He also makes a cameo in the anime film The Animatrix (2003), voiced by Matt McKenzie. Jonathan Groff and Yahya Abdul-Mateen II portray Smith in The Matrix Resurrections (2021), the latter playing Morpheus in a dual role.
In 2008, Agent Smith was selected by Empire as the 84th Greatest Movie Character of All Time. In 2013, Weaving reprised the role for a General Electric advertisement. He is considered to be the archenemy of Neo, the main protagonist of the story.
Overview
Smith began as an Agent, an AI program in the Matrix programmed to keep order within the system by terminating human simulacra that would bring instability to the simulated reality, as well as any rogue programs that no longer serve a purpose to the Machine collective. To this end, Smith and his fellow Agents possess a number of superhuman attributes from their ability to bend the rules of the Matrix. Smith manifests his physical form by inhabiting and overwriting the simulated body of a human wired into the Matrix; by moving from body to body, he can reform himself if he is "killed" (which only kills the host body) and appear virtually anywhere. He can overcome the limitations of gravity and the human body, giving him speed and strength sufficient to dodge bullets flawlessly, punch through concrete with his bare hands, jump impossible distances, and easily recover from devastating physical assaults. He and other Agents wear white dress shirts under black business suits with matching black neckties, and sunglasses with darkened rectangular lenses. They use earpiece radios that allow them to communicate with each other instantaneously and perceive the actions of other humans wired into the Matrix via a type of shared consciousness. When Smith removes his earpiece during the first film, he is left unaware of the attack on the building in which he is holding Morpheus. Smith is armed in the first film with the Desert Eagle, chambered for high-caliber .50 AE ammunition, as is standard with all Agents within the Matrix.
At the end of the first film, Smith appears to have been deleted by Neo. However, in the sequels, Smith is revealed to have been linked to Neo, which enabled him to resist being sent to the Source – the Machines' mainframe, where obsolete or malfunctioning programs are deleted. No longer an Agent, Smith is liberated from the Machines' control and exists as a renegade program that manifests himself akin to a self-replicating computer virus compared to his original Agent-based ability to inhabit a single body wired into the Matrix. Smith gains the power to copy his physical form onto any entity in the Matrix by phasing his hand into their body and spreading a black liquid that transforms them into a copy of himself, resulting in an ever-growing army of Smiths connected by a single consciousness. By copying himself onto a human redpill in the process of disconnecting from the Matrix, Smith overwrites their consciousness and takes control of their body in the real world. This is seen when Smith takes over Bane's body in The Matrix Reloaded; however, he is repelled when he attempts to do the same to Morpheus and Neo. Smith's real power comes from his ability to absorb memories and powers from his victims, human and program alike, culminating in him taking over the Oracle and fighting Neo in the final battle of the Matrix series. Neo allows himself to be overwritten during the battle, thus giving the Machines an opportunity to delete Smith and return the Matrix and its inhabitants to normal.
Character history
The Matrix
In the first film, Smith is one of the three Agents sent to deal with Morpheus. After Neo is successfully removed from the Matrix, Smith arranges Morpheus' capture by bribing Cypher, a disillusioned member of Morpheus' crew, with reintegration into the Matrix. Upon capturing Morpheus, he attempts, to no avail, to get Morpheus to supply the codes to Zion's mainframe, eventually being forced to admit to Morpheus that his personal motive for seeking the mainframe codes is his wish to get away from the Matrix. However, he briefly removes his earpiece and thus misses key intel about Neo and Trinity's entry into Morpheus's holding area. When Neo manages to free Morpheus, Smith orders the dispatch of Sentinels to the Nebuchadnezzar and then interferes with Neo's escape. Neo manages to put up a fight against Smith, and narrowly escapes after Smith attempts to have Neo run over by a train. Smith survives and, alongside his fellow Agents, engages in a lengthy cross-town chase. Ultimately, Smith anticipates Neo's final destination and guns him down. Neo revives, realizes his power as the One, and subsequently defeats Smith by entering his body and destroying the code from within.
The Matrix Reloaded
As a result of his contact with Neo from the first film, Smith is "unplugged" in the second film, no longer an Agent of the system but a "free man". This is signified by the lack of an earpiece, which he sends to Neo in an envelope as a message early in the film. His appearance has changed in the second film as well; his sunglasses now have an angular shape different from the Agents' oblong lenses, approximating the shape of the ones Neo wears. His suit and tie are now jet black, as opposed to the dark green tint from the first film. He still possesses the abilities of an Agent, but instead of being able to jump from one human to another, he is able to copy himself over any human or program in the Matrix through direct contact; this includes humans wired into the Matrix, non-Agent programs with human forms, redpills, and other Agents. Smith retains the memories and abilities, if any, of the one over which he copies himself. This ability is much like how a virus replicates, creating an ironic contrast with the first film, where Smith likens humanity to a virus. He also implies, after Neo defeats his replacement Agents – Thompson, Jackson, and Johnson – that he had existed during and was familiar with at least the fifth iteration of the Matrix and the events therein.
He makes the claim that Neo has set him free. However, he believes there is an unseen purpose that still binds him to Neo. He tries to copy his programming onto Neo, but when this fails, he and dozens of his clones attack him, forcing Neo to flee. Later, he and his clones try to stop Neo from reaching the machine mainframe, without success, although he nonetheless was successful in mortally wounding the Keymaker.
Smith copies himself onto Bane (Ian Bliss), a crew member of the Zion hovercraft Caduceus. While waiting to leave the Matrix with a message from The Oracle, Bane is attacked and overwritten by Smith, who then takes control of his body in the real world. Smith tests his control over the body by making Bane cut his own left hand palm, in preparation for an assassination attempt on Neo that he quickly abandons. He later sabotages the Zion fleet's defense of the city by triggering one ship's electromagnetic pulse weapon too early, knocking out the other ships and allowing the Sentinels to overrun them.
The Matrix Revolutions
By the start of the third film, Smith has managed to copy himself over nearly every humanoid in the Matrix, giving him complete control over the "Core Network" (the underlying foundation of the inner workings of the Matrix), thus rendering him unstoppable even for the Machines themselves. The Oracle explains to Neo that he and Smith have become equal in power and that Smith is Neo's negative, a result of the Matrix's equation trying to balance itself. She tells Neo that Smith will destroy both the Matrix and the real world unless he is stopped. Smith soon assimilates the Oracle, gaining her power of foresight, and later manifests reality-bending powers equivalent to Neo's, such as the ability to fly. Meanwhile, in the real world, Bane (now under Smith's control) stows away on a ship being used by Neo and Trinity and tries to kill them both. Neo is blinded in the fight, but discovers that his new awareness of Machine technology allows him to perceive Smith's essence despite his destroyed eyes, allowing him to take Smith by surprise and kill him.
Near the climax of the film, Neo offers the Machines a deal to get rid of Smith in exchange for Zion's safety, warning them that Smith is beyond their control and will eventually spread to the machine city, which would result in the destruction of both mankind and machines. Knowing that Neo is right, the Machines agree to his terms and command all Sentinels attacking Zion to stand down and await orders. They later give Neo a connection to enter the Matrix and stop Smith on their behalf. Although the Matrix is now populated exclusively by Smith and his clones, the Smith that has obtained the Oracle's powers battles Neo alone; as he explains, he has foreseen his victory and has no need for the help of his copies. The two are almost evenly matched as the fight begins, though Neo's combat abilities seem arguably superior to those of Smith, the latter attacking more out of brute force than with the technical skill he displayed in the first film. This lasts until Neo is able to punch Smith strongly enough to slam him into the street at least 20 ft away. As the fight continues, however, it becomes clear that Neo cannot win with his finite stamina against the tireless Smith, who begins to dominate Neo in the fight; by the end of the fight, he is able to brutally beat Neo into near defeat. In the midst of this battle, Smith explains to Neo his final nihilistic revelation: "It was your life that taught me the purpose of all life. The purpose of life is to end."
When Neo is near defeat, Smith demands to know why he continues to fight despite knowing he cannot win. Neo calmly responds, "Because I choose to" and is viciously pummeled by the enraged Smith as a result. Suddenly recognizing the scene from his prophecy, Smith is compelled to deliver the line he said in it: "I say.... Everything that has a beginning has an end, Neo." His own words confuse and frighten him and Neo realizes that he cannot overpower Smith and allows himself to be assimilated. Because Agent Smith has assimilated the anomaly (Neo), he is now directly connected to the Source through Neo and the machines are able to destroy all copies of his programming and reboot the Matrix without errors. The process apparently kills Neo, but it also purges the Matrix of Smith's infection, restoring all who had been infected to their original forms. Neo's body is carried away by the machines, and an uncertain peace is established between Zion and the machine world.
The Matrix Resurrections
Smith returns in The Matrix Resurrections, portrayed by Jonathan Groff. Despite his defeat at the end of The Matrix Revolutions, Smith survived destruction because Neo survived, though he lost the ability to copy himself over others, instead retaining only the abilities he possessed when he was an Agent. When the Analyst created the new version of the Matrix in order to keep Neo subdued so that the Machines' energy crisis could be solved, Smith took on a new shell in order to remain hidden. The Analyst, the creator of the new Matrix, found that Neo and Smith were bonded, and he chose to turn that bond into a 'chain': as Neo was suppressed, Smith was similarly suppressed, taking the role of Thomas Anderson's business partner, with an eye for the bottom line. Neo, in his original persona of Thomas Anderson, created a video game series based on his suppressed memories. After Neo reawakens to the Matrix, Smith regained his memories and attacked Neo, stating that he had come to like the freedom that he had been granted, and that Neo's potential return to unawareness threatened that freedom. Smith then appears at Simulatte, during Neo and Trinity's confrontation with the Analyst, saving them and aids them in fighting the Analyst's forces. Smith shoots the Analyst, causing him to vanish. Addressing Neo as Tom, Smith declares their unexpected alliance to be over, and states that the difference between the two of them is that "anyone could've been you whereas I've always been anyone." Smith then departs from his host body, leaving the man confused by the experience.
Neo also subconsciously created a version of Agent Smith in a modal influenced by his suppressed memories. This version of Agent Smith (portrayed by Yahya Abdul-Mateen II) was based upon Neo's memories of Morpheus amalgamated with his memories of the original Agent Smith, and was set free by Bugs, and became the new Morpheus.
In other media
The Animatrix
While it is unknown whether it is actually him or merely another Agent, as he was not directly named, an Agent with a heavy resemblance to Smith appears in The Animatrix film "Beyond", ordering a group of exterminators to capture Yoko and a group of kids and destroy a programming glitch in the form of an abandoned building that was causing whoever entered it to achieve complex athletic stunts without danger of serious injury or death. Earlier, he and the Agents perceived the abandoned building as an instability in the Matrix programming, and were already planning to eliminate it. Another Agent appears in "World Record," again resembling Smith, but wearing a trench coat over his usual suit and tie, where he and his fellow Agents attempt to stop a runner named Dan from breaking a world record and disrupting his "signal," or connection to the Matrix, which would mean being able to escape from the Matrix. The Agents possess Dan's competitors and try to stop him from reaching the finish line and breaking his record. He appears at the end, reporting that Dan is a wheelchair user and thus unable to run or walk again, until he notices him trying to get up and repeatedly whispering "free," which enrages him. However, when Dan instead falls to the floor and is helped up, the Agent is nowhere to be seen.
The Matrix Online
Despite his destruction at the end of the film series, Agent Smith (or at least the remnants of his programming) managed to return and made several appearances inside the movie's official continuation, the MMORPG The Matrix Online.
The first infection was noted in Machine mission controller Agent Gray, whose background information confirms that he was overwritten by Smith at some point during the timeline of the second and third films. This infection had somehow survived the reboot at the end of the third film and rose to the surface once again during chapter 1.2, The Hunt For Morpheus. The Agent, in both a storyline related mission and live event, showed signs of uncharacteristic speech and emotion and eventually led an assault against Zionist redpills declaring 'their stench unbearable any longer'. As a result of his actions the agent was apprehended by his fellow system representatives and scheduled for a 'thorough code cleansing'. He has shown no signs of direct infection since.
Machine liaison officer DifferenceEngine, following a similar scenario to that of the previous Agent Gray infection, also took on the dialect and emotional characteristics of the famous exile agent. Instead of attacking redpills, this instance insisted on finding 'Mr. Anderson'. In the end, the human/machine head relations liaison, Agent Pace, was made aware of the program's infection and subsequent crusade; she proceeded to lock down his RSI and return his program to the Source for analysis. His subsequent fate is unknown.
The third victim of infection was the notorious bluepill Shane Black, who, once infected with the Smith virus, gained the ability to spread the code to others. This quickly led to a small-scale outbreak, with several more bluepills becoming infected and joining forces in their hunt for power. He and the other infected were eventually cleansed and returned to their bluepill lives. Shane Black's troubles continued, however: he was one of the bluepills recorded as having first witnessed Unlimited redpills practicing their newfound powers at the Uriah wharf. This triggered a resurgence of the memories formed during his Smith infection, and he soon became volatile and insane. He is reported to have been mercy-killed shortly afterwards.
The most recent appearance of the Smith virus was during the game's third-anniversary events. The virus manifested itself in the form of black-suited men, although they lacked the distinct likeness of Smith. As redpills began to fight back using specialist code from the Oracle, the virus suddenly vanished, stating that it had obtained a new and more dangerous form. The nature of this form was never revealed.
The Matrix: Path of Neo
The Matrix: Path of Neo, a video game covering the events of the entire film trilogy, features a different ending than that shown in The Matrix Revolutions, with a new final boss: the MegaSmith. The MegaSmith was used for gameplay reasons, because though the Wachowskis thought the martyr approach suitable for film, they also believed that in an interactive medium such as a video game (based upon the successful completion of goals), this would not work. So, this character was created to be the more appropriate "final boss" of Path of Neo, with the final battle described by the siblings as "A little Hulk versus Galactus action". The MegaSmith is composed of destroyed buildings, cars, and parts of the road, with the "spectator Smiths" standing around the crater and in the streets acting as the MegaSmith's muscles, resulting in Smith not only becoming the city's people, but the city itself.
After Neo knocks Smith into the crater in the level "Aerial Battle", Smith is sent flying through the ground and up through the street. As Neo relaxes, the surrounding Smiths walk away from the crater and begin assembling a gigantic, thirty-storey-tall version of Smith from debris and vehicles. Neo flies up to face the MegaSmith. After the fight, in which Neo significantly damages the MegaSmith, Neo flies straight into the MegaSmith's mouth, causing the Smiths throughout the Matrix to overload and explode. The player is then shown a short scene from The Matrix Revolutions of the streets shining with light emanating from the destroyed Smiths.
The Lego Batman Movie (2017)
Agent Smith briefly appears in The Lego Batman Movie as one of the inmates of the Phantom Zone. Smith and his clones are seen surveilling Joker's vandalized Wayne Island, and later appear among the multiple enemies attacking the heroes. Smith's clones also appear as enemies in The Lego Batman Movie story pack for Lego Dimensions, adapting their role in the film.
His voice actor was uncredited.
Space Jam: A New Legacy (2021)
Agent Smith also appeared in the live-action/animated film Space Jam: A New Legacy, which was also distributed by Warner Bros. He is among the Warner Bros. Serververse inhabitants that watch the basketball game between the Tune Squad and the Goon Squad.
MultiVersus (2024)
Smith appears as a Bruiser fighter in the fighting game MultiVersus, voiced by Sky Soleil.
Personality
From the start, it is evident that Agent Smith is significantly stronger, smarter, and more individualistic than the other Agents. While the other agents rarely act without consulting each other via their earpieces, to the point where they often finish each other's sentences, Smith is usually the one giving orders or using his earpiece to gather information for his own ends. Smith also appears to be the leader of other Agents in the first film, as he has the authority to launch Sentinel attacks in the real world. As with other Agents, Smith generally approaches problems through a pragmatic point of view but, if necessary, will also act with brute force and apparent rage, especially when provoked by Neo.
The earpieces represent some form of control mechanism by the machines. Notably, when interrogating Morpheus, Smith sends the other Agents from the room and then removes his earpiece, releasing himself from the link to the machines, before expressing his opinion of humanity. Early in the second film, Smith's earpiece is delivered to Neo in an envelope as a message from Smith, representing Smith's newfound freedom.
Agent Smith complains to Morpheus that the Matrix and its inhabitants smell disgusting, "if there is such a thing [as smell]". Smith has an open hatred of humans and their weakness of the flesh. He compares humanity to a virus: a disease organism that replicates uncontrollably and would inevitably destroy its environment were it not for the machine intelligences keeping it in check (although, strictly speaking, viruses are not organisms). Ironically, Smith himself eventually becomes a computer virus, multiplying until he has overrun the entire Matrix.
At the same time, Smith develops an animosity towards the Matrix itself, feeling that he is as much a prisoner of it as the humans he is tasked with controlling. He later develops an immense and increasingly open desire for the destruction of both mankind and machines.
Smith is also shown to be a nihilist, a stance that culminates in his statement that the purpose of life is to end, a conclusion he credits to observing Neo's life. During their final showdown, Smith angrily dismisses causes such as freedom, truth, peace, and love as merely human attempts to justify a meaningless and purposeless existence, and he is completely unable to comprehend why Neo continues to fight him despite knowing that he cannot win.
The Wachowskis have commented that Smith's gradual humanization throughout The Matrix is a process intended to mirror and balance Neo's own increasing power and understanding of the machine world.
Portrayal
French actor Jean Reno was originally offered the role of Agent Smith in The Matrix, but he declined: he was at a point in his career at which he did not want to leave his native France, and he was unwilling to move to Australia for a four-and-a-half-month shoot. Hugo Weaving was ultimately cast as Smith. According to Weaving, he enjoyed playing the character because it amused him. He developed a neutral accent, but one with enough specific character for the role, wanting Smith to sound neither human nor robotic. He also said that the Wachowskis' voices influenced his voice in the film. When filming for The Matrix began, Weaving mentioned that he was excited to be a part of something that would extend him.
Following the announcement that Warner Bros. was planning a relaunch of The Matrix franchise, Hugo Weaving stated that he was open to reprising the role, but only if the Wachowskis were involved. In 2019, The Matrix Resurrections was confirmed for a 2021 release, but Weaving would not be returning. Weaving was originally approached to reprise the role by Lana Wachowski, but he had scheduling conflicts with his involvement in Tony Kushner's theatrical adaptation of The Visit, leading Wachowski to conclude that the dates would not work and to write him out of the film. Jonathan Groff was cast to replace Weaving in the role, with Yahya Abdul-Mateen II portraying a version of Smith inside a modal created by Neo.
Design
All Agents (other than Agents Perry and Pace from The Matrix Online game, and the modal version of Agent Smith that becomes Morpheus in The Matrix Resurrections) are white males, as opposed to the population of Zion, which contains people of many ethnic groups. Agents wear rectangular sunglasses, black business suits and neckties, and earpiece radios. This is similar to a stereotypical portrayal of a government agent or "Man in Black." When Smith loses his status as an Agent, his suit and tie lose the greenish hue present on everything in the Matrix, suggesting he is no longer a part of it, and his sunglasses take on an angled contour that approximates the rounded shape of the ones Neo wears. Smith also removes his earpiece and sends it to Neo. In contrast to the other Agents who show apathy toward the human race, Smith harbors an acute disgust with humanity. In the first film, he expresses a desire to leave the Matrix to escape its repulsive taint, and reasons that with Zion destroyed, his services will no longer be required, allowing him in some sense to 'leave' the Matrix. This at least partially explains his extreme antagonism towards Neo, who fights relentlessly to save Zion.
Other Agents have common English names like Brown, Jones, and Thompson. It was mentioned in the Philosopher Commentary on the DVD collection that the names of Smith, Brown, and Jones may be endemic to the system itself, demonstrating a very "robotic" mindset on the part of the Machines.
Neo's solitary role as the One is contrasted by Smith, who, by replicating himself, becomes "the many". When Neo asks the Oracle about Smith, the Oracle explains that Smith is Neo's opposite and his negative, the result of the Matrix's governing equations trying to balance themselves.
Unlike the other characters in The Matrix, Smith almost always refers to Neo as "Mr. Anderson". He calls him "Neo" only once in each part of the trilogy: the first time when he is interviewing Neo about his double life, the second when he is dropping off his earpiece for Neo, and the third when he is repeating a line of his vision to Neo.
Weaving said of the film series in 2003 that it was always going to be a trilogy, and that as Neo's nemesis, Smith was always going to be there, describing Smith as "more of a free agent" later on in the series.
Reception
Christopher Borrelli praised the writing of Smith, noting that the character "had all the good lines", and praising Weaving's portrayal of the character as showing "refreshingly nihilistic wit".
The character has been described as a 1950s "organization man", like Sergeant Joe Friday from Dragnet.
Hugo Weaving reprised the role of Smith in a parody for a 2013 General Electric (GE) advertisement, in which multiple copies of him appear throughout a hospital; the advertisement concludes with Smith offering a boy the choice of a red or a blue lollipop.
See also
Men in black
Simulated reality
References
External links
Fictional artificial intelligences
Fictional assassins
Film characters introduced in 1999
Fictional characters who can duplicate themselves
Fictional characters who can move at superhuman speeds
Fictional characters with superhuman strength
Fictional characters with body or mind control abilities
Fictional computer viruses
Fictional government agents
Fictional gunfighters in films
Fictional mass murderers
Fictional superorganisms
Fictional super soldiers
Martial artist characters in films
Science fiction film characters
The Matrix (franchise) characters
Advertising characters
Male film villains
Video game bosses
Action film villains
Male characters in advertising
Film supervillains | Agent Smith | [
"Biology"
] | 5,564 | [
"Superorganisms",
"Fictional superorganisms"
] |
318,484 | https://en.wikipedia.org/wiki/National%20Radio%20Astronomy%20Observatory | The National Radio Astronomy Observatory (NRAO) is a federally funded research and development center of the United States National Science Foundation operated under cooperative agreement by Associated Universities, Inc. for the purpose of radio astronomy. NRAO designs, builds, and operates its own high-sensitivity radio telescopes for use by scientists around the world.
Locations
Charlottesville, Virginia
The NRAO headquarters is located on the campus of the University of Virginia in Charlottesville, Virginia. The North American ALMA Science Center and the NRAO Technology Center and Central Development Laboratory are also in Charlottesville.
Green Bank, West Virginia
NRAO was, until October 2016, the operator of the world's largest fully steerable radio telescope, the Robert C. Byrd Green Bank Telescope, which stands near Green Bank, West Virginia. The observatory contains several other telescopes, among them the 140-foot (43 m) telescope, which uses an equatorial mount uncommon for radio telescopes; three telescopes forming the Green Bank Interferometer; a telescope used by school groups and organizations for small-scale research; a fixed radio "horn" built to observe the radio source Cassiopeia A; and a reproduction of the original antenna that Karl Jansky built while working for Bell Labs, with which he showed that the interference he was investigating was previously unknown natural radio emission from the universe.
Green Bank is in the National Radio Quiet Zone, which is coordinated by NRAO for protection of the Green Bank site as well as the Sugar Grove Station monitoring site operated by the NSA. The zone consists of a piece of land where fixed transmitters must coordinate their emissions before a license is granted. The land was set aside by the Federal Communications Commission in 1958. No fixed radio transmitters are allowed within the area closest to the telescope. All other fixed radio transmitters including TV and radio towers inside the zone are required to transmit such that interference at the antennas is minimized by methods including limited power and using highly directional antennas. With the advent of wireless technology and microprocessors in everything from cameras to cars, it is difficult to keep the sites free of radio interference. To aid in limiting outside interference, the area surrounding the Green Bank Observatory was at one time planted with pines characterized by needles of a certain length to block electromagnetic interference at the wavelengths used by the observatory. At one point, the observatory faced the problem of North American flying squirrels tagged with United States Fish and Wildlife Service telemetry transmitters. Electric fences, electric blankets, faulty automobile electronics, and other radio wave emitters have caused great trouble for the astronomers in Green Bank. All vehicles on the premises are powered by diesel motors to minimize interference by ignition systems.
Until its collapse on November 15, 1988, a 300 ft radio telescope stood at the Green Bank Observatory's unique site. It was the largest radio telescope on Earth when it came online for its first observation at 12:42 am on September 21, 1962. That first observation was of the remnant of Tycho's supernova, which had exploded on 11 November 1572. The telescope underwent two major overhauls: a new surface was installed in 1970 to correct for wear, snow damage, and warping caused by the structure's sheer size, and in 1972 a new, larger control building was constructed that incorporated a Faraday cage around the control room itself. The telescope stood 240 ft in height, weighed 600 tons, had a pointing accuracy of 2 arcminutes, and had a surface accuracy of about 1 inch. The 1988 collapse was found to be due to unanticipated stresses that cracked a hidden steel connector plate which bore much of the structure's weight and stress. A cascade failure of the structure occurred at 9:43 pm, bringing the entire telescope down. The debris from the collapse was cleared by June 1989, and West Virginia Senator Robert C. Byrd led a campaign in Congress to replace it with the Green Bank Telescope, construction of which began in 1990.
Socorro, New Mexico
The NRAO's facility in Socorro is the Pete Domenici Array Operations Center (AOC). Located on the New Mexico Tech university campus, the AOC serves as the headquarters for the Very Large Array (VLA), which was the setting for the 1997 movie Contact, and is also the control center for the Very Long Baseline Array (VLBA). The ten VLBA telescopes are in Hawaii, the U.S. Virgin Islands, and eight other sites across the continental United States.
Tucson, Arizona
Offices were located on the University of Arizona campus. NRAO formerly operated the 12-Meter Telescope on Kitt Peak. NRAO suspended operations at this telescope and funding was rerouted to the Atacama Large Millimeter Array (ALMA) instead. The Arizona Radio Observatory now operates the 12-Meter Telescope.
San Pedro de Atacama, Chile
The Atacama Large Millimeter Array (ALMA) site in Chile lies at roughly 5,000 m altitude near Cerro Chajnantor in northern Chile, east of the historic village of San Pedro de Atacama, southeast of the mining town of Calama, and east-northeast of the coastal port of Antofagasta.
Jansky Prize
The Karl G. Jansky Lectureship is a prestigious lecture awarded by the board of trustees of the NRAO. The Lectureship is awarded "to recognize outstanding contributions to the advancement of radio astronomy." Recipients have included Fred Hoyle, Charles Townes, Edward M. Purcell, Subrahmanyan Chandrasekhar, Philip Morrison, Vera Rubin, Jocelyn Bell Burnell, Frank J. Low, and Mark Reid. The lecture is delivered in Charlottesville, Green Bank, and in Socorro.
See also
List of astronomical observatories
National Optical Astronomy Observatory
References
External links
Radio observatories
Astronomy institutes and departments
Federally Funded Research and Development Centers
University of Virginia
National Science Foundation
Radio astronomy research institutes
Research institutes in Virginia
Science and technology in Virginia
Research institutes established in 1956
Scientific organizations established in 1956
1956 establishments in Virginia
Organizations based in Virginia | National Radio Astronomy Observatory | [
"Astronomy"
] | 1,208 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
318,516 | https://en.wikipedia.org/wiki/Arno%20Allan%20Penzias | Arno Allan Penzias (; April 26, 1933 – January 22, 2024) was an American physicist and radio astronomer. Along with Robert Woodrow Wilson, he discovered the cosmic microwave background radiation, for which he shared the Nobel Prize in Physics in 1978.
Early life and education
Penzias was born in Munich, Germany, the son of Justine (née Eisenreich) and Karl Penzias, who ran a leather business. His grandparents had come to Munich from Poland and were among the leaders of the Reichenbachstrasse shul. At age six, he and his brother Gunther were among the Jewish children evacuated to Britain as part of the Kindertransport rescue operation. Some time later, his parents also fled Nazi Germany, first for the United Kingdom, and then for the United States, and the family settled in the Bronx, New York City in 1940. In 1946, Penzias became a naturalized citizen of the United States.
He graduated from Brooklyn Technical High School in 1951 and, after enrolling to study chemistry at the City College of New York, changed majors and graduated in 1954 with a degree in physics, ranked near the top of his class. Following graduation, Penzias served for two years as a radar officer in the U.S. Army Signal Corps. This led to a research assistantship in the Columbia University Radiation Laboratory, which was then heavily involved in microwave physics. Penzias worked under Charles H. Townes, the inventor of the maser. Penzias enrolled as a graduate student at Columbia University in 1956, where he earned a master's degree and a PhD in physics, the latter in 1962.
Career
Penzias went on to work at Bell Labs in Holmdel Township, New Jersey, where, with Robert Woodrow Wilson, he worked on ultra-sensitive cryogenic microwave receivers intended for radio astronomy observations. In 1964, after building their most sensitive antenna/receiver system, the pair encountered radio noise that they could not explain. It was far less energetic than the radiation given off by the Milky Way, and it was isotropic, so they assumed their instrument was subject to interference from terrestrial sources. They tried, and then rejected, the hypothesis that the radio noise emanated from New York City. An examination of the microwave horn antenna showed it was full of bat and pigeon droppings, which Penzias described as "white dielectric material". After the pair removed the dung buildup, the noise remained. Having rejected all sources of interference, Penzias contacted Robert H. Dicke, who suggested it might be the background radiation predicted by some cosmological theories. The pair agreed with Dicke to publish side-by-side letters in the Astrophysical Journal, with Penzias and Wilson describing their observations and Dicke suggesting the interpretation as the cosmic microwave background (CMB), the radio remnant of the Big Bang. This proved to be landmark evidence for the Big Bang and provided substantial confirmation for predictions made by Ralph Asher Alpher, Robert Herman and George Gamow in the 1940s and 1950s.
Personal life
Penzias was a resident of Highland Park, New Jersey, in the 1990s. In 1996, Penzias married Silicon Valley executive Sherry Levit. He had a son, David, and two daughters, Mindy Penzias Dirks, and Rabbi Shifra (Laurie) Weiss-Penzias. Penzias also had a stepson, Carson, and a stepdaughter, Victoria.
Penzias died from complications of Alzheimer's disease at an assisted living facility in San Francisco, on January 22, 2024, at the age of 90.
Honors and awards
Penzias was elected a Fellow of the American Academy of Arts and Sciences and the National Academy of Sciences in 1975. In 1977, Penzias and Wilson received the Henry Draper Medal of the National Academy of Sciences. The two were awarded the 1978 Nobel Prize in Physics for their discovery of cosmic microwave background radiation, sharing it with Pyotr Kapitsa. Kapitsa's work on low-temperature physics was unrelated to Penzias' and Wilson's. In 1979, Penzias received the Golden Plate Award of the American Academy of Achievement. He was also the recipient of The International Center in New York's Award of Excellence. In 1998, he was awarded the IRI Medal from the Industrial Research Institute.
On April 26, 2019, the Nürnberger Astronomische Gesellschaft e.V. (NAG) inaugurated the 3-meter radio telescope at the Regiomontanus-Sternwarte, the public observatory of Nuremberg, and dedicated this instrument to Arno Penzias.
On September 11, 2023, the Radio Club of America said that Penzias would be honored with the inauguration of the "Dr. Arno A. Penzias Award for Contributions to Basic Research in the Radio Sciences." The club said the award recognizes his significant contributions to basic research involving radio frequency and related subjects and that it would inspire future generations of scientific professionals. The club also announced that the first recipient of the new award will be named in 2024.
Works
See also
Discovery of cosmic microwave background radiation
List of Jewish Nobel laureates
References
External links
including the Nobel Lecture, December 8, 1978, "The Origin of Elements"
The first part of the article "Ideas" authored by Arno Penzias that was published in Science Reporter magazine
The second part of the article "Ideas" authored by Arno Penzias
A Whisper From Space (IMDb)
Nürnberger Astronomische Gesellschaft e.V.: Arno-Penzias-Radioteleskop web page (in German)
1933 births
2024 deaths
American astronomers
American Nobel laureates
Brooklyn Technical High School alumni
City College of New York alumni
Columbia Graduate School of Arts and Sciences alumni
Columbia University staff
Deaths from Alzheimer's disease in California
Deaths from dementia in California
Emigrants from Nazi Germany to the United States
Fellows of the American Academy of Arts and Sciences
Fellows of the American Physical Society
Jewish American military personnel
Jewish American physicists
Jewish astronomers
Kindertransport refugees
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
Military personnel from New York City
Military personnel from New York (state)
Nobel laureates in Physics
People from Highland Park, New Jersey
Radio astronomers
Scientists at Bell Labs
Scientists from New York (state)
United States Army officers
United States Army personnel of the Korean War
United States Army Signal Corps personnel | Arno Allan Penzias | [
"Astronomy"
] | 1,319 | [
"Astronomers",
"Jewish astronomers"
] |
318,577 | https://en.wikipedia.org/wiki/Cavendish%20experiment | The Cavendish experiment, performed in 1797–1798 by English scientist Henry Cavendish, was the first experiment to measure the force of gravity between masses in the laboratory and the first to yield accurate values for the gravitational constant. Because of the unit conventions then in use, the gravitational constant does not appear explicitly in Cavendish's work. Instead, the result was originally expressed as the relative density of Earth, or equivalently the mass of Earth. His experiment gave the first accurate values for these geophysical constants.
The experiment was devised sometime before 1783 by geologist John Michell, who constructed a torsion balance apparatus for it. However, Michell died in 1793 without completing the work. After his death the apparatus passed to Francis John Hyde Wollaston and then to Cavendish, who rebuilt the apparatus but kept close to Michell's original plan. Cavendish then carried out a series of measurements with the equipment and reported his results in the Philosophical Transactions of the Royal Society in 1798.
The experiment
The apparatus consisted of a torsion balance made of a six-foot (1.8 m) wooden rod horizontally suspended from a wire, with two 2-inch (51 mm), 1.61-pound (0.73 kg) lead spheres, one attached to each end. Two massive 12-inch (300 mm), 348-pound (158 kg) lead balls, suspended separately, could be positioned away from or to either side of the smaller balls, 8.85 inches (225 mm) away. The experiment measured the faint gravitational attraction between the small and large balls, which deflected the torsion balance rod by about 0.16" (or only 0.03" with a stiffer suspending wire).
The mutual attraction between the large balls and the small ones caused the arm to rotate, twisting the suspension wire. The arm rotated until it reached an angle at which the twisting force of the wire balanced the combined gravitational force of attraction between the large and small lead spheres. By measuring the angle of the rod and knowing the twisting force (torque) of the wire for a given angle, Cavendish was able to determine the force between the pairs of masses. Since the gravitational force of the Earth on a small ball could be measured directly by weighing it, the ratio of the two forces allowed the relative density of the Earth to be calculated, using Newton's law of gravitation.
Cavendish found that the Earth's density was 5.448 times that of water (although due to a simple arithmetic error, found in 1821 by Francis Baily, the erroneous value 5.48 appears in his paper). The current accepted value is 5.514 g/cm3.
To find the wire's torsion coefficient, the torque exerted by the wire for a given angle of twist, Cavendish timed the natural oscillation period of the balance rod as it rotated slowly clockwise and counterclockwise against the twisting of the wire. For the first 3 experiments the period was about 15 minutes and for the next 14 experiments the period was half of that, about 7.5 minutes. The period changed because after the third experiment Cavendish put in a stiffer wire. The torsion coefficient could be calculated from this and the mass and dimensions of the balance. Actually, the rod was never at rest; Cavendish had to measure the deflection angle of the rod while it was oscillating.
Cavendish's equipment was remarkably sensitive for its time. The force involved in twisting the torsion balance was very small, 1.74×10−7 N (the weight of only 0.0177 milligrams), or about 1/50,000,000 of the weight of the small balls. To prevent air currents and temperature changes from interfering with the measurements, Cavendish placed the entire apparatus in a mahogany box about 1.98 meters wide, 1.27 meters tall, and 14 cm thick, all in a closed shed on his estate. Through two holes in the walls of the shed, Cavendish used telescopes to observe the movement of the torsion balance's horizontal rod. The key observable was the deflection of the torsion balance rod, which Cavendish measured to be about 0.16" (or only 0.03" for the stiffer wire used in most of the trials). Cavendish was able to measure this small deflection to an accuracy of better than 0.25 millimetres (0.01 in) using vernier scales on the ends of the rod.
The accuracy of Cavendish's result was not exceeded until C. V. Boys' experiment in 1895. In time, Michell's torsion balance became the dominant technique for measuring the gravitational constant (G) and most contemporary measurements still use variations of it.
Cavendish's result provided additional evidence for a planetary core made of metal, an idea first proposed by Charles Hutton based on his analysis of the 1774 Schiehallion experiment. Cavendish's result of 5.4 g·cm−3, 23% bigger than Hutton's, is close to 80% of the density of liquid iron, and 80% higher than the density of the Earth's outer crust, suggesting the existence of a dense iron core.
Reformulation of Cavendish's result to G
The formulation of Newtonian gravity in terms of a gravitational constant did not become standard until long after Cavendish's time. Indeed, one of the first references to G is in 1873, 75 years after Cavendish's work.
Cavendish expressed his result in terms of the density of the Earth. He referred to his experiment in correspondence as 'weighing the world'. Later authors reformulated his results in modern terms.
After converting to SI units, Cavendish's value for the Earth's density, 5.448 g cm−3, gives
G = 6.74×10−11 m3 kg−1 s−2,
which differs by only 1% from the 2014 CODATA value of 6.674×10−11 m3 kg−1 s−2.
Today, physicists often use units where the gravitational constant takes a different form. The Gaussian gravitational constant used in space dynamics is a defined constant and the Cavendish experiment can be considered as a measurement of this constant.
In Cavendish's time, physicists used the same units for mass and weight, in effect taking g as a standard acceleration. Then, since R was known, ρ played the role of an inverse gravitational constant. The density of the Earth was hence a much sought-after quantity at the time, and there had been earlier attempts to measure it, such as the Schiehallion experiment in 1774.
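As a quick numerical check (a minimal Python sketch, not from the source; the surface gravity g and Earth radius R below are assumed modern values), the density-to-G conversion follows from g = G·M_earth/R² together with M_earth = (4/3)πR³ρ:

```python
import math

# Check that Cavendish's density figure implies the quoted G.
# From g = G * M_earth / R**2 and M_earth = (4/3) * pi * R**3 * rho,
# it follows that G = 3 * g / (4 * pi * R * rho).
g = 9.81        # surface gravity, m/s^2 (assumed modern value)
R = 6.371e6     # mean radius of the Earth, m (assumed modern value)
rho = 5448.0    # Cavendish's result, kg/m^3 (5.448 g/cm^3)

G = 3.0 * g / (4.0 * math.pi * R * rho)
print(f"G ~ {G:.3e} m^3 kg^-1 s^-2")  # ~ 6.75e-11, about 1% above the CODATA value
```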
Derivation of G and the Earth's mass
The following is not the method Cavendish used, but describes how modern physicists would calculate the results from his experiment. From Hooke's law, the torque on the torsion wire is proportional to the deflection angle θ of the balance. The torque is κθ, where κ is the torsion coefficient of the wire. However, a torque in the opposite direction is also generated by the gravitational pull of the masses. It can be written as a product of the attractive force of a large ball on a small ball and the distance L/2 to the suspension wire. Since there are two balls, each experiencing force F at a distance L/2 from the axis of the balance, the torque due to the gravitational force is LF. At equilibrium (when the balance has been stabilized at an angle θ), the total amount of torque must be zero, as these two sources of torque balance out. Thus, equating their magnitudes gives the following:

$$\kappa\theta = LF$$

For F, Newton's law of universal gravitation is used to express the attractive force between a large ball (mass M) and a small ball (mass m) whose centres are separated by a distance r:

$$F = \frac{GmM}{r^2}$$

Substituting F into the first equation above gives

$$\kappa\theta = L\,\frac{GmM}{r^2} \qquad (1)$$

To find the torsion coefficient (κ) of the wire, Cavendish measured the natural resonant oscillation period T of the torsion balance:

$$T = 2\pi\sqrt{\frac{I}{\kappa}}$$

Assuming the mass of the torsion beam itself is negligible, the moment of inertia of the balance is just due to the small balls. Treating them as point masses, each at L/2 from the axis, gives:

$$I = m\left(\frac{L}{2}\right)^2 + m\left(\frac{L}{2}\right)^2 = \frac{mL^2}{2},$$

and so:

$$T = 2\pi\sqrt{\frac{mL^2}{2\kappa}}$$

Solving this for κ, substituting into (1), and rearranging for G, the result is:

$$G = \frac{2\pi^2 L r^2 \theta}{MT^2}.$$

Once G has been found, the attraction of an object at the Earth's surface to the Earth itself can be used to calculate the Earth's mass and density:

$$mg = \frac{GmM_{\rm earth}}{R^2}, \qquad M_{\rm earth} = \frac{gR^2}{G}, \qquad \rho_{\rm earth} = \frac{M_{\rm earth}}{\tfrac{4}{3}\pi R^3}$$
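To see the final result in use, here is a minimal Python sketch. It is not Cavendish's own reduction of the data; the apparatus figures below are rounded approximations of his setup, supplied purely for illustration:

```python
import math

# Rounded approximations of Cavendish's apparatus (illustrative assumptions,
# not his exact published figures).
L = 1.8       # beam length between the small balls' centres, m (about 6 ft)
r = 0.225     # centre-to-centre distance of a large and a small ball, m
M = 158.0     # mass of each large ball, kg (about 348 lb)
T = 900.0     # free oscillation period of the balance, s (about 15 min)
s = 0.0041    # deflection measured at the end of the beam, m (about 0.16 in)

theta = s / (L / 2)  # deflection angle, radians
G = 2 * math.pi**2 * L * r**2 * theta / (M * T**2)
print(f"G ~ {G:.2e} m^3 kg^-1 s^-2")  # ~ 6.4e-11, within a few % of 6.674e-11

# With G in hand, weighing an object against the Earth gives the Earth's mass
# and mean density (g and R are assumed modern values).
g = 9.81       # m/s^2
R = 6.371e6    # m
M_earth = g * R**2 / G
rho = M_earth / ((4.0 / 3.0) * math.pi * R**3)
print(f"M_earth ~ {M_earth:.2e} kg, density ~ {rho / 1000:.2f} g/cm^3")
```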
Definitions of terms
θ: deflection angle of the torsion balance beam from its rest position
F: gravitational force between a large ball and a small ball
G: gravitational constant
m: mass of each small ball
M: mass of each large ball
r: distance between the centres of a large ball and a small ball
L: length of the torsion balance beam between the centres of the two small balls
κ: torsion coefficient of the suspending wire
I: moment of inertia of the torsion balance beam
T: period of oscillation of the torsion balance
g: gravitational acceleration at the surface of the Earth
M_earth: mass of the Earth
R: radius of the Earth
ρ_earth: density of the Earth
References
Sources
Establishes that Cavendish didn't determine G.
Discusses Michell's contributions, and whether Cavendish determined G.
Review of gravity measurements since 1740.
External links
Cavendish’s experiment in the Feynman Lectures on Physics
Sideways Gravity in the Basement, The Citizen Scientist, July 1, 2005. Homebrew Cavendish experiment, showing calculation of results and precautions necessary to eliminate wind and electrostatic errors.
"Big 'G'", Physics Central, retrieved Dec. 8, 2013. Experiment at Univ. of Washington to measure the gravitational constant using variation of Cavendish method.
. Discusses current state of measurements of G.
Model of Cavendish's torsion balance, retrieved Aug. 28, 2007, at Science Museum, London.
Physics experiments
1790s in science
1797 in science
1798 in science
Geodesy
Gravity
Royal Society | Cavendish experiment | [
"Physics",
"Mathematics"
] | 1,801 | [
"Applied mathematics",
"Geodesy",
"Experimental physics",
"Physics experiments"
] |
318,580 | https://en.wikipedia.org/wiki/International%20Date%20Line | The International Date Line (IDL) is the line extending between the South and North Poles that is the boundary between one calendar day and the next. It passes through the Pacific Ocean, roughly following the 180.0° line of longitude and deviating to pass around some territories and island groups. Crossing the date line eastbound decreases the date by one day, while crossing the date line westbound increases the date.
The line is a cartographic convention, and is not defined by international law. This has made it difficult for cartographers to agree on its precise course, and has allowed countries through whose waters it passes to move it at times for their convenience.
Geography
Circumnavigating the globe
People traveling westward around the world must set their clocks:
Back by one hour for every 15° of longitude crossed, and
Forward by 24 hours upon crossing the International Date Line.
People traveling eastward must set their clocks:
Forward by one hour for every 15° of longitude crossed, and
Back by 24 hours upon crossing the International Date Line.
Moving forward or back 24 hours generally also implies a one-day date change, as the code sketch below illustrates.
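For illustration, the rules above can be put into a few lines of Python (adjust_clock is a hypothetical helper written for this sketch, not a standard library function):

```python
from datetime import datetime, timedelta

def adjust_clock(local: datetime, degrees: float, eastward: bool,
                 crossed_idl: bool) -> datetime:
    """Apply the traveller's clock rules: one hour per 15 degrees of
    longitude (forward eastbound, back westbound), plus a 24-hour
    correction in the opposite direction when the IDL is crossed."""
    hours = degrees / 15.0 if eastward else -degrees / 15.0
    if crossed_idl:
        hours += -24.0 if eastward else 24.0
    return local + timedelta(hours=hours)

# A full westward circumnavigation: 360 degrees of setting clocks back (-24 h)
# is exactly cancelled by the +24 h gained at the date line, so no net change.
start = datetime(2024, 1, 10, 12, 0)
print(adjust_clock(start, 360.0, eastward=False, crossed_idl=True))
# 2024-01-10 12:00:00
```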
The 14th-century Arab geographer Abulfeda predicted that circumnavigators would accumulate a one-day offset to the local date. This phenomenon was confirmed in 1522, at the end of the Magellan–Elcano expedition, the first successful circumnavigation. After sailing westward around the world from Spain, the expedition called at Cape Verde for provisions on Wednesday, 9 July 1522 (ship's time). However, the locals told them that it was actually Thursday, 10 July 1522. The crew was surprised, as they had recorded each day of the three-year journey without omission. Cardinal Gasparo Contarini, the Venetian ambassador to Spain, was the first European to give a correct explanation of the discrepancy.
Description
This description is based on the most common understanding of the de facto International Date Line. See below, and map above at right.
The IDL is roughly based on the meridian of 180° longitude, roughly down the middle of the Pacific Ocean, and halfway around the world from the IERS Reference Meridian, the successor to the historic Greenwich prime meridian running through the Royal Greenwich Observatory. In many places, the IDL follows the 180° meridian exactly. In other places, however, the IDL deviates east or west away from that meridian. These various deviations generally accommodate the political and/or economic affiliations of the affected areas.
Proceeding from north to south, the first deviation of the IDL from 180° is to pass to the east of Wrangel Island and the Chukchi Peninsula, the easternmost part of Russian Siberia. (Wrangel Island lies directly on the meridian at 71°32′N 180°0′E, also noted as 71°32′N 180°0′W.) It then passes through the Bering Strait between the Diomede Islands at 168°58′37″ W, keeping an equal distance from each island. It then bends considerably west of 180°, passing west of St. Lawrence Island and St. Matthew Island.
The IDL crosses between the U.S. Aleutian Islands (Attu Island being the westernmost) and the Commander Islands, which belong to Russia. It then bends southeast again to return to 180°. Thus, all of Russia is to the west of the IDL, and all of the United States is to the east except for the insular areas of Guam, the Northern Mariana Islands, and Wake Island, reaching the hypothetical, but not used UTC–13:00 time zone.
The IDL remains on the 180° meridian until passing the equator. Two U.S.-owned uninhabited atolls, Howland Island and Baker Island, just north of the equator in the central Pacific Ocean (and ships at sea between 172.5°W and 180°), have the earliest time on Earth (UTC−12:00 hours).
The IDL circumscribes Kiribati by swinging far to the east, almost reaching the 150°W meridian. Kiribati's easternmost islands, the southern Line Islands south of Hawaii, have the latest time on Earth, UTC+14:00 hours.
South of Kiribati, the IDL returns westwards but remains east of 180°, passing between Samoa and American Samoa. Accordingly, Samoa, Tokelau, Wallis and Futuna, Fiji, Tonga, Tuvalu, and New Zealand's Kermadec Islands and Chatham Islands are all west of the IDL and have the same date. American Samoa, the Cook Islands, Niue, and French Polynesia are east of the IDL and one day behind.
The IDL then bends southwest to return to 180°. It follows that meridian until reaching Antarctica, which has multiple time zones. Conventionally, the IDL is not drawn into Antarctica on most maps. (See below.)
Facts dependent on the IDL
According to the clock, the first areas to experience a new day and a New Year are islands that use UTC+14:00. These include portions of the Republic of Kiribati, including Millennium Island in the Line Islands. The first major cities to experience a new day are Auckland and Wellington, New Zealand (UTC+12:00; UTC+13:00 with daylight saving time).
A 1994 realignment of the IDL made Caroline Island one of the first points of land on Earth to reach January 1, 2000, on the calendar (UTC+14:00). As a result, this atoll was renamed Millennium Island.
The areas that are the first to see the daylight of a new day vary by the season. Around the June solstice, the first area would be any place within the Kamchatka Time Zone (UTC+12:00) that is far enough north to experience midnight sun on the given date. At the equinoxes, the first place to see daylight would be the uninhabited Millennium Island in Kiribati, which is the easternmost land located west of the IDL.
Near the December solstice, the first places would be Antarctic research stations using New Zealand Time (UTC+13:00) during summer that experience midnight sun. These include Amundsen-Scott South Pole Station, McMurdo Station, Scott Base and Zucchelli Station.
De facto and de jure date lines
There are two ways time zones and thereby the location of the International Date Line are determined, one on land and adjacent territorial waters, and the other on open seas.
All nations unilaterally determine their standard time zones, applicable only on land and adjacent territorial waters. This date line can be called de facto since it is not based on international law, but on national laws. These national zones do not extend into international waters.
The nautical date line, not the same as the IDL, is a de jure construction determined by international agreement. It is the result of the 1917 Anglo-French Conference on Time-keeping at Sea, which recommended that all ships, both military and civilian, adopt hourly standard time zones on the high seas. The United States adopted its recommendation for U.S. military and merchant marine ships in 1920. This date line is implied but not explicitly drawn on time zone maps. It follows the 180° meridian except where it is interrupted by territorial waters adjacent to land, forming gaps—it is a pole-to-pole dashed line. The 15° gore that is offset from UTC by 12 hours is bisected by the nautical date line into two 7.5° gores that differ from UTC by ±12 hours.
In theory, ships are supposed to adopt the standard time of a country if they are within its territorial waters, within 12 nautical miles (22 km) of land, then revert to international time zones (15° wide pole-to-pole gores) as soon as they leave. In practice, ships use these time zones only for radio communication and similar purposes. For internal (within-ship) purposes, such as work and meal hours, ships use a time zone of their own choosing.
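Because the nautical zones are plain 15° gores centred on multiples of 15° of longitude, the offset from UTC can be found by simple rounding, and the split of the 180° gore into two 7.5° half-gores falls out automatically. The function below is an illustrative sketch, not an established API:

```python
def nautical_zone_offset(longitude_deg: float) -> int:
    """Nautical time-zone offset in whole hours from UTC.

    Longitude is in degrees, east positive. Rounding to the nearest
    15-degree gore maps 172.5..180 E to +12 and 172.5..180 W to -12,
    the two half-gores separated by the nautical date line.
    """
    return round(longitude_deg / 15.0)

print(nautical_zone_offset(176.0))   # 12, the UTC+12 half-gore west of the line
print(nautical_zone_offset(-176.0))  # -12, the UTC-12 half-gore east of the line
```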
Cartographic practice and convention
The IDL on the map in this article and all other maps is based on the de facto line and is an artificial construct of cartographers, as the precise course of the line in international waters is arbitrary. The IDL does not extend into Antarctica on the world time zone maps by the United States Central Intelligence Agency (CIA) or the United Kingdom's His Majesty's Nautical Almanac Office (HMNAO). The IDL on modern CIA maps now reflects the most recent shifts in the IDL (see below). The current HMNAO map does not draw the IDL in conformity with recent shifts in the IDL; it draws a line virtually identical to that adopted by the UK's Hydrographic Office about 1900. Instead, HMNAO labels island groups with their time zones, which do reflect the most recent IDL shifts. This approach is consistent with the principle of national and nautical time zones: the islands of eastern Kiribati are actually "islands" of Asian date (west side of IDL) in a sea of American date (east side of IDL). Similarly, the western Aleutian Islands are islands of American date in a sea of Asian date.
No international organization, nor any treaty between nations, has fixed the IDL drawn by cartographers: the 1884 International Meridian Conference explicitly refused to propose or agree to any time zones, stating that they were outside its purview. The conference resolved that the Universal Day, midnight-to-midnight Greenwich Mean Time (now redefined and updated as Coordinated Universal Time, or UTC), which it did agree to, "shall not interfere with the use of local or standard time where desirable". From this comes the utility and importance of UTC or "Z" ("Zulu") time: it permits a single universal reference for time that is valid for all points on the globe at the same moment.
Historic alterations
Philippines (1521 and 1844)
Ferdinand Magellan claimed the Philippines for Spain on Saturday, 16 March 1521, having sailed westwards from Seville across the Atlantic Ocean and the Pacific Ocean. As part of New Spain, the Philippines had its most important communication with Acapulco in Mexico and was therefore on the eastern side of the IDL, despite lying at the western edge of the Pacific Ocean. As a result, the Philippines was one day behind its Asian neighbours for 323 years, 9 months and 14 days, from Saturday, 16 March 1521 (Julian calendar) until Monday, 30 December 1844 (Gregorian calendar).
After Mexico gained its independence from Spain on 27 September 1821, Philippine trade interests turned to Imperial China, the Dutch East Indies and adjacent areas, so the Philippines decided to join its Asian neighbours on the west side of the IDL. To advance the calendar by one day, on 16 August 1844 the then governor-general, Narciso Claveria, ordered that Tuesday, 31 December 1844 be removed from the calendar: Monday, 30 December 1844 was followed immediately by Wednesday, 1 January 1845. The change also applied to the other remaining Spanish colonies in the Pacific, the Caroline Islands, Guam, the Mariana Islands, the Marshall Islands and Palau, as parts of the Captaincy General of the Philippines. European publications were generally unaware of this change until the early 1890s, so they erroneously gave the International Date Line a large western bulge for the next half century.
Tahiti & French Polynesia (early 1797 and late 1846)
On 5 March 1797, missionaries of the London Missionary Society arrived on Tahiti from England. They had first tried to round Cape Horn but, failing that, sailed via the Cape of Good Hope and the Indian Ocean instead, and so introduced the date of the eastern hemisphere on the island. It was not until the end of the Franco-Tahitian War and the restoration of the French protectorate over the Tahitian Kingdom (which Tahitian nationalists had fought off for two years of intense war, with more than 1,000 deaths) that the French commissioner Armand Joseph Bruat and the regent of the Tahitian Kingdom, Paraita, ordered Tahiti onto the western-hemisphere date, effective 29 December 1846.
Alaska (1867)
Alaska was on the western side of the International Date Line because Russian settlers had reached it from Siberia. In addition, the Russian Empire was still using the Julian calendar, which had by then fallen 12 days behind the Gregorian calendar. In 1867, the United States purchased Russian America and moved the territory to the east side of the International Date Line. The transfer ceremony took place at 3:30 p.m. local mean time (00:31 GMT) in the capital of New Archangel (Sitka), on Saturday, 7 October 1867 (Julian), which was Saturday, 19 October 1867 (Gregorian) in Europe. Since Alaska moved to the eastern side of the International Date Line, the date and time also moved back to 3:30 p.m. local time on Friday, 18 October 1867 (00:31 GMT Saturday), now commemorated as Alaska Day.
Samoan Islands and Tokelau (1892 and 2011)
The Samoan Islands, now divided into Samoa and American Samoa, were on the west side of the IDL until 1892. In that year, King Mālietoa Laupepa was persuaded by American traders to adopt the American date (three hours behind California) in place of the former Asian date (four hours ahead of Japan). The change was made by repeating Monday, 4 July 1892, American Independence Day.
In 2011, Samoa shifted back to the west side of the IDL by removing Friday, 30 December 2011 from its calendar. This changed its time zone from UTC−11:00 to UTC+13:00 (UTC−10:00 to UTC+14:00 during daylight saving time). Samoa made the change because Australia and New Zealand had become its biggest trading partners and were home to large Samoan expatriate communities. Being 21 hours behind made business difficult, because weekends falling on different days left only four shared workdays each week.
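The 2011 shift is recorded in the IANA time zone database, so (assuming an up-to-date tzdata installation) it can be observed directly from Python's zoneinfo module; the expected offsets below reflect daylight saving time on both sides of the skipped day:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package

apia = ZoneInfo("Pacific/Apia")

# One instant shortly before the change, one shortly after.
before = datetime(2011, 12, 29, 12, 0, tzinfo=timezone.utc).astimezone(apia)
after = datetime(2011, 12, 31, 12, 0, tzinfo=timezone.utc).astimezone(apia)

print(before.isoformat())  # expected: 2011-12-29T02:00:00-10:00
print(after.isoformat())   # expected: 2012-01-01T02:00:00+14:00
```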
The IDL now passes between Samoa and American Samoa, which remains on the east (American) side of the line.
Tokelau is a territory of New Zealand north of Samoa whose principal transportation and communications links with the rest of the world pass through Samoa. For that reason, Tokelau crossed the IDL along with Samoa in 2011, albeit, strictly speaking, one hour later, as Tokelau did not observe daylight saving time (Summer Time), which Samoa did at the time.
Kwajalein (c. 1945 and 1993)
Kwajalein Atoll, like the rest of the Marshall Islands, passed from Spanish to German to Japanese control during the nineteenth and twentieth centuries, and during that period lay west of the IDL. Although Kwajalein formally became part of the Trust Territory of the Pacific Islands with the rest of the Marshalls after World War II, the United States established a military installation there. Because of that, Kwajalein used the Hawaiian date, placing it effectively east of the International Date Line (unlike the rest of the Marshalls). Kwajalein returned to the west side of the IDL by removing Saturday, 21 August 1993 from its calendar. Kwajalein's work week was also changed to Tuesday through Saturday to match the Hawaiian work week of Monday through Friday on the other side of the IDL.
Eastern Kiribati (1994)
As a British colony, Kiribati was centred in the Gilbert Islands, just west of the IDL of the time. Upon independence in 1979, it acquired from the United States the Phoenix and Line Islands, east of the IDL. As a result, the country straddled the IDL. Government and commercial concerns on opposite sides of the line could conduct routine business by radio or telephone only on the four days of the week that were weekdays on both sides. To eliminate this anomaly, Kiribati introduced a change of date for its eastern half by removing Saturday, 31 December 1994 from its calendar: Friday, 30 December 1994 was followed by Sunday, 1 January 1995. After the change, the IDL in effect moved eastwards to go around the entire country. Strictly speaking, the 1917 nautical IDL convention still applies: when it is Monday on Kiribati's eastern islands, it is still Sunday in the surrounding ocean, though maps are usually not drawn this way.
As a consequence of the 1994 change, Kiribati's easternmost territory, the Line Islands, including the inhabited island of Kiritimati (Christmas Island), started the year 2000 before any other country, a feature upon which the Kiribati government capitalized as a potential tourist draw.
Date lines according to religious principles
Christianity
Generally, the Christian calendar and Christian churches recognize the IDL. Christmas for example, is celebrated on 25 December (according to either the Gregorian or the Julian calendar, depending upon which of the two is used by the particular church) as that date falls in countries located on either side of the IDL. Thus, whether it is Western Christmas or Orthodox Christmas, Christians in Samoa, immediately west of the IDL, will celebrate the holiday a day before Christians in American Samoa, which is immediately east of the IDL.
A problem with the general rule above arises in certain Christian churches that solemnly observe a Sabbath day as a particular day of the week, when those churches are located in countries near the IDL. Notwithstanding the difference in dates, the same sunrise passes over American Samoa and, a few minutes later, over Samoa, and the same sunset passes over American Samoa a few minutes before it passes over Samoa. In other words, the secular days are legally different, but they are physically the same; that causes questions to arise under religious law. Because the IDL is an arbitrary imposition, the question can arise as to which Saturday on either side of the IDL (or, more fundamentally, on either side of 180 degrees longitude) is the "real" Saturday. This issue (which also arises in Judaism) is a particular problem for Seventh Day Adventists, Seventh Day Baptists, and similar churches located in countries near the IDL.
In Tonga, Seventh Day Adventists (who usually observe Saturday, the seventh-day Sabbath) observe Sunday because Tonga lies east of the 180° meridian. Sunday as observed in Tonga (west of the IDL, as with Kiribati, Samoa, and parts of Fiji and Tuvalu) is considered by the Seventh-day Adventist Church to be the same day as Saturday observed east of the IDL.
Most Seventh Day Adventists in Samoa planned to observe Sabbath on Sunday after Samoa's crossing the IDL in December 2011, but SDA groups in Samatau village and other places (approximately 300 members) decided to accept the IDL adjustment and observe the Sabbath on Saturday. Debate continues within the Seventh-day Adventist community in the Pacific as to which day is really the seventh-day Sabbath.
The Samoan Independent Seventh-day Adventist Church, which is not affiliated to the worldwide Seventh-day Adventist Church, has decided to continue worshiping on Saturday, after a six-day week at the end of 2011.
Islam
The Islamic calendar and Muslim communities recognize the convention of the IDL. In particular, the day for holding the Jumu'ah prayer appears to be local Friday everywhere in the world. The IDL is not a factor in the start and end of Islamic lunar months, which depend solely on sighting the new crescent moon. For example, the fasts of the month of Ramadan begin the morning after the crescent is sighted; that this day may vary in different parts of the world is well known in Islam.
Judaism
The concept of an International Date Line in Jewish law is first mentioned by 12th-century decisors. But it was not until the introduction of improved transportation and communications systems in the 20th century that the question of an International Date Line truly became a question of practical Jewish law.
As a practical matter, the conventional International Date Line—or another line in the Pacific Ocean close to it—serves as a de facto date line for purposes of Jewish law, at least in existing Jewish communities. For example, residents of the Jewish communities of Japan, New Zealand, Hawaii, and French Polynesia all observe Shabbat on local Saturday. However, there is not unanimity as to how Jewish law reaches that conclusion. For this reason, some authorities rule that certain aspects of Sabbath observance are required on Sunday (in Japan and New Zealand) or Friday (in Hawaii and French Polynesia) in addition to Saturday. Additionally, there are differences of opinion as to which day or days individual Jews traveling in the Pacific region away from established Jewish communities should observe Shabbat.
For individuals crossing the IDL, the change of calendar date influences some aspects of practice under Jewish law. Yet other aspects depend on an individual's experience of sunsets and sunrises to count days, notwithstanding the calendar date.
Cultural references and traditions
The Island of the Day Before
The IDL is a central factor in Umberto Eco's book The Island of the Day Before (1994), in which the protagonist finds himself on a becalmed ship, with an island close at hand on the other side of the IDL. Unable to swim, the protagonist indulges in increasingly imaginative speculation regarding the physical, metaphysical and religious importance of the IDL.
Around the World in Eighty Days
The concept behind the IDL (though not the IDL itself, which did not yet exist) appears as a plot device in Jules Verne's book Around the World in Eighty Days (1873). The main protagonist, Phileas Fogg, travels eastward around the world. He had bet with his friends that he could do it in 80 days. To win the wager, Fogg must return by 8:45 p.m. on Saturday, 21 December 1872. However, the journey suffers a series of delays and when Fogg reaches London, he believes it is 8:50 p.m. on Saturday, 21 December and that he has lost the wager by a margin of only five minutes. The next day, however, it is revealed that the day is Saturday, not Sunday, and Fogg arrives at his club just in time to win the bet. Verne explains:
In journeying eastward he had gone towards the sun, and the days therefore diminished for him as many times four minutes as he crossed degrees in this direction. There are three hundred and sixty degrees on the circumference of the earth; and these three hundred and sixty degrees, multiplied by four minutes, gives precisely twenty-four hours — that is, the day unconsciously gained. In other words, while Phileas Fogg, going eastward, saw the sun pass the meridian eighty times, his friends in London only saw it pass the meridian seventy-nine times.
Fogg had thought it was one day later than it actually was, because he had not accounted for this fact. During his journey, he had added a full day to his clock, at the rhythm of an hour per fifteen degrees, or four minutes per degree, as Verne writes. At the time, the concept of a de jure International Date Line did not exist. If it did, he would have been made aware that it would be a day less than it used to be once he reached this line. Thus, the day he would add to his clock throughout his journey would be thoroughly removed upon crossing this imaginary line. But a de facto date line did exist since the U.K., India, and the U.S. had the same calendar with different local times, and he should have noticed when he arrived in the U.S. that the local date was not the same as in his diary (his servant Jean Passepartout kept his watch set to London time, despite the clues from his surroundings).
Line-crossing ceremonies relating to the IDL
Ceremonies aboard ships to mark a sailor's or passenger's first crossing of the Equator, as well as crossing the International Date Line, have been long-held traditions in navies and in other maritime services around the world.
Notes
References
Bering Sea
Chukchi Sea
Kiribati
Samoa
Pacific Ocean
Time | International Date Line | [
"Physics",
"Mathematics"
] | 4,957 | [
"Physical quantities",
"Time",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
318,598 | https://en.wikipedia.org/wiki/Paul%20Halmos | Paul Richard Halmos (; 3 March 3 1916 – 2 October 2006) was a Hungarian-born American mathematician and probabilist who made fundamental advances in the areas of mathematical logic, probability theory, operator theory, ergodic theory, and functional analysis (in particular, Hilbert spaces). He was also recognized as a great mathematical expositor. He has been described as one of The Martians.
Early life and education
Born in the Kingdom of Hungary into a Jewish family, Halmos immigrated to the United States at age 13. He obtained his B.A. from the University of Illinois, majoring in mathematics while also fulfilling the requirements for a degree in philosophy. He obtained the degree after only three years, and was 19 years old when he graduated. He then began a Ph.D. in philosophy, still at the Champaign–Urbana campus. However, after failing his master's oral exams, he shifted to mathematics and graduated in 1938. Joseph L. Doob supervised his dissertation, titled Invariants of Certain Stochastic Transformations: The Mathematical Theory of Gambling Systems.
Career
Shortly after his graduation, Halmos left for the Institute for Advanced Study, lacking both job and grant money. Six months later, he was working under John von Neumann, which proved a decisive experience. While at the Institute, Halmos wrote his first book, Finite Dimensional Vector Spaces, which immediately established his reputation as a fine expositor of mathematics.
From 1967 to 1968 he was the Donegall Lecturer in Mathematics at Trinity College Dublin.
Halmos taught at Syracuse University, the University of Chicago (1946–60), the University of Michigan (~1961–67), the University of Hawaii (1967–68), Indiana University (1969–85), and the University of California at Santa Barbara (1976–78). From his 1985 retirement from Indiana until his death, he was affiliated with the Mathematics department at Santa Clara University (1985–2006).
Accomplishments
In a series of papers reprinted in his 1962 Algebraic Logic, Halmos devised polyadic algebras, an algebraic version of first-order logic differing from the better known cylindric algebras of Alfred Tarski and his students. An elementary version of polyadic algebra is described in monadic Boolean algebra.
In addition to his original contributions to mathematics, Halmos was an unusually clear and engaging expositor of university mathematics. He won the Lester R. Ford Award in 1971 and again in 1977 (shared with W. P. Ziemer, W. H. Wheeler, S. H. Moolgavkar, J. H. Ewing and W. H. Gustafson). Halmos chaired the American Mathematical Society committee that wrote the AMS style guide for academic mathematics, published in 1973. In 1983, he received the AMS's Leroy P. Steele Prize for exposition.
In American Scientist 56(4): 375–389 (Winter 1968), Halmos argued that mathematics is a creative art, and that mathematicians should be seen as artists, not number crunchers. He discussed the division of the field into what he called "mathology" and "mathophysics", further arguing that mathematicians and painters think and work in related ways.
Halmos's 1985 "automathography" I Want to Be a Mathematician is an account of what it was like to be an academic mathematician in 20th century America. He called the book "automathography" rather than "autobiography", because its focus is almost entirely on his life as a mathematician, not his personal life. The book contains the following quote on Halmos' view of what doing mathematics means:
In these memoirs, Halmos claims to have invented the "iff" notation for the words "if and only if" and to have been the first to use the "tombstone" notation to signify the end of a proof, and this is generally agreed to be the case. The tombstone symbol ∎ (Unicode U+220E) is sometimes called a halmos.
In 1994, Halmos received the Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics.
In 2005, Halmos and his wife Virginia Halmos funded the Euler Book Prize, an annual award given by the Mathematical Association of America for a book that is likely to improve the view of mathematics among the public. The first prize was given in 2007, the 300th anniversary of Leonhard Euler's birth, to John Derbyshire for his book about Bernhard Riemann and the Riemann hypothesis: Prime Obsession.
In 2009 George Csicsery featured Halmos in a documentary film also called I Want to Be a Mathematician.
Books by Halmos
Books by Halmos have attracted so many reviews that lists of the reviews have been assembled.
1942. Finite-Dimensional Vector Spaces. Springer-Verlag.
1950. Measure Theory. Springer-Verlag.
1951. Introduction to Hilbert Space and the Theory of Spectral Multiplicity. Chelsea.
1956. Lectures on Ergodic Theory. Chelsea.
1960. Naive Set Theory. Springer-Verlag.
1962. Algebraic Logic. Chelsea.
1963. Lectures on Boolean Algebras. Van Nostrand.
1967. A Hilbert Space Problem Book. Springer-Verlag.
1973. (with Norman E. Steenrod, Menahem M. Schiffer, and Jean A. Dieudonné). How to Write Mathematics. American Mathematical Society.
1978. (with V. S. Sunder). Bounded Integral Operators on L² Spaces. Springer-Verlag.
1985. I Want to Be a Mathematician. Springer-Verlag.
1987. I Have a Photographic Memory. Mathematical Association of America.
1991. Problems for Mathematicians, Young and Old, Dolciani Mathematical Expositions, Mathematical Association of America.
1996. Linear Algebra Problem Book, Dolciani Mathematical Expositions, Mathematical Association of America.
1998. (with Steven Givant). Logic as Algebra, Dolciani Mathematical Expositions No. 21, Mathematical Association of America.
2009. (posthumous, with Steven Givant), Introduction to Boolean Algebras, Springer.
See also
Crinkled arc
Commutator subspace
Invariant subspace problem
Naive set theory
Criticism of non-standard analysis
The Martians (scientists)
Notes
References
External links
"Paul Halmos: A Life in Mathematics", Mathematical Association of America (MAA)
Finite-Dimensional Vector Spaces
"Examples of Operators" a series of video lectures on operators in Hilbert Space given by Paul Halmos during his 2-week stay in Australia, Briscoe Center Digital Collections
1916 births
2006 deaths
20th-century Hungarian mathematicians
20th-century American mathematicians
Algebraists
American logicians
American people of Hungarian-Jewish descent
American statisticians
Donegall Lecturers of Mathematics at Trinity College Dublin
Functional analysts
Hungarian emigrants to the United States
Hungarian Jews
Indiana University faculty
Jewish American scientists
Mathematical analysts
Measure theorists
Operator theorists
Probability theorists
American set theorists
The American Mathematical Monthly editors
University of Chicago faculty
University of Illinois Urbana-Champaign alumni
University of Michigan faculty | Paul Halmos | [
"Mathematics"
] | 1,411 | [
"Mathematical analysts",
"Mathematical analysis",
"Algebra",
"Algebraists"
] |
318,648 | https://en.wikipedia.org/wiki/Musica%20universalis | The musica universalis (literally universal music), also called music of the spheres or harmony of the spheres, is a philosophical concept that regards proportions in the movements of celestial bodies—the Sun, Moon, and planets—as a form of music. The theory, originating in ancient Greece, was a tenet of Pythagoreanism, and was later developed by 16th-century astronomer Johannes Kepler. Kepler did not believe this "music" to be audible, but felt that it could nevertheless be heard by the soul. The idea continued to appeal to scholars until the end of the Renaissance, influencing many schools of thought, including humanism.
History
The concept of the "music of the spheres" incorporates the metaphysical principle that mathematical relationships express qualities or "tones" of energy that manifest in numbers, visual angles, shapes and sounds, all connected within a pattern of proportion. Pythagoras first identified that the pitch of a musical note is inversely proportional to the length of the string that produces it, and that intervals between harmonious sound frequencies form simple numerical ratios. Pythagoras proposed that the Sun, Moon and planets all emit their own unique hum based on their orbital revolution, and that the quality of life on Earth reflects the tenor of celestial sounds which are physically imperceptible to the human ear. Subsequently, Plato described astronomy and music as "twinned" studies of sensual recognition: astronomy for the eyes, music for the ears, and both requiring knowledge of numerical proportions.
Aristotle characterized the theory as follows:
Aristotle rejected the idea, however, as incompatible with his own cosmological model, and on the grounds that "excessive noises ... shatter the solid bodies even of inanimate things", and therefore any sounds made by the planets would necessarily exert a tremendous physical force upon the body.
Boethius, in his influential work De Musica, described three categories of music:
musica mundana (sometimes referred to as musica universalis)
musica humana (the internal music of the human body)
musica quae in quibusdam constituta est instrumentis (sounds made by singers and instrumentalists)
Boethius believed that musica mundana could only be discovered through the intellect, but that the order found within it was the same as that found in audible music, and that both reflect the beauty of God.
Harmonices Mundi
Musica universalis, which had existed as a metaphysical concept since the time of the Greeks, was often taught in the quadrivium, and this intriguing connection between music and astronomy stimulated the imagination of Johannes Kepler, who devoted much of his time after publishing the Mysterium Cosmographicum (Mystery of the Cosmos) to looking over tables and trying to fit the data to what he believed to be the true nature of the cosmos as it relates to musical sound. In 1619, Kepler published Harmonices Mundi (literally Harmonies of the World), expanding on the concepts he introduced in Mysterium and positing that musical intervals and harmonies describe the motions of the six known planets of the time. He believed that this harmony, while inaudible, could be heard by the soul, and that it gave a "very agreeable feeling of bliss, afforded him by this music in the imitation of God." In Harmonices, Kepler, who took issue with Pythagorean observations, laid out an argument for a Christian-centric creator who had made an explicit connection between geometry, astronomy, and music, and who had arranged the planets intelligently.
Harmonices is split into five books, or chapters. The first and second books give a brief discussion on regular polyhedra and their congruences, reiterating the idea he introduced in Mysterium that the five regular solids known since antiquity define the orbits of the planets and their distances from the sun. Book three focuses on defining musical harmonies, including consonance and dissonance, intervals (including the problems of just tuning), their relation to string length (a discovery made by Pythagoras), and what, in his opinion, makes music pleasurable to listen to. In the fourth book, Kepler presents a metaphysical basis for this system, along with arguments as to why the harmony of the worlds appeals to the intellectual soul in the same manner that the harmony of music appeals to the human soul. Here, he also uses the naturalness of this harmony as an argument for heliocentrism. In book five, Kepler describes in detail the orbital motion of the planets and how this motion nearly perfectly matches musical harmonies. Finally, after a discussion on astrology in book five, Kepler ends Harmonices by describing his third law, which states that, for any planet, the cube of the semi-major axis of its elliptical orbit is proportional to the square of its orbital period.
In the final book of Harmonices, Kepler explains how the ratio of the maximum and minimum angular speeds of each planet (i.e., its speeds at the perihelion and aphelion) is very nearly equivalent to a consonant musical interval. Furthermore, the ratios between these extreme speeds of the planets compared against each other create even more mathematical harmonies. These speeds explain the eccentricity of the orbits of the planets in a natural way that appealed to Kepler's religious beliefs in a heavenly creator.
While Kepler did believe that the harmony of the worlds was inaudible, he related the motions of the planets to musical concepts in book four of Harmonices. He makes an analogy between comparing the extreme speeds of one planet and the extreme speeds of multiple planets with the difference between monophonic and polyphonic music. Because planets with larger eccentricities have a greater variation in speed, they produce more "notes." Earth's maximum and minimum speeds, for example, are in a ratio of roughly 16 to 15, or that of a semitone, whereas Venus' orbit is nearly circular and therefore produces only a single note. Mercury, which has the largest eccentricity, has the largest interval, a minor tenth, or a ratio of 12 to 5. This range, as well as the relative speeds between the planets, led Kepler to conclude that the Solar System was composed of two basses (Saturn and Jupiter), a tenor (Mars), two altos (Venus and Earth), and a soprano (Mercury), which had sung in "perfect concord" at the beginning of time, and could potentially arrange themselves to do so again. He was certain of the link between musical harmonies and the harmonies of the heavens and believed that "man, the imitator of the Creator," had emulated the polyphony of the heavens so as to enjoy "the continuous duration of the time of the world in a fraction of an hour."
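These interval claims are easy to check numerically. The Python sketch below is an illustration rather than Kepler's own procedure: it assumes modern eccentricity values and uses conservation of angular momentum, by which a planet's angular speed varies as 1/r², so the ratio of its fastest (perihelion) to slowest (aphelion) angular speed is ((1 + e)/(1 − e))².

    def extreme_angular_speed_ratio(e):
        # omega ~ 1/r^2, with r_perihelion = a(1 - e) and r_aphelion = a(1 + e)
        return ((1 + e) / (1 - e)) ** 2

    # Modern eccentricities (an assumption; Kepler worked from Tycho Brahe's tables)
    for name, e, interval, ratio in [
        ("Earth", 0.0167, "semitone", 16 / 15),
        ("Mercury", 0.2056, "minor tenth", 12 / 5),
    ]:
        print(f"{name}: {extreme_angular_speed_ratio(e):.3f} vs {interval} {ratio:.3f}")

This prints roughly 1.069 against 1.067 for Earth and 2.303 against 2.400 for Mercury: close but imperfect agreement, consistent with the inaccuracies discussed in the next paragraph.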
Kepler was so convinced of a creator that he maintained the existence of this harmony despite a number of inaccuracies present in Harmonices. Many of the ratios differed from the true value for the interval by more than simple measurement error, and the ratio between Mars' and Jupiter's angular velocities does not create a consonant interval, though every other combination of planets does. Kepler brushed this problem aside by arguing, with the math to support it, that because these elliptical paths had to fit into the regular solids described in Mysterium, the values for both the dimensions of the solids and the angular speeds would have to differ from the ideal values to compensate. This change also had the benefit of helping Kepler retroactively explain why the regular solids encompassing each planet were slightly imperfect. Philosophers posited that the Creator liked variation in the celestial music.
Kepler's books are well-represented in the Library of Sir Thomas Browne, who also expressed a belief in the music of the spheres:
For there is a musicke where-ever there is a harmony, order or proportion; and thus farre we may maintain the musick of the spheres; for those well ordered motions, and regular paces, though they give no sound unto the eare, yet to the understanding they strike a note most full of harmony. Whatsoever is harmonically composed, delights in harmony.
Orbital resonance
In celestial mechanics, orbital resonance occurs when orbiting bodies exert regular, periodic gravitational influence on each other, usually because their orbital periods are related by a ratio of small integers. This has been referred to as a "modern take" on the theory of musica universalis. This idea has been further explored in a musical animation, created by an artist at the European Southern Observatory, of the planetary system TOI-178, which has five planets locked in a chain of orbital resonances.
Cultural influence
William Shakespeare makes reference to the music of the spheres in The Merchant of Venice:
In the 1910s, Danish composer Rued Langgaard composed a pioneering orchestral work titled Music of the Spheres.
Paul Hindemith also made use of the concept in his 1957 opera, Die Harmonie der Welt ("The Harmony of the World"), based upon the life of Johannes Kepler.
A number of other modern compositions have been inspired by the concept of musica universalis. Among these are Harmony of the Spheres by Neil Ardley, the live-only track "La musique des sphères" by Magma/VanderTop, Music of the Spheres by Mike Oldfield, The Earth Sings Mi Fa Mi by The Receiving End of Sirens, Music of the Spheres by Ian Brown, "Cosmogony" by Björk, and the Coldplay album Music of the Spheres.
Music of the Spheres was also the title of a companion piece to the video game Destiny, composed by Martin O'Donnell, Michael Salvatori, and Paul McCartney.
A concert band arrangement by Philip Sparke also bears the name "Music of the Spheres" and is often set as a test piece; a notable studio performance was recorded by the YBS Band under the direction of David King.
Reference is made to the music of the spheres in the short story The Horror in the Museum by H. P. Lovecraft.
In the video game Overwatch, the playable character Sigma often claims the universe is singing to him.
During the 2008 BBC Proms Doctor Who segment, a short interactive mini-episode starring David Tennant and written by showrunner Russell T Davies titled Music of the Spheres was played. This sees the Doctor attempting to compose Ode to the Universe, basing his works on the Music of the Spheres. This piece continues the metaphysical theories of the musica universalis by arguing that the audience themselves are part of the composition.
See also
Asteroseismology
Gravitational waves
Plato's Timaeus
This Is My Father's World
Titius–Bode law
Sacred geometry
Shabd
Notes
Sources
Further reading
Martineau, John (2002). A Little Book of Coincidence in the Solar System. Gardener's Books.
External links
"The Music of the Spheres". In Our Time. BBC Radio 4. June 19, 2008.
"The Harmony of the Spheres". AudioCipher. December 31, 2021.
Ancient astronomy
Concepts in aesthetics
Concepts in metaphysics
Concepts in the philosophy of science
Early scientific cosmologies
Esoteric cosmology
Numerology
Philosophy of music
Pythagorean philosophy | Musica universalis | [
"Astronomy",
"Mathematics"
] | 2,285 | [
"History of astronomy",
"Mathematical objects",
"Numerology",
"Numbers",
"Ancient astronomy"
] |
318,667 | https://en.wikipedia.org/wiki/IBM%201013 | The IBM 1013 Card Transmission Terminal was a device manufactured by IBM in 1961 which transmitted the data held on 80-column cards to a remote computer or another 1013.
The nominal speed was 100 cards per minute, but throughput could be higher if the terminal was programmed to send or receive only a portion of each card when all 80 columns were not used. It required a full-duplex circuit to operate, but at any given time it could only transmit or receive.
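A back-of-the-envelope throughput calculation is sketched in Python below; the assumption that the card rate scales inversely with the number of columns sent is illustrative, not a documented specification.

    FULL_CARD_COLUMNS = 80
    NOMINAL_CARDS_PER_MIN = 100  # with all 80 columns transmitted

    columns_per_min = NOMINAL_CARDS_PER_MIN * FULL_CARD_COLUMNS
    print(columns_per_min / 60)  # ~133 card columns per second on the line

    # Hypothetical: if only 40 columns of each card are sent and the column
    # rate is the bottleneck, the card rate could roughly double.
    print(columns_per_min / 40)  # ~200 cards per minute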
References
External links
"IBM 1013 Card Transmission Terminal / Communicating Reader-Punch" at Computer History Museum
1013
1013 | IBM 1013 | [
"Technology"
] | 114 | [
"Computing stubs",
"Computer hardware stubs"
] |
318,669 | https://en.wikipedia.org/wiki/Schizosaccharomyces%20pombe | Schizosaccharomyces pombe, also called "fission yeast", is a species of yeast used in traditional brewing and as a model organism in molecular and cell biology. It is a unicellular eukaryote, whose cells are rod-shaped. Cells typically measure 3 to 4 micrometres in diameter and 7 to 14 micrometres in length. Its genome, which is approximately 14.1 million base pairs, is estimated to contain 4,970 protein-coding genes and at least 450 non-coding RNAs.
These cells maintain their shape by growing exclusively through the cell tips and divide by medial fission to produce two daughter cells of equal size, which makes them a powerful tool in cell cycle research.
Fission yeast was isolated in 1893 by Paul Lindner from East African millet beer. The species name pombe is the Swahili word for beer. It was first developed as an experimental model in the 1950s: by Urs Leupold for studying genetics, and by Murdoch Mitchison for studying the cell cycle.
Paul Nurse, a fission yeast researcher, successfully merged the independent schools of fission yeast genetics and cell cycle research. Together with Lee Hartwell and Tim Hunt, Nurse won the 2001 Nobel Prize in Physiology or Medicine for work on cell cycle regulation.
The sequence of the S. pombe genome was published in 2002 by a consortium led by the Sanger Institute, making it the sixth model eukaryote whose genome has been fully sequenced. S. pombe researchers are supported by the PomBase MOD (model organism database). This has fully unlocked the power of the organism: many genes orthologous to human genes have been identified, 70% to date, including many genes involved in human disease. In 2006, the sub-cellular localization of almost all the proteins in S. pombe was published, using green fluorescent protein as a molecular tag.
Schizosaccharomyces pombe has also become an important organism in studying the cellular responses to DNA damage and the process of DNA replication.
Approximately 160 natural strains of S. pombe have been isolated. These have been collected from a variety of locations, including Europe, North and South America, and Asia. The majority of these strains have been collected from cultivated fruits such as apples and grapes, or from various alcoholic beverages, such as Brazilian cachaça. S. pombe is also known to be present in kombucha, a fermented tea. It is not clear at present whether S. pombe is the major fermenter or a contaminant in such brews. The natural ecology of Schizosaccharomyces yeasts is not well studied.
History
Schizosaccharomyces pombe was first discovered in 1893 when a group working in a Brewery Association Laboratory in Germany was looking at sediment found in millet beer imported from East Africa that gave it an acidic taste. The term schizo, meaning "split" or "fission", had previously been used to describe other Schizosaccharomycetes. The addition of the word pombe was due to its isolation from East African beer, as pombe means "beer" in Swahili. The standard S. pombe strains were isolated by Urs Leupold in 1946 and 1947 from a culture that he obtained from the yeast collection in Delft, The Netherlands. It was deposited there by A. Osterwalder under the name S. pombe var. liquefaciens, after he isolated it in 1924 from French wine (most probably rancid) at the Federal Experimental Station of Vini- and Horticulture in Wädenswil, Switzerland. The culture used by Urs Leupold contained (among others) cells with the mating types h90 (strain 968), h- (strain 972), and h+ (strain 975). Subsequent to this, there have been two large efforts to isolate S. pombe from fruit, nectar, or fermentations: one by Florenzano et al. in the vineyards of western Sicily, and the other by Gomes et al. (2002) in four regions of southeast Brazil.
Ecology
The fission yeast S. pombe belongs to the division Ascomycota, which represents the largest and most diverse group of fungi. Free-living ascomycetes are commonly found in tree exudates, on plant roots and in surrounding soil, on ripe and rotting fruits, and in association with insect vectors that transport them between substrates. Many of these associations are symbiotic or saprophytic, although numerous ascomycetes (and their basidiomycete cousins) represent important plant pathogens that target myriad plant species, including commercial crops. Among the ascomycetous yeast genera, the fission yeast Schizosaccharomyces is unique because of the deposition of α-(1,3)-glucan or pseudonigeran in the cell wall in addition to the better known β-glucans, and the virtual lack of chitin. Species of this genus also differ in mannan composition, which shows terminal d-galactose sugars in the side-chains of their mannans. S. pombe undergoes aerobic fermentation in the presence of excess sugar. S. pombe can degrade L-malic acid, one of the dominant organic acids in wine, which sets it apart from most Saccharomyces strains.
Comparison with budding yeast (Saccharomyces cerevisiae)
The yeast species Schizosaccharomyces pombe and Saccharomyces cerevisiae are both extensively studied; these two species diverged approximately 300 to 600 million years before present, and both are significant tools in molecular and cellular biology. Some of the technical distinctions between these two species are:
S. cerevisiae has approximately 5,600 open reading frames; S. pombe has approximately 5,070 open reading frames.
Despite similar gene numbers, S. cerevisiae has only about 250 introns, while S. pombe has nearly 5,000.
S. cerevisiae has 16 chromosomes, S. pombe has 3.
S. cerevisiae is often diploid while S. pombe is usually haploid.
S. pombe has a shelterin-like telomere complex while S. cerevisiae does not.
S. cerevisiae is in the G1 phase of the cell cycle for an extended period (as a consequence, G1-S transition is tightly controlled), while S. pombe remains in the G2 phase of the cell cycle for an extended period (as a consequence, G2-M transition is under tight control).
Both species share genes with higher eukaryotes that they do not share with each other. S. pombe has RNAi machinery genes like those in vertebrates, while this is missing from S. cerevisiae. S. cerevisiae also has greatly simplified heterochromatin compared to S. pombe. Conversely, S. cerevisiae has well-developed peroxisomes, while S. pombe does not.
S. cerevisiae has a small point centromere of 125 bp, and sequence-defined replication origins of about the same size. Conversely, S. pombe has large, repetitive centromeres (40–100 kb), more similar to mammalian centromeres, and degenerate replication origins of at least 1 kb.
S. pombe pathways and cellular processes
S. pombe gene products (proteins and RNAs) participate in many cellular processes common across all life. The fission yeast GO slim provides a categorical high level overview of the biological role of all S. pombe gene products.
Life cycle
The fission yeast is a single-celled fungus with a simple, fully characterized genome and a rapid growth rate. It has long been used in brewing, baking, and molecular genetics. S. pombe is a rod-shaped cell, approximately 3 μm in diameter, that grows entirely by elongation at the ends. After mitosis, division occurs by the formation of a septum, or cell plate, that cleaves the cell at its midpoint.
The central events of cell reproduction are chromosome duplication, which takes place in S (synthesis) phase, followed by chromosome segregation and nuclear division (mitosis) and cell division (cytokinesis), which are collectively called M (mitotic) phase. G1 is the gap between M and S phases, and G2 is the gap between S and M phases. In the fission yeast, the G2 phase is particularly extended, and cytokinesis (daughter-cell segregation) does not happen until a new S phase is launched.
Fission yeast governs mitosis by mechanisms that are similar to those in multicellular animals. It normally proliferates in a haploid state. When starved, cells of opposite mating types (P and M) fuse to form a diploid zygote that immediately enters meiosis to generate four haploid spores. When conditions improve, these spores germinate to produce proliferating haploid cells.
Cytokinesis
The site of cell division is determined before anaphase. The anaphase spindle is then positioned so that the segregated chromosomes are on opposite sides of the predetermined cleavage plane.
Size control
In fission yeast, where growth governs progression through G2/M, a wee1 mutation causes entry into mitosis at an abnormally small size, resulting in a shorter G2. G1 is lengthened, suggesting that progression through Start (the beginning of the cell cycle) is responsive to growth when the G2/M control is lost. Furthermore, cells in poor nutrient conditions grow slowly and therefore take longer to double in size and divide. Low nutrient levels also reset the growth threshold so that the cell progresses through the cell cycle at a smaller size. Upon exposure to stressful conditions [heat (40 °C) or the oxidizing agent hydrogen peroxide], S. pombe cells undergo aging, as measured by increased cell division time and increased probability of cell death. Finally, wee1 mutant fission yeast cells are smaller than wild-type cells but take just as long to go through the cell cycle. This is possible because small yeast cells grow more slowly; that is, their added total mass per unit time is smaller than that of normal cells.
A spatial gradient is thought to coordinate cell size and mitotic entry in fission yeast.
The Pom1 protein kinase is localized to the cell cortex, with the highest concentration at the cell tips, while the cell-cycle regulators Cdr2, Cdr1 and Wee1 are present in cortical nodes in the middle of the cell. In small cells, the Pom1 gradient reaches most of these nodes: Pom1 inhibits Cdr2, preventing Cdr2 and Cdr1 from inhibiting Wee1, and allowing Wee1 to phosphorylate Cdk1, thus inactivating cyclin-dependent kinase (CDK) activity and preventing entry into mitosis. In long cells, the Pom1 gradient does not reach the cortical nodes, so Cdr2 and Cdr1 remain active there; they inhibit Wee1, preventing phosphorylation of Cdk1 and thereby leading to activation of CDK and mitotic entry. (This simplified account omits several other regulators of CDK activity.)
Mating-type switching
Fission yeast switches mating type by a replication-coupled recombination event, which takes place during S phase of the cell cycle. Fission yeast uses the intrinsic asymmetry of the DNA replication process to switch the mating type; it was the first system in which the direction of replication was shown to be required for the change of the cell type. Studies of the mating-type switching system led to the discovery and characterization of a site-specific replication termination site RTS1, a site-specific replication pause site MPS1, and a novel type of chromosomal imprint marking one of the sister chromatids at the mating-type locus mat1. In addition, work on the silenced donor region has led to great advances in understanding the formation and maintenance of heterochromatin.
Responses to DNA damage
Schizosaccharomyces pombe is a facultative sexual microorganism that can undergo mating when nutrients are limiting. Exposure of S. pombe to hydrogen peroxide, an agent that causes oxidative stress leading to oxidative DNA damage, strongly induces mating and formation of meiotic spores. This finding suggests that meiosis, and particularly meiotic recombination, may be an adaptation for repairing DNA damage. Supporting this view is the finding that single base lesions of the type dU:dG in the DNA of S. pombe stimulate meiotic recombination. This recombination requires uracil-DNA glycosylase, an enzyme that removes uracil from the DNA backbone and initiates base excision repair. On the basis of this finding, it was proposed that base excision repair of either a uracil base, an abasic site, or a single-strand nick is sufficient to initiate recombination in S. pombe. Other experiments with S. pombe indicated that faulty processing of DNA replication intermediates, i.e. Okazaki fragments, causes DNA damages such as single-strand nicks or gaps, and that these stimulate meiotic recombination.
As a model system
Fission yeast has become a notable model system for studying basic principles of the cell that can be used to understand more complex organisms like mammals, and in particular humans. This single-celled eukaryote is nonpathogenic and easily grown and manipulated in the lab. Fission yeast contains one of the smallest numbers of genes of any sequenced eukaryotic genome, and has only three chromosomes. Many of the genes responsible for cell division and cellular organization in the fission yeast cell are also found in the human genome. Cell cycle regulation and division are crucial for the growth and development of any cell, and fission yeast's conserved genes have been heavily studied, underlying many recent biomedical developments. Fission yeast is also a practical model system in which to observe cell division, because fission yeasts are cylindrical single-celled eukaryotes that divide and reproduce by medial fission, which is easily seen using microscopy. Fission yeast also has an extremely short generation time, 2 to 4 hours, making it an easy model system to observe and grow in the laboratory. Its simple genomic structure, its similarities with the mammalian genome, its ease of manipulation, and its applicability to drug analysis are why fission yeast makes many contributions to biomedicine and cell biology research, and serves as a model system for genetic analysis.
Genome
Schizosaccharomyces pombe is often used to study cell division and growth because of conserved genomic regions also seen in humans, including heterochromatin proteins, large origins of replication, large centromeres, conserved cellular checkpoints, telomere function, gene splicing, and many other cellular processes. The S. pombe genome was fully sequenced in 2002, the sixth eukaryotic genome to be sequenced as part of the Genome Project. An estimated 4,979 genes were discovered on the three chromosomes in the nucleus, which together contain about 14 Mb of DNA, with gaps in the centromeric (40 kb) and telomeric (260 kb) regions. After the initial sequencing of the fission yeast genome, other previously unsequenced gene regions have been sequenced. Structural and functional analysis of these gene regions can be found in large-scale fission yeast databases such as PomBase.
Forty-three percent of the genes in the Genome Project were found to contain introns, 4,739 in total. Fission yeast does not have as many duplicated genes as budding yeast, containing only 5%, which makes fission yeast a good model genome to observe and gives researchers the ability to create more functional research approaches. S. pombe's large number of introns increases the range of protein types produced by alternative splicing, including products of genes comparable to human genes.
81% of the three centromeres in fission yeast has been sequenced. The lengths of the three centromeres were found to be 34, 65, and 110 kb, hundreds of times longer than the point centromeres of budding yeast (the arithmetic is sketched after this paragraph). An extremely high level of conservation (97%) is also seen over a 1,780-bp region in the DGS regions of the centromere. This elongation of the centromeres and their conserved sequences make fission yeast a practical model system for observing cell division of relevance to humans, whose centromeres are likewise large.
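A quick check of that size comparison, in Python; the 125-bp point-centromere figure for budding yeast is taken from the comparison list above.

    fission_centromeres_kb = [34, 65, 110]
    budding_point_centromere_bp = 125  # S. cerevisiae, from the comparison above

    for kb in fission_centromeres_kb:
        print(round(kb * 1000 / budding_point_centromere_bp))  # 272, 520, 880

So the fission yeast centromeres are roughly 270 to 880 times longer, i.e. hundreds of times the size of a budding yeast point centromere.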
PomBase reports that over 69% of protein-coding genes have human orthologs, and over 500 of these are associated with human disease. This makes S. pombe a great system for studying human genes and disease pathways, especially cell cycle and DNA checkpoint systems.
The genome of S. pombe contains meiotic drivers and drive suppressors called wtf genes.
Genetic diversity
Biodiversity and evolutionary studies of fission yeast were carried out on 161 strains of Schizosaccharomyces pombe collected from 20 countries. Modeling of the evolutionary rate showed that all strains derive from a common ancestor that lived about 2,300 years ago. The study also identified a set of 57 strains of fission yeast that each differed by ≥1,900 SNPs, and all 57 detected strains were prototrophic (able to grow on the same minimal medium as the reference strain). A number of studies on the S. pombe genome support the idea that the genetic diversity of fission yeast strains is slightly less than that of budding yeast. Indeed, S. pombe shows only limited variation in proliferation across different environments, and the amount of phenotypic variation segregating in fission yeast is less than that seen in S. cerevisiae. Since most strains of fission yeast were isolated from brewed beverages, there is no ecological or historical context to this dispersal.
Cell cycle analysis
DNA replication in yeast has been increasingly studied by many researchers. Further understanding of DNA replication, gene expression, and conserved mechanisms in yeast can provide researchers with information on how these systems operate in mammalian cells in general and human cells in particular. Other stages, such as cellular growth and aging, are also observed in yeast in order to understand these mechanisms in more complex systems.
S. pombe stationary-phase cells undergo chronological aging due to the production of reactive oxygen species that cause DNA damage. Most such damage can ordinarily be repaired by DNA base excision repair and nucleotide excision repair. Defects in these repair processes lead to reduced survival.
Cytokinesis is one of the components of cell division that is often observed in fission yeast. Well-conserved components of cytokinesis are observed in fission yeast, allowing researchers to look at various genomic scenarios and pinpoint mutations. Cytokinesis is a permanent step and crucial to the wellbeing of the cell. Contractile ring formation in particular is heavily studied by researchers using S. pombe as a model system; the contractile ring is highly conserved in both fission yeast and human cytokinesis. Mutations in cytokinesis can result in many malfunctions of the cell, including cell death and the development of cancerous cells. This is a complex process in human cell division, but in S. pombe simpler experiments can yield results that can then be applied to research in higher-order model systems such as humans.
One of the safety precautions that the cell takes to ensure precise cell division is the cell-cycle checkpoint. These checkpoints ensure that cells carrying mutations do not continue to divide, often by relay signals that stimulate ubiquitination of targets and delay cytokinesis. Without mitotic checkpoints such as these, mutations are created and replicated, resulting in multitudes of cellular issues including cell death or the tumorigenesis seen in cancerous cells. Paul Nurse, Leland Hartwell, and Tim Hunt were awarded the Nobel Prize in Physiology or Medicine in 2001 for discovering key conserved checkpoints that are crucial for a cell to divide properly. These findings have been linked to cancer and diseased cells, and are notable for biomedicine.
Researchers using fission yeast as a model system also look at organelle dynamics and responses, and the possible correlations between yeast cells and mammalian cells. Mitochondrial diseases and various organelle systems, such as the Golgi apparatus and the endoplasmic reticulum, can be further understood by observing fission yeast's chromosome dynamics and its protein expression levels and regulation.
Meiotic recombination
RecA and RecA-like proteins are required for the recombinational repair of DNA double-strand breaks. Five RecA-like proteins linked to meiotic recombination have been described in S. pombe, and all five appear to be required for normal levels of meiotic recombination.
Biomedical tool
There are, however, limitations to using fission yeast as a model system: its multidrug resistance (MDR). "The MDR response involves overexpression of two types of drug efflux pumps, the ATP-binding cassette (ABC) family... and the major facilitator superfamily". Paul Nurse and some of his colleagues have recently created S. pombe strains sensitive to chemical inhibitors and common probes to see whether it is possible to use fission yeast as a model system for chemical drug research.
For example, Doxorubicin, a very common chemotherapeutic antibiotic, has many adverse side-effects. Researchers are looking for ways to further understand how doxorubicin works by using fission yeast as a model system to observe the genes linked to resistance. Links between doxorubicin's adverse side-effects and chromosome metabolism and membrane transport have been seen. Metabolic models for drug targeting are now being used in biotechnology, and further advances are expected in the future using the fission yeast model system.
Experimental approaches
Fission yeast is easily accessible, easily grown and manipulated to make mutants, and able to be maintained in either a haploid or a diploid state. S. pombe is normally a haploid cell but, when put under stressful conditions, usually nitrogen deficiency, two cells conjugate to form a diploid that later forms four spores within a tetrad ascus. This process is easily visible and observable under any microscope and allows meiosis to be observed in a simpler model system.
Virtually any genetics experiment or technique can therefore be applied to this model system, such as tetrad dissection, mutagen analysis, transformation, and microscopy techniques such as FRAP and FRET. New models, such as Tug-Of-War (gTOW), are also being used to analyze yeast robustness and observe gene expression. Making knock-in and knock-out genes is fairly easy, and with the fission yeast genome sequenced this task is very accessible and well understood.
See also
DNA damage (naturally occurring)
DNA repair
Yeast
PomBase
References
External links
PomBase — The Pombe Genome Database
MicrobeWiki page on Schizosaccharomyces pombe
Ascomycota
Fungal models
Yeasts
Fungi described in 1893
Fungus species | Schizosaccharomyces pombe | [
"Biology"
] | 4,959 | [
"Fungi",
"Fungus species",
"Yeasts",
"Model organisms",
"Fungal models"
] |
318,672 | https://en.wikipedia.org/wiki/Whakapapa | Whakapapa (, ), or genealogy, is a fundamental principle in Māori culture. Reciting one's whakapapa proclaims one's Māori identity, places oneself in a wider context, and links oneself to land and tribal groupings and their mana.
Experts in whakapapa can trace and recite a lineage not only through the many generations in a linear sense, but also between such generations in a lateral sense.
Link with ancestry
Raymond Firth, an acclaimed New Zealand economist and anthropologist during the early 20th century, asserted that there are four different levels of Māori kinship terminology that are as follows:
Some scholars have attributed this type of genealogical activity as being tantamount to ancestor worship. Most Māori would probably attribute this to ancestor reverence. Tribes and sub-tribes are mostly named after an ancestor (either male or female): for example, Ngāti Kahungunu means 'descendants of Kahungunu (a famous chief who lived mostly in what is now called the Hawke's Bay region).
According to Atholl Anderson, it was "[the] intensely practical value of whakapapa that guaranteed their general accuracy". Ethnographer Walter Ong said of European dismissiveness of the accuracy of oral history like whakapapa: "Oral cultures must invest great energy in saying over and over again what has been learnt arduously over the ages".
Word associations
Many physiological terms are also genealogical in 'nature'. For example, the terms 'iwi', 'hapu', and 'whānau' (as noted above) can also be translated in order as 'bones', 'pregnant', and 'give birth'. The prize-winning Māori author Keri Hulme named her best-known novel The Bone People, a title linked directly to the dual meaning of the word 'iwi' as both 'bone' and 'tribal people'.
Most formal orations (or whaikōrero) begin with the "nasal" expression Tihei Mauriora! This is translated as the 'Sneeze of Life'. In effect, the orator (whose 'sneeze' recalls a newborn clearing his or her airways to take the first breath of life) is announcing that his speech has now begun, and that his 'airways' are clear enough to give a suitable oration.
Whakapapa in the mental health system
Whakapapa is defined as the "genealogical descent of all living things from God to the present time". Since all living things, including rocks and mountains, are believed to possess whakapapa, it is further defined as "a basis for the organisation of knowledge in the respect of the creation and development of all things".
Hence, whakapapa also implies a deep connection to land and the roots of one's ancestry. In order to trace one's whakapapa it is essential to identify the location where one's ancestral heritage began; "you can’t trace it back any further". "Whakapapa links all people back to the land and sea and sky and outer universe, therefore, the obligations of whanaungatanga extend to the physical world and all being in it".
While some family and community health organisations may require details of whakapapa as part of client assessment, it is generally better if whakapapa is disclosed voluntarily by whānau, if they are comfortable with this. Usually details of a client's whakapapa are not required since sufficient information can be obtained through their iwi identification. Cases where whakapapa may be required include adoption cases or situations where whakapapa information may be of benefit to the client's health and well-being.
Whakapapa is also believed to determine an individual's intrinsic tapu. "Sharing whakapapa enables the identification of obligations...and gaining trust of participants". Additionally, since whakapapa is believed to be "inextricably linked to the physical gene", concepts of tapu would still apply. Therefore, it is essential to ensure that appropriate cultural protocols are adhered to.
Misuse of such private and privileged information is of great concern to Māori. While whakapapa information may be disclosed to a kaimatai hinengaro in confidence, this information may be stored in databases that could be accessed by others. While most health professions are embracing technological advances of data storage, this may be an area of further investigation so that confidential information pertaining to a client's whakapapa cannot be disclosed to others.
Additionally, it may be beneficial to find out if the client is comfortable with whakapapa information being stored in ways that have the potential to be disclosed to others. To combat such issues, a Māori Code of Ethics has been suggested. A Māori Code of Ethics may prevent "the mismanagement or manipulation of either the information or the informants".
Sport
Although this rule was not rigorously applied in the past, people today have to prove whakapapa to qualify for membership of the international Māori All Blacks rugby union team, the New Zealand Māori rugby league team, and the New Zealand Māori cricket team.
Notes
References
Genealogy
Iwi and hapū
Māori culture
Māori words and phrases
Māori society | Whakapapa | [
"Biology"
] | 1,092 | [
"Phylogenetics",
"Genealogy"
] |
318,742 | https://en.wikipedia.org/wiki/Hyperbolic%20space | In mathematics, hyperbolic space of dimension n is the unique simply connected, n-dimensional Riemannian manifold of constant sectional curvature equal to −1. It is homogeneous, and satisfies the stronger property of being a symmetric space. There are many ways to construct it as an open subset of with an explicitly written Riemannian metric; such constructions are referred to as models. Hyperbolic 2-space, H2, which was the first instance studied, is also called the hyperbolic plane.
It is also sometimes referred to as Lobachevsky space or Bolyai–Lobachevsky space, after the names of the authors who first published on the topic of hyperbolic geometry. Sometimes the qualificative "real" is added to distinguish it from complex hyperbolic spaces.
Hyperbolic space serves as the prototype of a Gromov hyperbolic space, which is a far-reaching notion including differential-geometric as well as more combinatorial spaces via a synthetic approach to negative curvature. Another generalisation is the notion of a CAT(−1) space.
Formal definition and models
Definition
The n-dimensional hyperbolic space or hyperbolic n-space, usually denoted Hn, is the unique simply connected, n-dimensional complete Riemannian manifold with a constant negative sectional curvature equal to −1. The uniqueness means that any two Riemannian manifolds that satisfy these properties are isometric to each other; this is a consequence of the Killing–Hopf theorem.
Models of hyperbolic space
To prove the existence of such a space as described above, one can explicitly construct it, for example as an open subset of R^n with a Riemannian metric given by a simple formula. There are many such constructions, or models, of hyperbolic space, each suited to different aspects of its study. They are isometric to each other according to the previous paragraph, and in each case an explicit isometry can be given. Here is a list of the better-known models, which are described in more detail in their namesake articles:
Poincaré half-space model: this is the upper half-space {(x_1, ..., x_n) : x_n > 0} with the metric ds^2 = (dx_1^2 + ... + dx_n^2) / x_n^2.
Poincaré disc model: this is the unit ball of R^n with the metric ds^2 = 4(dx_1^2 + ... + dx_n^2) / (1 − |x|^2)^2. The isometry to the half-space model can be realised by a homography sending a point of the unit sphere to infinity.
Hyperboloid model: In contrast with the previous two models, this realises hyperbolic n-space as isometrically embedded inside the (n+1)-dimensional Minkowski space (which is not a Riemannian but rather a Lorentzian manifold). More precisely, looking at the quadratic form q(x) = x_1^2 + ... + x_n^2 − x_{n+1}^2 on R^{n+1}, its restriction to the tangent spaces of the upper sheet of the hyperboloid, given by q(x) = −1 with x_{n+1} > 0, is positive definite, hence it endows the sheet with a Riemannian metric that turns out to be of constant curvature −1. The isometry to the previous models can be realised by stereographic projection from the hyperboloid to the plane {x_{n+1} = 0}, taking the vertex from which to project to be (0, ..., 0, −1) for the ball, and a point at infinity in the cone {q(x) = 0} inside projective space for the half-space. (A distance computation in this model is sketched after this list.)
Beltrami–Klein model: This is another model realised on the unit ball of R^n; rather than being given as an explicit metric, it is usually presented as obtained by using stereographic projection from the hyperboloid model in Minkowski space to its horizontal tangent plane {x_{n+1} = 1} from the origin (0, ..., 0).
Symmetric space: Hyperbolic n-space can be realised as the symmetric space of the simple Lie group SO(n, 1) (the group of isometries of the quadratic form q with positive determinant); as a set the latter is the coset space SO(n, 1)/O(n). The isometry to the hyperboloid model is immediate through the action of the connected component of SO(n, 1) on the hyperboloid.
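To make the hyperboloid model concrete, here is a minimal Python sketch (function names are illustrative) computing the distance between two points of Hn realised on the upper sheet, via the standard formula d(x, y) = arcosh(−⟨x, y⟩), where ⟨·,·⟩ is the Minkowski bilinear form associated with the quadratic form q above.

    import math

    def minkowski_form(x, y):
        # <x, y> = x_1*y_1 + ... + x_n*y_n - x_{n+1}*y_{n+1}
        return sum(a * b for a, b in zip(x[:-1], y[:-1])) - x[-1] * y[-1]

    def lift(p):
        # Lift p in R^n to the upper sheet {q(x) = -1, x_{n+1} > 0}
        return list(p) + [math.sqrt(1.0 + sum(c * c for c in p))]

    def hyperbolic_distance(p, q):
        # d(x, y) = arcosh(-<x, y>) for x, y on the upper sheet
        return math.acosh(-minkowski_form(lift(p), lift(q)))

    print(hyperbolic_distance((0.0, 0.0), (3.0, 4.0)))  # arcosh(sqrt(26)) ~ 2.31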
Geometric properties
Parallel lines
Hyperbolic space, developed independently by Nikolai Lobachevsky, János Bolyai and Carl Friedrich Gauss, is a geometric space analogous to Euclidean space, but such that Euclid's parallel postulate is no longer assumed to hold. Instead, the parallel postulate is replaced by the following alternative (in two dimensions):
Given any line L and point P not on L, there are at least two distinct lines passing through P that do not intersect L.
It is then a theorem that there are infinitely many such lines through P. This axiom still does not uniquely characterize the hyperbolic plane up to isometry; there is an extra constant, the curvature K < 0, that must be specified. However, it does uniquely characterize it up to homothety, meaning up to bijections that only change the notion of distance by an overall constant. By choosing an appropriate length scale, one can thus assume, without loss of generality, that K = −1: rescaling the metric g to |K|·g rescales all sectional curvatures by 1/|K|, so a space of constant curvature K < 0 becomes one of constant curvature −1.
Euclidean embeddings
The hyperbolic plane cannot be isometrically embedded into Euclidean 3-space, by Hilbert's theorem. On the other hand, the Nash embedding theorem implies that hyperbolic n-space can be isometrically embedded into some Euclidean space of larger dimension (5 for the hyperbolic plane).
When isometrically embedded in a Euclidean space, every point of a hyperbolic space is a saddle point.
Volume growth and isoperimetric inequality
The volume of balls in hyperbolic space increases exponentially with respect to the radius of the ball rather than polynomially as in Euclidean space. Namely, if B(r) is any ball of radius r in Hn, then:

Vol(B(r)) = Ω_{n−1} ∫_0^r sinh^{n−1}(t) dt

where Ω_{n−1} is the total volume of the Euclidean (n−1)-sphere of radius 1.
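A numerical illustration of this exponential growth, evaluating the formula above for n = 3 (a sketch; the integral is approximated by a simple midpoint rule):

    import math

    def hyperbolic_ball_volume(n, r, steps=100_000):
        # Vol(B(r)) = Omega_{n-1} * integral_0^r sinh^{n-1}(t) dt,
        # where Omega_{n-1} = 2 pi^{n/2} / Gamma(n/2) is the volume of
        # the Euclidean unit (n-1)-sphere.
        omega = 2 * math.pi ** (n / 2) / math.gamma(n / 2)
        dt = r / steps
        return omega * dt * sum(math.sinh((i + 0.5) * dt) ** (n - 1) for i in range(steps))

    for r in (1, 5, 10):
        euclidean = 4 / 3 * math.pi * r ** 3
        print(r, hyperbolic_ball_volume(3, r) / euclidean)

The ratio of hyperbolic to Euclidean volume is about 1.2 at r = 1 but already about 1.8 × 10^5 at r = 10, reflecting the exponential growth.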
The hyperbolic space also satisfies a linear isoperimetric inequality, that is, there exists a constant C such that any embedded disk whose boundary has length L has area at most C·L. This is to be contrasted with Euclidean space, where the isoperimetric inequality is quadratic.
Other metric properties
There are many more metric properties of hyperbolic space that differentiate it from Euclidean space. Some can be generalised to the setting of Gromov-hyperbolic spaces, which is a generalisation of the notion of negative curvature to general metric spaces using only the large-scale properties. A finer notion is that of a CAT(−1)-space.
Hyperbolic manifolds
Every complete, connected, simply connected manifold of constant negative curvature −1 is isometric to the real hyperbolic space Hn. As a result, the universal cover of any closed manifold M of constant negative curvature −1, which is to say, a hyperbolic manifold, is Hn. Thus, every such M can be written as Hn/Γ, where Γ is a torsion-free discrete group of isometries of Hn. That is, Γ is a lattice in the isometry group of Hn.
Riemann surfaces
Two-dimensional hyperbolic surfaces can also be understood according to the language of Riemann surfaces. According to the uniformization theorem, every Riemann surface is either elliptic, parabolic or hyperbolic. Most hyperbolic surfaces have a non-trivial fundamental group Γ; the groups that arise this way are known as Fuchsian groups. The quotient space H2/Γ of the upper half-plane modulo the fundamental group is known as the Fuchsian model of the hyperbolic surface. The Poincaré half-plane is also hyperbolic, but is simply connected and noncompact. It is the universal cover of the other hyperbolic surfaces.
The analogous construction for three-dimensional hyperbolic manifolds is the Kleinian model.
See also
Dini's surface
Hyperbolic 3-manifold
Ideal polyhedron
Mostow rigidity theorem
Murakami–Yano formula
Pseudosphere
References
Footnotes
Bibliography
Ratcliffe, John G., Foundations of hyperbolic manifolds, New York, Berlin. Springer-Verlag, 1994.
Reynolds, William F. (1993) "Hyperbolic Geometry on a Hyperboloid", American Mathematical Monthly 100:442–455.
Wolf, Joseph A. Spaces of constant curvature, 1967. See page 67.
Homogeneous spaces
Hyperbolic geometry
Topological spaces | Hyperbolic space | [
"Physics",
"Mathematics"
] | 1,605 | [
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Geometry",
"Symmetry"
] |
318,779 | https://en.wikipedia.org/wiki/Embryophyte | The embryophytes () are a clade of plants, also known as Embryophyta () or land plants. They are the most familiar group of photoautotrophs that make up the vegetation on Earth's dry lands and wetlands. Embryophytes have a common ancestor with green algae, having emerged within the Phragmoplastophyta clade of freshwater charophyte green algae as a sister taxon of Charophyceae, Coleochaetophyceae and Zygnematophyceae. Embryophytes consist of the bryophytes and the polysporangiophytes. Living embryophytes include hornworts, liverworts, mosses, lycophytes, ferns, gymnosperms and angiosperms (flowering plants). Embryophytes have diplobiontic life cycles.
The embryophytes are informally called "land plants" because they thrive primarily in terrestrial habitats (despite some members having evolved secondarily to live once again in semiaquatic/aquatic habitats), while the related green algae are primarily aquatic. Embryophytes are complex multicellular eukaryotes with specialized reproductive organs. The name derives from their innovative characteristic of nurturing the young embryo sporophyte during the early stages of its multicellular development within the tissues of the parent gametophyte. With very few exceptions, embryophytes obtain biological energy by photosynthesis, using chlorophyll a and b to harvest the light energy in sunlight for carbon fixation from carbon dioxide and water in order to synthesize carbohydrates while releasing oxygen as a byproduct.
Description
The Embryophytes emerged either a half-billion years ago, at some time in the interval between the mid-Cambrian and early Ordovician, or almost a billion years ago, during the Tonian or Cryogenian, probably from freshwater charophytes, a clade of multicellular green algae similar to extant Klebsormidiophyceae. The emergence of the Embryophytes depleted atmospheric CO2 (a greenhouse gas), leading to global cooling, and thereby precipitating glaciations. Embryophytes are primarily adapted for life on land, although some are secondarily aquatic. Accordingly, they are often called land plants or terrestrial plants.
On a microscopic level, the cells of charophytes are broadly similar to those of chlorophyte green algae, but differ in that in cell division the daughter nuclei are separated by a phragmoplast. They are eukaryotic, with a cell wall composed of cellulose and plastids surrounded by two membranes. The latter include chloroplasts, which conduct photosynthesis and store food in the form of starch, and are characteristically pigmented with chlorophylls a and b, generally giving them a bright green color. Embryophyte cells also generally have an enlarged central vacuole enclosed by a vacuolar membrane or tonoplast, which maintains cell turgor and keeps the plant rigid.
In common with all groups of multicellular algae they have a life cycle which involves alternation of generations. A multicellular haploid generation with a single set of chromosomes – the gametophyte – produces sperm and eggs which fuse and grow into a diploid multicellular generation with twice the number of chromosomes – the sporophyte which produces haploid spores at maturity. The spores divide repeatedly by mitosis and grow into a gametophyte, thus completing the cycle. Embryophytes have two features related to their reproductive cycles which distinguish them from all other plant lineages. Firstly, their gametophytes produce sperm and eggs in multicellular structures (called 'antheridia' and 'archegonia'), and fertilization of the ovum takes place within the archegonium rather than in the external environment. Secondly, the initial stage of development of the fertilized egg (the zygote) into a diploid multicellular sporophyte, takes place within the archegonium where it is both protected and provided with nutrition. This second feature is the origin of the term 'embryophyte' – the fertilized egg develops into a protected embryo, rather than dispersing as a single cell. In the bryophytes the sporophyte remains dependent on the gametophyte, while in all other embryophytes the sporophyte generation is dominant and capable of independent existence.
Embryophytes also differ from algae by having metamers. Metamers are repeated units of development, in which each unit derives from a single cell, but the resulting product tissue or part is largely the same for each cell. The whole organism is thus constructed from similar, repeating parts or metamers. Accordingly, these plants are sometimes termed 'metaphytes' and classified as the group Metaphyta (but Haeckel's definition of Metaphyta places some algae in this group). In all land plants a disc-like structure called a phragmoplast forms where the cell will divide, a trait found only in the land plants of the streptophyte lineage, in some species within their relatives Coleochaetales, Charales and Zygnematales, and in subaerial species of the algae order Trentepohliales, and it appears to be essential in the adaptation towards a terrestrial life style.
Evolution
The green algae and land plants form a clade, the Viridiplantae. According to molecular clock estimates, the Viridiplantae split into two clades: chlorophytes and streptophytes. The chlorophytes, with around 700 genera, were originally marine algae, although some groups have since spread into fresh water. The streptophyte algae (i.e. excluding the land plants) have around 122 genera; they adapted to fresh water very early in their evolutionary history and have not spread back into marine environments.
Some time during the Ordovician, streptophytes invaded the land and began the evolution of the embryophyte land plants. Present day embryophytes form a clade. Becker and Marin speculate that land plants evolved from streptophytes because living in fresh water pools pre-adapted them to tolerate a range of environmental conditions found on land, such as exposure to rain, tolerance of temperature variation, high levels of ultra-violet light, and seasonal dehydration.
The preponderance of molecular evidence as of 2006 suggested that the groups making up the embryophytes are related as shown in the cladogram below (based on Qiu et al. 2006 with additional names from Crane et al. 2004).
An updated phylogeny of Embryophytes based on the work by Novíkov & Barabaš-Krasni 2015 and Hao and Xue 2013 with plant taxon authors from Anderson, Anderson & Cleal 2007 and some additional clade names. Puttick et al./Nishiyama et al. are used for the basal clades.
Diversity
Non-vascular land plants
The non-vascular land plants, namely the mosses (Bryophyta), hornworts (Anthocerotophyta), and liverworts (Marchantiophyta), are relatively small plants, often confined to environments that are humid or at least seasonally moist. They are limited by their reliance on water needed to disperse their gametes; a few are truly aquatic. Most are tropical, but there are many arctic species. They may locally dominate the ground cover in tundra and Arctic–alpine habitats or the epiphyte flora in rain forest habitats.
They are usually studied together because of their many similarities. All three groups share a haploid-dominant (gametophyte) life cycle and unbranched sporophytes (the plant's diploid generation). These traits appear to be common to all early diverging lineages of non-vascular plants on the land. Their life-cycle is strongly dominated by the haploid gametophyte generation. The sporophyte remains small and dependent on the parent gametophyte for its entire brief life. All other living groups of land plants have a life cycle dominated by the diploid sporophyte generation. It is in the diploid sporophyte that vascular tissue develops. In some ways, the term "non-vascular" is a misnomer. Some mosses and liverworts do produce a special type of vascular tissue composed of complex water-conducting cells. However, this tissue differs from that of "vascular" plants in that these water-conducting cells are not lignified. It is unlikely that the water-conducting cells in mosses are homologous with the vascular tissue in "vascular" plants.
Like the vascular plants, they have differentiated stems, and although these are most often no more than a few centimeters tall, they provide mechanical support. Most have leaves, although these typically are one cell thick and lack veins. They lack true roots or any deep anchoring structures. Some species grow a filamentous network of horizontal stems, but these have a primary function of mechanical attachment rather than extraction of soil nutrients (Palaeos 2008).
Rise of vascular plants
During the Silurian and Devonian periods (around 440 to 360 million years ago), plants evolved which possessed true vascular tissue, including cells with walls strengthened by lignin (tracheids). Some extinct early plants appear to fall between the grade of organization of bryophytes and that of true vascular plants (eutracheophytes). Genera such as Horneophyton have water-conducting tissue more like that of mosses, but a different life-cycle in which the sporophyte is branched and more developed than the gametophyte. Genera such as Rhynia have a similar life-cycle but have simple tracheids and so are a kind of vascular plant. It was long assumed that the gametophyte-dominant phase seen in bryophytes was the ancestral condition in terrestrial plants, and that the sporophyte-dominant stage in vascular plants was a derived trait. However, the gametophyte and sporophyte stages were probably equally independent of each other, in which case mosses and vascular plants are both derived, having evolved in opposite directions.
During the Devonian period, vascular plants diversified and spread to many different land environments. In addition to vascular tissues which transport water throughout the body, tracheophytes have an outer layer or cuticle that resists drying out. The sporophyte is the dominant generation, and in modern species develops leaves, stems and roots, while the gametophyte remains very small.
Lycophytes and euphyllophytes
All the vascular plants which disperse through spores were once thought to be related (and were often grouped as 'ferns and allies'). However, recent research suggests that leaves evolved quite separately in two different lineages. The lycophytes or lycopodiophytes – modern clubmosses, spikemosses and quillworts – make up less than 1% of living vascular plants. They have small leaves, often called 'microphylls' or 'lycophylls', which are borne all along the stems in the clubmosses and spikemosses, and which effectively grow from the base, via an intercalary meristem. It is believed that microphylls evolved from outgrowths on stems, such as spines, which later acquired veins (vascular traces).
Although the living lycophytes are all relatively small and inconspicuous plants, more common in the moist tropics than in temperate regions, during the Carboniferous period tree-like lycophytes (such as Lepidodendron) formed huge forests that dominated the landscape.
The euphyllophytes, making up more than 99% of living vascular plant species, have large 'true' leaves (megaphylls), which effectively grow from the sides or the apex, via marginal or apical meristems. One theory is that megaphylls evolved from three-dimensional branching systems by first 'planation' – flattening to produce a two-dimensional branched structure – and then 'webbing' – tissue growing out between the flattened branches. Others have questioned whether megaphylls evolved in the same way in different groups.
Ferns and horsetails
The ferns and horsetails (the Polypodiophyta) form a clade; they use spores as their main method of dispersal. Whisk ferns and horsetails were historically treated as distinct from 'true' ferns. Living whisk ferns and horsetails do not have the large leaves (megaphylls) which would be expected of euphyllophytes. This has probably resulted from reduction, as evidenced by early fossil horsetails, in which the leaves are broad with branching veins.
Ferns are a large and diverse group, with some 12,000 species. A stereotypical fern has broad, much divided leaves, which grow by unrolling.
Seed plants
Seed plants, which first appeared in the fossil record towards the end of the Paleozoic era, reproduce using desiccation-resistant capsules called seeds. Starting from a plant which disperses by spores, highly complex changes are needed to produce seeds. The sporophyte has two kinds of spore-forming organs or sporangia. One kind, the megasporangium, produces only a single large spore, a megaspore. This sporangium is surrounded by sheathing layers or integuments which form the seed coat. Within the seed coat, the megaspore develops into a tiny gametophyte, which in turn produces one or more egg cells. Before fertilization, the sporangium and its contents plus its coat are called an ovule; after fertilization, a seed. In parallel to these developments, the other kind of sporangium, the microsporangium, produces microspores. A tiny gametophyte develops inside the wall of a microspore, producing a pollen grain. Pollen grains can be physically transferred between plants by the wind or animals, most commonly insects. Pollen grains can also transfer to an ovule of the same plant, either within the same flower or between two flowers of the same plant (self-fertilization). When a pollen grain reaches an ovule, it enters via a microscopic gap in the coat, the micropyle. The tiny gametophyte inside the pollen grain then produces sperm cells which move to the egg cell and fertilize it. Seed plants include two clades with living members, the gymnosperms and the angiosperms or flowering plants. In gymnosperms, the ovules or seeds are not further enclosed. In angiosperms, they are enclosed within the carpel. Angiosperms typically also have other, secondary structures, such as petals, which together form a flower.
Meiosis in sexual land plants provides a direct mechanism for repairing DNA in reproductive tissues. Sexual reproduction appears to be needed for maintaining long-term genomic integrity and only infrequent combinations of extrinsic and intrinsic factors allow for shifts to asexuality.
References
Bibliography
Plants
Dapingian first appearances
Extant Ordovician first appearances | Embryophyte | [
"Biology"
] | 3,200 | [
"Plants"
] |
318,830 | https://en.wikipedia.org/wiki/Animalcule | Animalcule (; ) is an archaic term for microscopic organisms that included bacteria, protozoans, and very small animals. The word was invented by 17th-century Dutch scientist Antonie van Leeuwenhoek to refer to the microorganisms he observed in rainwater.
Some better-known types of animalcule include:
Actinophrys, and other heliozoa, termed sun animalcules.
Amoeba, termed Proteus animalcules.
Noctiluca scintillans, commonly termed the sea sparkles.
Paramecium, termed slipper animalcules.
Rotifers, termed wheel animalcules.
Stentor, termed trumpet animalcules.
Vorticella, and other peritrichs, termed bell animalcules.
The concept seems to have been proposed at least as early as about 30 BC, as evidenced by this translation from Marcus Varro's Rerum Rusticarum Libri Tres:
Note also if there be any swampy ground, both for the reasons given above, and because certain minute animals, invisible to the eye, breed there, and, borne by the air, reach the inside of the body by way of the mouth and nose, and cause diseases which are difficult to be rid of.
The term was also used during the 17th century by Henry Oldenburg, the first Secretary of the Royal Society and founding editor of Philosophical Transactions, to translate the Dutch words used by van Leeuwenhoek to describe microorganisms that he discovered.
In Gilbert and Sullivan's The Pirates of Penzance, the word appears in adjectival form in the 'Major-General's Song', in which Major-General Stanley sings, 'I know the scientific names of beings animalculous...'
The term continued to be current at least as late as 1879.
See also
Caminalcule
Polycule
Infusoria
Van Leeuwenhoek's microscopic discovery of microbial life (microorganisms)
References
Zoology
Antonie van Leeuwenhoek
Biology and natural history in the Dutch Republic | Animalcule | [
"Biology"
] | 432 | [
"Zoology"
] |
318,869 | https://en.wikipedia.org/wiki/Iceland%20spar | Iceland spar, formerly called Iceland crystal ( , ) and also called optical calcite, is a transparent variety of calcite, or crystallized calcium carbonate, originally brought from Iceland, and used in demonstrating the polarization of light.
Formation and composition
Iceland spar is a colourless, transparent variety of calcium carbonate (CaCO3). It crystallizes in the trigonal system, typically forming rhombohedral crystals. It has a Mohs hardness of 3 and exhibits double refraction, splitting a ray of light into two rays that travel at different speeds and directions.
Iceland spar forms in sedimentary environments, mainly limestone and dolomite rocks, but it also forms in hydrothermal veins and evaporite deposits. It precipitates from solutions rich in calcium and carbonate ions, influenced by temperature, pressure, and impurities.
The most common crystal structure of Iceland spar is rhombohedral, but other structures, such as scalenohedral or prismatic, can form depending on formation conditions. Iceland spar is primarily found in Iceland but can occur in different parts of the world with suitable geological conditions.
Characteristics and optical properties
Iceland spar is characterized by its large, readily cleavable crystals, easily divided into parallelepipeds. This feature makes it easily identifiable and workable. One of the most remarkable properties of Iceland spar is its birefringence, where the crystal's refractive index differs for light of different polarizations. When a ray of unpolarized light passes through the crystal, it is divided into two rays of mutually perpendicular polarization directed at various angles. This double refraction causes objects seen through the crystal to appear doubled.
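The angular separation of the two refracted rays can be estimated from calcite's principal refractive indices, roughly 1.658 for the ordinary ray and 1.486 for the extraordinary ray near 589 nm. Below is a minimal Python sketch of this calculation; it makes the simplifying assumption that the extraordinary ray can be treated with its principal index via Snell's law, whereas in reality its effective index varies with the propagation direction relative to the optic axis:

```python
import math

# Principal refractive indices of calcite near 589 nm (sodium D line).
N_ORDINARY = 1.658
N_EXTRAORDINARY = 1.486  # principal value; the effective index is direction-dependent

def refraction_angle(incidence_deg, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    s = n_in * math.sin(math.radians(incidence_deg)) / n_out
    if abs(s) > 1.0:
        raise ValueError("total internal reflection; no refracted ray")
    return math.degrees(math.asin(s))

theta_in = 30.0  # angle of incidence in degrees, coming from air (n = 1.0)
theta_o = refraction_angle(theta_in, 1.0, N_ORDINARY)       # about 17.5 degrees
theta_e = refraction_angle(theta_in, 1.0, N_EXTRAORDINARY)  # about 19.7 degrees
print(f"ordinary ray: {theta_o:.2f} deg, extraordinary ray: {theta_e:.2f} deg")
```

The roughly two-degree difference between the two refraction angles is what makes objects viewed through a calcite rhomb appear doubled.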
Iceland spar possesses several optical properties besides double refraction. It is highly transparent to visible light, allowing light to pass through with minimal absorption or scattering, which is ideal for optical applications requiring clarity. Because of its birefringence, Iceland spar can produce vivid interference colours when viewed under polarized light, and the Becke line method can be used under the microscope to determine a mineral's refractive index relative to a surrounding medium. Its strong, well-characterized birefringence also makes it suitable for polarizing optics such as the Nicol prism. These optical properties contribute to the mineral's scientific use and aesthetic appeal.
Historical significance
Iceland spar holds historical importance in optics and the study of light. One of its most notable properties is its ability to exhibit double refraction. This phenomenon was first described by the Danish scientist Erasmus Bartholin in 1669, who observed it in a specimen of Iceland spar.
The study of double refraction in Iceland spar played a role in developing the wave theory of light. Scientists such as Christiaan Huygens, Isaac Newton, and Sir George Stokes studied this phenomenon and contributed to the understanding of light as a wave. Huygens, in particular, used double refraction to support his wave theory of light, in contrast to Newton's corpuscular theory. Augustin-Jean Fresnel published a complete explanation of double refraction in light polarization in the 1820s.
The understanding of double refraction in Iceland spar also led to the development of polarized light microscopy, which is used in various scientific fields to study the properties of materials. Iceland spar has been used historically in optical instruments like polarizing microscopes and navigation equipment.
Mining
Mines producing Iceland spar include many mines producing related calcite and aragonite. Iceland spar occurs in various locations worldwide, historically named after Iceland due to its abundance on the island. Other productive sources include China and the greater Sonoran Desert region, in Santa Eulalia, Chihuahua, Mexico, and New Mexico, United States. The clearest specimens, as well as the largest, have been from the Helgustaðir mine in Iceland.
Surveying tools and techniques are combined to reduce the risk and cost of exploration to identify deposits. Geological maps and remote sensing techniques, such as satellite imagery and aerial photography, are used for initial exploration and regional assessment to identify potential areas for further exploration. Geophysical surveys, including magnetometry, gravity surveys, and electromagnetic surveys, are then employed to detect anomalies indicating mineralization. Field mapping of surface geology and mineralogy also plays a role in identifying potential mineralization zones.
The mining process for Iceland spar varies based on the specific geological conditions of the deposit. Open-pit mining or quarrying is common for surface deposits. Once extracted, the calcite is processed to remove impurities, prepared for various applications, including optical instruments and jewelry, and used as a source of calcium carbonate in industries like construction and agriculture.
Environmental issues
Some potential environmental issues associated with Iceland spar mining include habitat destruction, water pollution, air pollution, soil degradation, and visual impact. Mining activities can destroy natural habitats, mainly if the mining site is located in ecologically sensitive areas, leading to the loss of biodiversity and disrupting local ecosystems. Water sources can be contaminated through the discharge of chemicals used in the extraction and processing of minerals, impacting aquatic life and water quality. Mining activities can also lead to soil erosion and degradation, mainly if proper land reclamation measures are not implemented after mining ceases. Open-pit mining operations can have a significant visual impact on the landscape, altering the natural scenery of an area. Mitigation measures may include erosion control, environmentally friendly mining techniques, and the reclamation of mined areas to restore them to a natural state.
Health concerns
Mining, including Iceland spar mining, poses various health risks to workers and nearby communities. Some key health concerns associated with mining activities include respiratory issues, noise-induced hearing loss, chemical exposure, musculoskeletal disorders, injuries and accidents, and mental health issues. Dust generated during mining operations can contain harmful particles, leading to respiratory problems. The high noise levels generated by mining activities can cause hearing loss over time if proper protective measures are not in place. Miners may also be exposed to harmful chemicals used in the extraction and processing of minerals, which can cause various health issues. The physical demands of mining work, such as heavy lifting and repetitive motions, can result in musculoskeletal disorders. Injuries and accidents are also common risks in mining, including falls, equipment-related incidents, and mine collapses. The demanding nature of mining work, along with long hours and isolation, can contribute to mental health issues such as stress, anxiety, and depression. Mining companies must implement health and safety measures to mitigate these risks to protect workers and nearby communities, including personal protective equipment, dust control measures, and health and safety training. Regularly monitoring air quality, noise levels, and other potential hazards is essential to ensure a safe working environment.
Uses
Iceland spar has been historically used in telecommunications due to its unique optical properties. One of its key features, birefringence, made it worthwhile in early optical technologies, such as developing optical instruments like polarizing microscopes and constructing optical rangefinders and gunsights.
While uncommon, Iceland spar has historically been used in navigation as a polarizing filter to determine the sun's direction on overcast days. It has been speculated that the sunstone (, a different mineral from the gem-quality sunstone) mentioned in medieval Icelandic texts, such as Rauðúlfs þáttr, was Iceland spar, and that Vikings used its light-polarizing property to tell the direction of the sun on cloudy days for navigational purposes. The polarization of sunlight in the Arctic can be detected, and the direction of the sun identified to within a few degrees in both cloudy and twilight conditions using the sunstone and the naked eye. The process involves moving the stone across the visual field to reveal a yellow entoptic pattern on the fovea of the eye, probably Haidinger's brush. The recovery of an Iceland spar sunstone from a ship of the Elizabethan era that sank in 1592 off Alderney suggests that this navigational technology may have persisted after the invention of the magnetic compass.
William Nicol (1770–1851) invented the first polarizing prism, using Iceland spar to create his Nicol prism.
Modern applications
Despite being historically significant, Iceland spar still holds an essential place in modern applications. Due to its optical properties, Iceland spar is still used in instruments like polarizing microscopes, lenses, and filters. Iceland spar is also used in optical instruments for geological and biological microscopy as its birefringence helps to reveal material structure. It is also a practical tool used in education and research to demonstrate optical principles. Though its applications are less widespread than in the past, Iceland spar continues to contribute to various scientific and technological endeavours.
As a type of calcite, Iceland spar can be used in construction as a building material in cement and concrete. Its high purity and brightness make it an ideal filler in paints and coatings. In metallurgy, calcite acts as a flux to lower the melting point of metals during smelting and refining. Additionally, it is used in agriculture as a soil conditioner and neutralizer to adjust soil pH levels and improve crop yields. Calcite also contributes to environmental remediation efforts, treating acidic water and soil by neutralizing acidity and removing heavy metals.
Geological significance
Due to Iceland spar typically forming in sedimentary environments, particularly limestone and dolomite rocks, its formation is closely tied to these carbonate rocks' deposition and diagenesis (compaction and cementation). Studying Iceland spar can provide valuable information about past environmental conditions, such as the presence of ancient seas and marine life, as carbonate rocks like limestone often form in marine environments. The presence of Iceland spar can also indicate the presence of hydrothermal activity, as calcite can form in hydrothermal veins.
Conservation and protection
Due to their scientific and historical significance, conservation efforts related to Iceland spar primarily focus on preserving specimens and mining sites. One of the challenges in preserving Iceland spar specimens is the risk of damage during extraction, handling, and storage. Mining sites that yield high-quality Iceland spar specimens are also of interest for conservation. These sites may be designated protected areas to prevent overexploitation.
Cultural impact
The Thomas Pynchon novel Against the Day uses the doubling effect of Iceland spar as a theme.
See also
Spar sunstone
References
Calcium minerals
Carbonate minerals
Medieval history of Iceland
Optical materials
Polarization (waves)
Transparent materials
Trigonal minerals | Iceland spar | [
"Physics"
] | 2,147 | [
"Physical phenomena",
"Astrophysics",
"Optical phenomena",
"Optical materials",
"Materials",
"Transparent materials",
"Polarization (waves)",
"Matter"
] |
318,878 | https://en.wikipedia.org/wiki/Rainer%20Weiss | Rainer "Rai" Weiss ( , ; born September 29, 1932) is a German-born American physicist, known for his contributions in gravitational physics and astrophysics. He is a professor of physics emeritus at MIT and an adjunct professor at LSU. He is best known for inventing the laser interferometric technique which is the basic operation of LIGO. He was Chair of the COBE Science Working Group.
In 2017, Weiss was awarded the Nobel Prize in Physics, along with Kip Thorne and Barry Barish, "for decisive contributions to the LIGO detector and the observation of gravitational waves".
Weiss has helped realize a number of challenging experimental tests of fundamental physics. He is a member of the Fermilab Holometer experiment, which uses a 40m laser interferometer to measure properties of space and time at quantum scale and provide Planck-precision tests of quantum holographic fluctuation.
Early life and education
Rainer Weiss was born in Berlin, Germany, the son of Gertrude Loesner and Frederick A. Weiss. His father, a physician, neurologist, and psychoanalyst, was forced out of Germany by the Nazis because he was Jewish and an active member of the Communist Party. His mother, an actress, was Christian. His aunt was the sociologist Hilda Weiss.
The family fled first to Prague, but Germany's occupation of Czechoslovakia after the 1938 Munich Agreement caused them to flee again; the philanthropic Stix family of St. Louis helped them obtain visas to enter the United States. Weiss spent his youth in New York City, where he attended Columbia Grammar School. He studied at MIT, dropped out during his junior year, but eventually returned to receive his S.B. degree in 1955 and Ph.D. degree in 1962 under Jerrold Zacharias.
He taught at Tufts University from 1960 to 1962, was a postdoctoral scholar at Princeton University from 1962 to 1964, and then joined the faculty at MIT in 1964.
In a 2022 interview given to Federal University of Pará in Brazil, Weiss talks about his life and career, the memories of his childhood and youth, his undergraduate and graduate studies at MIT, and the future of gravitational waves astronomy.
Achievements
Weiss brought two fields of fundamental physics research from birth to maturity: characterization of the cosmic background radiation, and interferometric gravitational wave observation.
In 1973 he made pioneering measurements of the spectrum of the cosmic microwave background radiation, taken from a weather balloon, showing that the microwave background exhibited the thermal spectrum characteristic of the remnant radiation from the Big Bang. He later became co-founder and science advisor of the NASA Cosmic Background Explorer (COBE) satellite, which made detailed mapping of the radiation.
Weiss also pioneered the concept of using lasers for an interferometric gravitational wave detector, suggesting that the path length required for such a detector would necessitate kilometer-scale arms. He built a prototype in the 1970s, following earlier work by Robert L. Forward. He co-founded the NSF LIGO (gravitational-wave detection) project, which was based on his report "A Study of a Long Baseline Gravitational Wave Antenna System".
Both of these efforts couple challenges in instrument science with physics important to the understanding of the Universe.
In February 2016, he was one of the four scientists of LIGO/Virgo collaboration presenting at the press conference for the announcement that the first direct gravitational wave observation had been made in September 2015.
Honors and awards
Rainer Weiss has been recognized by numerous awards including:
In 2006, with John C. Mather, he and the COBE team received the Gruber Prize in Cosmology.
In 2007, with Ronald Drever, he was awarded the APS Einstein Prize for his work.
In 2016 and 2017, for the achievement of gravitational waves detection, he received:
The Special Breakthrough Prize in Fundamental Physics,
Gruber Prize in Cosmology,
Shaw Prize,
Kavli Prize in Astrophysics
The Harvey Prize together with Kip Thorne and Ronald Drever.
The Smithsonian magazine's American Ingenuity Award in the Physical Science category, with Kip Thorne and Barry Barish.
The Willis E. Lamb Award for Laser Science and Quantum Optics, 2017.
Princess of Asturias Award (2017) (jointly with Kip Thorne and Barry Barish).
The Nobel Prize in Physics (2017) (jointly with Kip Thorne and Barry Barish)
Fellowship of the Norwegian Academy of Science and Letters
In 2018, he was awarded the American Astronomical Society's Joseph Weber Award for Astronomical Instrumentation "for his invention of the interferometric gravitational-wave detector, which led to the first detection of long-predicted gravitational waves."
In 2020 he was elected a Legacy Fellow of the American Astronomical Society.
Selected publications
Notes
See also
List of Jewish Nobel laureates
References
Further reading
External links
Rainer Weiss's website at MIT
LIGO Group at the MIT Kavli Institute for Astrophysics and Space Research
Q&A: Rainer Weiss on LIGO's origins at news.mit.edu
Archived at Ghostarchive and the Wayback Machine: recordings including the Nobel Lecture of 8 December 2017, "LIGO and Gravitational Waves I"
1932 births
Living people
Nobel laureates in Physics
American Nobel laureates
American people of German-Jewish descent
21st-century American physicists
Jewish American physicists
Jewish German physicists
Emigrants from Nazi Germany
German emigrants to Czechoslovakia
Immigrants to the United States
Gravitational-wave astronomy
Massachusetts Institute of Technology alumni
Massachusetts Institute of Technology School of Science faculty
Members of the United States National Academy of Sciences
Members of the Norwegian Academy of Science and Letters
Columbia Grammar & Preparatory School alumni
Massachusetts Institute of Technology School of Science alumni
Kavli Prize laureates in Astrophysics
Fellows of the American Astronomical Society
Experimental physicists
Fellows of the American Physical Society | Rainer Weiss | [
"Physics",
"Astronomy"
] | 1,155 | [
"Astrophysics",
"Experimental physics",
"Gravitational-wave astronomy",
"Astronomical sub-disciplines",
"Experimental physicists"
] |
318,895 | https://en.wikipedia.org/wiki/Deodorant | A deodorant is a substance applied to the body to prevent or mask body odor caused by bacterial breakdown of perspiration, for example in the armpits, groin, or feet. A subclass of deodorants, called antiperspirants, prevents sweating itself, typically by blocking sweat glands. Antiperspirants are used on a wider range of body parts, at any place where sweat would be inconvenient or unsafe, since unwanted sweating can interfere with comfort, vision, and grip (due to slipping). Other types of deodorant allow sweating but prevent bacterial action on sweat, since human sweat only has a noticeable smell when it is decomposed by bacteria.
In the United States, the Food and Drug Administration classifies and regulates most deodorants as cosmetics, but classifies antiperspirants as over-the-counter drugs.
The first commercial deodorant, Mum, was introduced and patented in the late nineteenth century by an inventor in Philadelphia, Pennsylvania, Edna Murphey. The product was briefly withdrawn from the market in the US. The modern formulation of the antiperspirant was patented by Jules Montenier on January 28, 1941. This formulation was first found in "Stopette" deodorant spray, which Time magazine called "the best-selling deodorant of the early 1950s".
Use of deodorant with aluminium compounds has been suspected of being linked to breast cancer, but research has not proven any such link.
Overview
The human body produces perspiration (sweat) via two types of sweat gland: eccrine sweat glands which cover much of the skin and produce watery odourless sweat, and apocrine sweat glands in the armpits and groin, which produce a more oily "heavy" sweat containing a proportion of waste proteins, fatty acids and carbohydrates, that can be metabolized by bacteria to produce compounds that cause body odor. In addition, the vagina produces secretions which are not a form of sweat but may be undesired and also masked with deodorants.
Human perspiration of all types is largely odorless until its organic components are fermented by bacteria that thrive in hot, humid environments. The human underarm is among the most consistently warm areas on the surface of the human body, and sweat glands readily provide moisture containing a fraction of organic matter, which when excreted, has a vital cooling effect. When adult armpits are washed with alkaline pH soap, the skin loses its protective acid mantle (pH 4.5–6), raising the skin pH and disrupting the skin barrier. Many bacteria are adapted to the slightly alkaline environment within the human body, so they can thrive within this elevated pH environment. This makes the skin more than usually susceptible to bacterial colonization. Bacteria on the skin feed on the waste proteins and fatty acids in the sweat from the apocrine glands and on dead skin and hair cells, releasing trans-3-methyl-2-hexenoic acid in their waste, which is the primary cause of body odor.
Underarm hair wicks the moisture away from the skin and aids in keeping the skin dry enough to prevent or diminish bacterial colonization. The hair is less susceptible to bacterial growth and therefore reduces bacterial odor. The apocrine sweat glands are inactive until puberty, which is why body odor often only becomes noticeable at that time.
Deodorant products work in one of two ways – by preventing sweat from occurring, or by allowing it to occur but preventing bacterial activity that decomposes sweat on the skin.
History
Modern deodorants
In 1888, the first modern commercial deodorant, Mum, was developed and patented by a U.S. inventor in Philadelphia, Pennsylvania, Edna Murphey; the small company was bought by Bristol-Myers in 1931. In the late 1940s, Helen Barnett Diserens developed an underarm applicator based on the newly invented ball-point pen. In 1952, the company began marketing the product under the name Ban Roll-On. The product was briefly withdrawn from the market in the U.S., but it is once again available at retailers in the U.S. under the brand Ban. In the UK it is sold under the names Mum Solid and Mum Pump Spray. Chattem acquired the Ban deodorant brand in 1998 and subsequently sold it to Kao Corporation in 2000.
In 1903, the first commercial antiperspirant was Everdry. The modern formulation of the antiperspirant was patented by Jules Montenier on January 28, 1941. This patent addressed the problem of the excessive acidity of aluminum chloride and its excessive irritation of the skin, by combining it with a soluble nitrile or a similar compound. This formulation was first found in "Stopette" deodorant spray, which Time magazine called "the best-selling deodorant of the early 1950s". "Stopette" gained its prominence as the first and long-time sponsor of the game show What's My Line?; it was later eclipsed by many other brands once the 1941 patent expired.
Between 1942 and 1957, the market for deodorants increased 600 times to become a $70 million market. Deodorants were originally marketed primarily to women, but by 1957 the market had expanded to male users, and estimates were that 50% of men were using deodorants by that date. The Ban Roll-On product led the market in sales.
In the early 1960s, the first aerosol antiperspirant in the marketplace was Gillette's Right Guard, whose brand was later sold to Henkel in 2006. Aerosols were popular because they let the user dispense a spray without coming in contact with the underarm area. By the late 1960s, half of all the antiperspirants sold in the U.S. were aerosols, and their share continued to grow, reaching 82% of all sales by the early 1970s. However, the late 1970s saw two developments which greatly reduced the popularity of these products. First, in 1977 the U.S. Food and Drug Administration banned the active ingredient used in aerosols, aluminium zirconium chemicals, due to safety concerns over long-term inhalation. Second, the U.S. Environmental Protection Agency limited the use of chlorofluorocarbon (CFC) propellants used in aerosols due to awareness that these gases can contribute to depleting the ozone layer. As the popularity of aerosols slowly decreased, stick antiperspirants became more popular.
Classification
Deodorant
In the United States, deodorants are classified and regulated as cosmetics by the U.S. Food and Drug Administration (FDA) and are designed to eliminate odor. Deodorants are often alcohol-based. Alcohol initially stimulates sweating but may also temporarily kill bacteria. Other active ingredients in deodorants include sodium stearate, sodium chloride, and stearyl alcohol. Deodorants can be formulated with other, more persistent antimicrobials such as triclosan that slow bacterial growth or with metal chelant compounds such as EDTA. Deodorants may contain perfume fragrances or natural essential oils intended to mask the odor of perspiration. Some of the first patented deodorants used zinc oxide, acids, ammonium chloride, sodium bicarbonate, and formaldehyde (which is now known to be a carcinogen), and some of these ingredients were messy or irritating to the skin.
Over-the-counter products, often labeled as "natural deodorant crystal", contain the chemical rock crystals potassium alum or ammonium alum, which prevents bacterial action on sweat. These have gained popularity as an alternative health product, in spite of concerns about possible risks related to aluminum (see below – all alum salts contain aluminum in the form of aluminum sulphate salts) and contact dermatitis.
Vaginal deodorant, in the form of sprays, suppositories, and wipes, is often used by women to mask vaginal secretions. Vaginal deodorants can sometimes cause dermatitis.
Deodorant antiperspirant
In the United States, deodorants combined with antiperspirant agents are classified as drugs by the FDA. Antiperspirants attempt to stop or significantly reduce perspiration and thus reduce the moist climate in which bacteria thrive. Aluminium chloride, aluminium chlorohydrate, and aluminium-zirconium compounds, most notably aluminium zirconium tetrachlorohydrex gly, are frequently used in antiperspirants. Aluminium chlorohydrate and aluminium zirconium tetrachlorohydrex gly are the most frequent active ingredients in commercial antiperspirants. Aluminium-based complexes react with the electrolytes in the sweat to form a gel plug in the duct of the sweat gland. The plugs prevent the gland from excreting liquid and are removed over time by the natural sloughing of the skin. The metal salts work in another way to prevent sweat from reaching the surface of the skin: the aluminium salts interact with the keratin fibrils in the sweat ducts and form a physical plug that prevents sweat from reaching the skin's surface. Aluminium salts also have a slight astringent effect on the pores, causing them to contract, further preventing sweat from reaching the surface of the skin. The blockage of a large number of sweat glands reduces the amount of sweat produced in the underarms, though this may vary from person to person. Methenamine in the form of cream or spray is effective in the treatment of excessive sweating and attendant odor. Antiperspirants are usually best applied before bed.
Product formulations and formats
Formulations
Common and historical formulations for deodorants include the following active ingredients:
Aluminum salts (aluminum chlorohydrate, aluminum zirconium tetrachlorohydrex gly, and others) – used as the basis for almost all non-prescription (everyday) antiperspirants. The aluminum reacts within the sweat gland to form a colloid which physically prevents sweating.
Alum (typically potassium alum or ammonium alum, also described as "rock alum", or "rock crystal", or "natural deodorant"). Alum is a natural crystalline product widely used both historically and in modern times as a deodorant, because it inhibits bacterial action. The word 'alum' is a historical term for aluminum sulfate salts, therefore all alum products will contain aluminum, albeit in a different chemical form from antiperspirants.
Bactericidal products such as triclosan (TCS), octenidine dihydrochloride, and parabens kill bacteria on the skin.
Alcohols and related compounds such as propylene glycol – these products can have both drying and bactericidal effects.
Methenamine (hexamethylenetetramine, also known as hexamine or urotropin) is a powerful antiperspirant, often used for severe sweat-related issues, as well as prevention of sweating within the sockets of prosthetic devices used by amputees.
Acidifiers and pH neutral products – deodorants that prevent bacterial action by enhancing (or at least, not depleting) the skin's natural slight acidity, known as the acid mantle, which naturally reduces bacterial action but can be compromised by typically alkaline soaps and skin products.
Masking scents – other strong or overriding scents of a pleasing type may be used to mask bodily odors. Typically these are strongly smelling plant extracts or synthetic aromas.
Activated charcoal and other products capable of absorbing sweat and/or smell. Although charcoal most often has a black color, the activated charcoal used in deodorants may be a very light color for aesthetic reasons.
Less commonly, products such as milk of magnesia (a thick liquid suspension of magnesium hydroxide) are sometimes used as deodorants. Many milk of magnesia products contain small amounts of sodium hypochlorite (bleach) at very low levels that are safe for ingestion and skin application. Sodium hypochlorite is a powerful bactericide, and it is possible that its presence in a product that can dry onto the skin may explain this use as a deodorant. (Safety info: bleach is caustic and extremely poisonous, and can be lethal in higher concentrations.)
Formats
Deodorants and antiperspirants come in many forms. What is commonly used varies in different countries. In Europe, aerosol sprays are popular, as are cream and roll-on forms. In North America, solid or gel forms are dominant.
Health effects
After using a deodorant containing zirconium, the skin may develop an allergic, axillary granuloma response. Antiperspirants with propylene glycol, when applied to the axillae, can cause irritation and may promote sensitization to other ingredients in the antiperspirant. Deodorant crystals containing synthetically made potassium alum were found to be a weak irritant to the skin. Unscented deodorant is available for those with sensitive skin. Frequent use of deodorants has been associated with elevated blood concentrations of the synthetic musk galaxolide.
Aluminum
Many deodorants and antiperspirants contain aluminium in the form of aluminium salts such as aluminium chlorohydrate.
The US Food and Drug Administration, in a 2003 paper discussing deodorant safety, concluded that "despite many investigators looking at this issue, the agency does not find data from topical and inhalation chronic exposure animal and human studies submitted to date sufficient to change the monograph status of aluminum containing antiperspirants", therefore allowing their use and stating they will keep monitoring the scientific literature. Members of the Scientific Committee on Consumer Safety (SCCS) of the European Commission concluded similarly in 2015 that "due to the lack of adequate data on dermal penetration to estimate the internal dose of aluminium following cosmetic uses, risk assessment cannot be performed." In the light of new data, in 2020 the SCCS considered aluminium compounds safe up to 6.25% in non-spray deodorants or non-spray antiperspirants and 10.60% in spray deodorants or spray antiperspirants.
Myths and claims related to aluminium compounds in deodorants
Common myths and marketing claims for aluminium in deodorants (including aluminum in alum products) include claims:
That aluminium in deodorants applied to the skin is a risk factor for some cancers (notably breast cancer) and some forms of dementia
That aluminium in antiperspirants can enter the body (possibly through shaving cuts)
That aluminium in alum "natural deodorant" products is "safer" because it is "too large" to enter the body
Of note, the parts of the body which are commonly shaved and also commonly treated with deodorants, such as the armpits, contain substantial deposits of subcutaneous fat. Shaving cuts would be extremely unlikely to penetrate far enough beyond the outer layers of the skin for much, if any, product to enter the bloodstream.
Alzheimer's disease
A 2014 review of 469 peer-reviewed studies examining the effect of exposure to aluminum products concluded "that health risks posed by exposure to inorganic [aluminum] depend on its physical and chemical forms and that the response varies with route of administration, magnitude, duration and frequency of exposure. These results support previous conclusions that there is little evidence that exposure to metallic Al, the Al oxides or its salts increases risk for Alzheimer's disease, genetic damage or cancer".
Breast cancer
The claim that breast cancer is believed to be linked with deodorant use has been widely circulated and appears to originate from a spam email sent in 1999; however, there is no evidence to support the existence of such a link. The myth circulates in two forms:
Antiperspirants block the "purging" of toxins which build up in the body and cause breast cancer: As sweat glands simply do not have this function, the claim is scientifically implausible. Perspiration from the eccrine sweat glands is 99% water, with some salt (sodium chloride) and only trace amounts of lactic acid (almost entirely processed in the liver), urea (almost entirely excreted by the kidneys), and only very small amounts of all other components. Perspiration from the apocrine sweat glands (those in the armpits and groin, which are more responsible for body odor) also include waste proteins, carbohydrates, and fatty acids which would otherwise be processed by other organs such as the liver.
It is possible that there has been confusion between sweat glands, and the lymph nodes deep within the armpits which form part of the immune system and help filter toxins, but if so, there is no evidence at all of such "blocking" of lymph nodes, nor any scientifically plausible route by which this could result from deodorant use.
Aluminum in antiperspirants can enter the body (possibly through cuts) and cause breast cancer: There is no current evidence to support this claim, nor any convincing evidence that it is true. A fact often cited to back up this claim is that more breast cancers occur in the part of the breast near the armpits. However, breast tissue is not evenly spread out, and the part of the breast near the armpit (the Tail of Spence) simply contains much more breast tissue than the other quadrants, making it much more likely that any cancer would occur in that location. See above for current scientific knowledge regarding aluminum in deodorants.
The National Cancer Institute states that "no scientific evidence links the use of these products to the development of breast cancer" and that there is "no clear evidence that the use of aluminum-containing underarm antiperspirants or cosmetics increases the risk of breast cancer", but also concludes that studies of antiperspirants, deodorants and breast cancer have provided conflicting results and that additional research would be needed to determine whether a relationship exists.
Another constituent of deodorant products that has given cause for concern is parabens, a class of chemical additives. However, parabens do not cause cancer.
Kidney dysfunction
The FDA has "acknowledge[d] that small amounts of aluminum can be absorbed from the gastrointestinal tract and through the skin", leading to a warning "that people with kidney disease may not be aware that the daily use of antiperspirant drug products containing aluminum may put them at a higher risk because of exposure to aluminum in the product." The agency warns people with kidney dysfunction to consult a doctor before using antiperspirants containing aluminum.
Aerosol burns and frostbite
If aerosol deodorant is held close to the skin for long enough, it can cause an aerosol burn—a form of frostbite. In controlled tests, spray deodorants have been shown to cause temperature drops of over 60 °C in a short period of time.
Clothing
Aluminium zirconium tetrachlorohydrex gly, a common antiperspirant, can react with sweat to create yellow stains on clothing. Underarm liners are an antiperspirant alternative that does not leave stains.
See also
Air freshener
Aluminum chlorohydrate
Perfume
References
External links
Antiperspirants/Deodorants and breast cancer
Personal hygiene products
Aerosols
American inventions
Body odor | Deodorant | [
"Chemistry"
] | 4,122 | [
"Aerosols",
"Colloids"
] |
318,920 | https://en.wikipedia.org/wiki/Dosimetry | Radiation dosimetry in the fields of health physics and radiation protection is the measurement, calculation and assessment of the ionizing radiation dose absorbed by an object, usually the human body. This applies both internally, due to ingested or inhaled radioactive substances, or externally due to irradiation by sources of radiation.
Internal dosimetry assessment relies on a variety of monitoring, bio-assay or radiation imaging techniques, whilst external dosimetry is based on measurements with a dosimeter, or inferred from measurements made by other radiological protection instruments.
Radiation dosimetry is extensively used for radiation protection. It is routinely applied to monitor occupational radiation workers, where irradiation is expected, or where radiation is unexpected, such as in the contained aftermath of the Three Mile Island, Chernobyl or Fukushima radiological release incidents. The public dose take-up is measured and calculated from a variety of indicators, such as ambient measurements of gamma radiation, radioactive particulate monitoring, and the measurement of levels of radioactive contamination.
Other significant radiation dosimetry areas are medical, where the required treatment absorbed dose and any collateral absorbed dose is monitored, and environmental, such as radon monitoring in buildings.
Measuring radiation dose
External dose
There are several ways of measuring absorbed doses from ionizing radiation. People in occupational contact with radioactive substances, or who may be exposed to radiation, routinely carry personal dosimeters. These are specifically designed to record and indicate the dose received. Traditionally, these were lockets fastened to the external clothing of the monitored person, which contained photographic film, known as film badge dosimeters. These have been largely replaced with other devices such as thermoluminescent dosimeter (TLD), optically stimulated luminescence (OSL), or fluorescent nuclear track detector (FNTD) badges.
The International Commission on Radiological Protection (ICRP) guidance states that if a personal dosimeter is worn on a position on the body representative of its exposure, then, assuming whole-body exposure, the value of Personal Dose Equivalent, Hp(10), is sufficient to estimate an effective dose value suitable for radiological protection. Personal Dose Equivalent is a radiation quantity specifically designed to be used for radiation measurements by personal dosimeters. Dosimeters are known as "legal dosimeters" if they have been approved for use in recording personnel dose for regulatory purposes. In cases of non-uniform irradiation, such personal dosimeters may not be representative of certain specific areas of the body, and additional dosimeters are then used in the areas of concern.
A number of electronic devices known as electronic personal dosimeters (EPDs) have come into general use, using semiconductor detection and programmable processor technology. These are worn as badges but can give an indication of instantaneous dose rate and an audible and visual alarm if a dose rate or a total integrated dose is exceeded. A good deal of information, such as the recorded dose and the current dose rate, can be made immediately available to the wearer via a local display. They can be used as the main stand-alone dosimeter, or as a supplement to other devices. EPDs are particularly useful for real-time monitoring of dose where a high dose rate is expected, which will time-limit the wearer's exposure.
In certain circumstances, a dose can be inferred from readings taken by fixed instrumentation in an area in which the person concerned has been working. This would generally only be used if personal dosimetry had not been issued, or a personal dosimeter has been damaged or lost. Such calculations would take a pessimistic view of the likely received dose.
Internal dose
Internal dosimetry is used to evaluate the committed dose due to the intake of radionuclides into the human body.
Medical dosimetry
Medical dosimetry is the calculation of absorbed dose and optimization of dose delivery in radiation therapy. It is often performed by a professional health physicist with specialized training in that field. In order to plan the delivery of radiation therapy, the radiation produced by the sources is usually characterized with percentage depth dose curves and dose profiles measured by a medical physicist.
In radiation therapy, three-dimensional dose distributions are often evaluated using a technique known as gel dosimetry.
Environmental dosimetry
Environmental dosimetry is used where it is likely that the environment will generate a significant radiation dose. An example of this is radon monitoring. The largest single source of radiation exposure to the general public is naturally occurring radon gas, which comprises approximately 55% of the annual background dose. It is estimated that radon is responsible for 10% of lung cancers in the United States. Radon is a radioactive gas generated by the decay of uranium, which is present in varying amounts in the Earth's crust. Certain geographic areas, due to the underlying geology, continually generate radon which permeates its way to the Earth's surface. In some cases the dose can be significant in buildings where the gas can accumulate. A number of specialised dosimetry techniques are used to evaluate the dose that a building's occupants may receive.
Radiation exposure monitoring
Records of legal dosimetry results are usually kept for a set period of time, depending upon the legal requirements of the nation in which they are used.
Medical radiation exposure monitoring is the practice of collecting dose information from radiology equipment and using the data to help identify opportunities to reduce unnecessary dose in medical situations.
Measures of dose
To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent and effective doses, the details of which depend on the radiation type and biological context. For applications in radiation protection and dosimetry assessment, the ICRP and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data which are used to calculate these.
Units of measure
There are a number of different measures of radiation dose, including:
absorbed dose (D), measured in grays (Gy) – energy absorbed per unit of mass (J·kg−1)
equivalent dose (H), measured in sieverts (Sv)
effective dose (E), measured in sieverts
kerma (K), measured in grays
dose area product (DAP), measured in gray square centimeters (Gy·cm2)
dose length product (DLP), measured in gray centimeters (Gy·cm)
the rad, a deprecated unit of absorbed radiation dose, defined as 1 rad = 0.01 Gy = 0.01 J/kg
the roentgen, a legacy unit of measurement for the exposure to X-rays
Each measure is often simply described as ‘dose’, which can lead to confusion. Non-SI units are still used, particularly in the USA, where dose is often reported in rads and dose equivalent in rems. By definition, 1 Gy = 100 rad and 1 Sv = 100 rem.
The fundamental quantity is the absorbed dose (D), which is defined as the mean energy imparted [by ionising radiation] (dE) per unit mass (dm) of material (D = dE/dm). The SI unit of absorbed dose is the gray (Gy), defined as one joule per kilogram. Absorbed dose, as a point measurement, is suitable for describing localised (i.e. partial organ) exposures such as tumour dose in radiotherapy. It may be used to estimate stochastic risk provided the amount and type of tissue involved is stated. Localised diagnostic dose levels are typically in the 0–50 mGy range. At a dose of 1 milligray (mGy) of photon radiation, each cell nucleus is crossed by an average of 1 liberated electron track.
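Written as a display equation (standard ICRU notation, with the rad conversion from the unit list above):

```latex
D = \frac{\mathrm{d}E}{\mathrm{d}m}, \qquad 1~\mathrm{Gy} = 1~\mathrm{J\,kg^{-1}} = 100~\mathrm{rad}
```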
Equivalent dose
The absorbed dose required to produce a given biological effect varies between different types of radiation, such as photons, neutrons or alpha particles. This is taken into account by the equivalent dose (H), which is defined as the mean dose to organ or tissue T from radiation type R (DT,R), multiplied by a radiation weighting factor WR. This factor is designed to account for the relative biological effectiveness (RBE) of the radiation type; for instance, for the same absorbed dose in Gy, alpha particles are 20 times as biologically potent as X or gamma rays. The older measure of "dose equivalent" is not organ-averaged and is now only used for "operational quantities". Equivalent dose is designed for estimation of stochastic risks from radiation exposures. A stochastic effect is defined, for radiation dose assessment, as the probability of cancer induction and genetic damage.
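In symbols, summing over the radiation types R striking tissue T (the standard ICRP formulation; the example weighting factors follow from the text above, with X and gamma rays as the reference radiation):

```latex
H_T = \sum_R w_R \, D_{T,R}, \qquad w_R = 1 \ \text{(X and gamma rays)}, \quad w_R = 20 \ \text{(alpha particles)}
```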
Because dose is averaged over the whole organ, equivalent dose is rarely suitable for evaluation of acute radiation effects or tumour dose in radiotherapy. For the estimation of stochastic effects, assuming a linear dose response, this averaging should make no difference, as the total energy imparted remains the same.
Effective dose
Effective dose is the central dose quantity for radiological protection used to specify exposure limits to ensure that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided.
It is difficult to compare the stochastic risk from localised exposures of different parts of the body (e.g. a chest x-ray compared to a CT scan of the head), or to compare exposures of the same body part but with different exposure patterns (e.g. a cardiac CT scan with a cardiac nuclear medicine scan). One way to avoid this problem is to simply average out a localised dose over the whole body. The problem with this approach is that the stochastic risk of cancer induction varies from one tissue to another.
The effective dose E is designed to account for this variation by the application of specific weighting factors for each tissue (WT). Effective dose provides the equivalent whole body dose that gives the same risk as the localised exposure. It is defined as the sum of equivalent doses to each organ (HT), each multiplied by its respective tissue weighting factor (WT).
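Expressed as a sum over tissues T, and expanding the equivalent-dose definition given earlier:

```latex
E = \sum_T w_T \, H_T = \sum_T w_T \sum_R w_R \, D_{T,R}, \qquad \sum_T w_T = 1
```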
Weighting factors are calculated by the ICRP, based on the risk of cancer induction for each organ and adjusted for associated lethality, quality of life and years of life lost. Organs that are remote from the site of irradiation will receive only a small equivalent dose (mainly due to scattering) and therefore contribute little to the effective dose, even if the weighting factor for that organ is high.
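As a minimal numeric sketch of that weighted sum (the function name and the input values here are illustrative only; actual tissue weighting factors are tabulated by the ICRP):

```haskell
-- Effective dose E = sum over tissues T of (wT * HT),
-- given (tissue weighting factor, equivalent dose in Sv) pairs.
effectiveDose :: [(Double, Double)] -> Double
effectiveDose = sum . map (uncurry (*))

main :: IO ()
main = print (effectiveDose [(0.12, 0.010), (0.04, 0.002), (0.01, 0.0005)])
-- 0.12*0.010 + 0.04*0.002 + 0.01*0.0005 = 1.285e-3 Sv
```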
Effective dose is used to estimate stochastic risks for a ‘reference’ person, which is an average of the population. It is not suitable for estimating stochastic risk for individual medical exposures, and is not used to assess acute radiation effects.
Dose versus source or field strength
Radiation dose refers to the amount of energy deposited in matter and/or biological effects of radiation, and should not be confused with the unit of radioactive activity (becquerel, Bq) of the source of radiation, or the strength of the radiation field (fluence). The article on the sievert gives an overview of dose types and how they are calculated. Exposure to a source of radiation will give a dose which is dependent on many factors, such as the activity, duration of exposure, energy of the radiation emitted, distance from the source and amount of shielding.
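For a bare point gamma source, for example, these factors combine in the familiar inverse-square relationship (a simplified sketch rather than a full treatment; Γ is the nuclide-specific gamma dose-rate constant, A the source activity, d the distance, and the exponential represents a shield of thickness x with attenuation coefficient μ):

```latex
\dot{D}(d) \approx \frac{\Gamma A}{d^{2}} \, e^{-\mu x}
```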
Background radiation
The worldwide average background dose for a human being is about 3.5 mSv per year, mostly from cosmic radiation and natural isotopes in the earth. As noted under environmental dosimetry above, the largest single contribution to public exposure is naturally occurring radon gas, at approximately 55% of the annual background dose.
Calibration standards for measuring instruments
Because the human body is approximately 70% water and has an overall density close to 1 g/cm3, dose measurement is usually calculated and calibrated as dose to water.
National standards laboratories such as the National Physical Laboratory, UK (NPL) provide calibration factors for ionization chambers and other measurement devices to convert from the instrument's readout to absorbed dose. Each standards laboratory maintains a primary standard, which is normally calibrated by absolute calorimetry (the warming of substances when they absorb energy). A user sends their secondary standard to the laboratory, where it is exposed to a known amount of radiation (derived from the primary standard) and a factor is issued to convert the instrument's reading to that dose. The user may then use their secondary standard to derive calibration factors for other instruments they use, which then become tertiary standards, or field instruments.
The NPL operates a graphite calorimeter for absolute photon dosimetry. Graphite is used instead of water because its specific heat capacity is one-sixth that of water, so the temperature increase in graphite is six times higher than the equivalent in water and measurements are correspondingly more accurate. Significant problems exist in insulating the graphite from the surrounding environment in order to measure the tiny temperature changes. A lethal dose of radiation to a human is approximately 10–20 Gy, that is, 10–20 joules per kilogram. A 1 cm3 piece of graphite weighing 2 grams would therefore absorb around 20–40 mJ. With a specific heat capacity of around 700 J·kg−1·K−1, this equates to a temperature rise of only about 20 mK.
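The quoted temperature rise follows directly from the heat-capacity relation, taking a mid-range energy deposit of 30 mJ:

```latex
\Delta T = \frac{E}{m c} = \frac{0.03~\mathrm{J}}{0.002~\mathrm{kg} \times 700~\mathrm{J\,kg^{-1}\,K^{-1}}} \approx 21~\mathrm{mK}
```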
Dosimeters used in radiotherapy (for example, on the linear particle accelerators used in external beam therapy) are routinely calibrated using ionization chambers, diode technology, or gel dosimeters.
Radiation-related quantities
The following table shows radiation quantities in SI and non-SI units:
activity (A): SI unit becquerel (Bq); non-SI unit curie (Ci)
absorbed dose (D): SI unit gray (Gy); non-SI unit rad
equivalent and effective dose (H, E): SI unit sievert (Sv); non-SI unit rem
exposure: SI unit coulomb per kilogram (C/kg); non-SI unit roentgen (R)
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
See also
Computational human phantom
Health effects of radon
Radiation dose reconstruction
Notes
References
External links
Ionization chamber
"The confusing world of radiation dosimetry" – M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems.
Tim Stephens and Keith Pantridge, 'Dosimetry, Personal Monitoring Film' (a short article on Dosimetry from the point of view of its relation to photography, in Philosophy of Photography, volume 2, number 2, 2011, pp. 153–158.)
Radiobiology
Radiation therapy
Nuclear physics
Medical physics
Radiation protection | Dosimetry | [
"Physics",
"Chemistry",
"Biology"
] | 2,933 | [
"Applied and interdisciplinary physics",
"Radiobiology",
"Medical physics",
"Nuclear physics",
"Radioactivity"
] |
318,980 | https://en.wikipedia.org/wiki/Endorheic%20basin | An endorheic basin ( ; also endoreic basin and endorreic basin) is a drainage basin that normally retains water and allows no outflow to other external bodies of water (e.g. rivers and oceans); instead, the water drainage flows into permanent and seasonal lakes and swamps that equilibrate through evaporation. Endorheic basins are also called closed basins, terminal basins, and internal drainage systems.
Endorheic regions contrast with open lakes (exorheic regions), where surface waters eventually drain into the ocean. In general, water basins with subsurface outflows that lead to the ocean are not considered endorheic, but cryptorheic. Endorheic basins constitute local base levels, defining a limit of the erosion and deposition processes of nearby areas. Endorheic water bodies include the Caspian Sea, which is the world's largest inland body of water.
Etymology
The term endorheic derives from the French endoréique, which combines the Ancient Greek éndon ('within') and rheîn ('to flow').
Endorheic lakes
Endorheic lakes (terminal lakes) are bodies of water that do not flow into an ocean or a sea. Most of the water that falls to Earth percolates into the oceans and the seas by way of a network of rivers, lakes, and wetlands. Analogous to endorheic lakes is the class of bodies of water located in closed watersheds (endorheic watersheds) where the local topography prevents the drainage of water into the oceans and the seas. These endorheic watersheds (containing water in rivers or lakes that form a balance of surface inflows, evaporation and seepage) are often called sinks.
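That balance can be written as a simple storage equation (a schematic sketch; the symbols here are chosen for illustration):

```latex
\frac{\mathrm{d}V}{\mathrm{d}t} = Q_{\mathrm{in}} + (P - E)\,A - S
```

Here V is the stored water volume, Q_in the surface inflow, P and E the precipitation and evaporation rates over the lake area A, and S the seepage loss; a sink equilibrates where the right-hand side averages to zero over time.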
Endorheic lakes are typically located in the interior of a landmass, far from an ocean, and in areas of relatively low rainfall. Their watersheds are often confined by natural geologic land formations such as a mountain range, cutting off water egress to the ocean. The inland water flows into dry watersheds where the water evaporates, leaving a high concentration of minerals and other inflow erosion products. Over time this input of erosion products can cause the endorheic lake to become relatively saline (a "salt lake"). Since the main outflow pathways of these lakes are chiefly through evaporation and seepage, endorheic lakes are usually more sensitive to environmental pollutant inputs than water bodies that have access to oceans, as pollution can be trapped in them and accumulate over time.
Occurrence
Endorheic regions can occur in any climate but are most commonly found in desert locations. This reflects the balance between tectonic subsidence and rates of evaporation and sedimentation. Where the basin floor is dropping more rapidly than water and sediments can accumulate, any lake in the basin will remain below the sill level (the level at which water can find a path out of the basin). Low rainfall or rapid evaporation in the watershed favors this case. In areas where rainfall is higher, riparian erosion will generally carve drainage channels (particularly in times of flood), or cause the water level in the terminal lake to rise until it finds an outlet, breaking the enclosed endorheic hydrological system's geographical barrier and opening it to the surrounding terrain. The Black Sea was likely such a lake, having once been an independent hydrological system before the Mediterranean Sea broke through the terrain separating the two. Lake Bonneville was another such lake, overflowing its basin in the Bonneville flood. The Malheur/Harney lake system in Oregon is normally cut off from drainage to the ocean, but has an outflow channel to the Malheur River. This channel is presently dry, but may have flowed as recently as 1,000 years ago.
Examples of relatively humid regions in endorheic basins often exist at high elevation. These regions tend to be marshy and are subject to substantial flooding in wet years. The area containing Mexico City is one such case, with substantial annual precipitation and waterlogged soils that require draining.
Endorheic regions tend to be far inland with their boundaries defined by mountains or other geological features that block their access to oceans. Since the inflowing water can evacuate only through seepage or evaporation, dried minerals or other products collect in the basin, eventually making the water saline and also making the basin vulnerable to pollution. Continents vary in their concentration of endorheic regions due to conditions of geography and climate. Australia has the highest percentage of endorheic regions at 21 per cent while North America has the least at five per cent. Approximately 18 per cent of the Earth's land drains to endorheic lakes or seas, the largest of these land areas being the interior of Asia.
In deserts, water inflow is low and loss to solar evaporation high, drastically reducing the formation of complete drainage systems. In the extreme case, where there is no discernible drainage system, the basin is described as arheic. Closed water flow areas often lead to the concentration of salts and other minerals in the basin. Minerals leached from the surrounding rocks are deposited in the basin, and left behind when the water evaporates. Thus endorheic basins often contain extensive salt pans (also called salt flats, salt lakes, alkali flats, dry lake beds, or playas). These areas tend to be large, flat hardened surfaces and are sometimes used for aviation runways, or land speed record attempts, because of their extensive areas of perfectly level terrain.
Both permanent and seasonal endorheic lakes can form in endorheic basins. Some endorheic basins are essentially stable because climate change has reduced precipitation to the degree that a lake no longer forms. Even most permanent endorheic lakes change size and shape dramatically over time, often becoming much smaller or breaking into several smaller parts during the dry season. As humans have expanded into previously uninhabitable desert areas, the river systems that feed many endorheic lakes have been altered by the construction of dams and aqueducts. As a result, many endorheic lakes in developed or developing countries have contracted dramatically, resulting in increased salinity, higher concentrations of pollutants, and the disruption of ecosystems.
Even within exorheic basins, there can be "non-contributing", low-lying areas that trap runoff and prevent it from contributing to flows downstream during years of average or below-average runoff. In flat river basins, non-contributing areas can be a large fraction of the river basin, e.g. Lake Winnipeg's basin. A lake may be endorheic during dry years and can overflow its basin during wet years, e.g., the former Tulare Lake.
Because the Earth's climate has recently been through a warming and drying phase with the end of the Ice Ages, many endorheic areas such as Death Valley that are now dry deserts were large lakes relatively recently. During the last ice age, the Sahara may have contained lakes larger than any now existing.
Climate change coupled with the mismanagement of water in these endorheic regions has led to devastating losses in ecosystem services and toxic surges of pollutants. The desiccation of saline lakes produces fine dust particles that impair agriculture productivity and harm human health. Anthropogenic activity has also caused a redistribution of water from these hydrologically landlocked basins such that endorheic water loss has contributed to sea level rise, and it is estimated that most of the terrestrial water lost ends up in the ocean. In regions such as Central Asia, where people depend on endorheic basins and other surface water sources to satisfy their water needs, human activity greatly impacts the availability of that water.
Notable endorheic basins and lakes
Africa
Large endorheic regions in Africa are located in the Sahara Desert, the Sahel, the Kalahari Desert, and the East African Rift:
Chad Basin, in the northern centre of Africa. It covers an area of approximately 2.434 million km2.
Qattara Depression, in Egypt.
Chott Melrhir, in Algeria.
Chott el Djerid, in Tunisia.
The Okavango River, in the Kalahari Desert, is part of an endorheic basin region, the Okavango Basin, that also includes the Okavango Delta, Lake Ngami, the Nata River, and a number of salt pans such as Makgadikgadi Pan.
Etosha Pan in Namibia's Etosha National Park.
Turkana Basin, in Kenya, whose basin includes the Omo River of Ethiopia.
Lake Chilwa, in Malawi.
Afar Depression, in Eritrea, Ethiopia, and Djibouti, which contains the Awash River.
Some Rift Valley lakes, such as Lake Abijatta, Lake Chew Bahir, Lake Shala, Lake Chamo, and Lake Awasa.
Lake Mweru Wantipa, in Zambia.
Lake Magadi, in Kenya.
Lake Rukwa, in Tanzania.
Antarctica
Endorheic lakes exist in Antarctica's McMurdo Dry Valleys, Victoria Land, the largest ice-free area on the continent.
Don Juan Pond in Wright Valley is fed by groundwater from a rock glacier and remains unfrozen throughout the year.
Lake Vanda in Wright Valley has a perennial ice cover, the edges of which melt in the summer, allowing flow from the longest river in Antarctica, the Onyx River. The lake is over 70 m deep and is hypersaline.
Lake Bonney is in Taylor Valley and has a perennial ice cover and two lobes separated by the Bonney Riegel. Glacial melt and discharge from Blood Falls feed the lake. Its unique glacial history has resulted in hypersaline brine in the bottom waters and fresh water at the surface.
Lake Hoare, in Taylor Valley, is the freshest of the Dry Valley lakes, receiving its melt almost exclusively from the Canada Glacier. The lake has an ice cover and forms a moat during the Austral summer.
Lake Fryxell is adjacent to the Ross Sea in Taylor Valley. The lake has an ice cover and receives its water from numerous glacial meltwater streams for approximately six weeks out of the year. Its salinity increases with depth.
Asia
Much of Western and Central Asia is a giant endorheic region made up of a number of contiguous closed basins. The region contains several basins and terminal lakes, including:
The Caspian Sea, the largest lake on Earth. A large part of western Russia, drained by the Volga River, is part of the Caspian basin.
Lake Urmia in Western Azerbaijan Province of Iran.
The Aral Sea, whose tributary rivers have been diverted, leading to a dramatic shrinkage of the lake. The resulting ecological disaster has brought the plight of internal drainage basins to public attention.
Lake Balkhash, in Kazakhstan.
Issyk-Kul Lake and Chatyr-Kul Lake in Kyrgyzstan.
Lop Lake, in the Tarim Basin of China's Xinjiang Uygur Autonomous Region.
The Dzungarian Basin in Xinjiang, separated from the Tarim Basin by the Tian Shan. The most notable terminal lake in the basin is the Manas Lake.
The Central Asian Internal Drainage Basin, in southern and western Mongolia, contains a series of closed drainage basins, such as the Khyargas Nuur basin, the Uvs Nuur basin, which includes Üüreg Lake, and the Pu-Lun-To River Basin.
Qaidam Basin, in Qinghai Province, China, as well as nearby Qinghai Lake.
Sistan Basin, covering areas of Iran and Afghanistan.
Pangong Tso and Aksai Chin Lake, on the China-India border.
Many small lakes and rivers of the Iranian Plateau, including Gavkhouni marshes and Namak Lake.
Other endorheic lakes and basins in Asia include:
The Dead Sea, the lowest surface point on Earth and one of its saltiest bodies of water lies between Israel and Jordan.
Sambhar Lake, in Rajasthan, north-western India
Lake Van in eastern Turkey
Sabkhat al-Jabbul, extensive salt flats and a lake in Syria.
Solar Lake, Sinai, near the Israeli-Egypt border.
Lake Tuz, in Turkey, in south part of Central Anatolia Region.
Sawa lake in Iraq, in Muthanna Governorate.
Australia
Australia, being very dry and having exceedingly low runoff ratios due to its ancient soils, has many endorheic drainages. The most important are:
Lake Eyre basin, which drains into the highly variable Lake Eyre and includes Lake Frome.
Lake Torrens, usually an endorheic lake to the west of the Flinders Ranges in South Australia, that flows to the sea after extreme rainfall events.
Lake Corangamite, a highly saline crater lake in western Victoria.
Lake George, formerly connected to the Murray-Darling Basin.
Europe
Though a large portion of Europe drains to the endorheic Caspian Sea, Europe's wet climate means it contains relatively few terminal lakes itself: any such basin is likely to continue to fill until it reaches an overflow level connecting it with an outlet or erodes the barrier blocking its exit.
There are some seemingly endorheic lakes, but they are cryptorheic, being drained through man-made canals, karstic phenomena, or other subsurface seepage.
Lake Neusiedl, in Austria and Hungary.
Lake Trasimeno, in Italy.
Fucine Lake, in Italy. Now drained.
Lake Velence, in Hungary.
Lake Prespa, between Albania, Greece and North Macedonia.
Rahasane Turlough, the largest turlough in Ireland.
Laacher See, in Germany.
The Lasithi Plateau in Crete, Greece, is a high endorheic plateau.
A few minor true endorheic lakes exist in Spain (e.g. Laguna de Gallocanta, Estany de Banyoles), Italy, Cyprus (Larnaca and Akrotiri salt lakes) and Greece.
North and Central America
The Great Basin is North America's largest and the world's ninth largest endorheic basin, covering nearly all of Nevada, much of Oregon and Utah, and portions of California, Idaho, and Wyoming. Notable enclosed basins include Death Valley, the hottest location on Earth; the Black Rock Desert and Bonneville Salt Flats, location of many of the new vehicle land speed records set since the 1930s; the Great Salt Lake, remnant of Lake Bonneville; and the Salton Sea.
The Valley of Mexico. In Pre-Columbian times, the Valley was substantially covered with five lakes, including Lake Texcoco, Lake Xochimilco, and Lake Chalco.
Guzmán Basin, in northern Mexico and the southwestern United States. The Mimbres River of New Mexico drains into this basin.
Lago Atitlán, a volcanic caldera lake in the highlands of Guatemala. It is cryptorheic.
Lago Coatepeque, El Salvador.
Bolsón de Mapimí, in northern Mexico.
Willcox Playa of southern Arizona.
Tulare Lake in the San Joaquin Valley in Central California, fed by the Kaweah and Tule Rivers plus southern distributaries of the Kings. Historically, it would drain into the San Joaquin River in very wet years. Agricultural development and irrigation diversions have left the lake dry.
Buena Vista Lake at the southmost end of the San Joaquin Valley in Southern California, fed by the Kern River. Historically, it would drain into Tulare Lake and the San Joaquin River in exceptionally wet years. Agricultural development and irrigation diversions have left the lake dry.
Crater Lake, in Oregon, a cryptorheic lake with subsurface drainage to the Wood River. It is filled directly by rain and snow and has very little mineral or salt buildup.
The Great Divide Basin in Wyoming, a small endorheic basin that straddles the Continental Divide of the Americas.
Devils Lake, in North Dakota.
Devil's Lake, in Wisconsin, cryptorheic.
Tule Lake and the Lost River basin in California and Oregon.
Little Manitou Lake in Saskatchewan.
Old Wives Lake, on the Laurentian Divide in Saskatchewan.
Quill Lakes, in Saskatchewan.
Pakowki Lake, on the Laurentian Divide in Alberta.
Paynes Prairie, in Florida. Since 1927, it has been drained by canal to the Atlantic Ocean via the River Styx.
Spotted Lake, Osoyoos, British Columbia, Canada.
Several lakes on the western Chilcotin Plateau sit on the divide between the Fraser River drainage to the east and the Homathko drainage to the west. Such examples include Choelquoit Lake, Eagle Lake, and Martin Lake.
Frame Lake in Yellowknife, capital of Canada's Northwest Territories.
New Mexico has several desert endorheic basins, including:
The Tularosa Basin, a rift valley.
Zuñi Salt Lake, a maar.
The Mimbres River Basin, in Grant County.
The San Agustin Basin, in Catron and Socorro Counties.
Lago Enriquillo on the island of Hispaniola in the Caribbean Sea.
Many small lakes and ponds in North Dakota and the Northern Great Plains are endorheic, and some have salt encrustations along their shores.
South America
Laguna del Carbón, in Gran Bajo de San Julián, Argentina – the lowest point in the Western and Southern hemispheres
Lake Mar Chiquita in Argentina.
The Altiplano includes a number of closed basins such as the Salar de Coipasa, and Titicaca–Poopó system.
Lake Valencia, in Venezuela.
Salar de Atacama, in the Atacama Desert, Chile.
Ancient
Some of Earth's ancient endorheic systems and lakes include:
The Black Sea, until its merger with the Mediterranean.
The Mediterranean Sea itself and all its tributary basins, during its Messinian desiccation (approximately five million years ago) as it became disconnected from the Atlantic Ocean.
The Orcadian Basin in Scotland during the Devonian period. Now identifiable as lacustrine sediments buried around and off the coast.
Lake Tanganyika in Africa, whose level is currently high enough for it to connect to rivers that reach the sea.
Lake Lahontan in North America.
Lake Bonneville in North America. The basin was not always endorheic; at times, it overflowed through Red Rock Pass to the Snake River and the sea.
Lake Chewaucan in North America.
Tularosa Basin and Lake Cabeza de Vaca in North America. The basin was formerly much larger than it is today, including the ancestral Rio Grande north of Texas, which fed a large lake area.
Ebro and Duero basins, draining most of northern Spain during the Neogene and perhaps Pliocene. Climate change and erosion of the Catalan coastal mountains, as well as the deposition of alluvium in the terminal lake, allowed the Ebro basin to overflow into the sea during the middle-to-late Miocene.
See also
Triple divide
References
External links
Primer on endorheic lakes
The Silk Roads and Eurasian Geography
Bodies of water
Lacustrine landforms
Drainage basins
Hydrology
Lakes | Endorheic basin | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,973 | [
"Lakes",
"Hydrology",
"Drainage basins",
"Environmental engineering"
] |
319,026 | https://en.wikipedia.org/wiki/Bystander%20intervention | Bystander intervention is a type of training used in post-secondary education institutions to prevent sexual assault or rape, binge drinking, and harassment and unwanted comments of a racist, homophobic, or transphobic nature. A bystander is a person who is present at an event, party, or other setting and who notices a problematic situation, such as someone making sexual advances on a drunk person. The bystander then takes on personal responsibility and takes action to intervene, with the goal of preventing the situation from escalating.
The bystander who is intervening has several options, including distracting either of the people involved, getting help from others, checking in later, or directly intervening. There are risks to bystander intervention: it can lead to fights or confrontations, and it can spoil the mood for the people whose interaction was interrupted. Bystander intervention may also be called "bystander education", because the model is based on a system of educating trainers and leaders who will then go on to train people from their community.
Prevention of sexual assault
One bystander intervention researcher suggests that a potential sexual assault should be stopped by pretending to spill a drink on a drunk person who is trying to make sexual moves on another intoxicated person, to distract him and "...stop bad behavior before it crosses the line from drunken partying to sexual assault". Advocates hope that bystander intervention programs can yield the same results on sexual assault that designated driver initiatives have had in reducing impaired driving; another similarity is that both programs do not discourage drinking itself, only the combination of drinking and law-breaking. Some US universities are introducing bystander education initiatives to comply with Title IX, which requires US universities which receive federal funding to not discriminate on basis of gender.
Research
A study on bystander intervention by the University of New Hampshire showed that 38 percent of the men who participated in a bystander intervention campaign training said they intervened to stop a sexual assault, versus only 12 percent of the control group (who did not see the campaign). An Ohio University study compared men who took a bystander intervention session with a group of men who did not have the training; 1.5 percent of the bystander intervention participants said they had committed sexual assault over the last four months, versus 6.7 percent from the untrained group. One challenge with bystander education programs is that a study has shown that white female students are less likely to intervene in a hypothetical situation where they see an intoxicated black woman being led towards a bedroom at a party by a non-intoxicated male, as white students feel "less personal responsibility" to help women of colour and they feel that the black woman is deriving pleasure from the situation.
See also
Green Dot Bystander Intervention
Bystander effect
References
Rape
Harm reduction
Violence
Educational programs | Bystander intervention | [
"Biology"
] | 587 | [
"Behavior",
"Aggression",
"Human behavior",
"Violence"
] |
319,106 | https://en.wikipedia.org/wiki/Pulse%20computation | Pulse computation is a hybrid of digital and analog computation that uses aperiodic electrical spikes, as opposed to the periodic voltages in a digital computer or the continuously varying voltages in an analog computer. Pulse streams are unclocked, so they can arrive at arbitrary times and can be generated by analog processes, although each spike is allocated a binary value, as it would be in a digital computer.
Pulse computation is primarily studied as part of the field of neural networks. The processing unit in such a network is called a "neuron".
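A common minimal model of such a "neuron" is the leaky integrate-and-fire unit (a sketch of our own for illustration, not taken from any particular library; the names and constants are assumptions): the membrane potential decays between spikes, jumps when an input spike arrives, and emits an output spike on crossing a threshold.

```haskell
type Time = Double

-- Leaky integrate-and-fire neuron driven by an unclocked stream of input
-- spike times: the potential decays exponentially between spikes,
-- increments by a fixed weight when a (binary-valued) spike arrives, and
-- emits an output spike, then resets, when it crosses the threshold.
lifNeuron :: Double -> Double -> Double -> [Time] -> [Time]
lifNeuron tau weight threshold = go 0 0
  where
    go _ _ [] = []
    go v tPrev (t : ts) =
      let dt = t - tPrev
          v' = v * exp (-dt / tau) + weight
      in  if v' >= threshold
            then t : go 0 t ts  -- fire an output spike and reset
            else go v' t ts

main :: IO ()
main = print (lifNeuron 10 0.4 1.0 [1, 2, 3, 20, 21, 22, 23])
-- Closely spaced input spikes accumulate and trigger output: [3.0,22.0]
```

Because the arrival times are arbitrary real numbers rather than clock ticks, the computation is event-driven in the analog sense, while each spike itself carries a single binary value.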
References
Computational neuroscience | Pulse computation | [
"Technology"
] | 115 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
319,122 | https://en.wikipedia.org/wiki/Pinwheel%20Galaxy | The Pinwheel Galaxy (also known as Messier 101, M101 or NGC 5457) is a face-on, unbarred, and counterclockwise spiral galaxy located about 21 million light-years from Earth in the constellation Ursa Major. It was discovered by Pierre Méchain in 1781 and was communicated that year to Charles Messier, who verified its position for inclusion in the Messier Catalogue as one of its final entries.
On February 28, 2006, NASA and the European Space Agency released a very detailed image of the Pinwheel Galaxy, which was the largest and most detailed image of a galaxy by Hubble Space Telescope at the time. The image was composed of 51 individual exposures, plus some extra ground-based photos.
Discovery
Pierre Méchain, the discoverer of the galaxy, described it as a "nebula without star, very obscure and pretty large, 6' to 7' in diameter, between the left hand of Bootes and the tail of the great Bear. It is difficult to distinguish when one lights the [grating] wires."
William Herschel wrote in 1784 that the galaxy was one of several which "...in my 7-, 10-, and 20-feet [focal length] reflectors shewed a mottled kind of nebulosity, which I shall call resolvable; so that I expect my present telescope will, perhaps, render the stars visible of which I suppose them to be composed."
Lord Rosse observed the galaxy in his 72-inch-diameter Newtonian reflector during the second half of the 19th century. He was the first to make extensive note of the spiral structure and made several sketches.
Though the galaxy can be detected with binoculars or a small telescope, to observe the spiral structure in a telescope without a camera requires a fairly large instrument, very dark skies, and a low-power eyepiece.
Structure and composition
M101 is a large galaxy, with a diameter of 170,000 light-years. By comparison, the Milky Way has a diameter of 87,400 light-years. It has around a trillion stars. It has a disk mass on the order of 100 billion solar masses, along with a small central bulge of about 3 billion solar masses. Its characteristics can be compared to those of Andromeda Galaxy.
M101 has a high population of H II regions, many of which are very large and bright. H II regions usually accompany the enormous clouds of high density molecular hydrogen gas contracting under their own gravitational force where stars form. H II regions are ionized by large numbers of extremely bright and hot young stars; those in M101 are capable of creating hot superbubbles. In a 1990 study, 1,264 H II regions were cataloged in the galaxy. Three are prominent enough to receive New General Catalogue numbers—NGC 5461, NGC 5462, and NGC 5471.
M101 is asymmetrical due to the tidal forces from interactions with its companion galaxies. These gravitational interactions compress interstellar hydrogen gas, which then triggers strong star formation activity in M101's spiral arms that can be detected in ultraviolet images.
In 2001, the X-ray source P98, located in M101, was identified as an ultra-luminous X-ray source—a source more powerful than any single star but less powerful than a whole galaxy—using the Chandra X-ray Observatory. It received the designation M101 ULX-1. In 2005, Hubble and XMM-Newton observations showed the presence of an optical counterpart, strongly indicating that M101 ULX-1 is an X-ray binary. Further observations showed that the system deviated from expected models—the black hole is just 20 to 30 solar masses, and consumes material (including captured stellar wind) at a higher rate than theory suggests.
It is estimated that M101 has about 150 globular clusters, the same as the number of the Milky Way's globular clusters.
Companion galaxies
M101 has six prominent companion galaxies: NGC 5204, NGC 5474, NGC 5477, NGC 5585, UGC 8837 and UGC 9405. As stated above, the gravitational interaction between it and its satellites may have spawned its grand design pattern. The galaxy has probably distorted the second-listed companion. The list comprises most or all of the M101 Group.
Supernovae and luminous red nova
Six internal supernovae have been recorded:
SN 1909A was discovered by Max Wolf in January 1909 and reached magnitude 12.1.
SN 1951H was discovered by Milton Humason on 1 September 1951 and reached magnitude 17.5.
SN 1970G (typeII, mag. 11.5) was discovered by Miklós Lovas on 30 July 1970.
On August 24, 2011, a Type Ia supernova, SN 2011fe, initially designated PTF 11kly, was discovered in M101. It had visual magnitude 17.2 at discovery and reached 9.9 at its peak.
On February 10, 2015, a luminous red nova, known as M101 OT2015-1 was discovered in the Pinwheel Galaxy.
On May 19, 2023, SN 2023ixf was discovered in M101, and immediately classified as a Type II supernova.
See also
List of Messier objects
– a similar face-on spiral galaxy
– a similar face-on spiral galaxy that is sometimes called the Southern Pinwheel Galaxy
– a similar face-on spiral galaxy
– another galaxy sometimes called the Pinwheel Galaxy
References
External links
SEDS: Spiral Galaxy M101
Intermediate spiral galaxies
M101 Group
Ursa Major
Messier objects
NGC objects
08981
50063
026
Astronomical objects discovered in 1781
Discoveries by Pierre Méchain | Pinwheel Galaxy | [
"Astronomy"
] | 1,172 | [
"Ursa Major",
"Constellations"
] |
319,138 | https://en.wikipedia.org/wiki/French%20paradox | The French paradox is an apparently paradoxical epidemiological observation that French people have a relatively low incidence of coronary heart disease (CHD), while having a diet relatively rich in saturated fats, in apparent contradiction to the widely held belief that the high consumption of such fats is a risk factor for CHD. The paradox is that if the thesis linking saturated fats to CHD is valid, the French ought to have a higher rate of CHD than comparable countries where the per capita consumption of such fats is lower.
It has also been suggested that the French paradox is an illusion, created in part by differences in the way that French authorities collect health statistics, as compared to other countries, and in part by the long-term effects, in the coronary health of French citizens, of changes in dietary patterns that were adopted years earlier.
Identifying and quantifying the French paradox
In 1991, Serge Renaud, a scientist from Bordeaux University in France (considered today the father of the phrase), presented the results of his scientific study of the actual data behind the popular perception the phrase describes. This was followed by a segment on the American CBS News television program 60 Minutes.
In 1991, Renaud extended his studies in partnership with two then-junior researchers, cardiologist Michel de Lorgeril and dietician Patricia Salen. The three built on Renaud's original study; their paper concluded that a diet based on southwestern Mediterranean cuisine (which is high in omega-3 oils and antioxidants and includes "moderate consumption" of red wine) was associated with lower rates of cancer, myocardial infarction and cardiovascular disease, partly through increasing HDL cholesterol while reducing LDL cholesterol.
Statistical illusion hypothesis
In 1999, Malcolm Law and Nicholas Wald published a study in the British Medical Journal, using data from a 1994 study of alcohol and diet to explain how the French paradox might actually be an illusion, caused by two statistical distortions.
First, Law and Wald attributed about 20% of the difference in the observed rates of CHD between France and the United Kingdom to the under-certification of CHD in France, relative to the UK.
Second, Law and Wald presented a time-lag hypothesis: if there is a delay between a rise in serum cholesterol concentrations and the resulting increase in ischaemic heart disease mortality, then the current rate of mortality from CHD is more likely to be linked to past levels of serum cholesterol and fat consumption than to current serum cholesterol levels and patterns of fat consumption. They wrote,
We propose that the difference is due to the time lag between increases in consumption of animal fat and serum cholesterol concentrations and the resulting increase in mortality from heart disease—similar to the recognised time lag between smoking and lung cancer. Consumption of animal fat and serum cholesterol concentrations increased only recently in France but did so decades ago in Britain.
Evidence supports this explanation: mortality from heart disease across countries, including France, correlates strongly with levels of animal fat consumption and serum cholesterol in the past (30 years ago) but only weakly to recent levels. Based on past levels, mortality data for France are not discrepant.
In addition, the French population has become increasingly overweight. A study published by the French Institute of Health and Medical Research (INSERM) revealed an increase in obesity from 8.5% in 1997 to 14.5% in 2009, with women showing a greater tendency toward obesity than men.
Impact
Cultural impact
The overall impact of the popular perception, in the English-speaking world, that the French paradox is a real phenomenon, has been to give added credibility to health claims associated with specific French dietary practices.
This was seen most dramatically when, in 1991, an early account of the then-novel concept of the French paradox was aired in the United States on 60 Minutes. The broadcast left the impression that France's high levels of red wine consumption accounted for much of the country's lower incidence of cardiac disease. Within a year, the consumption of red wine in the United States had increased by 40% and some wine sellers began promoting their products as "health food."
The cultural impact of the French paradox can be seen in the large number of book titles in the diet-and-health field that purport to give the reader access to the secrets behind the paradox:
The Fat Fallacy: The French Diet Secrets to Permanent Weight Loss (William Clower, 2003);
The French Don't Diet Plan: 10 Simple Steps to Stay Thin for Life (William Clower, 2006)
French Women Don't Get Fat (Mireille Guiliano, 2004), which became a #1 best-seller in 2006
Cholesterol and The French Paradox (Frank Cooper, 2009);
The French Women Don't Get Fat Cookbook (Mireille Guiliano, 2010).
Other books sought to boost their credibility by reference to the French paradox. The American edition of The Dukan Diet, written by Pierre Dukan, a Paris-based doctor, is marketed with the subtitle, "The real reason the French stay thin".
Scientific impact
The existence of the French paradox has caused some researchers to speculate that the link between dietary consumption of saturated fats and coronary heart disease might not be as strong as had previously been thought. This has resulted in a review of the earlier studies that suggested this link.
Some researchers have thrown into question the entire claimed connection between natural saturated fat consumption and cardiovascular disease. In 2006, this view received some indirect support from the results of the Women's Health Initiative's dietary modification study. After accumulating approximately 8 years of data on the diet and health of 49,000 post-menopausal American women, the researchers found that the balance of saturated versus unsaturated fats did not appear to affect heart disease risk, whereas the consumption of trans fat was associated with significantly increased risk of cardiovascular disease.
Similarly, the authors of a 2009 review of dietary studies concluded that there was insufficient evidence to establish a causal link between consumption of saturated fats and coronary heart disease risk.
Possible explanations
Explanations based on the high per capita consumption of red wine in France
It has been suggested that France's high red wine consumption is a primary factor in the trend. This hypothesis was expounded in a 60 Minutes broadcast in 1991. The program catalysed a large increase in North American demand for red wines from around the world. It is believed that one of the components of red wine potentially related to this effect is resveratrol; however, the authors of a 2003 study concluded that the amount of resveratrol absorbed by drinkers of red wine is small enough that it is unlikely to explain the paradox.
Explanations based on multiple factors
In "Lifestyle in France and the United States" (2010), one study reviewed identifies three major factors likely to be involved in the paradox:
Walking (On average, French people walk briskly much more often than Americans.)
Water (On average, French people drink more water and fewer sweetened drinks than Americans.)
Fruit and vegetables (On average, French people consume more fresh fruits and vegetables than Americans do.)
In his 2003 book, The Fat Fallacy: The French Diet Secrets to Permanent Weight Loss, Will Clower suggests the French paradox may be narrowed down to a few key factors, namely:
Good fats versus bad fats – French people get up to 80% of their fat intake from dairy and vegetable sources, including whole milk, cheeses, and whole milk yogurt.
Higher quantities of fish (at least three times a week).
Smaller portions, eaten more slowly and divided among courses that let the body begin to digest food already consumed before more food is added.
Lower sugar intake – American low-fat and no-fat foods often contain high concentrations of sugar. French diets avoid these products, preferring full-fat versions without added sugar.
Low incidence of snacks between meals.
Avoidance of common American food items, such as soda, deep-fried foods, snack foods, and especially prepared foods that can typically make up a large percentage of the foods found in American grocery stores.
Clower tends to downplay the common beliefs that wine consumption and smoking are greatly responsible for the French paradox. While a higher percentage of French people smoke, this is not greatly higher than the U.S. (35% in France vs. 25% in U.S.) and is unlikely to account for the weight difference between countries.
Early life nutrition
One proposed explanation of the French paradox regards possible effects (epigenetic or otherwise) of dietary improvements in the first months and years of life, exerted across multiple generations. Following defeat in the Franco-Prussian War in 1871, the French government introduced an aggressive nutritional program providing high quality foods to pregnant women and young children with the aim of fortifying future generations of soldiers (the program was implemented about three decades prior to an analogous initiative in England in response to the Boer War). It has been suggested that the particular timing of this historical intervention might help explain the relatively low rates of obesity and heart disease found in France.
See also
References
Citations
Sources
Perdue, W. Lewis, et al. the French Paradox and Beyond. Sonoma, CA: Renaissance, 1993.
Further reading
External links
How To Live Forever The Economist 3 January 2008
Cardiovascular diseases
Health paradoxes
Wine
Epidemiology
Public health in France | French paradox | [
"Environmental_science"
] | 1,931 | [
"Epidemiology",
"Environmental social science"
] |
319,141 | https://en.wikipedia.org/wiki/Eagle%20Nebula | The Eagle Nebula (catalogued as Messier 16 or M16, and as NGC 6611, and also known as the Star Queen Nebula) is a young open cluster of stars in the constellation Serpens, discovered by Jean-Philippe de Cheseaux in 1745–46. Both the "Eagle" and the "Star Queen" refer to visual impressions of the dark silhouette near the center of the nebula, an area made famous as the "Pillars of Creation" imaged by the Hubble Space Telescope. The nebula contains several active star-forming gas and dust regions, including the aforementioned Pillars of Creation. The Eagle Nebula lies in the Sagittarius Arm of the Milky Way.
Characteristics
The Eagle Nebula is part of a diffuse emission nebula, or H II region, which is catalogued as IC 4703. This region of active current star formation is about 5700 light-years distant. A spire of gas that can be seen coming off the nebula in the northeastern part is approximately 9.5 light-years or about 90 trillion kilometers long.
The cluster associated with the nebula has approximately 8100 stars, which are mostly concentrated in a gap in the molecular cloud to the north-west of the Pillars.
The brightest star (HD 168076) has an apparent magnitude of +8.24, easily visible with good binoculars. It is actually a binary star formed of an O3.5V star plus an O7.5V companion. This star has a mass of roughly 80 solar masses, and a luminosity up to 1 million times that of the Sun.
The cluster's age has been estimated to be 1–2 million years.
The descriptive names reflect impressions of the shape of the central pillar rising from the southeast into the central luminous area. The name "Star Queen Nebula" was introduced by Robert Burnham, Jr., reflecting his characterization of the central pillar as the Star Queen shown in silhouette.
"Pillars of Creation" region
Images produced by Jeff Hester and Paul Scowen using the Hubble Space Telescope in 1995 greatly improved scientific understanding of processes inside the nebula. One of these became famous as the "Pillars of Creation", depicting a large region of star formation. Its small dark pockets are believed to be protostars (Bok globules). The pillar structure resembles that of a much larger instance in the Soul Nebula of Cassiopeia, imaged with the Spitzer Space Telescope in 2005 and similarly characterized as "Pillars of Star Creation" or "Pillars of Star Formation". These columns – which resemble stalagmites protruding from the floor of a cavern – are composed of interstellar hydrogen gas and dust, which act as incubators for new stars. Inside the columns and on their surface astronomers have found knots or globules of denser gas, called EGGs ("Evaporating Gaseous Globules"). Stars are being formed inside some of these.
X-ray images from the Chandra observatory compared with Hubble's "Pillars" image have shown that X-ray sources (from young stars) do not coincide with the pillars, but rather randomly dot the nebula. Any protostars in the pillars' EGGs are not yet hot enough to emit X-rays.
Evidence from the Spitzer Space Telescope originally suggested that the pillars in M16 may be threatened by a past supernova. Hot gas observed by Spitzer in 2007 suggested the pillars were likely already being disturbed by a supernova that exploded 8,000 to 9,000 years ago. Owing to the nebula's distance, the main blast of light would have reached Earth for a brief time 1,000 to 2,000 years ago. The theorized, more slowly moving shock wave would take a few thousand years to move through the nebula and would blow away the delicate pillars. However, in 2014 the pillars were imaged a second time by Hubble, in both visible and infrared light. The images, taken 20 years apart, provided a new, detailed account of the rate of evaporation occurring within the pillars. No supernova damage is evident within them, and it is estimated that in some form they will persist for at least 100,000 more years.
Gallery
See also
List of Messier objects
NGC 1193
References
External links
The Eagle's EGGs – ESO Photo Release
ESO: An Eagle of Cosmic Proportions incl. Photos & Animations
ESO: VST Captures Three-In-One incl. Photos & Animations
Messier 16, SEDS Messier pages
Spacetelescope.org, Hubble telescope images on M16
Darkatmospheres.com, Eagle Nebula M16 (wide)
NASA.gov, APOD February 8, 2009 picture Eagle Nebula
Eagle Nebula (Messier 16) at Constellation Guide
Carina–Sagittarius Arm
Messier objects
Serpens
Open clusters
NGC objects
H II regions
Sharpless objects
17451231
Star-forming regions | Eagle Nebula | [
"Astronomy"
] | 1,007 | [
"Constellations",
"Serpens"
] |
319,150 | https://en.wikipedia.org/wiki/Messier%204 | Messier 4 or M4 (also known as NGC 6121 or the Spider Globular Cluster) is a globular cluster in the constellation of Scorpius. It was discovered by Philippe Loys de Chéseaux in 1745 and catalogued by Charles Messier in 1764. It was the first globular cluster in which individual stars were resolved.
Visibility
M4 is conspicuous in even the smallest of telescopes as a fuzzy ball of light. It appears about the same size as the Moon in the sky. It is one of the easiest globular clusters to find, being located only 1.3 degrees west of the bright star Antares, with both objects being visible in a wide-field telescope. Modestly sized telescopes will begin to resolve individual stars, of which the brightest in M4 are of apparent magnitude 10.8.
Characteristics
M4 is a rather loosely concentrated cluster of class IX and measures 75 light-years across. It features a characteristic "bar" structure across its core, visible to moderate sized telescopes. The structure consists of 11th-magnitude stars and is approximately 2.5' long and was first noted by William Herschel in 1783. At least 43 variable stars have been observed within M4.
M4 is approximately 6,000 light-years away, making it the closest globular cluster to the Solar System. It has an estimated age of 12.2 billion years.
In astronomy, the abundance of elements other than hydrogen and helium is called the metallicity, and it is usually denoted by the abundance ratio of iron to hydrogen as compared to the Sun. For this cluster, the measured abundance of iron is $[\mathrm{Fe}/\mathrm{H}] = \log_{10}\left(\frac{(N_{\mathrm{Fe}}/N_{\mathrm{H}})_{\mathrm{M4}}}{(N_{\mathrm{Fe}}/N_{\mathrm{H}})_{\odot}}\right) = -1.07$.
This value is the logarithm of the ratio of iron to hydrogen relative to the same ratio in the Sun. Thus the cluster has an abundance of iron equal to $10^{-1.07} \approx 8.5\%$ of the iron abundance in the Sun. This strongly suggests this cluster hosts two distinct stellar populations, differing by age. Thus the cluster probably saw two main cycles or phases of star formation.
The space velocity components are (U, V, W) = (, , ) km/s. This confirms an orbit around the Milky Way of a period of with eccentricity 0.80 ± 0.03: during periapsis it comes within from the galactic core, while at apoapsis it travels out to . The inclination is at (an angle of) from the galactic plane, thus it reaches as much as above the disk. When passing through the disk, this cluster does so at less than 5 kpc from the galactic nucleus. The cluster undergoes tidal shock during each passage, which can cause the repeated shedding of stars. Thus the cluster may have been much more massive.
Notable stars
Photographs by the Hubble Space Telescope in 1995 found white dwarf stars in M4 that are among the oldest known stars in our galaxy, aged 13 billion years. One has been found to be a binary star with a pulsar companion, PSR B1620−26, and a planet orbiting it with a mass of 2.5 times that of Jupiter. One star in Messier 4 was also found to have much more of the rare light element lithium than expected.
CX-1 is located in M4. It is a possible millisecond pulsar/neutron star binary, with an orbital period of 6.31 hours.
Spinthariscope analogy
The view of Messier 4 through a good telescope was likened by Robert Burnham Jr. to that of hyperkinetic luminous alpha particles seen in a spinthariscope.
Central black hole
In 2023, an analysis of Hubble Space Telescope and European Space Agency's Gaia spacecraft data from Messier 4 revealed an excess mass of roughly 800 solar masses in the center of this cluster, which appears to not be extended. This could thus be considered as kinematic evidence for an intermediate-mass black hole (even if an unusually compact cluster of compact objects like white dwarfs, neutron stars or stellar-mass black holes cannot be completely discounted).
References
See also
List of Messier objects
External links
M4, SEDS Messier pages
M4, Galactic Globular Clusters Database page
Messier 004
"Astronomy"
] | 865 | [
"Scorpius",
"Constellations"
] |
319,153 | https://en.wikipedia.org/wiki/Messier%2083 | Messier 83 or M83, also known as the Southern Pinwheel Galaxy and NGC 5236, is a barred spiral galaxy approximately 15 million light-years away in the constellation borders of Hydra and Centaurus. Nicolas-Louis de Lacaille discovered M83 on 17 February 1752 at the Cape of Good Hope. Charles Messier added it to his catalogue of nebulous objects (now known as the Messier Catalogue) in March 1781.
It is one of the closest and brightest barred spiral galaxies in the sky, and is visible with binoculars. It has an isophotal diameter at about . Its nickname of the Southern Pinwheel derives from its resemblance to the Pinwheel Galaxy (M101).
Characteristics
M83 is a massive, grand design spiral galaxy. Its morphological classification in the De Vaucouleurs system is SAB(s)c, where the 'SAB' denotes a weak-barred spiral, '(s)' indicates a pure spiral structure with no ring, and 'c' means the spiral arms are loosely wound. The peculiar dwarf galaxy NGC 5253 lies near M83, and the two likely interacted within the last billion years resulting in starburst activity in their central regions.
The star formation rate in M83 is higher along the leading edge of the spiral arms, as predicted by density wave theory. NASA's Galaxy Evolution Explorer project on 16 April 2008 reported finding large numbers of new stars in the outer reaches of the galaxy— from the center. It had been thought that these areas lacked the materials necessary for star formation.
Supernovae
Six supernovae have been observed in M83:
SN 1923A (type unknown, mag. 14) was discovered by Carl Otto Lampland on 5 May 1923.
SN 1945B (type unknown, mag. 14.2) was discovered by William Liller on 13 July 1945.
SN 1950B (type unknown, mag. 14.5) was discovered by Guillermo Haro on 15 March 1950.
SN 1957D (type unknown, mag. 15) was discovered by H. S. Gates on 28 December 1957.
SN 1968L (type II-P, mag. 11.9) was discovered by J. C. Bennett on 17 July 1968.
SN 1983N (type Ia, mag. 11.9) was discovered by Robert Evans from Australia on July 3, 1983. On July 6, it was observed with the Very Large Array and became the first type I supernova to have a radio emission detected. The supernova reached peak optical brightness on July 17, achieving an apparent visual magnitude of 11.54. Although identified as type I, the spectrum was considered peculiar. A year after the explosion, about of iron was discovered in the ejecta. This was the first time that such a large amount of iron was unambiguously detected from a supernova explosion. SN 1983N became the modern prototype of a hydrogen deficient type Ib supernova, with the progenitor being inferred as a Wolf–Rayet star.
Environment
M83 is at the center of one of two subgroups within the Centaurus A/M83 Group, a nearby galaxy group. Centaurus A is at the center of the other subgroup. These are sometimes identified as one group, and sometimes as two. However, the galaxies around Centaurus A and the galaxies around M83 are physically close to each other, and both subgroups appear not to be moving relative to each other.
See also
List of Messier objects
M83 (band), the band named after the galaxy
References
External links
ESO Photo Release eso0136, An Infrared Portrait of the Barred Spiral Galaxy Messier 83
M83, SEDS Messier pages
Spiral Galaxy Messier 83 at the astro-photography site of Takayuki Yoshida
M83 The Southern Pinwheel
X-rays Discovered From Young Supernova Remnant (SN 1957D)
Messier 83 (Southern Pinwheel Galaxy) at Constellation Guide
17520223
Barred spiral galaxies
Centaurus A/M83 Group
Hydra (constellation)
Intermediate spiral galaxies
083
-05-32-050
NGC objects
048082
444-081
13341-2936
Starburst galaxies
366 | Messier 83 | [
"Astronomy"
] | 876 | [
"Hydra (constellation)",
"Constellations"
] |
319,163 | https://en.wikipedia.org/wiki/NGC%202070 | NGC 2070 (also known as Caldwell 103) is a large open cluster and candidate super star cluster forming the heart of the bright region in the centre-south-east of the Large Magellanic Cloud. This cluster was discovered by French astronomer Nicolas-Louis de Lacaille in 1752. It is at the centre of the Tarantula Nebula and produces most of the energy that makes the latter's gas and dust visible. Its central condensation is the star cluster R136, one of the most energetic star clusters known. Among its stars are many of great dimension, including the second most massive star known, R136a1, at 215 and 6.16 million .
References
External links
2070
Tarantula Nebula
Dorado
Open clusters
Large Magellanic Cloud
Star-forming regions | NGC 2070 | [
"Astronomy"
] | 163 | [
"Dorado",
"Constellations"
] |
319,252 | https://en.wikipedia.org/wiki/Strict%20function | In computer science and computer programming, a function f is said to be strict if, when applied to a non-terminating expression, it also fails to terminate. A strict function in the denotational semantics of programming languages is a function f where . The entity , called bottom, denotes an expression that does not return a normal value, either because it loops endlessly or because it aborts due to an error such as division by zero. A function that is not strict is called non-strict. A strict programming language is one in which user-defined functions are always strict.
Intuitively, non-strict functions correspond to control structures. Operationally, a strict function is one that always evaluates its argument; a non-strict function is one that might not evaluate some of its arguments. Functions having more than one parameter can be strict or non-strict in each parameter independently, as well as jointly strict in several parameters simultaneously.
As an example, the if-then-else expression of many programming languages, called ?: in languages inspired by C, may be thought of as a function of three parameters. This function is strict in its first parameter, since the function must know whether its first argument evaluates to true or to false before it can return; but it is non-strict in its second parameter, because (for example) if(false,⊥,1) = 1, as well as non-strict in its third parameter, because (for example) if(true,2,⊥) = 2. However, it is jointly strict in its second and third parameters, since if(true,⊥,⊥) = ⊥ and if(false,⊥,⊥) = ⊥.
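The distinction can be made concrete with thunks. The following is a minimal Python sketch (illustrative only; the names `if_` and `bottom` are invented here, not taken from any library). Passing the branches as zero-argument callables makes the conditional non-strict in its second and third parameters, while the condition itself is always evaluated:

```python
def bottom():
    """A non-terminating computation, standing in for the denotational bottom."""
    while True:
        pass

def if_(cond, then_thunk, else_thunk):
    """A non-strict conditional: branches are thunks (zero-argument
    callables), so only the selected branch is ever evaluated."""
    return then_thunk() if cond else else_thunk()

# Non-strict in the unused branch: bottom is never called.
print(if_(False, bottom, lambda: 1))  # -> 1
print(if_(True, lambda: 2, bottom))   # -> 2
```

A strictness analyzer that proves an argument is always evaluated could, as the next paragraph describes, drop the thunk wrapper and pass that argument by value.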
In a non-strict functional programming language, strictness analysis refers to any algorithm used to prove the strictness of a function with respect to one or more of its arguments. Such functions can be compiled to a more efficient calling convention, such as call by value, without changing the meaning of the enclosing program.
See also
Eager evaluation
Lazy evaluation
Short-circuit evaluation
References
Formal methods
Denotational semantics
Evaluation strategy | Strict function | [
"Engineering"
] | 423 | [
"Software engineering",
"Formal methods"
] |
319,341 | https://en.wikipedia.org/wiki/Guidance%20system | A guidance system is a virtual or physical device, or a group of devices implementing a controlling the movement of a ship, aircraft, missile, rocket, satellite, or any other moving object. Guidance is the process of calculating the changes in position, velocity, altitude, and/or rotation rates of a moving object required to follow a certain trajectory and/or altitude profile based on information about the object's state of motion.
A guidance system is usually part of a Guidance, navigation and control system, whereas navigation refers to the systems necessary to calculate the current position and orientation based on sensor data like those from compasses, GPS receivers, Loran-C, star trackers, inertial measurement units, altimeters, etc. The output of the navigation system, the navigation solution, is an input for the guidance system, among others like the environmental conditions (wind, water, temperature, etc.) and the vehicle's characteristics (i.e. mass, control system availability, control systems correlation to vector change, etc.). In general, the guidance system computes the instructions for the control system, which comprises the object's actuators (e.g., thrusters, reaction wheels, body flaps, etc.), which are able to manipulate the path and orientation of the object without direct or continuous human control.
One of the earliest examples of a true guidance system is that used in the German V-1 during World War II. The navigation system consisted of a simple gyroscope, an airspeed sensor, and an altimeter. The guidance instructions were target altitude, target velocity, cruise time, and engine cut off time.
A guidance system has three major sub-sections: Inputs, Processing, and Outputs. The input section includes sensors, course data, radio and satellite links, and other information sources. The processing section, composed of one or more CPUs, integrates this data and determines what actions, if any, are necessary to maintain or achieve a proper heading. This is then fed to the outputs which can directly affect the system's course. The outputs may control speed by interacting with devices such as turbines, and fuel pumps, or they may more directly alter course by actuating ailerons, rudders, or other devices.
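As a toy illustration of that input-process-output split (a sketch only; the field names and gains are hypothetical, not drawn from any real guidance system):

```python
def guidance_step(nav_solution, target, gains):
    """One cycle of a simple proportional guidance loop: compare the
    navigation solution (input) with the desired course (processing)
    and emit actuator commands (output)."""
    heading_error = target["heading"] - nav_solution["heading"]
    speed_error = target["speed"] - nav_solution["speed"]
    return {
        "rudder": gains["heading"] * heading_error,  # corrective deflection
        "throttle": gains["speed"] * speed_error,    # corrective thrust
    }

# Example: 5 degrees off course and 2 m/s too slow.
cmd = guidance_step({"heading": 85.0, "speed": 10.0},
                    {"heading": 90.0, "speed": 12.0},
                    {"heading": 0.8, "speed": 0.5})
print(cmd)  # {'rudder': 4.0, 'throttle': 1.0}
```

Real systems replace the proportional gains with far more elaborate control laws, but the sensor-compute-actuate cycle is the same.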
History
Inertial guidance systems were originally developed for rockets. American rocket pioneer Robert Goddard experimented with rudimentary gyroscopic systems. Dr. Goddard's systems were of great interest to contemporary German pioneers including Wernher von Braun. The systems entered more widespread use with the advent of spacecraft, guided missiles, and commercial airliners.
US guidance history centers on two distinct communities: one that grew out of Caltech and the NASA Jet Propulsion Laboratory, and one formed around MIT and the German scientists who developed the early V-2 rocket guidance. The GN&C system for the V-2 provided many innovations and was the most sophisticated military weapon of 1942, using self-contained closed-loop guidance. Early V-2s used two gyroscopes and a lateral accelerometer with a simple analog computer to adjust the azimuth of the rocket in flight. Analog computer signals drove four external rudders on the tail fins for flight control. Von Braun engineered the surrender of 500 of his top rocket scientists, along with plans and test vehicles, to the Americans. They arrived in Fort Bliss, Texas in 1945 and were subsequently moved to Huntsville, Alabama (the Redstone Arsenal) in 1950. Von Braun's passion was interplanetary space flight, but his tremendous leadership skills and experience with the V-2 program made him invaluable to the US military. In 1955 the Redstone team was selected to put America's first satellite into orbit, placing this group at the center of both military and commercial space.
The Jet Propulsion Laboratory traces its history from the 1930s, when Caltech professor Theodore von Karman conducted pioneering work in rocket propulsion. Funded by Army Ordnance in 1942, JPL's early efforts would eventually involve technologies beyond those of aerodynamics and propellant chemistry. The result of the Army Ordnance effort was JPL's answer to the German V-2 missile, named MGM-5 Corporal, first launched in May 1947. On December 3, 1958, two months after the National Aeronautics and Space Administration (NASA) was created by Congress, JPL was transferred from Army jurisdiction to that of this new civilian space agency. This shift was due to the creation of a military focused group derived from the German V2 team. Hence, beginning in 1958, NASA JPL and the Caltech crew became focused primarily on unmanned flight and shifted away from military applications with a few exceptions. The community surrounding JPL drove tremendous innovation in telecommunication, interplanetary exploration and earth monitoring (among other areas).
In the early 1950s, the US government wanted to insulate itself against overdependence on the German team for military applications. Among the areas that were domestically "developed" was missile guidance. In the early 1950s the MIT Instrumentation Laboratory (later to become the Charles Stark Draper Laboratory, Inc.) was chosen by the Air Force Western Development Division to provide a self-contained guidance system backup to Convair in San Diego for the new Atlas intercontinental ballistic missile. The technical monitor for the MIT task was a young engineer named Jim Fletcher, who later served as the NASA Administrator. The Atlas guidance system was to be a combination of an on-board autonomous system and a ground-based tracking and command system. This was the beginning of a philosophic controversy which, in some areas, remains unresolved. The self-contained system finally prevailed in ballistic missile applications for obvious reasons. In space exploration, a mixture of the two remains.
In the summer of 1952, Dr. Richard Battin and Dr. J. Halcombe ("Hal") Laning Jr. researched computational solutions to guidance as computing began to step out of the analog approach. As computers of that time were very slow (and missiles very fast), it was extremely important to develop programs that were very efficient. Dr. J. Halcombe Laning, with the help of Phil Hankins and Charlie Werner, initiated work on MAC, an algebraic programming language for the IBM 650, which was completed by early spring of 1958. MAC became the work-horse of the MIT lab. MAC is an extremely readable language having a three-line format, vector-matrix notation, and mnemonic and indexed subscripts. Today's Space Shuttle (STS) language, called HAL (developed by Intermetrics, Inc.), is a direct offshoot of MAC. Since the principal architect of HAL was Jim Miller, who co-authored with Hal Laning a report on the MAC system, it is a reasonable speculation that the space shuttle language is named for Jim's old mentor, and not, as some have suggested, for the electronic superstar of the Arthur C. Clarke film 2001: A Space Odyssey. (Richard Battin, AIAA 82–4075, April 1982)
Hal Laning and Richard Battin undertook the initial analytical work on the Atlas inertial guidance in 1954. Other key figures at Convair were Charlie Bossart, the Chief Engineer, and Walter Schweidetzky, head of the guidance group. Walter had worked with Wernher von Braun at Peenemuende during World War II.
The initial "Delta" guidance system assessed the difference in position from a reference trajectory. A velocity to be gained (VGO) calculation is made to correct the current trajectory with the objective of driving VGO to Zero. The mathematics of this approach were fundamentally valid, but dropped because of the challenges in accurate inertial navigation (e.g. IMU Accuracy) and analog computing power. The challenges faced by the "Delta" efforts were overcome by the "Q system" of guidance. The "Q" system's revolution was to bind the challenges of missile guidance (and associated equations of motion) in the matrix Q. The Q matrix represents the partial derivatives of the velocity with respect to the position vector. A key feature of this approach allowed for the components of the vector cross product (v, xdv,/dt) to be used as the basic autopilot rate signals-a technique that became known as "cross-product steering." The Q-system was presented at the first Technical Symposium on Ballistic Missiles held at the Ramo-Wooldridge Corporation in Los Angeles on June 21 and 22, 1956. The "Q System" was classified information through the 1960s. Derivations of this guidance are used for today's military missiles. The CSDL team remains a leader in the military guidance and is involved in projects for most divisions of the US military.
On August 10, 1961, NASA awarded MIT a contract for a preliminary design study of a guidance and navigation system for the Apollo program (see Apollo on-board guidance, navigation, and control system, Dave Hoag, International Space Hall of Fame Dedication Conference in Alamogordo, N.M., October 1976). Today's space shuttle guidance is named PEG4 (Powered Explicit Guidance). It takes into account both the Q system and the predictor-corrector attributes of the original "Delta" system (PEG guidance). Although many updates to the shuttle's navigation system have taken place over the last 30 years (e.g. GPS in the OI-22 build), the guidance core of today's Shuttle GN&C system has evolved little. Within a manned system, a human interface is needed for the guidance system. As astronauts are the customers for the system, many new teams were formed that touch GN&C, as it is a primary interface to "fly" the vehicle. For the Apollo and STS (Shuttle) systems, CSDL "designed" the guidance, McDonnell Douglas wrote the requirements, and IBM programmed the requirements.
Much system complexity within manned systems is driven by "redundancy management" and the support of multiple "abort" scenarios that provide for crew safety. Manned US Lunar and Interplanetary guidance systems leverage many of the same guidance innovations (described above) developed in the 1950s. So while the core mathematical construct of guidance has remained fairly constant, the facilities surrounding GN&C continue to evolve to support new vehicles, new missions and new hardware. The center of excellence for the manned guidance remains at MIT (CSDL) as well as the former McDonnell Douglas Space Systems (in Houston).
See also
Automotive navigation system
Autopilot
Guide rail
List of missiles
Robotic navigation
Precision-guided munition
Guided bomb
Missile
Missile guidance
Terminal guidance
Proximity sensor
Artillery fuze
Magnetic proximity fuze
Proximity fuze
References
Further reading
An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition (AIAA Education Series) Richard Battin, May 1991
Space Guidance Evolution-A Personal Narrative, Richard Battin, AIAA 82–4075, April 1982
Military technology
Uncrewed vehicles
Applications of control engineering
NASA spin-off technologies
de:Navigationssystem
stq:Autonavigation | Guidance system | [
"Engineering"
] | 2,246 | [
"Control engineering",
"Applications of control engineering"
] |
319,342 | https://en.wikipedia.org/wiki/Microfilament | Microfilaments, also called actin filaments, are protein filaments in the cytoplasm of eukaryotic cells that form part of the cytoskeleton. They are primarily composed of polymers of actin, but are modified by and interact with numerous other proteins in the cell. Microfilaments are usually about 7 nm in diameter and made up of two strands of actin. Microfilament functions include cytokinesis, amoeboid movement, cell motility, changes in cell shape, endocytosis and exocytosis, cell contractility, and mechanical stability. Microfilaments are flexible and relatively strong, resisting buckling by multi-piconewton compressive forces and filament fracture by nanonewton tensile forces. In inducing cell motility, one end of the actin filament elongates while the other end contracts, presumably by myosin II molecular motors. Additionally, they function as part of actomyosin-driven contractile molecular motors, wherein the thin filaments serve as tensile platforms for myosin's ATP-dependent pulling action in muscle contraction and pseudopod advancement. Together, microfilaments form a tough, flexible framework that helps the cell move.
Actin was first discovered in rabbit skeletal muscle in the mid-1940s by F.B. Straub. Almost 20 years later, H.E. Huxley demonstrated that actin is essential for muscle contraction. The mechanism by which actin creates long filaments was first described in the mid-1980s. Later studies showed that actin has an important role in cell shape, motility, and cytokinesis.
Organization
Actin filaments are assembled in two general types of structures: bundles and networks. Bundles can be composed of polar filament arrays, in which all barbed ends point to the same end of the bundle, or non-polar arrays, where the barbed ends point towards both ends. A class of actin-binding proteins, called cross-linking proteins, dictate the formation of these structures. Cross-linking proteins determine filament orientation and spacing in the bundles and networks. These structures are regulated by many other classes of actin-binding proteins, including motor proteins, branching proteins, severing proteins, polymerization promoters, and capping proteins.
In vitro self-assembly
Measuring approximately 6 nm in diameter, microfilaments are the thinnest fibers of the cytoskeleton. They are polymers of actin subunits (globular actin, or G-actin), which as part of the fiber are referred to as filamentous actin, or F-actin. Each microfilament is made up of two helical, interlaced strands of subunits. Much like microtubules, actin filaments are polarized. Electron micrographs have provided evidence of their fast-growing barbed ends and their slow-growing pointed ends. This polarity has been determined by the pattern created by the binding of myosin S1 fragments, which are themselves subunits of the larger myosin II protein complex. The pointed end is commonly referred to as the minus (−) end and the barbed end as the plus (+) end.
In vitro actin polymerization, or nucleation, starts with the self-association of three G-actin monomers to form a trimer. ATP-bound actin then itself binds the barbed end, and the ATP is subsequently hydrolyzed. ATP hydrolysis occurs with a half time of about 2 seconds, while the half time for the dissociation of the inorganic phosphate is about 6 minutes. This autocatalyzed event reduces the binding strength between neighboring subunits, and thus generally destabilizes the filament. In vivo actin polymerization is catalyzed by a class of filament end-tracking molecular motors known as actoclampins. Recent evidence suggests that the rate of ATP hydrolysis and the rate of monomer incorporation are strongly coupled.
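Converting those half-times to first-order rate constants (a back-of-envelope sketch assuming simple exponential kinetics, which the quoted figures imply but do not state):

```latex
k = \frac{\ln 2}{t_{1/2}}, \qquad
k_{\text{hydrolysis}} \approx \frac{0.693}{2\ \text{s}} \approx 0.35\ \text{s}^{-1}, \qquad
k_{P_i\ \text{release}} \approx \frac{0.693}{360\ \text{s}} \approx 1.9 \times 10^{-3}\ \text{s}^{-1}
```

so phosphate release is roughly 180 times slower than hydrolysis, consistent with a transient ADP-Pi-actin cap persisting behind a growing barbed end.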
Subsequently, ADP-actin dissociates slowly from the pointed end, a process significantly accelerated by the actin-binding protein, cofilin. ADP bound cofilin severs ADP-rich regions nearest the (−)-ends. Upon release, the free actin monomer slowly dissociates from ADP, which in turn rapidly binds to the free ATP diffusing in the cytosol, thereby forming the ATP-actin monomeric units needed for further barbed-end filament elongation. This rapid turnover is important for the cell's movement. End-capping proteins such as CapZ prevent the addition or loss of monomers at the filament end where actin turnover is unfavorable, such as in the muscle apparatus.
Actin polymerization, together with capping proteins, was recently used to control the three-dimensional growth of protein filaments so as to form 3D topologies useful in technology and in making electrical interconnects. Electrical conductivity is obtained by metallisation of the protein 3D structure.
Mechanism of force generation
As a result of ATP hydrolysis, filaments elongate approximately 10 times faster at their barbed ends than their pointed ends. At steady-state, the polymerization rate at the barbed end matches the depolymerization rate at the pointed end, and microfilaments are said to be treadmilling. Treadmilling results in elongation in the barbed end and shortening in the pointed-end, so that the filament in total moves. Since both processes are energetically favorable, this means force is generated, the energy ultimately coming from ATP.
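The treadmilling steady state can be written down directly (a sketch in standard polymerization-rate notation, with on-rate constants k_on, off-rate constants k_off, and free-monomer concentration c; superscripts B and P denote the barbed and pointed ends):

```latex
\underbrace{k_{on}^{B} c_{ss} - k_{off}^{B}}_{\text{net addition, barbed end}}
\;=\;
\underbrace{k_{off}^{P} - k_{on}^{P} c_{ss}}_{\text{net loss, pointed end}}
\quad\Longrightarrow\quad
c_{ss} = \frac{k_{off}^{B} + k_{off}^{P}}{k_{on}^{B} + k_{on}^{P}}
```

This steady-state concentration lies between the two ends' critical concentrations, so the barbed end grows while the pointed end shrinks at the same rate and the filament translates without changing length.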
Actin in cells
Intracellular actin cytoskeletal assembly and disassembly are tightly regulated by cell signaling mechanisms. Many signal transduction systems use the actin cytoskeleton as a scaffold, holding them at or near the inner face of the peripheral membrane. This subcellular location allows immediate responsiveness to transmembrane receptor action and the resulting cascade of signal-processing enzymes.
Because actin monomers must be recycled to sustain high rates of actin-based motility during chemotaxis, cell signalling is believed to activate cofilin, the actin-filament depolymerizing protein which binds to ADP-rich actin subunits nearest the filament's pointed-end and promotes filament fragmentation, with concomitant depolymerization in order to liberate actin monomers. In most animal cells, monomeric actin is bound to profilin and thymosin beta-4, both of which preferentially bind with one-to-one stoichiometry to ATP-containing monomers. Although thymosin beta-4 is strictly a monomer-sequestering protein, the behavior of profilin is far more complex. Profilin enhances the ability of monomers to assemble by stimulating the exchange of actin-bound ADP for solution-phase ATP to yield actin-ATP and ADP. Profilin is transferred to the leading edge by virtue of its PIP2 binding site, and it employs its poly-L-proline binding site to dock onto end-tracking proteins. Once bound, profilin-actin-ATP is loaded into the monomer-insertion site of actoclampin motors.
Another important component in filament formation is the Arp2/3 complex, which binds to the side of an already existing filament (or "mother filament"), where it nucleates the formation of a new daughter filament at a 70-degree angle relative to the mother filament, effecting a fan-like branched filament network.
Specialized unique actin cytoskeletal structures are found adjacent to the plasma membrane. Four remarkable examples include red blood cells, human embryonic kidney cells, neurons, and sperm cells. In red blood cells, a spectrin-actin hexagonal lattice is formed by interconnected short actin filaments. In human embryonic kidney cells, the cortical actin forms a scale-free fractal structure. First found in neuronal axons, actin forms periodic rings that are stabilized by spectrin and adducin; this ring structure was later found by He et al. (2016) to occur in almost every neuronal type and in glial cells, across seemingly every animal taxon, including Caenorhabditis elegans, Drosophila, Gallus gallus, and Mus musculus. In mammalian sperm, actin forms a helical structure in the midpiece, i.e., the first segment of the flagellum.
Associated proteins
In non-muscle cells, actin filaments are formed proximal to membrane surfaces. Their formation and turnover are regulated by many proteins, including:
Filament end-tracking protein (e.g., formins, VASP, N-WASP)
Filament-nucleator known as the Actin-Related Protein-2/3 (or Arp2/3) complex
Filament cross-linkers (e.g., α-actinin, fascin, and fimbrin)
Actin monomer-binding proteins profilin and thymosin β4
Filament barbed-end cappers such as Capping Protein and CapG, etc.
Filament-severing proteins like gelsolin.
Actin depolymerizing proteins such as ADF/cofilin.
The actin filament network in non-muscle cells is highly dynamic. The actin filament network is arranged with the barbed-end of each filament attached to the cell's peripheral membrane by means of clamped-filament elongation motors, the above-mentioned "actoclampins", formed from a filament barbed-end and a clamping protein (formins, VASP, Mena, WASP, and N-WASP). The primary substrate for these elongation motors is profilin-actin-ATP complex which is directly transferred to elongating filament ends. The pointed-end of each filament is oriented toward the cell's interior. In the case of lamellipodial growth, the Arp2/3 complex generates a branched network, and in filopodia a parallel array of filaments is formed.
Actin acts as a track for myosin motor motility
Myosin motors are intracellular ATP-dependent enzymes that bind to and move along actin filaments. Various classes of myosin motors have very different behaviors, including exerting tension in the cell and transporting cargo vesicles.
A proposed model – actoclampins track filament ends
One proposed model suggests the existence of actin filament barbed-end-tracking molecular motors termed "actoclampin". The proposed actoclampins generate the propulsive forces needed for actin-based motility of lamellipodia, filopodia, invadipodia, dendritic spines, intracellular vesicles, and motile processes in endocytosis, exocytosis, podosome formation, and phagocytosis. Actoclampin motors also propel such intracellular pathogens as Listeria monocytogenes, Shigella flexneri, Vaccinia and Rickettsia. When assembled under suitable conditions, these end-tracking molecular motors can also propel biomimetic particles.
The term actoclampin is derived from acto- to indicate the involvement of an actin filament, as in actomyosin, and clamp to indicate a clasping device used for strengthening flexible/moving objects and for securely fastening two or more components, followed by the suffix -in to indicate its protein origin. An actin filament end-tracking protein may thus be termed a clampin.
Dickinson and Purich recognized that prompt ATP hydrolysis could explain the forces achieved during actin-based motility. They proposed a simple mechanoenzymatic sequence known as the Lock, Load & Fire Model, in which an end-tracking protein remains tightly bound ("locked" or clamped) onto the end of one sub-filament of the double-stranded actin filament. After binding to Glycyl-Prolyl-Prolyl-Prolyl-Prolyl-Prolyl-registers on tracker proteins, Profilin-ATP-actin is delivered ("loaded") to the unclamped end of the other sub-filament, whereupon ATP within the already clamped terminal subunit of the other subfragment is hydrolyzed ("fired"), providing the energy needed to release that arm of the end-tracker, which then can bind another Profilin-ATP-actin to begin a new monomer-addition round.
Steps involved
The following steps describe one force-generating cycle of an actoclampin molecular motor:
The polymerization cofactor profilin and the ATP·actin combine to form a profilin-ATP-actin complex that then binds to the end-tracking unit
The cofactor and monomer are transferred to the barbed-end of an actin already clamped filament
The tracking unit and cofactor dissociate from the adjacent protofilament, in a step that can be facilitated by ATP hydrolysis energy to modulate the affinity of the cofactor and/or the tracking unit for the filament; and this mechanoenzymatic cycle is then repeated, starting this time on the other sub-filament growth site.
When operating with the benefit of ATP hydrolysis, AC motors generate per-filament forces of 8–9 pN, which is far greater than the per-filament limit of 1–2 pN for motors operating without ATP hydrolysis. The term actoclampin is generic and applies to all actin filament end-tracking molecular motors, irrespective of whether they are driven actively by an ATP-activated mechanism or passively.
Some actoclampins (e.g., those involving Ena/VASP proteins, WASP, and N-WASP) apparently require Arp2/3-mediated filament initiation to form the actin polymerization nucleus that is then "loaded" onto the end-tracker before processive motility can commence. To generate a new filament, Arp2/3 requires a "mother" filament, monomeric ATP-actin, and an activating domain from Listeria ActA or the VCA region of N-WASP. The Arp2/3 complex binds to the side of the mother filament, forming a Y-shaped branch having a 70-degree angle with respect to the longitudinal axis of the mother filament. Then upon activation by ActA or VCA, the Arp complex is believed to undergo a major conformational change, bringing its two actin-related protein subunits near enough to each other to generate a new filament gate. Whether ATP hydrolysis may be required for nucleation and/or Y-branch release is a matter under active investigation.
References
External links
Cell biology
Actin-based structures | Microfilament | [
"Biology"
] | 3,238 | [
"Cell biology"
] |
319,419 | https://en.wikipedia.org/wiki/Rendezvous%20%28Plan%209%29 | Rendezvous is a data synchronization mechanism in Plan 9 from Bell Labs. It is a system call that allows two processes to exchange a single datum while synchronizing.
The rendezvous call takes a tag and a value as its arguments. The tag is typically an address in memory shared by both processes. Calling rendezvous causes a process to sleep until a second rendezvous call with a matching tag occurs. Then, the values are exchanged and both processes are awakened.
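The call's behavior can be sketched with threads in Python (the real system call synchronizes processes inside the Plan 9 kernel; this toy class is only an emulation, and its names are invented here):

```python
import threading

class Rendezvous:
    """Toy emulation of Plan 9 rendezvous: the first caller with a given
    tag sleeps until a second caller arrives with the same tag, then the
    two exchange values."""
    def __init__(self):
        self._cond = threading.Condition()
        self._waiting = {}  # tag -> [first caller's value, reply slot]

    def rendezvous(self, tag, value):
        with self._cond:
            if tag in self._waiting:        # we are the second caller
                slot = self._waiting.pop(tag)
                other = slot[0]
                slot[1] = value             # hand our value to the sleeper
                self._cond.notify_all()
                return other
            slot = [value, None]            # we are the first caller: sleep
            self._waiting[tag] = slot
            self._cond.wait_for(lambda: slot[1] is not None)
            return slot[1]
```

(Exchanging None would confuse this sketch's sentinel; the real system call has no such restriction.)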
More complex synchronization mechanisms can be created from this primitive operation. See also mutual exclusion.
See also
Synchronous rendezvous
Communicating sequential processes
References
External links
Process Sleep and Wakeup on a Shared-memory Multiprocessor by Rob Pike, Dave Presotto, Ken Thompson and Gerard Holzmann.
Plan 9 from Bell Labs
Parallel computing
Inter-process communication | Rendezvous (Plan 9) | [
"Technology"
] | 170 | [
"Plan 9 from Bell Labs",
"Computing stubs",
"Computing platforms",
"Operating system stubs"
] |
319,483 | https://en.wikipedia.org/wiki/Hetch%20Hetchy | Hetch Hetchy is a valley, reservoir, and water system in California in the United States. The glacial Hetch Hetchy Valley lies in the northwestern part of Yosemite National Park and is drained by the Tuolumne River. For thousands of years before the arrival of settlers from the United States in the 1850s, the valley was inhabited by Native Americans who practiced subsistence hunting-gathering.
During the late 19th century, the valley was renowned for its natural beauty – often compared to that of Yosemite Valley – but also targeted for the development of water supply for irrigation and municipal interests. The controversy over damming Hetch Hetchy became mired in the political issues of the day. The law authorizing the dam passed Congress on December 7, 1913. In 1923, the O'Shaughnessy Dam was completed on the Tuolumne River, flooding the entire valley under the Hetch Hetchy Reservoir. The dam and reservoir are the centerpiece of the Hetch Hetchy Project, which in 1934 began to deliver water west to San Francisco and its client municipalities in the greater San Francisco Bay Area.
Geography
Before damming, the high granite formations produced a valley with an average depth of and a maximum depth of over ; the length of the valley was with a width ranging from . The valley floor consisted of roughly of meadows fringed by pine forest, through which meandered the Tuolumne River and numerous tributary streams. Kolana Rock, at , is a massive rock spire on the south side of the Hetch Hetchy Valley. Hetch Hetchy Dome, at , lies directly north of it. The locations of these two formations roughly correspond with those of Cathedral Rocks and El Capitan seen from Tunnel View in Yosemite Valley. A broad, low rocky outcrop situated between Kolana Rock and Hetch Hetchy Dome divided the former meadow in two distinct sections.
The valley is fed by the Tuolumne River, Falls Creek, Tiltill Creek, Rancheria Creek, and numerous smaller streams which collectively drain a watershed of . In its natural state, the valley floor was marshy and often flooded in the spring when snow melt in the high Sierra cascaded down the Tuolumne River and backed up behind the narrow gorge which is now spanned by O'Shaughnessy Dam. The entire valley is now flooded under an average of water behind the dam, although it occasionally reemerges in droughts, as it did in 1955, 1977, and 1991.
Upstream from the valley lies the Grand Canyon of the Tuolumne, while the smaller Poopenaut Valley is directly downstream from O'Shaughnessy Dam. The Hetch Hetchy Road drops into the valley at the dam, but all points east of there are roadless, and accessible only to hikers and equestrians. The O'Shaughnessy Dam is near Yosemite's western boundary, but the long, narrow, fingerlike reservoir stretches eastward for about .
Wapama Falls, at , and Tueeulala Falls, at – both among the tallest waterfalls in North America – are located in Hetch Hetchy Valley. Rancheria Falls is located farther southeast, on Rancheria Creek. Formerly, a "small but noisy" waterfall and natural pool on the Tuolumne River marked the upper entrance to Hetch Hetchy Valley; it was informally known as Tuolumne Fall (not to be confused with a similarly named waterfall several miles upriver near Tuolumne Meadows). The waterfall on the Tuolumne is now submerged under Hetch Hetchy Reservoir.
Geology
The Hetch Hetchy Valley began as a V-shaped river canyon cut out by the ancestral Tuolumne River. About one million years ago, the extensive Sherwin glaciation widened, deepened and straightened river valleys along the western slope of the Sierra Nevada, including Hetch Hetchy, Yosemite Valley, and Kings Canyon farther to the south. During the last glacial period, the Tioga Glacier formed from extensive icefields in the upper Tuolumne River watershed; between 110,000 and 10,000 years ago Hetch Hetchy Valley was sculpted into its present shape by repeated advance and retreat of the ice, which also removed extensive talus deposits that may have accumulated in the valley since the Sherwin period. At maximum extent, Tioga Glacier may have been long and up to thick, filling Hetch Hetchy Valley to the brim and spilling over the sides, carving out the present rugged plateau country to the north and southwest. When the glacier retreated for the final time, sediment-laden meltwater deposited thick layers of silt, forming the flat alluvial floodplain of the valley floor.
Compared with Yosemite Valley, the walls of Hetch Hetchy are smoother and rounder because it was glaciated to a greater extent. This is because the Tuolumne catchment basin above Hetch Hetchy is almost three times as large as the catchment area of the Merced River above Yosemite, allowing a greater volume of ice to form.
Flora and fauna
Hetch Hetchy is home to a diverse array of plants and animals. Gray pine, incense-cedar, and California black oak grow in abundance. Many examples of red-barked manzanita can be seen along the Hetch Hetchy Road. Spring and early summer bring wildflowers including lupine, wallflower, monkey flower, and buttercup. Seventeen species of bats inhabit the Hetch Hetchy area, including the largest North American bat, the western mastiff.
Before damming, the valley floor contained abundant stands of black oaks, live oak, Ponderosa pine, Douglas fir, and silver fir bordering the meadows, with alder, willow, poplar and dogwood in the riparian zone along the Tuolumne River. The valley's abundant plants provided nourishment for mule deer, black bears and bighorn sheep. Due to large cataracts on the Tuolumne River upstream, Hetch Hetchy Valley may have been in the uppermost range for native rainbow trout in the river.
Due to its abundant wetlands and stream pools, Hetch Hetchy was notorious among early travelers for becoming infested with mosquitoes in the summertime. Said San Francisco resident William Denman in 1918, "The first time I went into the Hetch Hetchy the mosquitoes were intolerable. They would light upon a man's blue shirt and turn it brown, and were voracious as mosquitoes would be."
History
Indigenous peoples
People have lived in Hetch Hetchy Valley for over 6,000 years. Native American cultures were prominent before the 1850s when the first settlers from the United States arrived in the Sierra Nevada. During summer, people of the Miwok and Paiute came to Hetch Hetchy from the Central Valley in the west and the Great Basin in the east. The valley provided an escape from the summer heat of the lowlands. They hunted, and gathered seeds and edible plants to furnish themselves winter food, trade items, and materials for art and ceremonial objects. Today, descendants of these people still use milkweed, deergrass, bracken fern, willow, and other plants for a variety of uses including baskets, medicines, and string.
Meadow plants unavailable in the lowlands were particularly valuable resources to these tribes. For thousands of years, Native Americans subjected the valley to controlled bushfires, which prevented forest from taking over the valley meadows. Periodic clearing of the valley provided ample space for the growth of the grasses and shrubs they relied on, as well as additional room for large game animals such as deer to browse. In the 19th century, the first white visitors to the valley did not realize that Hetch Hetchy's extensive meadows were the product of millennia of management by Native Americans; instead they believed "the valley was purely a product of ancient geological forces (or divine intervention) ... this was fundamental to its allure as a destination and subject."
The valley's name may be derived from a Miwok word earlier anglicized as hatchhatchie, which means "edible grasses" or "magpie". It is likely that the edible grass was blue dicks. Chief Tenaya of the Yosemite Valley's Ahwaneechee tribe claimed that Hetch Hetchy was Miwok for "Valley of the Two Trees", referring to a pair of yellow pines that once stood at the head of Hetch Hetchy. Miwok names are still used for features, including Tueeulala Fall, Wapama Fall, and Kolana Rock.
While its cousin Yosemite Valley to the south had permanent Miwok settlements, Hetch Hetchy was only seasonally inhabited. This was likely because of Hetch Hetchy's narrow outlet, which in years of heavy snowmelt created a bottleneck in the Tuolumne River and the subsequent flooding of the valley floor.
Exploration and early development
In the early 1850s, a mountain man by the name of Nathan Screech became the first non-Native American to enter the valley. Local legend attributes the modern name Hetch Hetchy to Screech's initial arrival in the valley, during which he observed the Native Americans "cooking a variety of grass covered with edible seeds", which they called "hatch hatchy" or "hatchhatchie". Screech reported that the valley was bitterly disputed between the "Pah Utah Indians" (Paiute) and "Big Creek Indians" (Miwok), and witnessed several fights in which the Paiute appeared to be the dominant tribe. About 1853, his brother, Joseph Screech (credited in some accounts for the original discovery of the valley) blazed the first trail from Big Oak Flat, a mining camp near present-day Lake Don Pedro, for northeast to Hetch Hetchy Valley.
During this time, the upper Tuolumne River, including Hetch Hetchy Valley, was visited by prospectors attracted by the California Gold Rush. Miners did not stay in the area for long, however, as richer deposits occurred further south along the Merced River and in the Big Oak Flat area. After the valley's native inhabitants were driven out by the newcomers, it was used by ranchers, many of whom were former miners, to graze livestock. Animals were principally driven along Joseph Screech's trail from Big Oak Flat to Hetch Hetchy. Its meadows provided abundant feed for "thousands of head of sheep and cattle that entered lean and lank in the spring, but left rolling fat and hardly able to negotiate the precipitous and difficult defiles out of the mountains in the fall."
In 1867, Charles F. Hoffman of the California Geological Survey conducted the first survey of the valley. Hoffman observed a meadow "well timbered and affording good grazing", and noted the valley had a milder climate than Yosemite Valley, hence the abundance of ponderosa pine and gray pine. The valley was slowly becoming known for its natural beauty, but it was never a popular tourist destination because of extremely poor access and the location of the famous Yosemite Valley just to the south. Those who did visit it were enchanted by its scenery, but encountered difficulties with the primitive conditions and, in summertime, swarms of mosquitoes. Albert Bierstadt, Charles Dorman Robinson and William Keith were known for their landscapes that drew tourists to the Hetch Hetchy Valley. Bierstadt described the valley as "smaller than the more famous valley ... but it presents many of the same features in his scenery and is quite as beautiful."
When Yosemite Valley became part of a state park in 1864, Hetch Hetchy received no such designation. As the grazing of livestock damaged native plants in the Hetch Hetchy Valley, mountaineer and naturalist John Muir pressed for the protection of both valleys under a single national park. Muir, who himself had briefly worked as a shepherd in Hetch Hetchy, was known for calling sheep "hoofed locusts" because of their environmental impact. Muir's friend Robert Underwood Johnson of the politically influential Century Magazine and several other prominent figures were inspired by Muir's work and helped to get Yosemite National Park established by October 1, 1890. However, ranchers who had previously owned land in the new park continued their use of Hetch Hetchy Valley – a "sheep-grazing free-for-all [that] threatened to denude the High Sierra meadows" – before disputes over state and private properties in respect to national park boundaries were finally settled in the early 1900s.
Interest in using the valley as a water source or reservoir dates back as far as the 1850s, when the Tuolumne Valley Water Company proposed developing water storage there for irrigation. By the 1880s, San Francisco was looking to Hetch Hetchy water as a fix for its outdated and unreliable water system. The city would repeatedly try to acquire water rights to Hetch Hetchy, including in 1901, 1903 and 1905, but was continually rebuffed because of conflicts with irrigation districts that had senior water rights on the Tuolumne River, and because of the valley's national park status.
Damming
In 1906, after a major earthquake and subsequent fire that devastated San Francisco, the inadequacy of the city's water system was made tragically clear.
San Francisco applied to the United States Department of the Interior to gain water rights to Hetch Hetchy, and in 1908 President Theodore Roosevelt's Secretary of the Interior, James R. Garfield, granted the city the rights to develop the Tuolumne River. This provoked a seven-year struggle with the environmental group the Sierra Club, led by John Muir. Muir observed:
Dam Hetch Hetchy! As well dam for water-tanks the people's cathedrals and churches, for no holier temple has ever been consecrated by the heart of man.
Proponents of the dam replied that out of multiple sites considered by San Francisco, Hetch Hetchy had the "perfect architecture for a reservoir", with pristine water, lack of development or private property, a steep-sided and flat-floored profile that would maximize the amount of water stored, and a narrow outlet ideal for placement of a dam. They claimed the valley was not unique and would be even more beautiful with a lake. Muir predicted that this lake would create an unsightly "bathtub ring" around its perimeter, caused by the water's destruction of lichen growth on the canyon walls, which would inevitably be visible at low lake levels.
Since the valley was within Yosemite National Park, an act of Congress was needed to authorize the project. The U.S. Congress passed and President Woodrow Wilson signed the Raker Act in 1913, which permitted the flooding of the valley under the conditions that power and water derived from the river could only be used for public interests. Ultimately, San Francisco sold hydropower from the dam to the Pacific Gas and Electric Company (PG&E), which led to decades of legal wrangling and controversy over terms in the Raker Act.
The controversy over Hetch Hetchy was in the context of other political scandals and controversies, especially prevalent in the Taft administration. The Great Alaskan Land Fraud and the Pinchot-Ballinger Controversy caused both Richard A. Ballinger and Gifford Pinchot to resign and be fired respectively. The openings in the Taft administration led to the eventual success of the Raker Act.
Work on the Hetch Hetchy Project began in 1914. The Hetch Hetchy Railroad was constructed to link the Sierra Railway with Hetch Hetchy Valley, allowing for direct rail shipment of construction materials from San Francisco to the dam site. Construction of O'Shaughnessy Dam began in 1919 and was finished in 1923, with the reservoir first filling in May of that year. The dam was then high; its present height of was achieved only later, in 1938. On October 28, 1934 – twenty years after the beginning of construction on the Hetch Hetchy project – a crowd of 20,000 San Franciscans gathered to celebrate the arrival of the first Hetch Hetchy water in the city.
The Early Intake (Lower Cherry) Powerhouse began commercial operation five years before the O'Shaughnessy Dam was completed. The first Moccasin Powerhouse in Moccasin, California began commercial operation in 1925 followed by the Holm Powerhouse in 1960 (the same month the Early Intake Powerhouse was taken out of service). In 1967 the Robert C. Kirkwood Powerhouse started commercial operation followed by a New Moccasin Powerhouse in 1969 when the Old Moccasin Powerhouse was taken out of service. Finally, in 1988, a third generator was added to the Kirkwood Powerhouse.
Hetch Hetchy Project
Hetch Hetchy Valley serves as the primary water source for the City and County of San Francisco and several surrounding municipalities in the greater San Francisco Bay Area. The dam and reservoir, combined with a series of aqueducts, tunnels, and hydroelectric plants as well as eight other storage dams, comprise a system known as the Hetch Hetchy Project, which provides 80% of the water supply for 2.6 million people. The project is operated by the San Francisco Public Utilities Commission. The city must pay a lease of $30,000 per year for the use of Hetch Hetchy, which sits on federal land. The aqueduct delivers an average of of water each year, or per day, to residents of San Francisco and San Mateo, Santa Clara and Alameda Counties.
As completed, O'Shaughnessy Dam is long, spanning the valley at its narrow outlet. The dam contains of concrete. The Hetch Hetchy Reservoir created by the dam has a capacity of , with a maximum area of and a maximum depth of . From Hetch Hetchy Reservoir, the water flows through the Canyon and Mountain Tunnels to Kirkwood and Moccasin Powerhouses, which have capacities of 124 and 110 megawatts, respectively. An additional hydroelectric system comprising Cherry Lake, Lake Eleanor and the Holm Powerhouse is also part of the Hetch Hetchy Project, adding another 169 megawatts of generating capacity. The entire system produces about 1.7 billion kilowatt hours per year, enough to meet 20% of San Francisco's electricity needs.
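Those figures imply a plausible capacity factor (a back-of-envelope check using only the numbers quoted above; actual output varies with hydrology and dispatch):

```latex
P_{\text{total}} = 124 + 110 + 169 = 403\ \text{MW}, \qquad
E_{\text{max}} = 403\ \text{MW} \times 8760\ \text{h} \approx 3.5\ \text{TWh/yr}, \qquad
\frac{1.7\ \text{TWh}}{3.5\ \text{TWh}} \approx 48\%
```

which is a reasonable figure for a storage-backed hydroelectric system.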
After passing through the powerhouses, Hetch Hetchy water flows into the Hetch Hetchy Aqueduct which travels across the Central Valley. Just before reaching the Bay Area, it passes through the Irvington tunnel near the city of Fremont, and the aqueduct splits into four pipelines at . These are called Bay Division Pipelines (BDPL) 1, 2, 3, and 4, with nominal pipeline diameters of 60, 66, 78, and 96 inches (1.5, 1.7, 2.0 and 2.4 m, respectively). All four pipelines cross the Hayward fault. Pipelines 1 and 2 cross the San Francisco Bay to the south of the Dumbarton Bridge, while pipelines 3 and 4 run to the south of the bay. In the Bay Area, Hetch Hetchy water is stored in local facilities including Calaveras Reservoir, Crystal Springs Reservoir, and San Antonio Reservoir. Pipelines 3 and 4 end at the Pulgas Water Temple, a small park that contains classical architectural elements which celebrate the water delivery.
Water from Hetch Hetchy is some of the cleanest municipal water in the United States; San Francisco is one of six U.S. cities not required by law to filter its tap water, although the water is disinfected by ozonation and, since 2011, exposure to UV. The water quality is high because of the unique geology of the upper Tuolumne River drainage basin, which consists mostly of bare granite; as a result, the rivers feeding Hetch Hetchy Reservoir have extremely low loads of sediments and nutrients. The watershed is also strictly protected, so swimming and boating are prohibited at the reservoir (although fishing is permitted at the reservoir and in the rivers which feed it), a measure which is considered unusual for US lakes outside the region. In 2018, the Department of the Interior of the Trump administration began to consider a proposal to allow limited boating on the Hetch Hetchy Reservoir for the first time, supported by the advocacy group Restore Hetch Hetchy which argued that "San Francisco received [Hetch Hetchy's] benefits long ago, but the American people have not."
Proposed restoration
Arguments for
The battle over Hetch Hetchy Valley continues today between those who wish to retain the dam and reservoir, and those who wish to drain the reservoir and return Hetch Hetchy Valley to its former state. Those in favor of dam removal have pointed out that many actions by San Francisco since 1913 have been in violation of the Raker Act, which explicitly stated that power and water from Hetch Hetchy could not be sold to private interests. Hydroelectric power generated from the Hetch Hetchy project is largely sold to Bay Area customers through a private power company, Pacific Gas & Electric (PG&E). San Francisco was able to accomplish this in 1925 by claiming it had run out of funds to extend the Hetch Hetchy transmission line all the way to the city. The terminus of the incomplete line was "conveniently located next to a PG&E substation", which connected to PG&E's private line which in turn bridged the gap to San Francisco. The city justified this as a temporary measure, but no attempt to follow through with completing the municipal grid was ever made. Peter Byrne of SF Weekly has stated that "the plain language of the Raker Act itself and experts who are familiar with the act (and have no stake in city politics) all agree: The city of San Francisco is not in violation of the Raker Act." Harold L. Ickes, Secretary of the Interior in the late 1930s, said there was a violation of the Raker Act, but he and the city reached an agreement in 1945. In 2015, Restore Hetch Hetchy filed a complaint arguing that the construction of the dam had violated a provision in the constitution of California about water use, but the lawsuit was rejected by an appeals court and later the California State Supreme Court.
Preservation groups including the Sierra Club and Restore Hetch Hetchy state that draining Hetch Hetchy would open the valley back up to recreation, a right that should be provided to the American people because the reservoir is within the legal boundaries of a national park. They acknowledge that a concerted effort would have to be made to control the introduction of wildlife and tourism back into the valley in order to prevent destabilization of the ecosystem, and that it might be decades or even centuries before the valley could be returned to natural conditions.
In 1987, the idea of razing the O'Shaughnessy Dam gained an adherent in Don Hodel, Secretary of the Department of the Interior under President Ronald Reagan. Hodel called for a study of the effect of tearing down the dam. The National Park Service concluded that two years after draining the valley, grasses would cover most of its floor and within 10 years, clumps of cone-bearing trees and some oaks would take root. Within 50 years, vegetative cover would be complete except for exposed rocky areas. In this unmanaged scenario, where nature is left to take hold in the valley, eventually a forest would grow, rather than the meadow being restored. However, the same NPS study also finds that with intensive management, an outcome in which "the entire valley would appear much as it did before construction of the reservoir" is feasible.
The dam would not have to be completely removed; rather, it would only be necessary to cut a hole through the base in order to drain the water and restore natural flows of the Tuolumne River. Most of the dam would remain in place, both to avoid the enormous costs of demolition and removal, and to serve as a monument for the workers who built it. The water storage provided at Hetch Hetchy could be transferred into Lake Don Pedro lower on the Tuolumne River by raising the New Don Pedro Dam . Water could be diverted into the Kirkwood and Moccasin Powerhouses using lower-impact diversion dams, providing power generation on a seasonal basis, and the increased height, and thus hydraulic head, at Don Pedro would also increase power generation there. Furthermore, the removal of O'Shaughnessy Dam would not require costly sediment control measures, as would be typical on most dam removal projects, because of the high quality of the Tuolumne River water – in the first 90 years since its construction, only around of sediment had been deposited in Hetch Hetchy Reservoir, much less than most other dams. A 2019 study commissioned by Restore Hetch Hetchy argued that draining the reservoir and equipping the valley with a tourism infrastructure comparable to that of Yosemite Valley (which receives around 100 times as many visitors annually as Hetch Hetchy's 44,000) could result in a "recreational value" of up to $178 million per year, or possibly an overall economic value of up to $100 billion.
Arguments against
Those opposed to dam removal state that demolishing O'Shaughnessy Dam would take away a valuable source of clean, renewable hydroelectric power from the Kirkwood and Moccasin powerhouses; even if measures such as seasonal water diversion into the powerhouses were employed, they would only make up for a fraction of the original power production. The remaining deficit would likely have to be replaced by polluting fossil fuel generation. The removal of the dam would be extremely costly, at least $3–10 billion, and the transport of the demolished material away from the dam site along the narrow, winding Hetch Hetchy Road would be a logistical nightmare with possible environmental impacts. Most importantly, San Francisco would lose its source of high-quality mountain water and would have to depend on lower-quality water from other reservoirs – which would require costly filtration and re-engineering of the aqueduct system – to meet its needs.
The economic wisdom of removing the dam has been frequently questioned. Some observers, such as Carl Pope (director of the Sierra Club), stated that Hodel had political motives in proposing the study. The imputed motive was to divide the environmental movement: to see residents of the strongly Democratic city of San Francisco coming out against an environmental issue. Dianne Feinstein, the mayor of San Francisco at the time, said in a Los Angeles Times story in 1987: "All this is for an expanded campground? ... It's dumb, dumb, dumb." Hodel, now retired, remains a strong proponent of restoring Hetch Hetchy Valley and Senator Feinstein remained strongly against restoration. The George W. Bush administration proposed allocating $7 million to studying the removal of the dam in the 2007 National Park Service budget. Dianne Feinstein opposed this allocation, saying, "I will do all I can to make sure it isn't included in the final bill. We're not going to remove this dam, and the funding is unnecessary."
Opponents of dam removal have pointed out that the flooding of the Hetch Hetchy Valley has also deterred the crowds that overrun other areas of Yosemite National Park. Indeed, Hetch Hetchy today remains the least visited developed area of the park. Karin Klein has described Yosemite Valley as "so crammed ... that it looks more like a ripstop ghetto than the site of a nature experience." However, she does support breaching the dam once it has reached the end of its lifespan, and not replacing it.
In November 2012, San Francisco voters soundly rejected Proposition F, which would have required the city to conduct an $8 million study on how the flooded valley could be drained and restored to its former state. The proposed study would also have been required to identify potential replacements for the water storage capacity and hydroelectric power production.
See also
Grand Canyon of the Tuolumne
Hetch Hetchy Railroad
Lake Vernon trail
List of dams and reservoirs in California
List of power stations in California
List of the tallest dams in the United States
List of lakes in California
List of largest reservoirs of California
The National Parks: America's Best Idea
Gifford Pinchot
San Francisco Public Utilities Commission
San Francisco Water Department
Timeline of environmental events
Tuolumne River
Yosemite National Park
Citations
General and cited references
Further reading
External links
Current Conditions, Hetch Hetchy Reservoir, California Department of Water Resources
San Francisco Public Utilities Commission: Hetch Hetchy Water and Power
United States Geological Survey
California Resources Agency Hetch Hetchy Restoration Study
Bay Area Water Supply and Conservation Agency on Hetch Hetchy dam
Geology of Yosemite National Park
Historic American Engineering Record in California
History of San Francisco
History of the Sierra Nevada (United States)
Interbasin transfer
Landforms of Tuolumne County, California
Landforms of Yosemite National Park
Tuolumne River | Hetch Hetchy | [
"Environmental_science"
] | 5,920 | [
"Hydrology",
"Interbasin transfer"
] |