Dataset columns (with observed value or length ranges):
id: int64, 39 to 79M
url: string, length 31 to 227
text: string, length 6 to 334k
source: string, length 1 to 150
categories: list, 1 to 6 items
token_count: int64, 3 to 71.8k
subcategories: list, 0 to 30 items
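Each record that follows is one row with these seven fields. As a minimal illustrative sketch, the first row below could be represented in Python as follows; the class name, the helper function, and the truncated text value are assumptions for demonstration only, not part of the dataset itself.

from dataclasses import dataclass, field
from typing import List

# Mirrors the column schema listed above; names and structure are illustrative,
# the dataset does not prescribe any particular Python representation.
@dataclass
class Record:
    id: int                   # int64 identifier
    url: str                  # source URL (e.g. a Wikipedia page)
    text: str                 # full article text
    source: str               # short source name, e.g. the article title
    categories: List[str]     # 1 to 6 top-level categories
    token_count: int          # number of tokens in the text field
    subcategories: List[str] = field(default_factory=list)  # 0 to 30 entries

def summarize(rec: Record) -> str:
    """Return a one-line summary of a record."""
    return f"{rec.id}: {rec.source} ({rec.token_count} tokens, {len(rec.categories)} categories)"

# Example built from the first record shown below (id 7,252); text is truncated here.
example = Record(
    id=7252,
    url="https://en.wikipedia.org/wiki/Cell%20cycle",
    text="The cell cycle, or cell-division cycle, ...",
    source="Cell cycle",
    categories=["Biology"],
    token_count=8161,
    subcategories=["Senescence", "Cellular senescence", "Cell cycle", "Cellular processes"],
)
print(summarize(example))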
7,252
https://en.wikipedia.org/wiki/Cell%20cycle
The cell cycle, or cell-division cycle, is the sequential series of events that take place in a cell that causes it to divide into two daughter cells. These events include the growth of the cell, duplication of its DNA (DNA replication) and some of its organelles, and subsequently the partitioning of its cytoplasm, chromosomes and other components into two daughter cells in a process called cell division. In eukaryotic cells (having a cell nucleus) including animal, plant, fungal, and protist cells, the cell cycle is divided into two main stages: interphase, and the M phase that includes mitosis and cytokinesis. During interphase, the cell grows, accumulating nutrients needed for mitosis, and replicates its DNA and some of its organelles. During the M phase, the replicated chromosomes, organelles, and cytoplasm separate into two new daughter cells. To ensure the proper replication of cellular components and division, there are control mechanisms known as cell cycle checkpoints after each of the key steps of the cycle that determine if the cell can progress to the next phase. In cells without nuclei the prokaryotes, bacteria and archaea, the cell cycle is divided into the B, C, and D periods. The B period extends from the end of cell division to the beginning of DNA replication. DNA replication occurs during the C period. The D period refers to the stage between the end of DNA replication and the splitting of the bacterial cell into two daughter cells. In single-celled organisms, a single cell-division cycle is how the organism reproduces to ensure its survival. In multicellular organisms such as plants and animals, a series of cell-division cycles is how the organism develops from a single-celled fertilized egg into a mature organism, and is also the process by which hair, skin, blood cells, and some internal organs are regenerated and healed (with possible exception of nerves; see nerve damage). After cell division, each of the daughter cells begin the interphase of a new cell cycle. Although the various stages of interphase are not usually morphologically distinguishable, each phase of the cell cycle has a distinct set of specialized biochemical processes that prepare the cell for initiation of the cell division. Phases The eukaryotic cell cycle consists of four distinct phases: G1 phase, S phase (synthesis), G2 phase (collectively known as interphase) and M phase (mitosis and cytokinesis). M phase is itself composed of two tightly coupled processes: mitosis, in which the cell's nucleus divides, and cytokinesis, in which the cell's cytoplasm and cell membrane divides forming two daughter cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence known as G0 phase or resting phase. G0 phase (quiescence) G0 is a resting phase where the cell has left the cycle and has stopped dividing. The cell cycle starts with this phase. Non-proliferative (non-dividing) cells in multicellular eukaryotes generally enter the quiescent G0 state from G1 and may remain quiescent for long periods of time, possibly indefinitely (as is often the case for neurons). This is very common for cells that are fully differentiated. Some cells enter the G0 phase semi-permanently and are considered post-mitotic, e.g., some liver, kidney, and stomach cells. Many cells do not enter G0 and continue to divide throughout an organism's life, e.g., epithelial cells. 
The word "post-mitotic" is sometimes used to refer to both quiescent and senescent cells. Cellular senescence occurs in response to DNA damage and external stress and usually constitutes an arrest in G1. Cellular senescence may make a cell's progeny nonviable; it is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis. Interphase Interphase represents the phase between two successive M phases. Interphase is a series of changes that takes place in a newly formed cell and its nucleus before it becomes capable of division again. It is also called preparatory phase or intermitosis. Typically interphase lasts for at least 91% of the total time required for the cell cycle. Interphase proceeds in three stages, G1, S, and G2, followed by the cycle of mitosis and cytokinesis. The cell's nuclear DNA contents are duplicated during S phase. G1 phase (First growth phase or Post mitotic gap phase) The first phase within interphase, from the end of the previous M phase until the beginning of DNA synthesis, is called G1 (G indicating gap). It is also called the growth phase. During this phase, the biosynthetic activities of the cell, which are considerably slowed down during M phase, resume at a high rate. The duration of G1 is highly variable, even among different cells of the same species. In this phase, the cell increases its supply of proteins, increases the number of organelles (such as mitochondria, ribosomes), and grows in size. In G1 phase, a cell has three options. To continue cell cycle and enter S phase Stop cell cycle and enter G0 phase for undergoing differentiation. Become arrested in G1 phase hence it may enter G0 phase or re-enter cell cycle. The deciding point is called check point (Restriction point). This check point is called the restriction point or START and is regulated by G1/S cyclins, which cause transition from G1 to S phase. Passage through the G1 check point commits the cell to division. S phase (DNA replication) The ensuing S phase starts when DNA synthesis commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome consists of two sister chromatids. Thus, during this phase, the amount of DNA in the cell has doubled, though the ploidy and number of chromosomes are unchanged. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is histone production, most of which occurs during the S phase. G2 phase (growth) G2 phase occurs after DNA replication and is a period of protein synthesis and rapid cell growth to prepare the cell for mitosis. During this phase microtubules begin to reorganize to form a spindle (preprophase). Before proceeding to mitotic phase, cells must be checked at the G2 checkpoint for any DNA damage within the chromosomes. The G2 checkpoint is mainly regulated by the tumor protein p53. If the DNA is damaged, p53 will either repair the DNA or trigger the apoptosis of the cell. If p53 is dysfunctional or mutated, cells with damaged DNA may continue through the cell cycle, leading to the development of cancer. Mitotic phase (chromosome separation) The relatively brief M phase consists of nuclear division (karyokinesis) and division of cytoplasm (cytokinesis). M phase is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. 
These phases are sequentially known as prophase, prometaphase, metaphase, anaphase, and telophase. Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. During the process of mitosis, the pairs of chromosomes condense and attach to microtubules that pull the sister chromatids to opposite sides of the cell. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animal cells undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus. Cytokinesis phase (separation of all cell components) Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Cytokinesis occurs differently in plant and animal cells. While the cell membrane forms a groove that gradually deepens to separate the cytoplasm in animal cells, a cell plate is formed to separate it in plant cells. The position of the cell plate is determined by the position of a preprophase band of microtubules and actin filaments. Mitosis and cytokinesis together define the division of the parent cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei in a process called endoreplication. This occurs most notably among the fungi and slime molds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can result in cell death through apoptosis or cause mutations that may lead to cancer. Regulation of eukaryotic cell cycle Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle. Role of cyclins and CDKs Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general, more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g. cdc25 or cdc20. Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. 
When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals. General mechanism of cyclin-CDK interaction Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. Results from a study of E2F transcriptional dynamics at the single-cell level argue that the role of G1 cyclin-CDK activities, in particular cyclin D-CDK4/6, is to tune the timing rather than the commitment of cell cycle entry. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed. Specific action of cyclin-CDK complexes Cyclin D is the first cyclin produced in cells that enter the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D levels stay low in resting cells that are not proliferating. Additionally, CDK4/6 and CDK2 are inactive because CDK4/6 are bound by INK4 family members (e.g., p16), limiting kinase activity, while CDK2 complexes are inhibited by the CIP/KIP proteins such as p21 and p27. When it is time for a cell to enter the cell cycle, triggered by a mitogenic stimulus, levels of cyclin D increase. In response to this trigger, cyclin D binds to existing CDK4/6, forming the active cyclin D-CDK4/6 complex. Cyclin D-CDK4/6 complexes in turn mono-phosphorylate the retinoblastoma susceptibility protein (Rb) to pRb. The un-phosphorylated Rb tumour suppressor functions to induce cell cycle exit and maintain G0 arrest (senescence). In the last few decades, a model has been widely accepted whereby pRB proteins are inactivated by cyclin D-Cdk4/6-mediated phosphorylation. 
Rb has 14+ potential phosphorylation sites. Cyclin D-Cdk4/6 progressively phosphorylates Rb to a hyperphosphorylated state, which triggers dissociation of pRB–E2F complexes, thereby inducing G1/S cell cycle gene expression and progression into S phase. Scientific observations from a study have shown that Rb is present in three types of isoforms: (1) un-phosphorylated Rb in the G0 state; (2) mono-phosphorylated Rb, also referred to as "hypo-phosphorylated" or "partially phosphorylated" Rb, in the early G1 state; and (3) inactive hyper-phosphorylated Rb in the late G1 state. In early G1 cells, mono-phosphorylated Rb exists as 14 different isoforms, each of which has a distinct E2F binding affinity. Rb has been found to associate with hundreds of different proteins, and the idea that different mono-phosphorylated Rb isoforms have different protein partners was very appealing. A later report confirmed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All of the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation. In general, the binding of pRb to E2F inhibits the E2F target gene expression of certain G1/S and S transition genes, including E-type cyclins. The partial phosphorylation of Rb de-represses this Rb-mediated suppression of E2F target gene expression and begins the expression of cyclin E. The molecular mechanism that causes the cell to switch to cyclin E activation is currently not known, but as cyclin E levels rise, the active cyclin E-CDK2 complex is formed, causing Rb to be inactivated by hyper-phosphorylation. Hyperphosphorylated Rb is completely dissociated from E2F, enabling further expression of the wide range of E2F target genes required for driving cells to proceed into S phase [1]. It has been identified that cyclin D-Cdk4/6 binds to a C-terminal alpha-helix region of Rb that is distinguishable only by cyclin D, not by the other cyclins (cyclin E, A and B). This observation, based on the structural analysis of Rb phosphorylation, supports the view that Rb is phosphorylated to different levels through multiple cyclin-Cdk complexes. This also makes feasible the current model of a simultaneous switch-like inactivation of all mono-phosphorylated Rb isoforms through one type of Rb hyper-phosphorylation mechanism. In addition, mutational analysis of the cyclin D-Cdk4/6-specific Rb C-terminal helix shows that disruption of cyclin D-Cdk4/6 binding to Rb prevents Rb phosphorylation, arrests cells in G1, and bolsters Rb's function as a tumor suppressor. This cyclin-Cdk-driven cell cycle transitional mechanism governs a cell committed to the cell cycle and allows cell proliferation. Cancerous cell growth is often accompanied by deregulation of cyclin D-Cdk4/6 activity. The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 to S phase (the G1/S transition, which in turn initiates the G2/M transition). 
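The ordering described above (mitogens raise cyclin D, cyclin D-CDK4/6 phosphorylates Rb, E2F is de-repressed, E2F drives cyclin E, and cyclin E-CDK2 then hyper-phosphorylates Rb to commit the cell to S phase) can be summarized as a small qualitative cascade. The sketch below is a deliberately simplified, toy illustration of that ordering, not a quantitative model; the threshold values and discrete-step structure are illustrative assumptions, not taken from the source.

# Toy threshold sketch of the G1/S cascade described above.
# All thresholds and the step structure are illustrative assumptions.

def g1_s_cascade(mitogen_signal: bool, steps: int = 6):
    """Trace a simplified cyclin D -> Rb -> E2F -> cyclin E cascade."""
    cyclin_d = 0.0      # rises when mitogens are present
    rb_phospho = 0.0    # degree of Rb phosphorylation (0 = unphosphorylated)
    e2f_active = False  # E2F target expression is de-repressed as Rb is phosphorylated
    cyclin_e = 0.0      # E2F target; with CDK2 it hyper-phosphorylates Rb
    history = []
    for step in range(steps):
        if mitogen_signal:
            cyclin_d = min(1.0, cyclin_d + 0.5)           # mitogens raise cyclin D
        rb_phospho = min(1.0, rb_phospho + 0.3 * cyclin_d + 0.5 * cyclin_e)
        e2f_active = rb_phospho > 0.3                     # partial phosphorylation de-represses E2F targets
        if e2f_active:
            cyclin_e = min(1.0, cyclin_e + 0.4)           # E2F drives cyclin E expression
        committed_to_s_phase = rb_phospho >= 1.0          # hyper-phosphorylated Rb: G1/S passed
        history.append({
            "step": step,
            "cyclin_d": round(cyclin_d, 2),
            "rb_phospho": round(rb_phospho, 2),
            "e2f_active": e2f_active,
            "cyclin_e": round(cyclin_e, 2),
            "s_phase_entry": committed_to_s_phase,
        })
    return history

if __name__ == "__main__":
    for state in g1_s_cascade(mitogen_signal=True):
        print(state)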
Cyclin B-cdk1 complex activation causes breakdown of nuclear envelope and initiation of prophase, and subsequently, its deactivation causes the cell to exit mitosis. A quantitative study of E2F transcriptional dynamics at the single-cell level by using engineered fluorescent reporter cells provided a quantitative framework for understanding the control logic of cell cycle entry, challenging the canonical textbook model. Genes that regulate the amplitude of E2F accumulation, such as Myc, determine the commitment in cell cycle and S phase entry. G1 cyclin-CDK activities are not the driver of cell cycle entry. Instead, they primarily tune the timing of E2F increase, thereby modulating the pace of cell cycle progression. Inhibitors Endogenous Two families of genes, the cip/kip (CDK interacting protein/Kinase inhibitory protein) family and the INK4a/ARF (Inhibitor of Kinase 4/Alternative Reading Frame) family, prevent the progression of the cell cycle. Because these genes are instrumental in prevention of tumor formation, they are known as tumor suppressors. The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to and inactivating cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF β), a growth inhibitor. The INK4a/ARF family includes p16INK4a, which binds to CDK4 and arrests the cell cycle in G1 phase, and p14ARF which prevents p53 degradation. Synthetic Synthetic inhibitors of Cdc25 could also be useful for the arrest of cell cycle and therefore be useful as antineoplastic and anticancer agents. Many human cancers possess the hyper-activated Cdk 4/6 activities. Given the observations of cyclin D-Cdk 4/6 functions, inhibition of Cdk 4/6 should result in preventing a malignant tumor from proliferating. Consequently, scientists have tried to invent the synthetic Cdk4/6 inhibitor as Cdk4/6 has been characterized to be a therapeutic target for anti-tumor effectiveness. Three Cdk4/6 inhibitors – palbociclib, ribociclib, and abemaciclib – currently received FDA approval for clinical use to treat advanced-stage or metastatic, hormone-receptor-positive (HR-positive, HR+), HER2-negative (HER2-) breast cancer. For example, palbociclib is an orally active CDK4/6 inhibitor which has demonstrated improved outcomes for ER-positive/HER2-negative advanced breast cancer. The main side effect is neutropenia which can be managed by dose reduction. Cdk4/6 targeted therapy will only treat cancer types where Rb is expressed. Cancer cells with loss of Rb have primary resistance to Cdk4/6 inhibitors. Transcriptional regulatory network Current evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified 800–1200 genes that change expression over the course of the cell cycle. They are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed. 
One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns have allowed the identification of transcription factors that drive phase-specific gene expression. The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression). Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. However, 833 of the genes assayed changed behavior between the wild type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, they are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control timing of target genes. While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA. DNA replication and DNA replication origin activity Analyses of synchronized cultures of Saccharomyces cerevisiae under conditions that prevent DNA replication initiation without delaying cell cycle progression showed that origin licensing decreases the expression of genes with origins near their 3' ends, revealing that downstream origins can regulate the expression of upstream genes. This confirms previous predictions from mathematical modeling of a global causal coordination between DNA replication origin activity and mRNA expression, and shows that mathematical modeling of DNA microarray data can be used to correctly predict previously unknown biological modes of regulation. Checkpoints Cell cycle checkpoints are used by the cell to monitor and regulate the progress of the cell cycle. Checkpoints prevent cell cycle progression at specific points, allowing verification of necessary phase processes and repair of DNA damage. The cell cannot proceed to the next phase until checkpoint requirements have been met. 
Checkpoints typically consist of a network of regulatory proteins that monitor and dictate the progression of the cell through the different stages of the cell cycle. It is estimated that in normal human cells about 1% of single-strand DNA damages are converted to about 50 endogenous DNA double-strand breaks per cell per cell cycle. Although such double-strand breaks are usually repaired with high fidelity, errors in their repair are considered to contribute significantly to the rate of cancer in humans. There are several checkpoints to ensure that damaged or incomplete DNA is not passed on to daughter cells. Three main checkpoints exist: the G1/S checkpoint, the G2/M checkpoint and the metaphase (mitotic) checkpoint. Another checkpoint is the G0 checkpoint, in which the cells are checked for maturity. If the cells fail to pass this checkpoint because they are not yet ready, they are prevented from dividing. The G1/S transition is a rate-limiting step in the cell cycle and is also known as the restriction point. This is where the cell checks whether it has enough raw materials to fully replicate its DNA (nucleotide bases, DNA synthase, chromatin, etc.). An unhealthy or malnourished cell will get stuck at this checkpoint. The G2/M checkpoint is where the cell ensures that it has enough cytoplasm and phospholipids for two daughter cells. But sometimes more importantly, it checks to see if it is the right time to replicate. There are some situations where many cells need to all replicate simultaneously (for example, a growing embryo should have a symmetric cell distribution until it reaches the mid-blastula transition). This is done by controlling the G2/M checkpoint. The metaphase checkpoint is a fairly minor checkpoint, in that once a cell is in metaphase, it has committed to undergoing mitosis. However, that is not to say it is unimportant. In this checkpoint, the cell checks to ensure that the spindle has formed and that all of the chromosomes are aligned at the spindle equator before anaphase begins. While these are the three "main" checkpoints, not all cells have to pass through each of these checkpoints in this order to replicate. Many types of cancer are caused by mutations that allow the cells to speed through the various checkpoints or even skip them altogether, going from S to M to S phase almost consecutively. Because these cells have lost their checkpoints, any DNA mutations that may have occurred are disregarded and passed on to the daughter cells. This is one reason why cancer cells have a tendency to acquire mutations exponentially. Aside from cancer cells, many fully differentiated cell types no longer replicate, so they leave the cell cycle and stay in G0 until their death, removing the need for cellular checkpoints. An alternative model of the cell cycle response to DNA damage has also been proposed, known as the postreplication checkpoint. Checkpoint regulation plays an important role in an organism's development. In sexual reproduction, when the sperm binds to the egg at fertilization, it releases signalling factors that notify the egg that it has been fertilized. Among other things, this induces the now fertilized oocyte to return from its previously dormant, G0, state back into the cell cycle and on to mitotic replication and division. p53 plays an important role in triggering the control mechanisms at both the G1/S and G2/M checkpoints. In addition to p53, checkpoint regulators are being heavily researched for their roles in cancer growth and proliferation. 
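As a purely schematic illustration of the gating logic described in this section (a cell cannot proceed to the next phase until the relevant checkpoint's requirements are met), the following sketch models the three main checkpoints as simple boolean conditions. The condition names paraphrase the prose above and are simplifying assumptions, not a biochemical model.

# Schematic sketch of checkpoint gating: progression halts at the first
# checkpoint whose requirements are not met.

CHECKPOINTS = [
    ("G1/S (restriction point)", ["raw_materials_for_dna_replication"]),
    ("G2/M", ["sufficient_cytoplasm", "dna_undamaged", "right_time_to_divide"]),
    ("metaphase (spindle)", ["spindle_formed", "chromosomes_aligned_at_equator"]),
]

def attempt_division(cell_state: dict) -> str:
    """Return the point at which a cell either arrests or completes division."""
    for name, requirements in CHECKPOINTS:
        unmet = [r for r in requirements if not cell_state.get(r, False)]
        if unmet:
            return f"arrested at {name} checkpoint (unmet: {', '.join(unmet)})"
    return "all checkpoints passed: anaphase and cytokinesis proceed"

if __name__ == "__main__":
    healthy = {
        "raw_materials_for_dna_replication": True,
        "sufficient_cytoplasm": True,
        "dna_undamaged": True,
        "right_time_to_divide": True,
        "spindle_formed": True,
        "chromosomes_aligned_at_equator": True,
    }
    damaged = dict(healthy, dna_undamaged=False)
    print(attempt_division(healthy))   # all checkpoints passed
    print(attempt_division(damaged))   # arrested at G2/M checkpoint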
Fluorescence imaging of the cell cycle Pioneering work by Atsushi Miyawaki and coworkers developed the fluorescent ubiquitination-based cell cycle indicator (FUCCI), which enables fluorescence imaging of the cell cycle. Originally, a green fluorescent protein, mAG, was fused to hGem(1/110) and an orange fluorescent protein (mKO2) was fused to hCdt1(30/120). Note, these fusions are fragments that contain a nuclear localization signal and ubiquitination sites for degradation, but are not functional proteins. The green fluorescent protein is made during the S, G2, or M phase and degraded during the G0 or G1 phase, while the orange fluorescent protein is made during the G0 or G1 phase and destroyed during the S, G2, or M phase. A far-red and near-infrared FUCCI was developed using a cyanobacteria-derived fluorescent protein (smURFP) and a bacteriophytochrome-derived fluorescent protein (movie found at this link). Several modifications have been made to the original FUCCI system to improve its usability in several in vitro systems and model organisms. These advancements have increased the sensitivity and accuracy of cell cycle phase detection, enabling more precise assessments of cellular proliferation Role in tumor formation A disregulation of the cell cycle components may lead to tumor formation. As mentioned above, when some genes like the cell cycle inhibitors, RB, p53 etc. mutate, they may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of cell cycle in tumor cells is equal to or longer than that of normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number as the number of cells that die by apoptosis or senescence remains the same. The cells which are actively undergoing cell cycle are targeted in cancer therapy as the DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This fact is made use of in cancer treatment; by a process known as debulking, a significant mass of the tumor is removed which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors etc.). Radiation or chemotherapy following the debulking procedure kills these cells which have newly entered the cell cycle. The fastest cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle. M and S do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S phase. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural substances that protect cells from radiation damage and tend to be at their highest levels in S and at their lowest near mitosis. Homologous recombination (HR) is an accurate process for repairing DNA double-strand breaks. HR is nearly absent in G1 phase, is most active in S phase, and declines in G2/M. 
Non-homologous end joining, a less accurate and more mutagenic process for repairing double strand breaks, is active throughout the cell cycle. Cell cycle evolution Evolution of the genome The cell cycle must duplicate all cellular constituents and equally partition them into two daughter cells. Many constituents, such as proteins and ribosomes, are produced continuously throughout the cell cycle (except during M-phase). However, the chromosomes and other associated elements like MTOCs, are duplicated just once during the cell cycle. A central component of the cell cycle is its ability to coordinate the continuous and periodic duplications of different cellular elements, which evolved with the formation of the genome. The pre-cellular environment contained functional and self-replicating RNAs. All RNA concentrations depended on the concentrations of other RNAs that might be helping or hindering the gathering of resources. In this environment, growth was simply the continuous production of RNAs. These pre-cellular structures would have had to contend with parasitic RNAs, issues of inheritance, and copy-number control of specific RNAs. Partitioning "genomic" RNA from "functional" RNA helped solve these problems. The fusion of multiple RNAs into a genome gave a template from which functional RNAs were cleaved. Now, parasitic RNAs would have to incorporate themselves into the genome, a much greater barrier, in order to survive. Controlling the copy number of genomic RNA also allowed RNA concentration to be determined through synthesis rates and RNA half-lives, instead of competition. Separating the duplication of genomic RNAs from the generation of functional RNAs allowed for much greater duplication fidelity of genomic RNAs without compromising the production of functional RNAs. Finally, the replacement of genomic RNA with DNA, which is a more stable molecule, allowed for larger genomes. The transition from self-catalysis enzyme synthesis to genome-directed enzyme synthesis was a critical step in cell evolution, and had lasting implications on the cell cycle, which must regulate functional synthesis and genomic duplication in very different ways. Cyclin-dependent kinase and cyclin evolution Cell-cycle progression is controlled by the oscillating concentrations of different cyclins and the resulting molecular interactions from the various cyclin-dependent kinases (CDKs). In yeast, just one CDK (Cdc28 in S. cerevisiae and Cdc2 in S. pombe) controls the cell cycle. However, in animals, whole families of CDKs have evolved. Cdk1 controls entry to mitosis and Cdk2, Cdk4, and Cdk6 regulate entry into S phase. Despite the evolution of the CDK family in animals, these proteins have related or redundant functions. For example, cdk2 cdk4 cdk6 triple knockout mice cells can still progress through the basic cell cycle. cdk1 knockouts are lethal, which suggests an ancestral CDK1-type kinase ultimately controlling the cell cycle. Arabidopsis thaliana has a Cdk1 homolog called CDKA;1, however cdka;1 A. thaliana mutants are still viable, running counter to the opisthokont pattern of CDK1-type kinases as essential regulators controlling the cell cycle. Plants also have a unique group of B-type CDKs, whose functions may range from development-specific functions to major players in mitotic regulation. G1/S checkpoint evolution The G1/S checkpoint is the point at which the cell commits to division through the cell cycle. Complex regulatory networks lead to the G1/S transition decision. 
Across opisthokonts, there are both highly diverged protein sequences as well as strikingly similar network topologies. Entry into S-phase in both yeast and animals is controlled by the levels of two opposing regulators. The networks regulating these transcription factors are double-negative feedback loops and positive feedback loops in both yeast and animals. Additional regulation of the regulatory network for the G1/S checkpoint in yeast and animals includes the phosphorylation/de-phosphorylation of CDK-cyclin complexes. The sum of these regulatory networks creates a hysteretic and bistable scheme, despite the specific proteins being highly diverged. For yeast, Whi5 must be suppressed by Cln3 phosphorylation for SBF to be expressed, while in animals Rb must be suppressed by the Cdk4/6-cyclin D complex for E2F to be expressed. Both Rb and Whi5 inhibit transcript through the recruitment of histone deacetylase proteins to promoters. Both proteins additionally have multiple CDK phosphorylation sites through which they are inhibited. However, these proteins share no sequence similarity. Studies in A. thaliana extend our knowledge of the G1/S transition across eukaryotes as a whole. Plants also share a number of conserved network features with opisthokonts, and many plant regulators have direct animal homologs. For example, plants also need to suppress Rb for E2F translation in the network. These conserved elements of the plant and animal cell cycles may be ancestral in eukaryotes. While yeast share a conserved network topology with plants and animals, the highly diverged nature of yeast regulators suggests possible rapid evolution along the yeast lineage. See also Cellular model Eukaryotic DNA replication Mitotic catastrophe Origin recognition complex Retinoblastoma protein Synchronous culture – synchronization of cell cultures Wee1 References Further reading External links David Morgan's Seminar: Controlling the Cell Cycle The cell cycle & Cell death Transcriptional program of the cell cycle: high-resolution timing Cell cycle and metabolic cycle regulated transcription in yeast Cell Cycle Animation 1Lec.com Cell Cycle Fucci:Using GFP to visualize the cell-cycle Science Creative Quarterly's overview of the cell cycle KEGG – Human Cell Cycle Cellular senescence
Cell cycle
[ "Biology" ]
8,161
[ "Senescence", "Cellular senescence", "Cell cycle", "Cellular processes" ]
7,262
https://en.wikipedia.org/wiki/CORAL
CORAL, short for Computer On-line Real-time Applications Language is a programming language originally developed in 1964 at the Royal Radar Establishment (RRE), Malvern, Worcestershire, in the United Kingdom. The R was originally for "radar", not "real-time". It was influenced primarily by JOVIAL, and thus ALGOL, but is not a subset of either. The most widely-known version, CORAL 66, was subsequently developed by I. F. Currie and M. Griffiths under the auspices of the Inter-Establishment Committee for Computer Applications (IECCA). Its official definition, edited by Woodward, Wetherall, and Gorman, was first published in 1970. In 1971, CORAL was selected by the Ministry of Defence as the language for future military applications and to support this, a standardization program was introduced to ensure CORAL compilers met the specifications. This process was later adopted by the US Department of Defense while defining Ada. Overview Coral 66 is a general-purpose programming language based on ALGOL 60, with some features from Coral 64, JOVIAL, and Fortran. It includes structured record types (as in Pascal) and supports the packing of data into limited storage (also as in Pascal). Like Edinburgh IMP it allows inline (embedded) assembly language, and also offers good runtime checking and diagnostics. It is designed for real-time computing and embedded system applications, and for use on computers with limited processing power, including those limited to fixed-point arithmetic and those without support for dynamic storage allocation. The language was an inter-service standard for British military programming, and was also widely adopted for civil purposes in the British control and automation industry. It was used to write software for both the Ferranti and General Electric Company (GEC) computers from 1971 onwards. Implementations also exist for the Interdata 8/32, PDP-11, VAX and Alpha platforms and HPE Integrity Servers; for the Honeywell, and for the Computer Technology Limited (CTL, later ITL) Modular-1; and for SPARC running Solaris, and Intel running Linux. Queen Elizabeth II sent the first email from a head of state from the Royal Signals and Radar Establishment over the ARPANET on March 26, 1976. The message read "This message to all ARPANET users announces the availability on ARPANET of the Coral 66 compiler provided by the GEC 4080 computer at the Royal Signals and Radar Establishment, Malvern, England, ... Coral 66 is the standard real-time high level language adopted by the Ministry of Defence." As Coral was aimed at a variety of real-time work, rather than general office data processing, there was no standardised equivalent to a stdio library. IECCA recommended a primitive input/output (I/O) package to accompany any compiler (in a document titled Input/Output of Character data in Coral 66 Utility Programs). Most implementers avoided this by producing Coral interfaces to extant Fortran and, later, C libraries. CORAL's most significant contribution to computing may have been enforcing quality control in commercial compilers. To have a CORAL compiler approved by IECCA, and thus allowing a compiler to be marketed as a CORAL 66 compiler, the candidate compiler had to compile and execute a standard suite of 25 test programs and 6 benchmark programs. The process was part of the British Standard (BS) 5905 approval process. This methodology was observed and adapted later by the United States Department of Defense for the certification of Ada compilers. 
Source code for a Coral 66 compiler (written in BCPL) has been recovered and the Official Definition of Coral 66 document by Her Majesty's Stationery Office (HMSO) has been scanned; the Ministry of Defence patent office has issued a licence to the Edinburgh Computer History project to allow them to put both the code and the language reference online for non-commercial use. Variants A variant of Coral 66 named PO-CORAL was developed during the late 1970s to early 1980s by the British General Post Office (GPO), together with GEC, STC and Plessey, for use on the System X digital telephone exchange control computers. This was later renamed BT-CORAL when British Telecom was spun off from the Post Office. Unique features of this language were the focus on real-time execution, message processing, limits on statement execution between waiting for input, and a prohibition on recursion to remove the need for a stack. References External links CORAL 66 test program extracted from the Test Responder report CORAL 66 benchmarks BS5905 CORAL 66 Standard DEF STAN 05-47 PDP-11 CORAL/ASM interfacing library ECCE editor script to translate CORAL 66 into Edinburgh IMP History of computing in the United Kingdom Procedural programming languages Programming languages created in 1964
CORAL
[ "Technology" ]
972
[ "History of computing", "History of computing in the United Kingdom" ]
7,284
https://en.wikipedia.org/wiki/Centromere
The centromere links a pair of sister chromatids together during cell division. This constricted region of chromosome connects the sister chromatids, creating a short arm (p) and a long arm (q) on the chromatids. During mitosis, spindle fibers attach to the centromere via the kinetochore. The physical role of the centromere is to act as the site of assembly of the kinetochores – a highly complex multiprotein structure that is responsible for the actual events of chromosome segregation – i.e. binding microtubules and signaling to the cell cycle machinery when all chromosomes have adopted correct attachments to the spindle, so that it is safe for cell division to proceed to completion and for cells to enter anaphase. There are, broadly speaking, two types of centromeres. "Point centromeres" bind to specific proteins that recognize particular DNA sequences with high efficiency. Any piece of DNA with the point centromere DNA sequence on it will typically form a centromere if present in the appropriate species. The best characterized point centromeres are those of the budding yeast, Saccharomyces cerevisiae. "Regional centromeres" is the term coined to describe most centromeres, which typically form on regions of preferred DNA sequence, but which can form on other DNA sequences as well. The signal for formation of a regional centromere appears to be epigenetic. Most organisms, ranging from the fission yeast Schizosaccharomyces pombe to humans, have regional centromeres. Regarding mitotic chromosome structure, centromeres represent a constricted region of the chromosome (often referred to as the primary constriction) where two identical sister chromatids are most closely in contact. When cells enter mitosis, the sister chromatids (the two copies of each chromosomal DNA molecule resulting from DNA replication in chromatin form) are linked along their length by the action of the cohesin complex. It is now believed that this complex is mostly released from chromosome arms during prophase, so that by the time the chromosomes line up at the mid-plane of the mitotic spindle (also known as the metaphase plate), the last place where they are linked with one another is in the chromatin in and around the centromere. Position In humans, centromere positions define the chromosomal karyotype, in which each chromosome has two arms, p (the shorter of the two) and q (the longer). The short arm 'p' is reportedly named for the French word "petit" meaning 'small'. The position of the centromere relative to any particular linear chromosome is used to classify chromosomes as metacentric, submetacentric, acrocentric, telocentric, or holocentric. Metacentric Metacentric means that the centromere is positioned midway between the chromosome ends, resulting in the arms being approximately equal in length. When the centromeres are metacentric, the chromosomes appear to be "x-shaped." Submetacentric Submetacentric means that the centromere is positioned below the middle, with one chromosome arm shorter than the other, often resulting in an L shape. Acrocentric An acrocentric chromosome's centromere is situated so that one of the chromosome arms is much shorter than the other. The "acro-" in acrocentric refers to the Greek word for "peak." The human genome has six acrocentric chromosomes, including five autosomal chromosomes (13, 14, 15, 21, 22) and the Y chromosome. Short acrocentric p-arms contain little genetic material and can be translocated without significant harm, as in a balanced Robertsonian translocation. 
In addition to some protein coding genes, human acrocentric p-arms also contain nucleolus organizer regions (NORs), from which ribosomal RNA is transcribed. However, a proportion of acrocentric p-arms in cell lines and tissues from normal human donors do not contain detectable NORs. The domestic horse genome includes one metacentric chromosome that is homologous to two acrocentric chromosomes in the conspecific but undomesticated Przewalski's horse. This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2). Many diseases that result from unbalanced translocations involve acrocentric chromosomes more frequently than non-acrocentric chromosomes. Acrocentric chromosomes are usually located in and around the nucleolus. As a result, these chromosomes tend to be less densely packed than chromosomes in the nuclear periphery. Consistently, chromosomal regions that are less densely packed are also more prone to chromosomal translocations in cancers. Telocentric Telocentric chromosomes have a centromere at one end of the chromosome and therefore exhibit only one arm at the cytological (microscopic) level. They are not present in humans but can form through cellular chromosomal errors. Telocentric chromosomes occur naturally in many species, such as the house mouse, in which all chromosomes except the Y are telocentric. Subtelocentric Subtelocentric chromosomes' centromeres are located between the middle and the end of the chromosomes, but reside closer to the end of the chromosomes. Centromere types Acentric An acentric chromosome is a fragment of a chromosome that lacks a centromere. Since centromeres are the attachment point for spindle fibers in cell division, acentric fragments are not evenly distributed to daughter cells during cell division. As a result, a daughter cell will lack the acentric fragment and deleterious consequences could occur. Chromosome-breaking events can also generate acentric chromosomes or acentric fragments. Dicentric A dicentric chromosome is an abnormal chromosome with two centromeres, which can be unstable through cell divisions. It can form through translocation between, or fusion of, two chromosome segments, each with a centromere. Some rearrangements produce both dicentric chromosomes and acentric fragments which cannot attach to spindles at mitosis. The formation of dicentric chromosomes has been attributed to genetic processes, such as Robertsonian translocation and paracentric inversion. Dicentric chromosomes can have a variety of fates, including mitotic stability. In some cases, their stability comes from inactivation of one of the two centromeres to make a functionally monocentric chromosome capable of normal transmission to daughter cells during cell division. For example, human chromosome 2, which is believed to be the result of a Robertsonian translocation at some point in the evolution between the great apes and Homo, has a second, vestigial centromere near the middle of its long arm. Monocentric A monocentric chromosome is a chromosome that has only one centromere and forms a narrow constriction. 
Monocentric centromeres are the most common structure, occurring on highly repetitive DNA in plants and animals. Holocentric Unlike monocentric chromosomes, holocentric chromosomes have no distinct primary constriction when viewed at mitosis. Instead, spindle fibers attach along almost the entire (Greek: holo-) length of the chromosome. In holocentric chromosomes, centromeric proteins such as CENPA (CenH3) are spread over the whole chromosome. The nematode Caenorhabditis elegans is a well-known example of an organism with holocentric chromosomes, but this type of centromere can be found in various species, plants and animals, across eukaryotes. Holocentromeres are actually composed of multiple distributed centromere units that form a line-like structure along the chromosomes during mitosis. Alternative or nonconventional strategies are deployed at meiosis to achieve the homologous chromosome pairing and segregation needed to produce viable gametes or gametophytes for sexual reproduction. Different types of holocentromeres exist in different species, namely with or without centromeric repetitive DNA sequences and with or without CenH3. Holocentricity has evolved at least 13 times independently in various green algae, protozoans, invertebrates, and different plant families. Contrary to monocentric species, where acentric fragments usually become lost during cell division, the breakage of holocentric chromosomes creates fragments with normal spindle fiber attachment sites. Because of this, organisms with holocentric chromosomes can evolve karyotype variation more rapidly, being able to heal fragmented chromosomes through subsequent addition of telomere caps at the sites of breakage. Polycentric Polycentric chromosomes have several kinetochore clusters, i.e. centromeres. The term overlaps partially with "holocentric", but "polycentric" is clearly preferred when discussing defectively formed monocentric chromosomes. There is some actual ambiguity as well, as there is no clear line dividing the transition from kinetochores covering the whole chromosome to distinct clusters. In other words, the difference between "the whole chromosome is a centromere" and "the chromosome has no centromere" is hazy and usage varies. Beyond "polycentricity" being used more about defects, there is no clear preference in other topics such as evolutionary origin or kinetochore distribution and detailed structure (e.g. as seen in tagging or genome assembly analysis). Even clearly distinct clusters of kinetochore proteins do not necessarily produce more than one constriction: "metapolycentric" chromosomes feature one elongated constriction of the chromosome, joining a longer segment which is still visibly shorter than the chromatids. Metapolycentric chromosomes may be a step in the emergence and suppression of centromere drive, a type of meiotic drive that disrupts parity by monocentric centromeres growing additional kinetochore proteins to gain an advantage during meiosis. Human chromosomes Based on the micrographic characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite, the human chromosomes are classified into the following groups: Sequence There are two types of centromeres. In regional centromeres, DNA sequences contribute to but do not define function. Regional centromeres contain large amounts of DNA and are often packaged into heterochromatin. In most eukaryotes, the centromere's DNA sequence consists of large arrays of repetitive DNA (e.g. 
satellite DNA) where the sequence within individual repeat elements is similar but not identical. In humans, the primary centromeric repeat unit is called α-satellite (or alphoid), although a number of other sequence types are found in this region. Centromere satellites are hypothesized to evolve by a process called layered expansion. They evolve rapidly between species, and analyses in wild mice show that satellite copy number and heterogeneity relates to population origins and subspecies. Additionally, satellite sequences may be affected by inbreeding. Point centromeres are smaller and more compact. DNA sequences are both necessary and sufficient to specify centromere identity and function in organisms with point centromeres. In budding yeasts, the centromere region is relatively small (about 125 bp DNA) and contains two highly conserved DNA sequences that serve as binding sites for essential kinetochore proteins. Inheritance Since centromeric DNA sequence is not the key determinant of centromeric identity in metazoans, it is thought that epigenetic inheritance plays a major role in specifying the centromere. The daughter chromosomes will assemble centromeres in the same place as the parent chromosome, independent of sequence. It has been proposed that histone H3 variant CENP-A (Centromere Protein A) is the epigenetic mark of the centromere. The question arises whether there must be still some original way in which the centromere is specified, even if it is subsequently propagated epigenetically. If the centromere is inherited epigenetically from one generation to the next, the problem is pushed back to the origin of the first metazoans. On the other hand, thanks to comparisons of the centromeres in the X chromosomes, epigenetic and structural variations have been seen in these regions. In addition, a recent assembly of the human genome has detected a possible mechanism of how pericentromeric and centromeric structures evolve, through a layered expansion model for αSat sequences. This model proposes that different αSat sequence repeats emerge periodically and expand within an active vector, displacing old sequences, and becoming the site of kinetochore assembly. The αSat can originate from the same, or from different vectors. As this process is repeated over time, the layers that flank the active centromere shrink and deteriorate. This process raises questions about the relationship between this dynamic evolutionary process and the position of the centromere. Structure The centromeric DNA is normally in a heterochromatin state, which is essential for the recruitment of the cohesin complex that mediates sister chromatid cohesion after DNA replication as well as coordinating sister chromatid separation during anaphase. In this chromatin, the normal histone H3 is replaced with a centromere-specific variant, CENP-A in humans. The presence of CENP-A is believed to be important for the assembly of the kinetochore on the centromere. CENP-C has been shown to localise almost exclusively to these regions of CENP-A associated chromatin. In human cells, the histones are found to be most enriched for H4K20me3 and H3K9me3 which are known heterochromatic modifications. In Drosophila, Islands of retroelements are major components of the centromeres. In the yeast Schizosaccharomyces pombe (and probably in other eukaryotes), the formation of centromeric heterochromatin is connected to RNAi. 
In nematodes such as Caenorhabditis elegans, some plants, and the insect orders Lepidoptera and Hemiptera, chromosomes are "holocentric", indicating that there is not a primary site of microtubule attachments or a primary constriction, and a "diffuse" kinetochore assembles along the entire length of the chromosome. Centromeric aberrations In rare cases, neocentromeres can form at new sites on a chromosome as a result of a repositioning of the centromere. This phenomenon is most well known from human clinical studies and there are currently over 90 known human neocentromeres identified on 20 different chromosomes. The formation of a neocentromere must be coupled with the inactivation of the previous centromere, since chromosomes with two functional centromeres (Dicentric chromosome) will result in chromosome breakage during mitosis. In some unusual cases human neocentromeres have been observed to form spontaneously on fragmented chromosomes. Some of these new positions were originally euchromatic and lack alpha satellite DNA altogether. Neocentromeres lack the repetitive structure seen in normal centromeres which suggest that centromere formation is mainly controlled epigenetically. Over time a neocentromere can accumulate repetitive elements and mature into what is known as an evolutionary new centromere. There are several well known examples in primate chromosomes where the centromere position is different from the human centromere of the same chromosome and is thought to be evolutionary new centromeres. Centromere repositioning and the formation of evolutionary new centromeres has been suggested to be a mechanism of speciation. Centromere proteins are also the autoantigenic target for some anti-nuclear antibodies, such as anti-centromere antibodies. Dysfunction and disease It has been known that centromere misregulation contributes to mis-segregation of chromosomes, which is strongly related to cancer and miscarriage. Notably, overexpression of many centromere genes have been linked to cancer malignant phenotypes. Overexpression of these centromere genes can increase genomic instability in cancers. Elevated genomic instability on one hand relates to malignant phenotypes; on the other hand, it makes the tumor cells more vulnerable to specific adjuvant therapies such as certain chemotherapies and radiotherapy. Instability of centromere repetitive DNA was recently shown in cancer and aging. Repair of centromeric DNA When DNA breaks occur at centromeres in the G1 phase of the cell cycle, the cells are able to recruit the homologous recombinational repair machinery to the damaged site, even in the absence of a sister chromatid. It appears that homologous recombinational repair can occur at centromeric breaks throughout the cell cycle in order to prevent the activation of inaccurate mutagenic DNA repair pathways and to preserve centromeric integrity. Etymology and pronunciation The word centromere () uses combining forms of centro- and -mere, yielding "central part", describing the centromere's location at the center of the chromosome. See also Telomere Chromatid Diploid Monopolin References Further reading External links Chromosomes DNA replication
Centromere
[ "Biology" ]
3,624
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
7,294
https://en.wikipedia.org/wiki/Cartography
Cartography (from Greek chartēs 'papyrus, sheet of paper, map' and graphein 'to write') is the study and practice of making and using maps. Combining science, aesthetics and technique, cartography builds on the premise that reality (or an imagined reality) can be modeled in ways that communicate spatial information effectively. The fundamental objectives of traditional cartography are to: Set the map's agenda and select traits of the object to be mapped. This is the concern of map editing. Traits may be physical, such as roads or land masses, or may be abstract, such as toponyms or political boundaries. Represent the terrain of the mapped object on flat media. This is the concern of map projections. Eliminate the mapped object's characteristics that are irrelevant to the map's purpose. This is the concern of generalization. Reduce the complexity of the characteristics that will be mapped. This is also the concern of generalization. Orchestrate the elements of the map to best convey its message to its audience. This is the concern of map design. Modern cartography constitutes many theoretical and practical foundations of geographic information systems (GIS) and geographic information science (GISc). History Ancient times What is the earliest known map is a matter of some debate, both because the term "map" is not well-defined and because some artifacts that might be maps might actually be something else. A wall painting that might depict the ancient Anatolian city of Çatalhöyük (previously known as Catal Huyuk or Çatal Hüyük) has been dated to the late 7th millennium BCE. Among the prehistoric alpine rock carvings of Mount Bego (France) and Valcamonica (Italy), dated to the 4th millennium BCE, geometric patterns consisting of dotted rectangles and lines are widely interpreted in archaeological literature as depicting cultivated plots. Other known maps of the ancient world include the Minoan "House of the Admiral" wall painting, showing a seaside community in an oblique perspective, and an engraved map of the holy Babylonian city of Nippur, from the Kassite period (14th–12th centuries BCE). The oldest surviving world maps are from 9th century BCE Babylonia. One shows Babylon on the Euphrates, surrounded by Assyria, Urartu and several cities, all, in turn, surrounded by a "bitter river" (Oceanus). Another depicts Babylon as being north of the center of the world. The ancient Greeks and Romans created maps from the time of Anaximander in the 6th century BCE. In the 2nd century CE, Ptolemy wrote his treatise on cartography, Geographia. This contained Ptolemy's world map – the world then known to Western society (Ecumene). As early as the 8th century, Arab scholars were translating the works of the Greek geographers into Arabic. Roads were essential in the Roman world, motivating the creation of maps, called itinerarium, that portrayed the world as experienced via the roads. The Tabula Peutingeriana is the only surviving example. In ancient China, geographical literature dates to the 5th century BCE. The oldest extant Chinese maps come from the State of Qin, dating back to the 4th century BCE, during the Warring States period. The book Xin Yi Xiang Fa Yao, published in 1092 by the Chinese scientist Su Song, contains a star map drawn on the equidistant cylindrical projection. Although this method of charting seems to have existed in China even before this publication and scientist, the greatest significance of the star maps by Su Song is that they represent the oldest existent star maps in printed form. 
Early forms of cartography of India included depictions of the pole star and surrounding constellations. These charts may have been used for navigation. Middle Ages and Renaissance ('maps of the world') are the medieval European maps of the world. About 1,100 of these are known to have survived: of these, some 900 are found illustrating manuscripts, and the remainder exist as stand-alone documents. The Arab geographer Muhammad al-Idrisi produced his medieval atlas Tabula Rogeriana (Book of Roger) in 1154. By combining the knowledge of Africa, the Indian Ocean, Europe, and the Far East (which he learned through contemporary accounts from Arab merchants and explorers) with the information he inherited from the classical geographers, he was able to write detailed descriptions of a multitude of countries. Along with the substantial text he had written, he created a world map influenced mostly by the Ptolemaic conception of the world, but with significant influence from multiple Arab geographers. It remained the most accurate world map for the next three centuries. The map was divided into seven climatic zones, with detailed descriptions of each zone. As part of this work, a smaller, circular map depicting the south on top and Arabia in the center was made. Al-Idrisi also made an estimate of the circumference of the world, accurate to within 10%. In the Age of Discovery, from the 15th century to the 17th century, European cartographers both copied earlier maps (some of which had been passed down for centuries) and drew their own based on explorers' observations and new surveying techniques. The invention of the magnetic compass, telescope and sextant enabled increasing accuracy. In 1492, Martin Behaim, a German cartographer and advisor to the king John II of Portugal, made the oldest extant globe of the Earth. In 1507, Martin Waldseemüller produced a globular world map and a large 12-panel world wall map (Universalis Cosmographia) bearing the first use of the name "America." Portuguese cartographer Diogo Ribero was the author of the first known planisphere with a graduated Equator (1527). Italian cartographer Battista Agnese produced at least 71 manuscript atlases of sea charts. Johannes Werner refined and promoted the Werner projection. This was an equal-area, heart-shaped world map projection (generally called a cordiform projection) that was used in the 16th and 17th centuries. Over time, other iterations of this map type arose; most notable are the sinusoidal projection and the Bonne projection. The Werner projection places its standard parallel at the North Pole; a sinusoidal projection places its standard parallel at the equator; and the Bonne projection is intermediate between the two. In 1569, mapmaker Gerardus Mercator first published a map based on his Mercator projection, which uses equally-spaced parallel vertical lines of longitude and parallel latitude lines spaced farther apart as they get farther away from the equator. By this construction, courses of constant bearing are conveniently represented as straight lines for navigation. The same property limits its value as a general-purpose world map because regions are shown as increasingly larger than they actually are the further from the equator they are. Mercator is also credited as the first to use the word "atlas" to describe a collection of maps. 
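The properties of Mercator's projection described above can be made concrete with the standard spherical Mercator equations, in which a meridian maps to x = R·λ and a parallel to y = R·ln tan(π/4 + φ/2). The short Python sketch below is only an illustration of that textbook formulation, not a reconstruction of Mercator's own construction; it shows how parallels spread apart toward the poles while meridians stay equally spaced, which is why courses of constant bearing plot as straight lines.

    import math

    def mercator(lat_deg, lon_deg, radius=1.0):
        """Spherical Mercator projection: map coordinates (x, y) for a lat/lon pair."""
        lam = math.radians(lon_deg)          # meridians remain equally spaced in x
        phi = math.radians(lat_deg)
        x = radius * lam
        y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
        return x, y

    # Parallels are spaced farther apart as latitude increases:
    for lat in (0, 20, 40, 60, 80):
        _, y = mercator(lat, 0)
        print(f"latitude {lat:2d} deg -> y = {y:.3f}")

The growing spacing of y in the output is exactly the poleward exaggeration the text mentions: the local scale factor grows as sec(φ), so high-latitude regions appear far larger than their true relative size.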
In the later years of his life, Mercator resolved to create his Atlas, a book filled with many maps of different regions of the world, as well as a chronological history of the world from the Earth's creation by God until 1568. He was unable to complete it to his satisfaction before he died. Still, some additions were made to the Atlas after his death, and new editions were published after his death. In 1570, the Brabantian cartographer Abraham Ortelius, strongly encouraged by Gillis Hooftman, created the first true modern atlas, Theatrum Orbis Terrarum. In a rare move, Ortelius credited mapmakers who contributed to the atlas, the list of which grew to 183 individuals by 1603. In the Renaissance, maps were used to impress viewers and establish the owner's reputation as sophisticated, educated, and worldly. Because of this, towards the end of the Renaissance, maps were displayed with equal importance of painting, sculptures, and other pieces of art. In the sixteenth century, maps were becoming increasingly available to consumers through the introduction of printmaking, with about 10% of Venetian homes having some sort of map by the late 1500s. There were three main functions of maps in the Renaissance: General descriptions of the world Navigation and wayfinding Land surveying and property management In medieval times, written directions of how to get somewhere were more common than the use of maps. With the Renaissance, cartography began to be seen as a metaphor for power. Political leaders could lay claim to territories through the use of maps, and this was greatly aided by the religious and colonial expansion of Europe. The Holy Land and other religious places were the most commonly mapped during the Renaissance. In the late 1400s to the late 1500s, Rome, Florence, and Venice dominated map-making and trade. It started in Florence in the mid-to late 1400s. Map trade quickly shifted to Rome and Venice but then was overtaken by atlas makers in the late 16th century. Map publishing in Venice was completed with humanities and book publishing in mind, rather than just informational use. Printing technology There were two main printmaking technologies in the Renaissance: woodcut and copper-plate intaglio, referring to the medium used to transfer the image onto paper. In woodcut, the map image is created as a relief chiseled from medium-grain hardwood. The areas intended to be printed are inked and pressed against the sheet. Being raised from the rest of the block, the map lines cause indentations in the paper that can often be felt on the back of the map. There are advantages to using relief to make maps. For one, a printmaker doesn't need a press because the maps could be developed as rubbings. Woodblock is durable enough to be used many times before defects appear. Existing printing presses can be used to create the prints rather than having to create a new one. On the other hand, it is hard to achieve fine detail with the relief technique. Inconsistencies in linework are more apparent in woodcut than in intaglio. To improve quality in the late fifteenth century, a style of relief craftsmanship developed using fine chisels to carve the wood, rather than the more commonly used knife. In intaglio, lines are engraved into workable metals, typically copper but sometimes brass. The engraver spreads a thin sheet of wax over the metal plate and uses ink to draw the details. Then, the engraver traces the lines with a stylus to etch them into the plate beneath. 
The engraver can also use styli to prick holes along the drawn lines, trace along them with colored chalk, and then engrave the map. Lines going in the same direction are carved at the same time, and then the plate is turned to carve lines going in a different direction. To print from the finished plate, ink is spread over the metal surface and scraped off such that it remains only in the etched channels. Then the plate is pressed forcibly against the paper so that the ink in the channels is transferred to the paper. The pressing is so forceful that it leaves a "plate mark" around the border of the map at the edge of the plate, within which the paper is depressed compared to the margins. Copper and other metals were expensive at the time, so the plate was often reused for new maps or melted down for other purposes. Whether woodcut or intaglio, the printed map is hung out to dry. Once dry, it is usually placed in another press to flatten the paper. Any type of paper that was available at the time could be used to print the map, but thicker paper was more durable. Both relief and intaglio were used about equally by the end of the fifteenth century. Lettering Lettering in mapmaking is important for denoting information. Fine lettering is difficult in woodcut, where it often turned out square and blocky, contrary to the stylized, rounded writing style popular in Italy at the time. To improve quality, mapmakers developed fine chisels to carve the relief. Intaglio lettering did not suffer the troubles of a coarse medium and so was able to express the looping cursive that came to be known as cancellaresca. There were custom-made reverse punches that were also used in metal engraving alongside freehand lettering. Color The first use of color in map-making cannot be narrowed down to one reason. There are arguments that color started as a way to indicate information on the map, with aesthetics coming second. There are also arguments that color was first used on maps for aesthetics but then evolved into conveying information. Either way, many maps of the Renaissance left the publisher without being colored, a practice that continued all the way into the 1800s. However, most publishers accepted orders from their patrons to have their maps or atlases colored if they wished. Because all coloring was done by hand, the patron could request simple, cheap color, or more expensive, elaborate color, even going so far as silver or gold gilding. The simplest coloring was merely outlines, such as of borders and along rivers. Wash color meant painting regions with inks or watercolors. Limning meant adding silver and gold leaf to the map to illuminate lettering, heraldic arms, or other decorative elements. Early modern period The early modern period saw the convergence of cartographical techniques across Eurasia and the exchange of mercantile mapping techniques via the Indian Ocean. In the early seventeenth century, the Selden map was created by a Chinese cartographer. Historians have put its date of creation around 1620, but there is debate in this regard. This map's significance draws from historical misconceptions of East Asian cartography, the main one being that East Asians did not do cartography until Europeans arrived. The map's depiction of trading routes, a compass rose, and scale bar points to the culmination of many map-making techniques incorporated into Chinese mercantile cartography. 
In 1689, representatives of the Russian tsar and Qing Dynasty met near the border town of Nerchinsk, which was near the disputed border of the two powers, in eastern Siberia. The two parties, with the Qing negotiation party bringing Jesuits as intermediaries, managed to work a treaty which placed the Amur River as the border between the Eurasian powers, and opened up trading relations between the two. This treaty's significance draws from the interaction between the two sides, and the intermediaries who were drawn from a wide variety of nationalities. Age of Enlightenment Maps of the Enlightenment period practically universally used copper plate intaglio, having abandoned the fragile, coarse woodcut technology. Use of map projections evolved, with the double hemisphere being very common and Mercator's prestigious navigational projection gradually making more appearances. Due to the paucity of information and the immense difficulty of surveying during the period, mapmakers frequently plagiarized material without giving credit to the original cartographer. For example, a famous map of North America known as the "Beaver Map" was published in 1715 by Herman Moll. This map is a close reproduction of a 1698 work by Nicolas de Fer. De Fer, in turn, had copied images that were first printed in books by Louis Hennepin, published in 1697, and François Du Creux, in 1664. By the late 18th century, mapmakers often credited the original publisher with something along the lines of, "After [the original cartographer]" in the map's title or cartouche. Modern period In cartography, technology has continually changed in order to meet the demands of new generations of mapmakers and map users. The first maps were produced manually, with brushes and parchment; so they varied in quality and were limited in distribution. The advent of magnetic devices, such as the compass and much later, magnetic storage devices, allowed for the creation of far more accurate maps and the ability to store and manipulate them digitally. Advances in mechanical devices such as the printing press, quadrant, and vernier allowed the mass production of maps and the creation of accurate reproductions from more accurate data. Hartmann Schedel was one of the first cartographers to use the printing press to make maps more widely available. Optical technology, such as the telescope, sextant, and other devices that use telescopes, allowed accurate land surveys and allowed mapmakers and navigators to find their latitude by measuring angles to the North Star at night or the Sun at noon. Advances in photochemical technology, such as the lithographic and photochemical processes, make possible maps with fine details, which do not distort in shape and which resist moisture and wear. This also eliminated the need for engraving, which further speeded up map production. In the 20th century, aerial photography, satellite imagery, and remote sensing provided efficient, precise methods for mapping physical features, such as coastlines, roads, buildings, watersheds, and topography. The United States Geological Survey has devised multiple new map projections, notably the Space Oblique Mercator for interpreting satellite ground tracks for mapping the surface. The use of satellites and space telescopes now allows researchers to map other planets and moons in outer space. 
Advances in electronic technology ushered in another revolution in cartography: ready availability of computers and peripherals such as monitors, plotters, printers, scanners (remote and document) and analytic stereo plotters, along with computer programs for visualization, image processing, spatial analysis, and database management, have democratized and greatly expanded the making of maps. The ability to superimpose spatially located variables onto existing maps has created new uses for maps and new industries to explore and exploit these potentials. See also digital raster graphic. In the early years of the new millennium, three key technological advances transformed cartography: the removal of Selective Availability in the Global Positioning System (GPS) in May 2000, which improved locational accuracy for consumer-grade GPS receivers to within a few metres; the invention of OpenStreetMap in 2004, a global digital counter-map that allowed anyone to contribute and use new spatial data without complex licensing agreements; and the launch of Google Earth in 2005 as a development of the virtual globe EarthViewer 3D (2004), which revolutionised accessibility of accurate world maps, as well as access to satellite and aerial imagery. These advances brought more accuracy to geographical and location-based data and widened the range of applications for cartography, for example in the development of satnav devices. Today most commercial-quality maps are made using software of three main types: CAD, GIS and specialized illustration software. Spatial information can be stored in a database, from which it can be extracted on demand. These tools lead to increasingly dynamic, interactive maps that can be manipulated digitally. On the other hand, we can observe a reverse trend. In contemporary times, there is a resurgence of interest in the most beautiful periods of cartography, with various maps being created using, for example, Renaissance-style aesthetics. We encounter imitators or continuators of Renaissance traditions that merge the realms of science and art. Among them are figures such as Luther Phillips (1891–1960) and Ruth Rhoads Lepper Gardner (1905–2011), who still operated using traditional cartographic methods, as well as creators utilizing modern developments based on GIS solutions and those employing techniques that combine advanced GIS/CAD methods with traditional artistic forms. Field-rugged computers, GPS, and laser rangefinders make it possible to create maps directly from measurements made on site. Deconstruction There are technical and cultural aspects to producing maps. In this sense, maps can sometimes be said to be biased. The study of bias, influence, and agenda in making a map is what comprise a map's deconstruction. A central tenet of deconstructionism is that maps have power. Other assertions are that maps are inherently biased and that we search for metaphor and rhetoric in maps. It is claimed that the Europeans promoted an "epistemological" understanding of the map as early as the 17th century. An example of this understanding is that "[European reproduction of terrain on maps] reality can be expressed in mathematical terms; that systematic observation and measurement offer the only route to cartographic truth…". A common belief is that science heads in a direction of progress, and thus leads to more accurate representations of maps. In this belief, European maps must be superior to others, which necessarily employed different map-making skills. 
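Much of the web mapping described above rests on a simple, well-documented convention: the "slippy map" tile scheme popularized by OpenStreetMap, which addresses square Web Mercator tiles by zoom level and x/y index. The Python sketch below implements that published formula as an illustration; the London coordinates are just example inputs.

    import math

    def deg2tile(lat_deg, lon_deg, zoom):
        """Convert a WGS84 latitude/longitude to slippy-map tile indices at a zoom level."""
        n = 2 ** zoom                          # number of tiles along each axis
        x = int((lon_deg + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat_deg)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    # Example: the zoom-12 tile covering central London (about 51.5074 N, 0.1278 W)
    print(deg2tile(51.5074, -0.1278, 12))      # roughly (2046, 1362)

Because each tile is a fixed-size image, a client only needs these indices to request the handful of tiles covering the current view, which is what made the interactive pan-and-zoom maps of the early 2000s practical.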
"There was a 'not cartography' land where lurked an army of inaccurate, heretical, subjective, valuative, and ideologically distorted images. Cartographers developed a 'sense of the other' in relation to nonconforming maps." Depictions of Africa are a common target of deconstructionism. According to deconstructionist models, cartography was used for strategic purposes associated with imperialism and as instruments and representations of power during the conquest of Africa. The depiction of Africa and the low latitudes in general on the Mercator projection has been interpreted as imperialistic and as symbolic of subjugation due to the diminished proportions of those regions compared to higher latitudes where the European powers were concentrated. Maps furthered imperialism and colonization of Africa in practical ways by showing basic information like roads, terrain, natural resources, settlements, and communities. Through this, maps made European commerce in Africa possible by showing potential commercial routes and made natural resource extraction possible by depicting locations of resources. Such maps also enabled military conquests and made them more efficient, and imperial nations further used them to put their conquests on display. These same maps were then used to cement territorial claims, such as at the Berlin Conference of 1884–1885. Before 1749, maps of the African continent had African kingdoms drawn with assumed or contrived boundaries, with unknown or unexplored areas having drawings of animals, imaginary physical geographic features, and descriptive texts. In 1748, Jean B. B. d'Anville created the first map of the African continent that had blank spaces to represent the unknown territory. Map types General vs. thematic cartography In understanding basic maps, the field of cartography can be divided into two general categories: general cartography and thematic cartography. General cartography involves those maps that are constructed for a general audience and thus contain a variety of features. General maps exhibit many reference and location systems and often are produced in a series. For example, the 1:24,000 scale topographic maps of the United States Geological Survey (USGS) are a standard as compared to the 1:50,000 scale Canadian maps. The government of the UK produces the classic 1:50,000 (replacing the older 1 inch to 1 mile) "Ordnance Survey" maps of the entire UK and with a range of correlated larger- and smaller-scale maps of great detail. Many private mapping companies have also produced thematic map series. Thematic cartography involves maps of specific geographic themes, oriented toward specific audiences. A couple of examples might be a dot map showing corn production in Indiana or a shaded area map of Ohio counties, divided into numerical choropleth classes. As the volume of geographic data has exploded over the last century, thematic cartography has become increasingly useful and necessary to interpret spatial, cultural and social data. A third type of map is known as an "orienteering," or special purpose map. This type of map falls somewhere between thematic and general maps. They combine general map elements with thematic attributes in order to design a map with a specific audience in mind. Oftentimes, the type of audience an orienteering map is made for is in a particular industry or occupation. An example of this kind of map would be a municipal utility map. Topographic vs. 
topological A topographic map is primarily concerned with the topographic description of a place, including (especially in the 20th and 21st centuries) the use of contour lines showing elevation. Terrain or relief can be shown in a variety of ways (see Cartographic relief depiction). In the present era, one of the most widespread and advanced methods used to form topographic maps is to use computer software to generate digital elevation models which show shaded relief. Before such software existed, cartographers had to draw shaded relief by hand. One cartographer who is respected as a master of hand-drawn shaded relief is the Swiss professor Eduard Imhof whose efforts in hill shading were so influential that his method became used around the world despite it being so labor-intensive. A topological map is a very general type of map, the kind one might sketch on a napkin. It often disregards scale and detail in the interest of clarity of communicating specific route or relational information. Beck's London Underground map is an iconic example. Although the most widely used map of "The Tube," it preserves little of reality: it varies scale constantly and abruptly, it straightens curved tracks, and it contorts directions. The only topography on it is the River Thames, letting the reader know whether a station is north or south of the river. That and the topology of station order and interchanges between train lines are all that is left of the geographic space. Yet those are all a typical passenger wishes to know, so the map fulfills its purpose. Map design Modern technology, including advances in printing, the advent of geographic information systems and graphics software, and the Internet, has vastly simplified the process of map creation and increased the palette of design options available to cartographers. This has led to a decreased focus on production skill, and an increased focus on quality design, the attempt to craft maps that are both aesthetically pleasing and practically useful for their intended purposes. Map purpose and audience A map has a purpose and an audience. Its purpose may be as broad as teaching the major physical and political features of the entire world, or as narrow as convincing a neighbor to move a fence. The audience may be as broad as the general public or as narrow as a single person. Mapmakers use design principles to guide them in constructing a map that is effective for its purpose and audience. Cartographic process The cartographic process spans many stages, starting from conceiving the need for a map and extending all the way through its consumption by an audience. Conception begins with a real or imagined environment. As the cartographer gathers information about the subject, they consider how that information is structured and how that structure should inform the map's design. Next, the cartographers experiment with generalization, symbolization, typography, and other map elements to find ways to portray the information so that the map reader can interpret the map as intended. Guided by these experiments, the cartographer settles on a design and creates the map, whether in physical or electronic form. Once finished, the map is delivered to its audience. The map reader interprets the symbols and patterns on the map to draw conclusions and perhaps to take action. By the spatial perspectives they provide, maps help shape how we view the world. 
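The shaded relief mentioned above can be generated from a digital elevation model with a standard hillshading calculation: brightness at each cell depends on the local slope and aspect relative to an assumed sun azimuth and altitude. The numpy sketch below illustrates one common form of that calculation; the tiny synthetic "hill", the grid spacing, and the sun position are arbitrary illustration values, and real GIS packages differ in their exact aspect conventions.

    import numpy as np

    def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
        """Approximate hillshading of an elevation grid; returns brightness in [0, 1]."""
        az = np.radians(360.0 - azimuth_deg + 90.0)    # compass azimuth -> math angle
        alt = np.radians(altitude_deg)
        dz_dy, dz_dx = np.gradient(dem, cellsize)      # elevation gradients
        slope = np.arctan(np.hypot(dz_dx, dz_dy))
        aspect = np.arctan2(dz_dy, -dz_dx)
        shaded = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
        return np.clip(shaded, 0.0, 1.0)

    # A small synthetic dome standing in for a real digital elevation model:
    y, x = np.mgrid[0:50, 0:50]
    dem = 100.0 * np.exp(-((x - 25.0) ** 2 + (y - 25.0) ** 2) / 200.0)
    print(hillshade(dem).shape)                        # (50, 50) grid of brightness values

With the default north-west illumination, slopes facing the light render bright and opposite slopes render dark, the effect Imhof and other cartographers once produced by hand.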
Aspects of map design Designing a map involves bringing together a number of elements and making a large number of decisions. The elements of design fall into several broad topics, each of which has its own theory, its own research agenda, and its own best practices. That said, there are synergistic effects between these elements, meaning that the overall design process is not just working on each element one at a time, but an iterative feedback process of adjusting each to achieve the desired gestalt. Map projections: The foundation of the map is the plane on which it rests (whether paper or screen), but projections are required to flatten the surface of the Earth or other celestial bodies. While all projections distort the surface, cartographers strategically control how and where distortion occurs. For example, the popular Mercator projection does not distort angles on the surface, but it makes regions near the poles appear larger than they are. Generalization: All maps must be drawn at a smaller scale than reality, requiring that the information included on a map be a very small sample of the wealth of information about a place. Generalization is the process of adjusting the level of detail in geographic information to be appropriate for the scale and purpose of a map, through procedures such as selection, simplification, and classification. Symbology: Any map visually represents the location and properties of geographic phenomena using map symbols, graphical depictions composed of several visual variables, such as size, shape, color, and pattern. Composition: As all of the symbols are brought together, their interactions have major effects on map reading, such as grouping and visual hierarchy. Typography or labeling: Text serves a number of purposes on the map, especially aiding the recognition of features, but labels must be designed and positioned well to be effective. Layout: The map image must be placed on the page (whether paper, web, or other media), along with related elements, such as the title, legend, additional maps, text, images, and so on. Each of these elements has its own design considerations, as does their integration, which largely follows the principles of graphic design. Map type-specific design: Different kinds of maps, especially thematic maps, have their own design needs and best practices. Deliberate cartographic errors Some maps contain deliberate errors or distortions, either as propaganda or as a "watermark" to help the copyright owner identify infringement if the error appears in competitors' maps. The latter often come in the form of nonexistent, misnamed, or misspelled "trap streets". Other names and forms for this are paper towns, fictitious entries, and copyright easter eggs. Another motive for deliberate errors is cartographic "vandalism": a mapmaker wishing to leave their mark on the work. Mount Richard, for example, was a fictitious peak on the Rocky Mountains' continental divide that appeared on a Boulder County, Colorado map in the early 1970s. It is believed to be the work of draftsman Richard Ciacci. The fiction was not discovered until two years later. Sandy Island in New Caledonia is an example of a fictitious location that stubbornly survives, reappearing on new maps copied from older maps while being deleted from other new editions. With the emergence of the internet and Web mapping, technologies that allow for the creation and distribution of maps by people without proper cartographic training are readily available. 
This has led to maps that ignore cartographic conventions and are potentially misleading. Professional and learned societies Professional and learned societies include: International Cartographic Association (ICA), the world body for mapping and GIScience professionals, as well as the ICA member organizations British Cartographic Society (BCS) a registered charity in the UK dedicated to exploring and developing the world of maps Society of Cartographers supports in the UK the practising cartographer and encourages and maintains a high standard of cartographic illustration Cartography and Geographic Information Society (CaGIS), promotes in the U.S. research, education, and practice to improve the understanding, creation, analysis, and use of maps and geographic information. The society serves as a forum for the exchange of original concepts, techniques, approaches, and experiences by those who design, implement, and use cartography, geographical information systems, and related geospatial technologies. North American Cartographic Information Society (NACIS), A North American-based cartography society that is aimed at improving communication, coordination and cooperation among the producers, disseminators, curators, and users of cartographic information. Their members are located worldwide and the meetings are on an annual basis Canadian Cartographic Association (CCA) Academic journals Journals related to cartography, as well as GIS, GISc, include: International Journal of Cartography The Cartographic Journal Cartographica Cartography and Geographic Information Science Cartographic Perspectives KN - Journal of Cartography and Geographic Information Journal of Maps Journal of Geovisualization and Spatial Analysis Transactions in GIS Journal of Spatial Science Geocarto International GIScience & Remote Sensing International Journal of Applied Earth Observation and Geoinformation International Journal of Digital Earth Geoinformatica ISPRS International Journal of Geo-information Journal of Photogrammetry, Remote Sensing and Geoinformation Science Geo-spatial Information Science ACM Transactions on Spatial Algorithms and Systems Imago Mundi Revista Cartográfica Terrae Incognitae See also References Bibliography Further reading Mapmaking History Meanings External links Mapping History – a learning resource from the British Library Antique Maps by Carl Moreland and David Bannister – complete text of the book, with information both on mapmaking and on mapmakers, including short biographies of many cartographers (archived 2 February 2007) Concise Bibliography of the History of Cartography , Newberry Library Geodesy
Cartography
[ "Mathematics" ]
6,664
[ "Applied mathematics", "Geodesy" ]
7,296
https://en.wikipedia.org/wiki/Cardiac%20glycoside
Cardiac glycosides are a class of organic compounds that increase the output force of the heart and decrease its rate of contractions by inhibiting the cellular sodium-potassium ATPase pump. Their beneficial medical uses include treatments for congestive heart failure and cardiac arrhythmias; however, their relative toxicity prevents them from being widely used. Most commonly found as secondary metabolites in several plants such as foxglove plants and milkweed plants, these compounds nevertheless have a diverse range of biochemical effects regarding cardiac cell function and have also been suggested for use in cancer treatment. Classification General structure The general structure of a cardiac glycoside consists of a steroid molecule attached to a sugar (glycoside) and an R group. The steroid nucleus consists of four fused rings to which other functional groups such as methyl, hydroxyl, and aldehyde groups can be attached to influence the overall molecule's biological activity. Cardiac glycosides also vary in the groups attached at either end of the steroid. Specifically, different sugar groups attached at the sugar end of the steroid can alter the molecule's solubility and kinetics; however, the lactone moiety at the R group end only serves a structural function. In particular, the structure of the ring attached at the R end of the molecule allows it to be classified as either a cardenolide or bufadienolide. Cardenolides differ from bufadienolides due to the presence of an "enolide," a five-membered ring with a single double bond, at the lactone end. Bufadienolides, on the other hand, contain a "dienolide," a six-membered ring with two double bonds, at the lactone end. While compounds of both groups can be used to influence the cardiac output of the heart, cardenolides are more commonly used medicinally, primarily due to the widespread availability of the plants from which they are derived. Classification Cardiac glycosides can be more specifically categorized based on the plant they are derived from, as in the following list. For example, cardenolides have been primarily derived from the foxglove plants Digitalis purpurea and Digitalis lanata, while bufadienolides have been derived from the venom of the cane toad Rhinella marina (formerly known as Bufo marinus), from which they receive the "bufo" portion of their name. Below is a list of organisms from which cardiac glycosides can be derived. Plant cardenolides Convallaria majalis (Lily of the Valley): convallatoxin Antiaris toxicaria (upas tree): antiarin Strophanthus kombe (Strophanthus vine): ouabain (g-strophanthin) and other strophanthins Digitalis lanata and Digitalis purpurea (Woolly and purple foxglove): digoxin, digitoxin Nerium oleander (oleander tree): oleandrin Asclepias sp. (milkweed): asclepin, calotropin, uzarin, calactin, coroglucigenin, uzarigenin, oleandrin Adonis vernalis (Spring pheasant's eye): adonitoxin Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin Erysimum cheiranthoides (wormseed wallflower) and other Erysimum species Cerbera odollam (suicide tree): cerberin Periploca sepium: periplocin Other cardenolides some species of Chrysolina beetles, including Chrysolina coerulans, have cardiac glycosides (including Xylose) in their defensive glands. 
Bufadienolides Leonurus cardiaca (motherwort): scillarenin Drimia maritima (squill): proscillaridine A Rhinella marina (cane toad): various bufadienolides – see also toad venom Kalanchoe daigremontiana and other Kalanchoe species: daigremontianin and others Helleborus spp. (hellebore) Mechanism of action Cardiac glycosides affect the sodium-potassium ATPase pump in cardiac muscle cells to alter their function. Normally, these sodium-potassium pumps move potassium ions in and sodium ions out. Cardiac glycosides, however, inhibit this pump by stabilizing it in the E2-P transition state, so that sodium cannot be extruded: intracellular sodium concentration therefore increases. With regard to potassium ion movement, because both cardiac glycosides and potassium compete for binding to the ATPase pump, changes in extracellular potassium concentration can potentially lead to altered drug efficacy. Nevertheless, by carefully controlling the dosage, such adverse effects can be avoided. Continuing with the mechanism, raised intracellular sodium levels inhibit the function of a second membrane ion exchanger, NCX, which is responsible for pumping calcium ions out of the cell and sodium ions in, at a ratio of three sodium ions per calcium ion. Thus, calcium ions are also not extruded and will begin to build up inside the cell as well. The disrupted calcium homeostasis and increased cytoplasmic calcium concentrations cause increased calcium uptake into the sarcoplasmic reticulum (SR) via the SERCA2 transporter. Raised calcium stores in the SR allow for greater calcium release on stimulation, so the myocyte can achieve faster and more powerful contraction by cross-bridge cycling. The refractory period of the AV node is increased, so cardiac glycosides also function to decrease heart rate. For example, the ingestion of digoxin leads to increased cardiac output and decreased heart rate without significant changes in blood pressure; this quality allows it to be widely used medicinally in the treatment of cardiac arrhythmias. Non-cardiac uses Cardiac glycosides were identified as senolytics: they can selectively eliminate senescent cells, which are more sensitive to the ATPase-inhibiting action due to cell membrane changes. Clinical significance Cardiac glycosides have long served as the main medical treatment for congestive heart failure and cardiac arrhythmia, due to their effects of increasing the force of muscle contraction while reducing heart rate. Heart failure is characterized by an inability to pump enough blood to support the body, possibly due to a decrease in the volume of the blood or its contractile force. Treatments for the condition thus focus on lowering blood pressure, so that the heart does not have to exert as much force to pump the blood, or directly increasing the heart's contractile force, so that the heart can overcome the higher blood pressure. Cardiac glycosides, such as the commonly used digoxin and digitoxin, deal with the latter, due to their positive inotropic activity. On the other hand, cardiac arrhythmias are changes in heart rate, whether faster (tachycardia) or slower (bradycardia). Medicinal treatments for this condition work primarily to counteract tachycardia or atrial fibrillation by slowing down heart rate, as done by cardiac glycosides. Nevertheless, due to questions of toxicity and dosage, cardiac glycosides have been replaced with synthetic drugs such as ACE inhibitors and beta blockers and are no longer used as the primary medical treatment for such conditions. 
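The causal chain in the mechanism described above (pump inhibition, sodium accumulation, reduced NCX-driven calcium extrusion, calcium accumulation) can be caricatured with a toy numerical model. Every rate constant in the Python sketch below is an arbitrary illustrative number, not a physiological measurement; the point is only the qualitative ordering, namely that stronger pump inhibition leaves both sodium and calcium higher inside the cell.

    def simulate(inhibition, steps=5000, dt=0.01):
        """Toy model: higher Na/K-ATPase inhibition -> higher intracellular Na and Ca.

        All parameters are arbitrary illustration values, not measured rates.
        """
        na, ca = 10.0, 0.1            # nominal intracellular levels (arbitrary units)
        na_in, ca_in = 1.0, 0.05      # constant passive inward leaks
        for _ in range(steps):
            pump = (1.0 - inhibition) * 0.1 * na     # Na/K-ATPase extrudes sodium
            ncx = 0.05 * ca / (1.0 + 0.1 * na)       # Na-dependent calcium extrusion (NCX)
            na += dt * (na_in - pump)
            ca += dt * (ca_in - ncx)
        return na, ca

    for f in (0.0, 0.3, 0.6):
        print(f"inhibition {f:.1f}: Na, Ca ->", simulate(f))

In this caricature the calcium that accumulates stands in for the extra sarcoplasmic-reticulum loading that underlies the positive inotropic effect discussed in the surrounding text.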
Depending on the severity of the condition, though, they may still be used in conjunction with other treatments. Toxicity From ancient times, humans have used cardiac-glycoside-containing plants and their crude extracts as arrow coatings, homicidal or suicidal aids, rat poisons, heart tonics, diuretics and emetics, primarily due to the toxic nature of these compounds. Thus, though cardiac glycosides have been used for their medicinal function, their toxicity must also be recognized. For example, in 2008 US poison centers reported 2,632 cases of digoxin toxicity, and 17 cases of digoxin-related deaths. Because cardiac glycosides affect the cardiovascular, neurologic, and gastrointestinal systems, these three systems can be used to determine the effects of toxicity. The effect of these compounds on the cardiovascular system presents a reason for concern, as they can directly affect the function of the heart through their inotropic and chronotropic effects. In terms of inotropic activity, excessive cardiac glycoside dosage results in cardiac contractions with greater force, as further calcium is released from the SR of cardiac muscle cells. Toxicity also results in changes to heart chronotropic activity, resulting in multiple kinds of dysrhythmia and potentially fatal ventricular tachycardia. These dysrhythmias are an effect of an influx of sodium and decrease of resting membrane potential threshold in cardiac muscle cells. When taken beyond a narrow dosage range specific to each particular cardiac glycoside, these compounds can rapidly become dangerous. In sum, they interfere with fundamental processes that regulate membrane potential. They are toxic to the heart, the brain, and the gut at doses that are not difficult to reach. In the heart, the most common negative effect is premature ventricular contraction. References External links Plant toxins
Cardiac glycoside
[ "Chemistry" ]
1,959
[ "Chemical ecology", "Plant toxins" ]
7,303
https://en.wikipedia.org/wiki/Cross
A cross is a compound geometrical figure consisting of two intersecting lines, usually perpendicular to each other. The lines usually run vertically and horizontally. A cross of oblique lines, in the shape of the Latin letter X, is termed a saltire in heraldic terminology. The cross has been widely recognized as an exclusive symbol of Christianity from an early period in that religion's history. Before then, it was used as a religious or cultural symbol throughout Europe, in western and south Asia (the latter, in the form of the original Swastika); and in Egypt, where the Ankh was a hieroglyph that represented "life" and was used in the worship of the god Aten. The effigy of a man hanging on a cross was set up in the fields to protect the crops. It often appeared in conjunction with the female-genital circle or oval, to signify the sacred marriage, as in Egyptian amulet Nefer with male cross and female orb, considered as an amulet of blessedness, a charm of sexual harmony. Name The word cross is recorded in 11th-century Old English as cros, exclusively for the instrument of Christ's crucifixion, replacing the native Old English word rood. The word's history is complicated; it appears to have entered English from Old Irish, possibly via Old Norse, ultimately from the Latin (or its accusative and its genitive ), "stake, cross". The English verb to cross arises from the noun , first in the sense "to make the sign of the cross"; the generic meaning "to intersect" develops in the 15th century. The Latin word was influenced by popular etymology by a native Germanic word reconstructed as *krukjo (English crook, Old English , Old Norse , Old High German ). This word, by conflation with Latin , gave rise to Old French (modern French ), the term for a shepherd's crook, adopted in English as crosier. Latin referred to the gibbet where criminals were executed, a stake or pole, with or without , on which the condemned were impaled or hanged, but more particularly a cross or the pole of a carriage. The derived verb means "to put to death on the cross" or, more frequently, "to put to the rack, to torture, torment", especially in reference to mental troubles. In the Roman world, replaced as the name of some cross-like instruments for lethal and temporary punishment, ranging from a forked cross to a gibbet or gallows. The field of etymology is of no help in any effort to trace a supposed original meaning of crux. A crux can be of various shapes: from a single beam used for impaling or suspending () to the various composite kinds of cross () made from more beams than one. The latter shapes include not only the traditional †-shaped cross (the ), but also the T-shaped cross (the or tau cross), which the descriptions in antiquity of the execution cross indicate as the normal form in use at that time, and the X-shaped cross (the crux decussata or saltire). The Greek equivalent of Latin crux "stake, gibbet" is , found in texts of four centuries or more before the gospels and always in the plural number to indicate a stake or pole. From the first century BC, it is used to indicate an instrument used in executions. The Greek word is used in descriptions in antiquity of the execution cross, which indicate that its normal shape was similar to the Greek letter tau (Τ). 
History Pre-Christian Due to the simplicity of the design (two intersecting lines), cross-shaped incisions make their appearance from deep prehistory, as petroglyphs in European cult caves, dating back to the beginning of the Upper Paleolithic, and throughout prehistory to the Iron Age. Also of prehistoric age are numerous variants of the simple cross mark, including the crux gammata with curving or angular lines, and the Egyptian crux ansata with a loop. Speculation has associated the cross symbol – even in the prehistoric period – with astronomical or cosmological symbology involving "four elements" (Chevalier, 1997) or the cardinal points, or the unity of a vertical axis mundi or celestial pole with the horizontal world (Koch, 1955). Speculation of this kind became especially popular in the mid- to late-19th century in the context of comparative mythology seeking to tie Christian mythology to ancient cosmological myths. Influential works in this vein included G. de Mortillet (1866), L. Müller (1865), W. W. Blake (1888), Ansault (1891), etc. In the European Bronze Age the cross symbol appeared to carry a religious meaning, perhaps as a symbol of consecration, especially pertaining to burial. The cross sign occurs trivially in tally marks, and develops into a number symbol independently in the Roman numerals (X "ten"), the Chinese rod numerals (十 "ten") and the Brahmi numerals ("four", whence the numeral 4). In the Phoenician alphabet and derived scripts, the cross symbol represented the phoneme /t/, i.e. the letter taw, which is the historical predecessor of Latin T. The letter name taw means "mark", presumably continuing the Egyptian hieroglyph "two crossed sticks" (Gardiner Z9). Post-Christian The shape of the cross (crux, stauros "stake, gibbet"), as represented by the Latin letter T, came to be used as a symbol (seal) or emblem of Christianity from the 2nd century AD, succeeding the Ichthys in the aftermath of that new religion's separation from Judaism. Clement of Alexandria in the early 3rd century calls it "the Lord's sign"; he repeats the idea, current as early as the Epistle of Barnabas, that the number 318 (in Greek numerals, ΤΙΗ) in Genesis 14:14 was a foreshadowing (a "type") of the cross (the letter Tau) and of Jesus (the letters Iota Eta). Clement's contemporary Tertullian rejects the accusation that Christians are crucis religiosi (i.e. "adorers of the gibbet"), and returns the accusation by likening the worship of pagan idols to the worship of poles or stakes. In his book De Corona, written in 204, Tertullian tells how it was already a tradition for Christians to trace repeatedly on their foreheads the sign of the cross. While early Christians used the T-shape to represent the cross in writing and gesture, the use of the Greek cross and Latin cross, i.e. crosses with intersecting beams, appears in Christian art towards the end of Late Antiquity. An early example of the cruciform halo, used to identify Christ in paintings, is found in the Miracles of the Loaves and Fishes mosaic of Sant'Apollinare Nuovo, Ravenna (6th century). The Patriarchal cross, a Latin cross with an additional horizontal bar, first appears in the 10th century. A wide variation of cross symbols is introduced for the purposes of heraldry beginning in the age of the Crusades. Marks and graphemes The cross mark is used to mark a position, or as a check mark, but also to mark deletion. Derived from Greek Chi are the Latin letter X, Cyrillic Kha and possibly runic Gyfu. 
Egyptian hieroglyphs involving cross shapes include ankh "life", ndj "protect" and nfr "good; pleasant, beautiful". Sumerian cuneiform had a simple cross-shaped character, consisting of a horizontal and a vertical wedge (𒈦), read as maš "tax, yield, interest"; the superposition of two diagonal wedges results in a decussate cross (𒉽), read as pap "first, pre-eminent" (the superposition of these two types of crosses results in the eight-pointed star used as the sign for "sky" or "deity" (𒀭), DINGIR). The cuneiform script has other, more complex, cruciform characters, consisting of an arrangement of boxes or the fourfold arrangement of other characters, including the archaic cuneiform characters LAK-210, LAK-276, LAK-278, LAK-617 and the classical sign EZEN (𒂡). Phoenician tāw is still cross-shaped in Paleo-Hebrew alphabet and in some Old Italic scripts (Raetic and Lepontic), and its descendant T becomes again cross-shaped in the Latin minuscule t. The plus sign (+) is derived from Latin t via a simplification of a ligature for et "and" (introduced by Johannes Widmann in the late 15th century). The letter Aleph is cross-shaped in Aramaic and paleo-Hebrew. Egyptian hieroglyphs with cross-shapes include Gardiner Z9 – Z11 ("crossed sticks", "crossed planks"). Other, unrelated cross-shaped letters include Brahmi ka (predecessor of the Devanagari letter क) and Old Turkic (Orkhon) d² and Old Hungarian b, and Katakana ナ na and メme. The multiplication sign (×), often attributed to William Oughtred (who first used it in an appendix to the 1618 edition of John Napier's Descriptio) apparently had been in occasional use since the mid 16th century. Other typographical symbols resembling crosses include the dagger or obelus (†), the Chinese (十, Kangxi radical 24) and Roman (X ten). Unicode has a variety of cross symbols in the "Dingbat" block (U+2700–U+27BF): ✕ ✖ ✗ ✘ ✙ ✚ ✛ ✜ ✝ ✞ ✟ ✠ ✢ ✣ ✤ ✥ The Miscellaneous Symbols block (U+2626 to U+262F) adds three specific Christian cross variants, viz. the Patriarchal cross (☦), Cross of Lorraine (☨) and Cross potent (☩, mistakenly labeled a "Cross of Jerusalem"). Emblems The following is a list of cross symbols, except for variants of the Christian cross and Heraldic crosses, for which see the dedicated lists at Christian cross variants and Crosses in heraldry, respectively. As a design element Physical gestures Cross shapes are made by a variety of physical gestures. Crossing the fingers of one hand is a common invocation of the symbol. The sign of the cross associated with Christian genuflection is made with one hand: in Eastern Orthodox tradition the sequence is head-heart-right shoulder-left shoulder, while in Oriental Orthodox, Catholic and Anglican tradition the sequence is head-heart-left-right. Crossing the index fingers of both hands represents and a charm against evil in European folklore. Other gestures involving more than one hand include the "cross my heart" movement associated with making a promise and the Tau shape of the referee's "time out" hand signal. Crossed index fingers represent the number 10 (十) in Chinese number gestures. Unicode Unicode provides various cross symbols: References Chevalier, Jean (1997). The Penguin Dictionary of Symbols. Penguin. . Drury, Nevill (1985). Dictionary of Mysticism and the Occult. Harper & Row. . Koch, Rudolf (1955). The Book of Signs. Dover, NY. . Webber, F. R. (1927, rev. 1938). 
Church Symbolism: An Explanation of the More Important Symbols of the Old and New Testament, the Primitive, the Mediaeval and the Modern Church . Cleveland, OH. . External links Seiyaku.com, all Crosses—probably the largest collection on the Internet Variations of Crosses – Images and Meanings Cross & Crucifix—Glossary: Forms and Topics Nasrani.net, Indian Cross The Christian Cross of Jesus Christ: Symbols of Christianity, Images, Designs and representations of it as objects of devotion Petroglyphs Religious symbols Religious terminology Christian terminology Geometric shapes
Cross
[ "Mathematics", "Technology" ]
2,513
[ "Geometric shapes", "Timber framing", "Mathematical objects", "Structural system", "Geometric objects" ]
7,304
https://en.wikipedia.org/wiki/Coordination%20complex
A coordination complex is a chemical compound consisting of a central atom or ion, which is usually metallic and is called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or complexing agents. Many metal-containing compounds, especially those that include transition metals (elements like titanium that belong to the periodic table's d-block), are coordination complexes. Nomenclature and terminology Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes confusingly. The atom within a ligand that is bonded to the central metal atom or ion is called the donor atom. In a typical complex, a metal ion is bonded to several donor atoms, which can be the same or different. A polydentate (multiple bonded) ligand is a molecule or ion that bonds to the central atom through several of the ligand's atoms; ligands with 2, 3, 4 or even 6 bonds to the central atom are common. These complexes are called chelate complexes; the formation of such complexes is called chelation, complexation, and coordination. The central atom or ion, together with all ligands, comprise the coordination sphere. The central atoms or ion and the donor atoms comprise the first coordination sphere. Coordination refers to the "coordinate covalent bonds" (dipolar bonds) between the ligands and the central atom. Originally, a complex implied a reversible association of molecules, atoms, or ions through such weak chemical bonds. As applied to coordination chemistry, this meaning has evolved. Some metal complexes are formed virtually irreversibly and many are bound together by bonds that are quite strong. The number of donor atoms attached to the central atom or ion is called the coordination number. The most common coordination numbers are 2, 4, and especially 6. A hydrated ion is one kind of a complex ion (or simply a complex), a species formed between a central metal ion and one or more surrounding ligands, molecules or ions that contain at least one lone pair of electrons. If all the ligands are monodentate, then the number of donor atoms equals the number of ligands. For example, the cobalt(II) hexahydrate ion or the hexaaquacobalt(II) ion [Co(H2O)6]2+ is a hydrated-complex ion that consists of six water molecules attached to a metal ion Co. The oxidation state and the coordination number reflect the number of bonds formed between the metal ion and the ligands in the complex ion. However, the coordination number of Pt(en) is 4 (rather than 2) since it has two bidentate ligands, which contain four donor atoms in total. Any donor atom will give a pair of electrons. There are some donor atoms or groups which can offer more than one pair of electrons. Such are called bidentate (offers two pairs of electrons) or polydentate (offers more than two pairs of electrons). In some cases an atom or a group offers a pair of electrons to two similar or different central metal atoms or acceptors—by division of the electron pair—into a three-center two-electron bond. These are called bridging ligands. History Coordination complexes have been known since the beginning of modern chemistry. Early well-known coordination complexes include dyes such as Prussian blue. Their properties were first well understood in the late 1800s, following the 1869 work of Christian Wilhelm Blomstrand. Blomstrand developed what has come to be known as the complex ion chain theory. 
In considering metal ammine complexes, he theorized that the ammonia molecules compensated for the charge of the ion by forming chains of the type [(NH3)X]X+, where X is the coordination number of the metal ion. He compared his theoretical ammonia chains to hydrocarbons of the form (CH2)X. The Danish scientist Sophus Mads Jørgensen later made improvements to this theory. In his version, Jørgensen claimed that when a molecule dissociates in solution there are two possible outcomes: the ions bind via the ammonia chains Blomstrand had described, or the ions bind directly to the metal. It was not until 1893 that the version of the theory most widely accepted today was published by Alfred Werner. Werner's work included two important changes to the Blomstrand theory. The first was that Werner described the two possibilities in terms of location in the coordination sphere. He claimed that if the ions were to form a chain, this would occur outside of the coordination sphere while the ions that bound directly to the metal would do so within the coordination sphere. In one of his most important discoveries, however, Werner disproved the majority of the chain theory. Werner discovered the spatial arrangements of the ligands that were involved in the formation of the complex hexacoordinate cobalt. His theory allows one to understand the difference between a coordinated ligand and a charge-balancing ion in a compound, for example the chloride ion in the cobaltammine chlorides, and to explain many of the previously inexplicable isomers. In 1911, Werner first resolved the coordination complex hexol into optical isomers, overthrowing the theory that only carbon compounds could possess chirality. Structures The ions or molecules surrounding the central atom are called ligands. Ligands are classified as L or X (or a combination thereof), depending on how many electrons they provide for the bond between ligand and central atom. L ligands provide two electrons from a lone electron pair, resulting in a coordinate covalent bond. X ligands provide one electron, with the central atom providing the other electron, thus forming a regular covalent bond. The ligands are said to be coordinated to the atom. For alkenes, the pi bonds can coordinate to metal atoms. An example is ethylene in Zeise's salt. Geometry In coordination chemistry, a structure is first described by its coordination number, the number of ligands attached to the metal (more specifically, the number of donor atoms). Usually one can count the ligands attached, but sometimes even the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides. The number of bonds depends on the size, charge, and electron configuration of the metal ion and the ligands. Metal ions may have more than one coordination number. Typically the chemistry of transition metal complexes is dominated by interactions between s and p molecular orbitals of the donor atoms in the ligands and the d orbitals of the metal ions. The s, p, and d orbitals of the metal can accommodate 18 electrons (see 18-electron rule). The maximum coordination number for a certain metal is thus related to the electronic configuration of the metal ion (to be more specific, the number of empty orbitals) and to the ratio of the size of the ligands and the metal ion. Large metals and small ligands lead to high coordination numbers. 
Small metals with large ligands lead to low coordination numbers. Due to their large size, lanthanides, actinides, and early transition metals tend to have high coordination numbers. Most structures follow the points-on-a-sphere pattern (that is, as if the central atom were in the middle of a polyhedron whose corners are the locations of the ligands), where orbital overlap (between ligand and metal orbitals) and ligand-ligand repulsions tend to lead to certain regular geometries. The most commonly observed geometries are listed below, but there are many cases that deviate from a regular geometry, e.g. due to the use of ligands of diverse types (which results in irregular bond lengths; the coordination atoms do not follow a points-on-a-sphere pattern), due to the size of ligands, or due to electronic effects (see, e.g., Jahn–Teller distortion): Linear for two-coordination Trigonal planar for three-coordination Tetrahedral or square planar for four-coordination Trigonal bipyramidal for five-coordination Octahedral for six-coordination Pentagonal bipyramidal for seven-coordination Square antiprismatic for eight-coordination Tricapped trigonal prismatic for nine-coordination The idealized descriptions of 5-, 7-, 8-, and 9-coordination are often geometrically indistinct from alternative structures with slightly differing L-M-L (ligand-metal-ligand) angles, e.g. the difference between square pyramidal and trigonal bipyramidal structures. Square pyramidal for five-coordination Capped octahedral or capped trigonal prismatic for seven-coordination Dodecahedral or bicapped trigonal prismatic for eight-coordination Capped square antiprismatic for nine-coordination To distinguish between the alternative coordinations for five-coordinated complexes, the τ geometry index was invented by Addison et al. This index depends on the angles at the coordination centre and changes from 0 for square pyramidal to 1 for trigonal bipyramidal structures, allowing the cases in between to be classified. This system was later extended to four-coordinated complexes by Houser et al. and also Okuniewski et al. In systems with low d electron count, due to special electronic effects such as (second-order) Jahn–Teller stabilization, certain geometries (in which the coordination atoms do not follow a points-on-a-sphere pattern) are stabilized relative to the other possibilities, e.g. for some compounds the trigonal prismatic geometry is stabilized relative to octahedral structures for six-coordination. Bent for two-coordination Trigonal pyramidal for three-coordination Trigonal prismatic for six-coordination Isomerism The arrangement of the ligands is fixed for a given complex, but in some cases it is mutable by a reaction that forms another stable isomer. There exist many kinds of isomerism in coordination complexes, just as in many other compounds. Stereoisomerism Stereoisomerism occurs with the same bonds in distinct orientations. Stereoisomerism can be further classified into: Cis–trans isomerism and facial–meridional isomerism Cis–trans isomerism occurs in octahedral and square planar complexes (but not tetrahedral). When two ligands are adjacent they are said to be cis, when opposite each other, trans. When three identical ligands occupy one face of an octahedron, the isomer is said to be facial, or fac. In a fac isomer, any two identical ligands are adjacent or cis to each other. If these three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer. 
A mer isomer can be considered as a combination of a trans and a cis, since it contains both trans and cis pairs of identical ligands. Optical isomerism Optical isomerism occurs when a complex is not superimposable with its mirror image. It is so called because the two isomers are each optically active, that is, they rotate the plane of polarized light in opposite directions. In the first molecule shown, the symbol Λ (lambda) is used as a prefix to describe the left-handed propeller twist formed by three bidentate ligands. The second molecule is the mirror image of the first, with the symbol Δ (delta) as a prefix for the right-handed propeller twist. The third and fourth molecules are a similar pair of Λ and Δ isomers, in this case with two bidentate ligands and two identical monodentate ligands. Structural isomerism Structural isomerism occurs when the bonds are themselves different. Four types of structural isomerism are recognized: ionisation isomerism, solvate or hydrate isomerism, linkage isomerism and coordination isomerism. Ionisation isomerism – the isomers give different ions in solution although they have the same composition. This type of isomerism occurs when the counter ion of the complex is also a potential ligand. For example, pentaamminebromocobalt(III) sulphate is red violet and in solution gives a precipitate with barium chloride, confirming the presence of sulphate ion, while pentaamminesulphatecobalt(III) bromide is red and tests negative for sulphate ion in solution, but instead gives a precipitate of AgBr with silver nitrate. Solvate or hydrate isomerism – the isomers have the same composition but differ with respect to the number of molecules of solvent that serve as ligand vs simply occupying sites in the crystal. Examples: is violet colored, is blue-green, and is dark green. See water of crystallization. Linkage isomerism occurs with ligands with more than one possible donor atom, known as ambidentate ligands. For example, nitrite can coordinate through O or N. One pair of nitrite linkage isomers have structures (nitro isomer) and (nitrito isomer). Coordination isomerism occurs when both positive and negative ions of a salt are complex ions and the two isomers differ in the distribution of ligands between the cation and the anion. For example, and . Electronic properties Many of the properties of transition metal complexes are dictated by their electronic structures. The electronic structure can be described by a relatively ionic model that ascribes formal charges to the metals and ligands. This approach is the essence of crystal field theory (CFT). Crystal field theory, introduced by Hans Bethe in 1929, gives a quantum mechanically based attempt at understanding complexes. But crystal field theory treats all interactions in a complex as ionic and assumes that the ligands can be approximated by negative point charges. More sophisticated models embrace covalency, and this approach is described by ligand field theory (LFT) and Molecular orbital theory (MO). Ligand field theory, introduced in 1935 and built from molecular orbital theory, can handle a broader range of complexes and can explain complexes in which the interactions are covalent. The chemical applications of group theory can aid in the understanding of crystal or ligand field theory, by allowing simple, symmetry based solutions to the formal equations. 
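To make the crystal field picture concrete, the octahedral case can be summarized with the standard textbook energy levels; the worked d6 figures below follow directly from those levels and are illustrative rather than data for any particular complex. In an octahedral field the five d orbitals split into a lower t2g set and an upper eg set with energies, relative to the barycentre, of E(t_{2g}) = -0.4\,\Delta_o and E(e_g) = +0.6\,\Delta_o, so that a configuration with n_{t_{2g}} and n_{e_g} electrons has a crystal field stabilization energy \mathrm{CFSE} = (-0.4\,n_{t_{2g}} + 0.6\,n_{e_g})\,\Delta_o. For a d6 ion this gives -0.4\,\Delta_o for the high-spin arrangement t_{2g}^{4}e_{g}^{2} and -2.4\,\Delta_o (at the cost of two extra pairing energies) for the low-spin arrangement t_{2g}^{6}, which is why strong-field ligands favour the low-spin state.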
Chemists tend to employ the simplest model required to predict the properties of interest; for this reason, CFT has been a favorite for the discussions when possible. MO and LF theories are more complicated, but provide a more realistic perspective. The electronic configuration of the complexes gives them some important properties: Color of transition metal complexes Transition metal complexes often have spectacular colors caused by electronic transitions by the absorption of light. For this reason they are often applied as pigments. Most transitions that are related to colored metal complexes are either d–d transitions or charge transfer bands. In a d–d transition, an electron in a d orbital on the metal is excited by a photon to another d orbital of higher energy, therefore d–d transitions occur only for partially-filled d-orbital complexes (d1–9). For complexes having d0 or d10 configuration, charge transfer is still possible even though d–d transitions are not. A charge transfer band entails promotion of an electron from a metal-based orbital into an empty ligand-based orbital (metal-to-ligand charge transfer or MLCT). The converse also occurs: excitation of an electron in a ligand-based orbital into an empty metal-based orbital (ligand-to-metal charge transfer or LMCT). These phenomena can be observed with the aid of electronic spectroscopy; also known as UV-Vis. For simple compounds with high symmetry, the d–d transitions can be assigned using Tanabe–Sugano diagrams. These assignments are gaining increased support with computational chemistry. Colors of lanthanide complexes Superficially lanthanide complexes are similar to those of the transition metals in that some are colored. However, for the common Ln3+ ions (Ln = lanthanide) the colors are all pale, and hardly influenced by the nature of the ligand. The colors are due to 4f electron transitions. As the 4f orbitals in lanthanides are "buried" in the xenon core and shielded from the ligand by the 5s and 5p orbitals they are therefore not influenced by the ligands to any great extent leading to a much smaller crystal field splitting than in the transition metals. The absorption spectra of an Ln3+ ion approximates to that of the free ion where the electronic states are described by spin-orbit coupling. This contrasts to the transition metals where the ground state is split by the crystal field. Absorptions for Ln3+ are weak as electric dipole transitions are parity forbidden (Laporte forbidden) but can gain intensity due to the effect of a low-symmetry ligand field or mixing with higher electronic states (e.g. d orbitals). f-f absorption bands are extremely sharp which contrasts with those observed for transition metals which generally have broad bands. This can lead to extremely unusual effects, such as significant color changes under different forms of lighting. Magnetism Metal complexes that have unpaired electrons are paramagnetic. This can be due to an odd number of electrons overall, or to destabilization of electron-pairing. Thus, monomeric Ti(III) species have one "d-electron" and must be (para)magnetic, regardless of the geometry or the nature of the ligands. Ti(II), with two d-electrons, forms some complexes that have two unpaired electrons and others with none. This effect is illustrated by the compounds TiX2[(CH3)2PCH2CH2P(CH3)2]2: when X = Cl, the complex is paramagnetic (high-spin configuration), whereas when X = CH3, it is diamagnetic (low-spin configuration). 
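The high-spin/low-spin contrast in the titanium example just described can be quantified with the spin-only approximation, a standard estimate that neglects orbital contributions: \mu_\mathrm{eff} \approx \sqrt{n(n+2)}\,\mu_\mathrm{B}, where n is the number of unpaired electrons. A d1 Ti(III) centre (n = 1) is thus expected to show about 1.73 Bohr magnetons, while for the d2 titanium(II) complexes above the high-spin form (n = 2) corresponds to about 2.83 Bohr magnetons and the low-spin form (n = 0) to a diamagnetic response; measured moments often deviate somewhat from these spin-only values because of spin–orbit coupling.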
Ligands provide an important means of adjusting the ground state properties. In bi- and polymetallic complexes, in which the individual centres have an odd number of electrons or that are high-spin, the situation is more complicated. If there is interaction (either direct or through ligand) between the two (or more) metal centres, the electrons may couple (antiferromagnetic coupling, resulting in a diamagnetic compound), or they may enhance each other (ferromagnetic coupling). When there is no interaction, the two (or more) individual metal centers behave as if in two separate molecules. Reactivity Complexes show a variety of possible reactivities: Electron transfers Electron transfer (ET) between metal ions can occur via two distinct mechanisms, inner and outer sphere electron transfers. In an inner sphere reaction, a bridging ligand serves as a conduit for ET. (Degenerate) ligand exchange One important indicator of reactivity is the rate of degenerate exchange of ligands. For example, the rate of interchange of coordinate water in [M(H2O)6]n+ complexes varies over 20 orders of magnitude. Complexes where the ligands are released and rebound rapidly are classified as labile. Such labile complexes can be quite stable thermodynamically. Typical labile metal complexes either have low-charge (Na+), electrons in d-orbitals that are antibonding with respect to the ligands (Zn2+), or lack covalency (Ln3+, where Ln is any lanthanide). The lability of a metal complex also depends on the high-spin vs. low-spin configurations when such is possible. Thus, high-spin Fe(II) and Co(III) form labile complexes, whereas low-spin analogues are inert. Cr(III) can exist only in the low-spin state (quartet), which is inert because of its high formal oxidation state, absence of electrons in orbitals that are M–L antibonding, plus some "ligand field stabilization" associated with the d3 configuration. Associative processes Complexes that have unfilled or half-filled orbitals are often capable of reacting with substrates. Most substrates have a singlet ground-state; that is, they have lone electron pairs (e.g., water, amines, ethers), so these substrates need an empty orbital to be able to react with a metal centre. Some substrates (e.g., molecular oxygen) have a triplet ground state, which results that metals with half-filled orbitals have a tendency to react with such substrates (it must be said that the dioxygen molecule also has lone pairs, so it is also capable to react as a 'normal' Lewis base). If the ligands around the metal are carefully chosen, the metal can aid in (stoichiometric or catalytic) transformations of molecules or be used as a sensor. Classification Metal complexes, also known as coordination compounds, include virtually all metal compounds. The study of "coordination chemistry" is the study of "inorganic chemistry" of all alkali and alkaline earth metals, transition metals, lanthanides, actinides, and metalloids. Thus, coordination chemistry is the chemistry of the majority of the periodic table. Metals and metal ions exist, in the condensed phases at least, only surrounded by ligands. The areas of coordination chemistry can be classified according to the nature of the ligands, in broad terms: Classical (or "Werner Complexes"): Ligands in classical coordination chemistry bind to metals, almost exclusively, via their lone pairs of electrons residing on the main-group atoms of the ligand. Typical ligands are H2O, NH3, Cl−, CN−, en. 
Some of the simplest members of such complexes are described under metal aquo complexes and metal ammine complexes. Examples: [Co(EDTA)]−, [Co(NH3)6]3+, [Fe(C2O4)3]3− Organometallic chemistry: Ligands are organic (alkenes, alkynes, alkyls) as well as "organic-like" ligands such as phosphines, hydride, and CO. Example: (C5H5)Fe(CO)2CH3 Bioinorganic chemistry: Ligands are those provided by nature, especially including the side chains of amino acids, and many cofactors such as porphyrins. Example: hemoglobin contains heme, a porphyrin complex of iron. Example: chlorophyll contains a porphyrin complex of magnesium. Many natural ligands are "classical", especially including water. Cluster chemistry: Ligands include all of the above as well as other metal ions or atoms as well. Example: Ru3(CO)12 In some cases there are combinations of different fields: Example: [Fe4S4(Scysteinyl)4]2−, in which a cluster is embedded in a biologically active species. Mineralogy, materials science, and solid state chemistry – as they apply to metal ions – are subsets of coordination chemistry in the sense that the metals are surrounded by ligands. In many cases these ligands are oxides or sulfides, but the metals are coordinated nonetheless, and the principles and guidelines discussed below apply. In hydrates, at least some of the ligands are water molecules. It is true that the focus of mineralogy, materials science, and solid state chemistry differs from the usual focus of coordination or inorganic chemistry. The former are concerned primarily with polymeric structures and with properties arising from the collective effects of many highly interconnected metals. In contrast, coordination chemistry focuses on reactivity and properties of complexes containing individual metal atoms or small ensembles of metal atoms. Nomenclature of coordination complexes The basic procedure for naming a complex is: When naming a complex ion, the ligands are named before the metal ion. The ligands' names are given in alphabetical order. Numerical prefixes do not affect the order. Multiply occurring monodentate ligands receive a prefix according to the number of occurrences: di-, tri-, tetra-, penta-, or hexa-. Multiply occurring polydentate ligands (e.g., ethylenediamine, oxalate) receive bis-, tris-, tetrakis-, etc. Anions end in o. This replaces the final 'e' when the anion ends with '-ide', '-ate' or '-ite', e.g. chloride becomes chlorido and sulfate becomes sulfato. Formerly, '-ide' was changed to '-o' (e.g. chloro and cyano), but this rule has been modified in the 2005 IUPAC recommendations and the correct forms for these ligands are now chlorido and cyanido. Neutral ligands are given their usual name, with some exceptions: NH3 becomes ammine; H2O becomes aqua or aquo; CO becomes carbonyl; NO becomes nitrosyl. Write the name of the central atom/ion. If the complex is an anion, the central atom's name will end in -ate, and its Latin name will be used if available (except for mercury). The oxidation state of the central atom is to be specified (when it is one of several possible, or zero), and should be written as a Roman numeral (or 0) enclosed in parentheses. The name of the cation is written before the name of the anion 
(if applicable, as in the last example). Examples: [Cd(CN)2(en)2] → dicyanidobis(ethylenediamine)cadmium(II) [CoCl(NH3)5]SO4 → pentaamminechloridocobalt(III) sulfate [Cu(H2O)6]2+ → hexaaquacopper(II) ion [CuCl5NH3]3− → amminepentachloridocuprate(II) ion K4[Fe(CN)6] → potassium hexacyanidoferrate(II) [NiCl4]2− → tetrachloridonickelate(II) ion (The use of chloro- has been removed from the IUPAC naming convention.) The coordination number of ligands attached to more than one metal (bridging ligands) is indicated by a subscript to the Greek symbol μ placed before the ligand name. Thus the dimer of aluminium trichloride is described by Al2Cl4(μ2-Cl)2. Any anionic group can be electronically stabilized by any cation. An anionic complex can be stabilized by a hydrogen cation, becoming an acidic complex which can dissociate to release the cationic hydrogen. This kind of complex compound has a name with "ic" added after the central metal. For example, H2[Pt(CN)4] has the name tetracyanoplatinic(II) acid. Stability constant The affinity of metal ions for ligands is described by a stability constant, also called the formation constant, and is represented by the symbol Kf. It is the equilibrium constant for the assembly of the complex from the constituent metal and ligands, and can be calculated accordingly, as in the following example for a simple case: xM(aq) + yL(aq) ⇌ zZ(aq), where x, y, and z are the stoichiometric coefficients of each species, M stands for the metal ion, L for the Lewis base ligand, and Z for the complex ion. Formation constants vary widely. Large values indicate that the metal has high affinity for the ligand, provided the system is at equilibrium. Sometimes the stability constant will be in a different form known as the constant of destability. This constant is expressed as the inverse of the constant of formation and is denoted as Kd = 1/Kf. This constant represents the reverse reaction for the decomposition of a complex ion into its individual metal and ligand components. When comparing the values for Kd, the larger the value, the more unstable the complex ion is. Because these complex ions form in solution, they can also play a key role in the solubility of other compounds. When a complex ion is formed it can alter the concentrations of its components in the solution. For example: Ag+ + 2NH3 ⇌ [Ag(NH3)2]+ AgCl(s) ⇌ Ag+(aq) + Cl−(aq) If these reactions both occurred in the same reaction vessel, the solubility of the silver chloride would be increased by the presence of aqueous ammonia, because formation of the diamminesilver(I) complex consumes a significant portion of the free silver ions from the solution. By Le Chatelier's principle, this causes the equilibrium reaction for the dissolving of the silver chloride, which has silver ion as a product, to shift to the right. This new solubility can be calculated given the values of Kf and Ksp for the original reactions. The solubility is found essentially by combining the two separate equilibria into one combined equilibrium reaction, and this combined reaction is the one that determines the new solubility. So Kc, the equilibrium constant for the combined reaction and thus the new solubility constant, is given by Kc = Ksp × Kf. Application of coordination compounds As metals only exist in solution as coordination complexes, it follows that this class of compounds is useful in a wide variety of ways. Bioinorganic chemistry In bioinorganic chemistry and bioorganometallic chemistry, coordination complexes serve either structural or catalytic functions. An estimated 30% of proteins contain metal ions. 
Examples include the intensely colored vitamin B12, the heme group in hemoglobin, the cytochromes, the chlorin group in chlorophyll, and carboxypeptidase, a hydrolytic enzyme important in digestion. Another complex ion enzyme is catalase, which decomposes the cell's waste hydrogen peroxide. Synthetic coordination compounds are also used to bind to proteins and especially nucleic acids (e.g. anticancer drug cisplatin). Industry Homogeneous catalysis is a major application of coordination compounds for the production of organic substances. Processes include hydrogenation, hydroformylation, oxidation. In one example, a combination of titanium trichloride and triethylaluminium gives rise to Ziegler–Natta catalysts, used for the polymerization of ethylene and propylene to give polymers of great commercial importance as fibers, films, and plastics. Nickel, cobalt, and copper can be extracted using hydrometallurgical processes involving complex ions. They are extracted from their ores as ammine complexes. Metals can also be separated using the selective precipitation and solubility of complex ions. Cyanide is used chiefly for extraction of gold and silver from their ores. Phthalocyanine complexes are an important class of pigments. Analysis At one time, coordination compounds were used to identify the presence of metals in a sample. Qualitative inorganic analysis has largely been superseded by instrumental methods of analysis such as atomic absorption spectroscopy (AAS), inductively coupled plasma atomic emission spectroscopy (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS). See also Activated complex IUPAC nomenclature of inorganic chemistry Coordination cage Coordination geometry Coordination isomerism Coordination polymers, in which coordination complexes are the repeating units. Inclusion compounds Organometallic chemistry deals with a special class of coordination compounds where organic fragments are bonded to a metal at least through one C atom.
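The silver chloride calculation outlined under Stability constant above can be sketched numerically. The short script below uses typical textbook orders of magnitude for the Ksp of AgCl and the overall formation constant Kf of the diamminesilver(I) ion (assumed here purely for illustration, not taken from the article) and solves the combined equilibrium AgCl(s) + 2 NH3 ⇌ [Ag(NH3)2]+ + Cl−, for which Kc = Ksp × Kf.

```python
import math

# Illustrative (assumed) equilibrium constants, typical textbook orders of magnitude.
Ksp = 1.8e-10   # AgCl(s) <=> Ag+(aq) + Cl-(aq)
Kf = 1.6e7      # Ag+(aq) + 2 NH3(aq) <=> [Ag(NH3)2]+(aq)

Kc = Ksp * Kf   # combined: AgCl(s) + 2 NH3 <=> [Ag(NH3)2]+ + Cl-

c_nh3 = 1.0     # initial ammonia concentration, mol/L

# At equilibrium [Ag(NH3)2+] = [Cl-] = s and [NH3] = c_nh3 - 2s, so
# Kc = s**2 / (c_nh3 - 2*s)**2, i.e. sqrt(Kc) = s / (c_nh3 - 2*s).
r = math.sqrt(Kc)
s = r * c_nh3 / (1 + 2 * r)

print(f"Kc = Ksp*Kf = {Kc:.2e}")
print(f"Solubility of AgCl in {c_nh3:.1f} M NH3: {s:.2e} mol/L")
print(f"Solubility of AgCl in pure water:  {math.sqrt(Ksp):.2e} mol/L")
```

With these inputs the solubility in 1 M ammonia comes out several thousand times larger than in pure water, which is the Le Chatelier shift described in the text.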
Coordination complex
[ "Chemistry" ]
6,743
[ "Coordination chemistry", "nan", "Coordination complexes" ]
7,316
https://en.wikipedia.org/wiki/Hypothetical%20types%20of%20biochemistry
Several forms of biochemistry are agreed to be scientifically viable but are not proven to exist at this time. The kinds of living organisms currently known on Earth all use carbon compounds for basic structural and metabolic functions, water as a solvent, and DNA or RNA to define and control their form. If life exists on other planets or moons it may be chemically similar, though it is also possible that there are organisms with quite different chemistries, for instance involving other classes of carbon compounds, compounds of another element, or another solvent in place of water. The possibility of life-forms being based on "alternative" biochemistries is the topic of an ongoing scientific discussion, informed by what is known about extraterrestrial environments and about the chemical behaviour of various elements and compounds. It is of interest in synthetic biology and is also a common subject in science fiction. The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule and is cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan. Overview of hypothetical types of biochemistry Shadow biosphere A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms. Alternative-chirality biomolecules Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the L form and sugars are of the D form. Molecules using D amino acids or L sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life. It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel. Non-carbon-based biochemistries On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using elements other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe. Sagan used the term "carbon chauvinism" for such an assumption. 
He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos. Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land a probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt. Silicon biochemistry The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical similarities to carbon and is in the same group of the periodic table. Like carbon, silicon can create molecules that are sufficiently large to carry biological information. However, silicon has several drawbacks as a carbon alternative. Carbon is ten times more cosmically abundant than silicon, and its chemistry appears naturally more complex. By 1998, astronomers had identified 84 carbon-containing molecules in the interstellar medium, but only 8 containing silicon, of which half also included carbon. Even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (silicon is roughly 925 times more abundant in Earth's crust than carbon), terrestrial life bases itself on carbon. It may eschew silicon because silicon compounds are less varied, unstable in the presence of water, or block the flow of heat. Relative to carbon, silicon has a much larger atomic radius, and forms much weaker covalent bonds to atoms — except oxygen and fluorine, with which it forms very strong bonds. Almost no multiple bonds to silicon are stable, although silicon does exhibit varied coordination number. Silanes, silicon analogues to the alkanes, react rapidly with water, and long-chain silanes spontaneously decompose. Consequently, most terrestrial silicon is "locked up" in silica, and not a wide variety of biogenic precursors. Silicones, which alternate between silicon and oxygen atoms, are much more stable than silanes, and may even be more stable than the equivalent hydrocarbons in sulfuric acid-rich extraterrestrial environments. Alternatively, the weak bonds in silicon compounds may help maintain a rapid pace of life at cryogenic temperatures. Polysilanols, the silicon homologues to sugars, are among the few compounds soluble in liquid nitrogen. All known silicon macromolecules are artificial polymers, and so "monotonous compared with the combinatorial universe of organic macromolecules". Even so, some Earth life uses biogenic silica: diatoms' silicate skeletons. A. G. Cairns-Smith hypothesized that silicate minerals in water played a crucial role in abiogenesis, in that biogenic carbon compounds formed around their crystal structures. Although not observed in nature, carbon–silicon bonds have been added to biochemistry under directed evolution (artificial selection): a cytochrome c protein from Rhodothermus marinus has been engineered to catalyze new carbon–silicon bonds between hydrosilanes and diazo compounds. 
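The bond-strength argument can be made semi-quantitative with average bond enthalpies; the rounded figures below are common textbook values, quoted only to illustrate the trend. Typical values are D(C–C) ≈ 350 kJ/mol against D(Si–Si) ≈ 225 kJ/mol, and D(C–H) ≈ 410 kJ/mol against D(Si–H) ≈ 320 kJ/mol, whereas D(Si–O) ≈ 450 kJ/mol exceeds D(C–O) ≈ 360 kJ/mol. The pattern mirrors the text: silicon–silicon and silicon–hydrogen bonds are markedly weaker than their carbon counterparts, while the unusually strong silicon–oxygen bond is one reason most terrestrial silicon ends up locked in silica rather than in extended chains.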
Other exotic element-based biochemistries Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon. Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size. Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.) Arsenic as an alternative to phosphorus Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy. It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function. The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case". 
Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms. Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate. Non-water solvents In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid. Water as a solvent limits the forms biochemistry can take. For example, Steven Benner, proposes the polyelectrolyte theory of the gene that claims that for a genetic biopolymer such as, DNA, to function in water, it requires repeated ionic charges. If water is not required for life, these limits on genetic biopolymers are removed. Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist". He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water. Some of the properties of water that are important for life processes include: A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life, many other solvents have dramatically fewer possible reactions, which severely limits evolution. Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen. Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life. A large temperature range over which it is liquid. High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life. A high heat capacity (leading to higher environmental temperature stability). Water is a room-temperature liquid leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition state populations which are needed for life based on chemical reactions. This leads to chemical reaction rates which may be so slow as to preclude the development of any life based on chemical reactions. Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life. A large heat of vaporization leading to stable lakes and oceans. 
The ability to dissolve a wide variety of compounds. The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life. Water as a compound is cosmically abundant, although much of it is in the form of vapor or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces. Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased. There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with the pressure, the question tends not to be does the prospective solvent remain liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere with the pressure of Venus, with of pressure, it can indeed exist in liquid form over a wide temperature range. Ammonia The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin. Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH). Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead. However, ammonia has some problems as a basis for life. 
The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism. A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists between the melting point and boiling point of water at normal atmospheric pressure, that is, between 0 °C and 100 °C. When also held to normal pressure, ammonia's melting and boiling points are −78 °C and −33 °C respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful. A set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at about −77 °C and boils at about 98 °C. Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan. Methane and other hydrocarbons Methane (CH4) is a simple hydrocarbon, that is, a compound of two of the most common elements in the cosmos: hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft. There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry. Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane. 
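Written as overall stoichiometries (spelled out here only to make the chemistry explicit; the free-energy values McKay used are not reproduced), the proposed energy-yielding reactions are the hydrogenation of acetylene and of ethane to methane: \mathrm{C_2H_2 + 3\,H_2 \rightarrow 2\,CH_4} and \mathrm{C_2H_6 + H_2 \rightarrow 2\,CH_4}. Both consume atmospheric hydrogen, which is why a depletion of hydrogen near Titan's surface, discussed below, was read as a possible signature.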
Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University; a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward diffusion at a rate of roughly 1025 molecules per second and disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings are to be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery. Azotosome A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions was computer-modeled in an article published in February 2015. Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter / submillimeter Array (ALMA), completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. Later studies questioned whether acrylonitrile would be able to self-assemble into azotozomes. Hydrogen fluoride Hydrogen fluoride (HF), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. At atmospheric pressure, its melting point is , and its boiling point is ; the difference between the two is a little more than 100 K. HF also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. It has been considered as a possible solvent for life by scientists such as Peter Sneath and Carl Sagan. HF is dangerous to the systems of molecules that Earth-life is made of, but certain other organic compounds, such as paraffin waxes, are stable with it. Like water and ammonia, liquid hydrogen fluoride supports an acid–base chemistry. Using a solvent system definition of acidity and basicity, nitric acid functions as a base when it is added to liquid HF. However, hydrogen fluoride is cosmically rare, unlike water, ammonia, and methane. Hydrogen sulfide Hydrogen sulfide is the closest chemical analog to water, but is less polar and is a weaker inorganic solvent. Hydrogen sulfide is quite plentiful on Jupiter's moon Io and may be in liquid form a short distance below the surface; astrobiologist Dirk Schulze-Makuch has suggested it as a possible solvent for life there. On a planet with hydrogen sulfide oceans, the source of the hydrogen sulfide could come from volcanoes, in which case it could be mixed in with a bit of hydrogen fluoride, which could help dissolve minerals. Hydrogen sulfide life might use a mixture of carbon monoxide and carbon dioxide as their carbon source. They might produce and live on sulfur monoxide, which is analogous to oxygen (O2). 
Hydrogen sulfide, like hydrogen cyanide and ammonia, suffers from the small temperature range where it is liquid, though that, like that of hydrogen cyanide and ammonia, increases with increasing pressure. Silicon dioxide and silicates Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range where it is liquid. However, its melting point is , so it would be impossible to make organic compounds in that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium. Other solvents or cosolvents Other solvents sometimes proposed: Supercritical fluids: supercritical carbon dioxide and supercritical hydrogen. Simple hydrogen compounds: hydrogen chloride. More complex compounds: sulfuric acid, formamide, methanol. Very-low-temperature fluids: liquid nitrogen and hydrogen. High-temperature liquids: sodium chloride. Sulfuric acid in liquid form is strongly polar. It remains liquid at higher temperatures than water, its liquid range being 10 °C to 337 °C at a pressure of 1 atm, although above 300 °C it slowly decomposes. Sulfuric acid is known to be abundant in the clouds of Venus, in the form of aerosol droplets. In a biochemistry that used sulfuric acid as a solvent, the alkene group (C=C), with two carbon atoms joined by a double bond, could function analogously to the carbonyl group (C=O) in water-based biochemistry. A proposal has been made that life on Mars may exist and be using a mixture of water and hydrogen peroxide as its solvent. A 61.2% (by mass) mix of water and hydrogen peroxide has a freezing point of −56.5 °C and tends to super-cool rather than crystallize. It is also hygroscopic, an advantage in a water-scarce environment. Supercritical carbon dioxide has been proposed as a candidate for alternative biochemistry due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes and because "super-Earth"- or "super-Venus"-type planets with dense high-pressure atmospheres may be common. Other speculations Non-green photosynthesizers Physicists have noted that, although photosynthesis on Earth generally involves green plants, a variety of other-colored plants could also support photosynthesis, essential for most life on Earth, and that other colors might be preferred in places that receive a different mix of stellar radiation than Earth. These studies indicate that blue plants would be unlikely; however yellow or red plants may be relatively common. Variable environments Many Earth plants and animals undergo major biochemical changes during their life cycles as a response to changing environmental conditions, for example, by having a spore or hibernation state that can be sustained for years or even millennia between more active life stages. Thus, it would be biochemically possible to sustain life in environments that are only periodically consistent with life as we know it. For example, frogs in cold climates can survive for extended periods of time with most of their body water in a frozen state, whereas desert frogs in Australia can become inactive and dehydrate in dry periods, losing up to 75% of their fluids, yet return to life by rapidly rehydrating in wet periods. Either type of frog would appear biochemically inactive (i.e. 
not living) during dormant periods to anyone lacking a sensitive means of detecting low levels of metabolism. Alanine world and hypothetical alternatives The genetic code may have evolved during the transition from the RNA world to a protein world. The Alanine World Hypothesis postulates that the evolution of the genetic code (the so-called GC phase) started with only four basic amino acids: alanine, glycine, proline and ornithine (now arginine). The evolution of the genetic code ended with 20 proteinogenic amino acids. From a chemical point of view, most of them are Alanine-derivatives particularly suitable for the construction of α-helices and β-sheets basic secondary structural elements of modern proteins. Direct evidence of this is an experimental procedure in molecular biology known as alanine scanning. A hypothetical "Proline World" would create a possible alternative life with the genetic code based on the proline chemical scaffold as the protein backbone. Similarly, a "Glycine World" and "Ornithine World" are also conceivable, but nature has chosen none of them. Evolution of life with Proline, Glycine, or Ornithine as the basic structure for protein-like polymers (foldamers) would lead to parallel biological worlds. They would have morphologically radically different body plans and genetics from the living organisms of the known biosphere. Nonplanetary life Dusty plasma-based In 2007, Vadim N. Tsytovich and colleagues proposed that lifelike behaviors could be exhibited by dust particles suspended in a plasma, under conditions that might exist in space. Computer models showed that, when the dust became charged, the particles could self-organize into microscopic helical structures, and the authors offer "a rough sketch of a possible model of...helical grain structure reproduction". Cosmic necklace-based In 2020, Luis A. Anchordoqu and Eugene M. Chudnovsky of the City University of New York hypothesized that cosmic necklace-based life composed of magnetic monopoles connected by cosmic strings could evolve inside stars. This would be achieved by a stretching of cosmic strings due to the star's intense gravity, thus allowing it to take on more complex forms and potentially form structures similar to the RNA and DNA structures found within carbon-based life. As such, it is theoretically possible that such beings could eventually become intelligent and construct a civilization using the power generated by the star's nuclear fusion. Because such use would use up part of the star's energy output, the luminosity would also fall. For this reason, it is thought that such life might exist inside stars observed to be cooling faster or dimmer than current cosmological models predict. Life on a neutron star Frank Drake suggested in 1973 that intelligent life could inhabit neutron stars. Physical models in 1973 implied that Drake's creatures would be microscopic. Scientists who have published on this topic Scientists who have considered possible alternatives to carbon-water biochemistry include: J. B. S. Haldane (1892–1964), a geneticist noted for his work on abiogenesis. V. Axel Firsoff (1910–1981), British astronomer. Isaac Asimov (1920–1992), biochemist and science fiction writer. Fred Hoyle (1915–2001), astronomer and science fiction writer. Norman Horowitz (1915–2005), Caltech geneticist who devised the first experiments carried out to detect life on Mars. George C. Pimentel (1922–1989), American chemist, University of California, Berkeley. 
Peter Sneath (1923–2011), microbiologist, author of the book Planets and Life. Gerald Feinberg (1933–1992), physicist and Robert Shapiro (1935–2011), chemist, co-authors of the book Life Beyond Earth. Carl Sagan (1934–1996), astronomer, science popularizer, and SETI proponent. Jonathan Lunine (born 1959), American planetary scientist and physicist. Robert Freitas (born 1952), specialist in nano-technology and nano-medicine. John Baross (born 1940), oceanographer and astrobiologist, who chaired a committee of scientists under the United States National Research Council that published a report on life's limiting conditions in 2007. See also Abiogenesis Astrobiology Carbon chauvinism Carbon-based life Earliest known life forms Extraterrestrial life Hachimoji DNA Iron–sulfur world hypothesis Life origination beyond planets Nexus for Exoplanet System Science Non-cellular life Non-proteinogenic amino acids Nucleic acid analogues Planetary habitability Shadow biosphere References Further reading External links Astronomy FAQ Ammonia-based life Silicon-based life Astrobiology Science fiction themes Biological hypotheses Scientific speculation
Hypothetical types of biochemistry
[ "Astronomy", "Biology" ]
6,712
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
7,322
https://en.wikipedia.org/wiki/Creation%20myth
A creation myth or cosmogonic myth is a type of cosmogony, a symbolic narrative of how the world began and how people first came to inhabit it. While in popular usage the term myth often refers to false or fanciful stories, members of cultures often ascribe varying degrees of truth to their creation myths. In the society in which it is told, a creation myth is usually regarded as conveying profound truths, whether metaphorically, symbolically, historically, or literally. They are commonly, although not always, considered cosmogonical myths; that is, they describe the ordering of the cosmos from a state of chaos or amorphousness. Creation myths often share several features. They often are considered sacred accounts and can be found in nearly all known religious traditions. They are all stories with a plot and characters who are either deities, human-like figures, or animals, who often speak and transform easily. They are often set in a dim and nonspecific past that historian of religion Mircea Eliade termed in illo tempore ('at that time'). Creation myths address questions deeply meaningful to the society that shares them, revealing their central worldview and the framework for the self-identity of the culture and individual in a universal context. Creation myths develop in oral traditions and therefore typically have multiple versions; found throughout human culture, they are the most common form of myth. Definitions Creation myth definitions from modern references: A "symbolic narrative of the beginning of the world as understood in a particular tradition and community. Creation myths are of central importance for the valuation of the world, for the orientation of humans in the universe, and for the basic patterns of life and culture." "Creation myths tell us how things began. All cultures have creation myths; they are our primary myths, the first stage in what might be called the psychic life of the species. As cultures, we identify ourselves through the collective dreams we call creation myths, or cosmogonies. ... Creation myths explain in metaphorical terms our sense of who we are in the context of the world, and in so doing they reveal our real priorities, as well as our real prejudices. Our images of creation say a great deal about who we are." A "philosophical and theological elaboration of the primal myth of creation within a religious community. The term myth here refers to the imaginative expression in narrative form of what is experienced or apprehended as basic reality ... The term creation refers to the beginning of things, whether by the will and act of a transcendent being, by emanation from some ultimate source, or in any other way." Religion professor Mircea Eliade defined the word myth in terms of creation: Myth narrates a sacred history; it relates an event that took place in primordial Time, the fabled time of the "beginnings." In other words, myth tells how, through the deeds of Supernatural Beings, a reality came into existence, be it the whole of reality, the Cosmos, or only a fragment of reality – an island, a species of plant, a particular kind of human behavior, an institution. Meaning and function Creation myths have been around since ancient history and have served important societal roles. Over 100 "distinct" ones have been discovered. All creation myths are in one sense etiological because they attempt to explain how the world formed and where humanity came from. Myths attempt to explain the unknown and sometimes teach a lesson.
Ethnologists and anthropologists who study origin myths say that in the modern context theologians try to discern humanity's meaning from revealed truths and scientists investigate cosmology with the tools of empiricism and rationality, but creation myths define human reality in very different terms. In the past, historians of religion and other students of myth thought of such stories as forms of primitive or early-stage science or religion and analyzed them in a literal or logical sense. Today, however, they are seen as symbolic narratives which must be understood in terms of their own cultural context. Charles Long writes: "The beings referred to in the myth – gods, animals, plants – are forms of power grasped existentially. The myths should not be understood as attempts to work out a rational explanation of deity." While creation myths are not literal explications, they do serve to define an orientation of humanity in the world in terms of a birth story. They provide the basis of a worldview that reaffirms and guides how people relate to the natural world, to any assumed spiritual world, and to each other. A creation myth acts as a cornerstone for distinguishing primary reality from relative reality, the origin and nature of being from non-being. In this sense cosmogonic myths serve as a philosophy of life – but one expressed and conveyed through symbol rather than through systematic reason. And in this sense they go beyond etiological myths (which explain specific features in religious rites, natural phenomena, or cultural life). Creation myths also help to orient human beings in the world, giving them a sense of their place in the world and the regard that they must have for humans and nature. Historian David Christian has summarised issues common to multiple creation myths: Classification Mythologists have applied various schemes to classify creation myths found throughout human cultures. Eliade and his colleague Charles Long developed a classification based on some common motifs that reappear in stories the world over. The classification identifies five basic types: Creation ex nihilo in which the creation is through the thought, word, dream, or bodily secretions of a divine being. Earth-diver creation in which a diver, usually a bird or amphibian sent by a creator, plunges to the seabed through a primordial ocean to bring up sand or mud which develops into a terrestrial world. Emergence myths in which progenitors pass through a series of worlds and metamorphoses until reaching the present world. Creation by the dismemberment of a primordial being. Creation by the splitting or ordering of a primordial unity such as the cracking of a cosmic egg or a bringing order from chaos. Marta Weigle further developed and refined this typology to highlight nine themes, adding elements such as deus faber, a creation crafted by a deity, creation from the work of two creators working together or against each other, creation from sacrifice and creation from division/conjugation, accretion/conjunction, or secretion. 
An alternative system based on six recurring narrative themes was designed by Raymond Van Over: Primeval abyss, an infinite expanse of waters or space Originator deity which is awakened or an eternal entity within the abyss Originator deity poised above the abyss Cosmic egg or embryo Originator deity creating life through sound or word Life generating from the corpse or dismembered parts of an originator deity Ex nihilo The myth that God created the world out of nothing – ex nihilo – is central today to Judaism, Christianity, and Islam, and the medieval Jewish philosopher Maimonides felt it was the only concept that the three religions shared. Nonetheless, the concept is not found in the entire Hebrew Bible. The authors of Genesis 1 were concerned not with the origins of matter (the material which God formed into the habitable cosmos), but with assigning roles so that the cosmos should function. In the early 2nd century CE, early Christian scholars were beginning to see a tension between the idea of world-formation and the omnipotence of God, and by the beginning of the 3rd century creation ex nihilo had become a fundamental tenet of Christian theology. Ex nihilo creation is found in creation stories from ancient Egypt, the Rig Veda, and many animistic cultures in Africa, Asia, Oceania, and North America. In most of these stories, the world is brought into being by the speech, dream, breath, or pure thought of a creator but creation ex nihilo may also take place through a creator's bodily secretions. The literal translation of the phrase ex nihilo is "from nothing" but in many creation myths the line is blurred whether the creative act would be better classified as a creation ex nihilo or creation from chaos. In ex nihilo creation myths, the potential and the substance of creation springs from within the creator. Such a creator may or may not be existing in physical surroundings such as darkness or water, but does not create the world from them, whereas in creation from chaos the substance used for creation is pre-existing within the unformed void. Creation from chaos In creation from chaos myths, there is nothing initially but a formless, shapeless expanse. In these stories the word "chaos" means "disorder", and this formless expanse, which is also sometimes called a void or an abyss, contains the material with which the created world will be made. Chaos may be described as having the consistency of vapor or water, dimensionless, and sometimes salty or muddy. These myths associate chaos with evil and oblivion, in contrast to "order" (cosmos) which is the good. The act of creation is the bringing of order from disorder, and in many of these cultures it is believed that at some point the forces preserving order and form will weaken and the world will once again be engulfed into the abyss. One example is the Genesis creation narrative from the first chapter of the Book of Genesis. World parent There are two types of world parent myths, both describing a separation or splitting of a primeval entity, the world parent or parents. One form describes the primeval state as an eternal union of two parents, and the creation takes place when the two are pulled apart. The two parents are commonly identified as Sky (usually male) and Earth (usually female), who were so tightly bound to each other in the primeval state that no offspring could emerge. These myths often depict creation as the result of a sexual union and serve as genealogical record of the deities born from it. 
In the second form of world parent myths, creation itself springs from dismembered parts of the body of the primeval being. Often, in these stories, the limbs, hair, blood, bones, or organs of the primeval being are somehow severed or sacrificed to transform into sky, earth, animal or plant life, and other worldly features. These myths tend to emphasize creative forces as animistic in nature rather than sexual, and depict the sacred as the elemental and integral component of the natural world. One example of this is the Norse creation myth described in "Völuspá", the first poem in the Poetic Edda, and in Gylfaginning. Emergence In emergence myths, humanity emerges from another world into the one they currently inhabit. The previous world is often considered the womb of the earth mother, and the process of emergence is likened to the act of giving birth. The role of midwife is usually played by a female deity, like the spider woman of several mythologies of Indigenous peoples in the Americas. Male characters rarely figure into these stories, and scholars often consider them in counterpoint to male-oriented creation myths, like those of the ex nihilo variety. Emergence myths commonly describe the creation of people and/or supernatural beings as a staged ascent or metamorphosis from nascent forms through a series of subterranean worlds to arrive at their current place and form. Often the passage from one world or stage to the next is impelled by inner forces, a process of germination or gestation from earlier, embryonic forms. The genre is most commonly found in Native American cultures where the myths frequently link the final emergence of people from a hole opening to the underworld to stories about their subsequent migrations and eventual settlement in their current homelands. Earth-diver The earth-diver is a common character in various traditional creation myths. In these stories a supreme being usually sends an animal (most often a type of bird, but also crustaceans, insects, and fish in some narratives) into the primal waters to find bits of sand or mud with which to build habitable land. Some scholars interpret these myths psychologically while others interpret them cosmogonically. In both cases emphasis is placed on beginnings emanating from the depths. Motif distribution According to Gudmund Hatt and Tristram P. Coffin, Earth-diver myths are common in Native American folklore, among the following populations: Shoshone, Meskwaki, Blackfoot, Chipewyan, Newettee, Yokuts of California, Mandan, Hidatsa, Cheyenne, Arapaho, Ojibwe, Yuchi, and Cherokee. American anthropologist Gladys Reichard located the distribution of the motif across "all parts of North America", save for "the extreme north, northeast, and southwest". In a 1977 study, anthropologist Victor Barnouw surmised that the earth-diver motif appeared in "hunting-gathering societies", mainly among northerly groups such as the Hare, Dogrib, Kaska, Beaver, Carrier, Chipewyan, Sarsi, Cree, and Montagnais. Similar tales are also found among the Chukchi and Yukaghir, the Tatars, and many Finno-Ugric traditions, as well as among the Buryat and the Samoyed. In addition, the earth-diver motif also exists in narratives from Eastern Europe, namely Romani, Romanian, Slavic (namely, Bulgarian, Polish, Ukrainian, and Belarusian), and Lithuanian mythological traditions. 
The pattern of distribution of these stories suggest they have a common origin in the eastern Asiatic coastal region, spreading as peoples migrated west into Siberia and east to the North American continent. However, there are examples of this mytheme found well outside of this boreal distribution pattern, for example the West African Yoruba creation myth of Ọbatala and Oduduwa. Native American narrative Characteristic of many Native American myths, earth-diver creation stories begin as beings and potential forms linger asleep or suspended in the primordial realm. The earth-diver is among the first of them to awaken and lay the necessary groundwork by building suitable lands where the coming creation will be able to live. In many cases, these stories will describe a series of failed attempts to make land before the solution is found. Among the indigenous peoples of the Americas, the earth-diver cosmogony is attested in Iroquois mythology: a female sky deity falls from the heavens, and certain animals, the beaver, the otter, the duck, and the muskrat dive in the waters to fetch mud to construct an island. In a similar story from the Seneca, people lived in a sky realm. One day, the chief's daughter was afflicted with a mysterious illness, and the only cure recommended for her (revealed in a dream) was to lie beside a tree and to have it be dug up. The people do so, but a man complains that the tree was their livelihood, and kicks the girl through the hole. She ends up falling from the sky to a world of only water, but is rescued by waterfowl. A turtle offers to bear her on its shell, but asked where would be a definitive dwelling place for her. They decide to create land, and the toad dives into the depths of the primal sea to get pieces of soil. The toad puts it on the turtle's back, which grows larger with every deposit of soil. In another version from the Wyandot, the Wyandot lived in heaven. The daughter of the Big Chief (or Mighty Ruler) was sick, so the medicine man recommends that they dig up the wild apple tree that stands next to the Lodge of the Mighty Ruler, because the remedy is to be found on its roots. However, as the tree has been dug out, the ground begins to sink away, and the treetops catch and carry down the sick daughter with it. As the girl falls from the skies, two swans rescue her on their backs. The birds decide to summon all the Swimmers and the Water Tribes. Many volunteer to dive into the Great Water to fetch bits of earth from the bottom of the sea, but only the toad (female, in the story) is the one successful. See also Abiogenesis Anthropology of religion Australian Aboriginal religion and mythology Big Bang Ceremonial pole Chinese creation myths Creationism Young Earth creationism Creator deity Evolutionary origin of religion Mother goddess Origin myth Origin of death Religious cosmology Theism Xirang References Bibliography External links Comparative mythology Cosmogony Religious cosmologies Traditional stories
Creation myth
[ "Astronomy" ]
3,379
[ "Cosmogony", "Creation myths" ]
7,327
https://en.wikipedia.org/wiki/Copernican%20principle
In physical cosmology, the Copernican principle states that humans are not privileged observers of the universe, that observations from the Earth are representative of observations from the average position in the universe. Named for Copernican heliocentrism, it is a working assumption that arises from a modified cosmological extension of Copernicus' argument of a moving Earth. Origin and implications Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th-17th century paradigm shift away from the Ptolemaic system, which placed Earth at the center of the universe. Copernicus proposed that the motion of the planets could be explained by reference to an assumption that the Sun is centrally located and stationary in contrast to the geocentrism. He argued that the apparent retrograde motion of the planets is an illusion caused by Earth's movement around the Sun, which the Copernican model placed at the centre of the universe. Copernicus himself was mainly motivated by technical dissatisfaction with the earlier system and not by support for any mediocrity principle. Although the Copernican heliocentric model is often described as "demoting" Earth from its central role it had in the Ptolemaic geocentric model, it was successors to Copernicus, notably the 16th century Giordano Bruno, who adopted this new perspective. The Earth's central position had been interpreted as being in the "lowest and filthiest parts". Instead, as Galileo said, the Earth is part of the "dance of the stars" rather than the "sump where the universe's filth and ephemera collect". In the late 20th Century, Carl Sagan asked, "Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people." While the Copernican principle is derived from the negation of past assumptions, such as geocentrism, heliocentrism, or galactocentrism which state that humans are at the center of the universe, the Copernican principle is stronger than acentrism, which merely states that humans are not at the center of the universe. The Copernican principle assumes acentrism and also states that human observers or observations from Earth are representative of observations from the average position in the universe. Michael Rowan-Robinson emphasizes the Copernican principle as the threshold test for modern thought, asserting that: "It is evident that in the post-Copernican era of human history, no well-informed and rational person can imagine that the Earth occupies a unique position in the universe." Most modern cosmology is based on the assumption that the cosmological principle is almost, but not exactly, true on the largest scales. The Copernican principle represents the irreducible philosophical assumption needed to justify this, when combined with the observations. If one assumes the Copernican principle and observes that the universe appears isotropic or the same in all directions from the vantage point of Earth, then one can infer that the universe is generally homogeneous or the same everywhere (at any given time) and is also isotropic about any given point. These two conditions make up the cosmological principle. In practice, astronomers observe that the universe has heterogeneous or non-uniform structures up to the scale of galactic superclusters, filaments and great voids. 
In the current Lambda-CDM model, the predominant model of cosmology in the modern era, the universe is predicted to become more and more homogeneous and isotropic when observed on larger and larger scales, with little detectable structure on scales of more than about 260 million parsecs. However, recent evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales. Furthermore, various large-scale structures have been discovered, such as the Clowes–Campusano LQG, the Sloan Great Wall, U1.11, the Huge-LQG, the Hercules–Corona Borealis Great Wall, and the Giant Arc, all which indicate that homogeneity might be violated. On scales comparable to the radius of the observable universe, we see systematic changes with distance from Earth. For instance, at greater distances, galaxies contain more young stars and are less clustered, and quasars appear more numerous. If the Copernican principle is assumed, then it follows that this is evidence for the evolution of the universe with time: this distant light has taken most of the age of the universe to reach Earth and shows the universe when it was young. The most distant light of all, cosmic microwave background radiation, is isotropic to at least one part in a thousand. Bondi and Thomas Gold used the Copernican principle to argue for the perfect cosmological principle which maintains that the universe is also homogeneous in time, and is the basis for the steady-state cosmology. However, this strongly conflicts with the evidence for cosmological evolution mentioned earlier: the universe has progressed from extremely different conditions at the Big Bang, and will continue to progress toward extremely different conditions, particularly under the rising influence of dark energy, apparently toward the Big Freeze or Big Rip. Since the 1990s the term has been used (interchangeably with "the Copernicus method") for J. Richard Gott's Bayesian-inference-based prediction of duration of ongoing events, a generalized version of the Doomsday argument. Tests of the principle The Copernican principle has never been proven, and in the most general sense cannot be proven, but it is implicit in many modern theories of physics. Cosmological models are often derived with reference to the cosmological principle, slightly more general than the Copernican principle, and many tests of these models can be considered tests of the Copernican principle. Historical Before the term Copernican principle was even coined, past assumptions, such as geocentrism, heliocentrism, and galactocentrism, which state that Earth, the Solar System, or the Milky Way respectively were located at the center of the universe, were shown to be false. The Copernican Revolution dethroned Earth to just one of many planets orbiting the Sun. Proper motion was mentioned by Halley. William Herschel found that the Solar System is moving through space within our disk-shaped Milky Way galaxy. Edwin Hubble showed that the Milky Way galaxy is just one of many galaxies in the universe. Examination of the galaxy's position and motion in the universe led to the Big Bang theory and the whole of modern cosmology. 
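The "Copernicus method" of J. Richard Gott mentioned above reduces to a simple piece of interval arithmetic: if the moment of observation is assumed to fall at a random point within an event's total lifetime, the elapsed duration alone yields a confidence interval for the remaining duration. A minimal sketch of that arithmetic, assuming the standard vague-prior form of the argument (the function name is illustrative, not taken from any published code):

```python
def gott_interval(elapsed, confidence=0.95):
    """Gott's delta-t argument: with the given confidence, the remaining
    duration of an ongoing event lies between elapsed*(1-c)/(1+c) and
    elapsed*(1+c)/(1-c), assuming we observe it at a random moment of its life."""
    c = confidence
    return elapsed * (1 - c) / (1 + c), elapsed * (1 + c) / (1 - c)

# Example: something observed to have lasted 39 years is predicted, at 95%
# confidence, to persist for between about 1 and 1521 more years.
low, high = gott_interval(39.0)
print(f"{low:.1f} to {high:.1f} years")
```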
Modern tests Recent and planned tests relevant to the cosmological and Copernican principles include: time drift of cosmological redshifts; modelling the local gravitational potential using reflection of cosmic microwave background (CMB) photons; the redshift dependence of the luminosity of supernovae; the kinetic Sunyaev–Zeldovich effect in relation to dark energy; cosmic neutrino background; the integrated Sachs–Wolfe effect testing the isotropy and homogeneity of the CMB; Some authors claim that the KBC Void violates the cosmological principle and thus the Copernican principle. However, other authors claim that the KBC void is consistent with the cosmological principle and the Copernican principle. Physics without the principle The standard model of cosmology, the Lambda-CDM model, assumes the Copernican principle and the more general cosmological principle. Some cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible models. A prominent example in this context is inhomogeneous cosmology, to model the observed accelerating universe and cosmological constant. Instead of using the current accepted idea of dark energy, this model proposes the universe is much more inhomogeneous than currently assumed, and instead, we are in an extremely large low-density void. To match observations we would have to be very close to the centre of this void, immediately contradicting the Copernican principle. While the Big Bang model in cosmology is sometimes said to derive from the Copernican principle in conjunction with redshift observations, the Big Bang model can still be assumed to be valid in absence of the Copernican principle, because the cosmic microwave background, primordial gas clouds, and the structure, evolution, and distribution of galaxies all provide evidence, independent of the Copernican principle, in favor of the Big Bang. However, the key tenets of the Big Bang model, such as the expansion of the universe, become assumptions themselves akin to the Copernican principle, rather than derived from the Copernican principle and observations. See also Absolute time and space Anthropic principle Axis of evil (cosmology) Hubble Bubble (astronomy) Mediocrity principle Particle chauvinism P symmetry Rare Earth hypothesis The Principle (2014 film) Cosmological principle References Physical cosmology Principles Principle Razors (philosophy) Concepts in astronomy
Copernican principle
[ "Physics", "Astronomy" ]
1,962
[ "Astronomical sub-disciplines", "History of astronomy", "Concepts in astronomy", "Theoretical physics", "Astrophysics", "Copernican Revolution", "Physical cosmology" ]
7,331
https://en.wikipedia.org/wiki/Cellular%20digital%20packet%20data
Cellular Digital Packet Data (CDPD) is an obsolete wide-area mobile data service which transferred data over unused bandwidth in the 800–900 MHz range normally used by Advanced Mobile Phone System (AMPS) mobile phones. Speeds up to 19.2 kbit/s were possible, though real-world speeds seldom reached higher than 9.6 kbit/s. The service was discontinued in conjunction with the retirement of the parent AMPS service; it has been functionally replaced by faster services such as 1xRTT, Evolution-Data Optimized, and UMTS/High Speed Packet Access (HSPA). Developed in the early 1990s, CDPD loomed large on the horizon as a future technology. However, it had difficulty competing against existing slower but less expensive Mobitex and DataTAC systems, and never quite gained widespread acceptance before newer, faster standards such as General Packet Radio Service (GPRS) became dominant. CDPD had very limited consumer products. AT&T Wireless first sold the technology in the United States under the PocketNet brand. It was one of the first wireless web services. Digital Ocean, Inc., an original equipment manufacturer licensee of the Apple Newton, sold the Seahorse product in 1996, which integrated the Newton handheld computer, an AMPS/CDPD handset/modem, and a web browser, winning the CTIA's hardware product of the year award as a smartphone, arguably the world's first. A company named OmniSky provided service for Palm V devices. OmniSky filed for bankruptcy in 2001 and was then picked up by EarthLink Wireless. The technician who developed the technical support for all of the wireless technology was Myron Feasel, who was brought from company to company, ending up at Palm. Sierra Wireless sold PCMCIA devices and Airlink sold a serial modem. Both of these were used by police and fire departments for dispatch. AT&T Wireless later sold CDPD under the Wireless Internet brand (not to be confused with Wireless Internet Express, their brand for GPRS/EDGE data). PocketNet was generally considered a failure with competition from 2G services such as Sprint's Wireless Web. AT&T Wireless sold four PocketNet Phone models to the public: the Samsung Duette and the Mitsubishi MobileAccess-120 were AMPS/CDPD PocketNet phones introduced in October 1997; and two IS-136/CDPD Digital PocketNet phones, the Mitsubishi T-250 and the Ericsson R289LX. Despite its limited success as a consumer offering, CDPD was adopted in a number of enterprise and government networks. It was particularly popular as a first-generation wireless data solution for telemetry devices (machine-to-machine communications) and for public safety mobile data terminals. In 2004, major carriers in the United States announced plans to shut down CDPD service. In July 2005, the AT&T Wireless and Cingular Wireless CDPD networks were shut down. CDPD Network and system Primary elements of a CDPD network are: 1. End systems: physical & logical end systems that exchange information 2. Intermediate systems: CDPD infrastructure elements that store, forward & route the information There are two kinds of end systems: 1. Mobile end system: a subscriber unit used to access the CDPD network over a wireless interface 2. Fixed end system: a common host/server that is connected to the CDPD backbone, providing access to specific applications and data There are two kinds of intermediate systems: 1. Generic intermediate system: a simple router with no knowledge of mobility issues 2.
Mobile data intermediate system: a specialized intermediate system that routes data based on its knowledge of the current location of the mobile end systems. It is a set of hardware and software functions that provide switching, accounting, registration, authentication, encryption, and so on. The design of CDPD was based on several design objectives that are often repeated in designing overlay networks or new networks. Much emphasis was placed on open architectures and on reusing as much of the existing RF infrastructure as possible. The design goals of CDPD included location independence and independence from the service provider, so that coverage could be maximized; application transparency and multiprotocol support; and interoperability between products from multiple vendors. External links CIO CDPD article History and Development Detailed Description About CDPD First generation mobile telecommunications
Cellular digital packet data
[ "Technology" ]
885
[ "Mobile telecommunications", "First generation mobile telecommunications" ]
7,346
https://en.wikipedia.org/wiki/Centimetre%E2%80%93gram%E2%80%93second%20system%20of%20units
The centimetre–gram–second system of units (CGS or cgs) is a variant of the metric system based on the centimetre as the unit of length, the gram as the unit of mass, and the second as the unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways in which the CGS system was extended to cover electromagnetism. The CGS system has been largely supplanted by the MKS system based on the metre, kilogram, and second, which was in turn extended and replaced by the International System of Units (SI). In many fields of science and engineering, SI is the only system of units in use, but CGS is still prevalent in certain subfields. In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward: the unit-conversion factors are all powers of 10, since 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS unit of force is the dyne, which is defined as 1 g⋅cm/s2, so the SI unit of force, the newton (1 kg⋅m/s2), is equal to 100,000 dyn. On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is less straightforward. Formulas for physical laws of electromagnetism (such as Maxwell's equations) take a form that depends on which system of units is being used, because the electromagnetic quantities are defined differently in SI and in CGS. Furthermore, within CGS, there are several plausible ways to define electromagnetic quantities, leading to different "sub-systems", including Gaussian units, "ESU", "EMU", and Heaviside–Lorentz units. Among these choices, Gaussian units are the most common today, and "CGS units" is often intended to refer to CGS-Gaussian units. History The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson, 1st Baron Kelvin, recommended the general adoption of centimetre, gram and second as fundamental units, and the expression of all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...". The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard. Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. CGS units have been deprecated in favor of SI units by NIST, as well as by organizations such as the American Physical Society and the International Astronomical Union. SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are still commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics.
The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units. Definition of CGS units in mechanics In mechanics, the quantities in the CGS and SI systems are defined identically. The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems. There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous relationship between derived units: v = x/t (definition of velocity), F = m⋅a (Newton's second law of motion), E = F⋅x (energy defined in terms of work), p = F/A (pressure defined as force per unit area), η = τ/(dv/dx) (dynamic viscosity defined as shear stress per unit velocity gradient). Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time: 1 unit of pressure = 1 unit of force / (1 unit of length)2 = 1 unit of mass / (1 unit of length × (1 unit of time)2) 1 Ba = 1 g/(cm⋅s2) 1 Pa = 1 kg/(m⋅s2). Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems: 1 Ba = 1 g/(cm⋅s2) = 10−3 kg / (10−2 m⋅s2) = 10−1 kg/(m⋅s2) = 10−1 Pa. Definitions and conversion factors of CGS units in mechanics Derivation of CGS units in electromagnetism CGS approach to electromagnetic units The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulas expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulas. This illustrates the fundamental difference in the ways the two systems are built: In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2 × 10−7 newtons per metre of length. This definition results in all SI electromagnetic units being numerically consistent (subject to factors of some integer powers of 10) with those of the CGS-EMU system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permeability) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t, q = I⋅t, resulting in the unit of electric charge, the coulomb (C), being defined as 1 C = 1 A⋅s.
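The purely mechanical correspondence just described is easy to make concrete in a few lines of code. The sketch below uses only the exact powers of ten stated above (1 m = 100 cm, 1 kg = 1000 g); the constant names are illustrative rather than drawn from any particular library:

```python
import math

# Exact scale factors between CGS and SI base units (mechanics only).
CM_PER_M = 100.0       # 1 m  = 100 cm
G_PER_KG = 1000.0      # 1 kg = 1000 g

# Derived CGS units expressed in their SI counterparts.
DYNE_IN_NEWTONS = (1.0 / G_PER_KG) * (1.0 / CM_PER_M)     # 1 dyn = 1 g*cm/s^2  = 1e-5 N
ERG_IN_JOULES = DYNE_IN_NEWTONS / CM_PER_M                # 1 erg = 1 dyn*cm    = 1e-7 J
BARYE_IN_PASCALS = DYNE_IN_NEWTONS * CM_PER_M ** 2        # 1 Ba  = 1 dyn/cm^2  = 0.1 Pa

assert math.isclose(DYNE_IN_NEWTONS, 1e-5)
assert math.isclose(ERG_IN_JOULES, 1e-7)
assert math.isclose(BARYE_IN_PASCALS, 0.1)
print(DYNE_IN_NEWTONS, ERG_IN_JOULES, BARYE_IN_PASCALS)
```

The last value reproduces the 1 Ba = 10−1 Pa relation derived above.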
The CGS system variant avoids introducing new base quantities and units, and instead defines all electromagnetic quantities by expressing the physical laws that relate electromagnetic phenomena to mechanics with only dimensionless constants, and hence all units for these quantities are directly derived from the centimetre, gram, and second. In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant. Maxwell's equations take the same general form in each of these systems, differing only in the proportionality constants that appear. Electrostatic units (ESU) In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant, F = q1⋅q2/r2 (and current is then defined as charge per unit time). The ESU unit of charge, the franklin (Fr), also known as the statcoulomb or esu charge, is therefore defined as follows: two equal point charges spaced 1 centimetre apart are said to be of 1 franklin each if the electrostatic force between them is 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times the square root of a dyne: 1 Fr = 1 cm⋅dyn1/2 = 1 g1/2⋅cm3/2⋅s−1. The unit of current is then defined as 1 Fr per second. In the CGS-ESU system, charge q therefore has the dimension M1/2L3/2T−1. Other units in the CGS-ESU system include the statampere (1 statC/s) and statvolt (1 erg/statC). In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension. Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'. Unit symbols All electromagnetic units in the CGS-ESU system that have not been given names of their own are named as the corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu", and similarly with the corresponding symbols. Electromagnetic units (EMU) In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well.) The EMU unit of current, the biot (Bi), also known as the abampere or emu current, is therefore defined as follows: two thin, parallel, infinitely long wires spaced 1 centimetre apart are said to carry a current of 1 biot each if the magnetic force between them is 2 dynes per centimetre of length. Therefore, in electromagnetic CGS units, a biot is equal to a square root of a dyne: 1 Bi = 1 dyn1/2 = 1 g1/2⋅cm1/2⋅s−1. The unit of charge in CGS-EMU is then the abcoulomb: 1 abC = 1 Bi⋅s = 1 g1/2⋅cm1/2. Dimensionally in the CGS-EMU system, charge q is therefore equivalent to M1/2L1/2. Hence, neither charge nor current is an independent physical quantity in the CGS-EMU system. EMU notation All electromagnetic units in the CGS-EMU system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu". Practical CGS units The practical CGS system is a hybrid system that uses the volt and the ampere as the units of voltage and current respectively. Doing this avoids the inconveniently large and small electrical units that arise in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and ampere, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry (inductance) are consequently also used in the practical system and are the same as the SI units.
The magnetic units are those of the emu system. The electrical units, other than the volt and ampere, are determined by the requirement that any equation involving only electrical and kinematical quantities that is valid in SI should also be valid in the system. For example, since electric field strength is voltage per unit length, its unit is the volt per centimetre, which is one hundred times the SI unit. The system is electrically rationalized and magnetically unrationalized; i.e., and , but the above formula for is invalid. A closely related system is the International System of Electric and Magnetic Units, which has a different unit of mass so that the formula for ′ is invalid. The unit of mass was chosen to remove powers of ten from contexts in which they were considered to be objectionable (e.g., and ). Inevitably, the powers of ten reappeared in other contexts, but the effect was to make the familiar joule and watt the units of work and power respectively. The ampere-turn system is constructed in a similar way by considering magnetomotive force and magnetic field strength to be electrical quantities and rationalizing the system by dividing the units of magnetic pole strength and magnetization by 4π. The units of the first two quantities are the ampere and the ampere per centimetre respectively. The unit of magnetic permeability is that of the emu system, and the magnetic constitutive equations are and . Magnetic reluctance is given a hybrid unit to ensure the validity of Ohm's law for magnetic circuits. In all the practical systems ε0 = 8.8542 × 10−14 A⋅s/(V⋅cm), μ0 = 1 V⋅s/(A⋅cm), and c2 = 1/(4π × 10−9 ε0μ0). Other variants There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units. Electromagnetic units in various CGS systems In this table, c = 29,979,245,800 ≈ 3 × 1010 is the numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the units are corresponding but not equal. For example, according to the capacitance row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10−9 c2) cm in ESU; but it is incorrect to replace "1 F" with "(10−9 c2) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units. By contrast it is always correct to replace, e.g., "1 m" with "100 cm" within an equation or formula.) Physical constants in CGS units Advantages and disadvantages Lack of unique unit names leads to potential confusion: "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. With its system of uniquely named units, the SI removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt. In the CGS-Gaussian system, electric and magnetic fields have the same units, 4πε0 is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light. The Heaviside–Lorentz system has these properties as well (with ε0 equaling 1).
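The correspondence rules quoted above for the table (for example, 1 F in SI corresponding to 10−9 c2 cm in ESU) can be checked numerically. A minimal sketch, with c taken as the numeric value of the speed of light in centimetres per second; the variable names are illustrative, and the charge line uses the analogous well-known correspondence 1 C ≘ 10−1 c statC rather than anything stated in this excerpt:

```python
c_cgs = 2.99792458e10   # numeric value of the speed of light in cm/s

# Capacitance: 1 farad corresponds to (1e-9 * c^2) centimetres in ESU.
farad_in_esu_cm = 1e-9 * c_cgs ** 2
print(f"1 F corresponds to {farad_in_esu_cm:.4g} cm (ESU)")        # about 8.988e11 cm

# Charge: 1 coulomb corresponds to (1e-1 * c) statcoulombs (franklins).
coulomb_in_statcoulombs = 1e-1 * c_cgs
print(f"1 C corresponds to {coulomb_in_statcoulombs:.4g} statC")   # about 2.998e9 statC
```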
In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π, and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering and relates directly to the geometric symmetry of the system being described by the equation. Specialized unit systems are used to simplify formulas further than either SI or CGS do, by eliminating constants through a convention of normalizing quantities with respect to some system of natural units. For example, in particle physics a system is in use where every quantity is expressed by only one unit of energy, the electronvolt, with lengths, times, and so on all converted into units of energy by inserting factors of the speed of light c and the reduced Planck constant ħ. This unit system is convenient for calculations in particle physics, but is impractical in other contexts. See also Outline of metrology and measurement International System of Units International System of Electrical and Magnetic Units List of metric units List of scientific units named after people Metre–tonne–second system of units United States customary units Foot–pound–second system of units References and notes General literature Metrology Systems of units Metric system British Science Association
Centimetre–gram–second system of units
[ "Mathematics" ]
3,376
[ "Quantity", "Systems of units", "Units of measurement" ]
7,376
https://en.wikipedia.org/wiki/Cosmic%20microwave%20background
The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the electromagnetic spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The surface of last scattering refers to a shell at just the right distance in space so that photons are now received which were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peaks detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters. Features The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. Intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of 2.725 K. Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature. The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation.
The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision and such measurements are critical for cosmological theories. In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarization at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 less strong than the temperature anisotropy; it supplements the temperature data as they are correlated. The B-mode signal is even weaker but may contain additional cosmological data. The anisotropy is related to physical origin of the polarization. Excitation of an electron by linear polarized light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has quadrupole anisotropy, residual polarization will be seen. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time. The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (109) the number density of matter in the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun. The energy density of the CMB is , about 411 photons/cm3. History Early speculations In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K. Discovery The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Robert H. Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. 
The antenna was constructed in 1959 to support Project Echo—the National Aeronautics and Space Administration's passive communications satellites, which used large earth orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery. Cosmic origin The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the solar system, from galaxies, from intergalactic plasma and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity vs frequency or spectrum needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin. Progress on theory In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe. Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name relic radiation, calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. COBE After a lull in the 1970s caused in part by the many experimental difficulties in measuring CMB at high precision, increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy. The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed prediction encouraged larger and more ambitious experiments. The NASA Cosmic Background Explorer (COBE) satellite orbited Earth in 1989–1996 detected and quantified the large scale anisotropies at the limit of its detection capabilities. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery. Precision cosmology Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the two decades. 
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE lacked sufficient resolution to resolve. This peak corresponds to large-scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustic oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation. Observations after COBE Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, the Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. Wilkinson Microwave Anisotropy Probe In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large-scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid-switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission were released in five installments, the last being the nine-year summary. The results are broadly consistent with Lambda-CDM models based on six free parameters and fit into Big Bang cosmology with cosmic inflation. Degree Angular Scale Interferometer Atacama Cosmology Telescope Planck Surveyor A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. 
The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about years old. The imprint reflects ripples that arose as early, in the existence of the universe, as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is billion years old and the Hubble constant was measured to be . South Pole Telescope Theoretical models The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to , it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature. 
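The cooling described above follows the simple scaling Tr = 2.725 K × (1 + z), which is made explicit in the next subsection. The following minimal Java sketch is purely illustrative (the class and method names are invented, not taken from any cited source): it evaluates the relation directly and, at the redshift of last scattering z ≈ 1089, gives roughly 2,970 K, consistent with the ~3000 K recombination temperature quoted above.

public class CmbTemperature {
    static final double T0_KELVIN = 2.725; // present-day CMB colour temperature

    // Colour temperature of the CMB at redshift z, from Tr = T0 * (1 + z)
    static double temperatureAtRedshift(double z) {
        return T0_KELVIN * (1.0 + z);
    }

    public static void main(String[] args) {
        // At the surface of last scattering, z is about 1089:
        System.out.printf("T(z = 1089) = %.0f K%n", temperatureAtRedshift(1089)); // prints about 2970 K
    }
}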
Predictions based on the Big Bang model In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about , which is much less than the ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale length. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): Tr = 2.725 K × (1 + z) The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred. Primary anisotropy The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures. 
The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—the ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. Adiabatic density perturbations: In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic. Isocurvature density perturbations: In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations. The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, which arise when the treatment of the primordial plasma as a fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe, and the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring. These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt. The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed. 
However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old. Late time anisotropy Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: Small-scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.) The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad-angle polarization is correlated with the broad-angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Ages, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the cosmic microwave background to be gravitationally redshifted or blueshifted due to changing gravitational fields. Alternative theories The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists". However, there are challenges to the standard Big Bang framework for explaining CMB data. In particular, standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data. 
As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, . This value of is one of the best results of experimental cosmology, and the steady state model can predict it. However, alternative models have their own set of problems and they have only made post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for and challenges to the standard explanation. Polarization The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl mode). This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. E-modes The E-modes arise from Thomson scattering in a heterogeneous plasma. E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). B-modes B-modes are expected to be an order of magnitude weaker than the E-modes. B-modes are not produced by standard scalar-type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the Big Bang. However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal. Primordial gravitational waves Models of "slow-roll" cosmic inflation in the early universe predict primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation, and its strength can confirm or exclude different models of inflation. Claims that this characteristic pattern of B-mode polarization had been measured by the BICEP2 instrument were later attributed to cosmic dust due to new results from the Planck experiment. Gravitational lensing The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Multipole analysis The CMB angular anisotropies are usually presented in terms of power per multipole. The map of temperature across the sky, T(θ,φ), is written as a sum of spherical harmonics, T(θ,φ) = Σℓ,m aℓm Yℓm(θ,φ), where the coefficient aℓm measures the strength of the angular oscillation in Yℓm(θ,φ), ℓ is the multipole number, and m is the azimuthal number. The azimuthal variation is not significant and is removed by applying the angular correlation function, giving the power spectrum term Cℓ, the average of |aℓm|² over m. Increasing values of ℓ correspond to higher multipole moments of the CMB, meaning more rapid variation with angle. CMBR monopole term (ℓ = 0) The monopole term, , is the constant isotropic mean temperature of the CMB, with one standard deviation confidence. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite. CMBR dipole anisotropy (ℓ = 1) The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1), a cosine function. 
The amplitude of the CMB dipole is around . The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude depends on the time of year due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation period of this term is one year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at in the direction of galactic longitude , . The dipole is now used to calibrate mapping studies. Multipole (ℓ ≥ 2) The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around . Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot, dense environment, electrons and protons could not form any neutral atoms. The baryons in this early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon–baryon plasma. Quickly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down, and these fluctuations were "frozen into" the CMB maps we observe today. Data analysis challenges Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short-scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques. 
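As a rough, back-of-the-envelope check on the dipole foreground discussed above (an assumption-laden sketch, not a calculation taken from the article's sources), the leading-order Doppler dipole has amplitude ΔT ≈ T0·v/c. Using the solar velocity of 369.82 km/s quoted earlier and T0 = 2.725 K, this gives a few millikelvin, roughly a hundred times larger than the ~10⁻⁵-level primordial fluctuations, which is why the dipole must be subtracted before the fine-scale structure becomes visible.

public class CmbDipoleEstimate {
    public static void main(String[] args) {
        double t0 = 2.725;           // present-day CMB temperature, kelvin
        double v = 369.82e3;         // solar peculiar velocity, m/s (value quoted above)
        double c = 299_792_458.0;    // speed of light, m/s
        double deltaT = t0 * v / c;  // leading-order Doppler dipole amplitude
        System.out.printf("Dipole amplitude ~ %.2f mK%n", deltaT * 1e3); // about 3.36 mK
    }
}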
Anomalies With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: the chief scientist of WMAP, Charles L. Bennett, suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things." Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle. Future evolution Assuming the universe keeps expanding and does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the one produced by starlight, and perhaps later by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay. Timeline of prediction, discovery and interpretation Thermal (non-microwave background) temperature predictions 1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6 K. 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula the effective temperature corresponding to this density is 3.18° absolute ... black body". 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K. 1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm. 
were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal. 1946 – Robert Dicke predicts "... radiation from cosmic matter" at < 20 K, but did not refer to background radiation. 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting that it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation. 1953 – Erwin Finlay-Freundlich, in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K, and in the following year values of 1.9 K and 6.0 K. Microwave background radiation predictions and measurements 1941 – Andrew McKellar detected a "rotational" temperature of 2.3 K for the interstellar medium by comparing the population of CN doublet lines measured by W. S. Adams in a B star. 1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred. 1953 – George Gamow estimates 7 K based on a model that does not rely on a free parameter. 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, initially reported a near-isotropic background radiation of 3 kelvins, plus or minus 2; he did not recognize the cosmological significance and later revised the error bars to 20 K. 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K", with the radiation intensity independent of either time or direction of observation. Although Shmaonov did not recognize it at the time, it is now clear that he did observe the cosmic microwave background at a wavelength of 3.2 cm. 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, in which they identify the CMB radiation phenomenon as detectable. 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang. 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect). 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential. 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect). 1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies. 1983 – RELIKT-1, a Soviet CMB anisotropy experiment, is launched. 1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, and shows that the microwave background has a nearly perfect black-body spectrum with T = 2.73 K and thereby strongly constrains the density of the intergalactic medium. 
January 1992 – Scientists who analysed data from RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar. 1992 – Scientists who analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background. 1995 – The Cosmic Anisotropy Telescope performs the first high resolution observations of the cosmic microwave background. 1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the MAT/TOCO, BOOMERanG, and MAXIMA experiments. The BOOMERanG experiment makes higher quality maps at intermediate resolution, and confirms that the universe is "flat". 2002 – Polarization discovered by DASI. 2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher quality maps at high resolution (covering small areas of the sky). 2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher quality map at low and intermediate resolution of the whole sky (WMAP provides no high-resolution data, but improves on the intermediate resolution maps from BOOMERanG). 2004 – E-mode polarization spectrum obtained by the CBI. 2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher quality map of the high resolution structure not mapped by WMAP. 2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect. 2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory. 2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data. 2006 – Two of COBE's principal investigators, George Smoot and John Mather, receive the Nobel Prize in Physics for their work on precision measurement of the CMBR. 2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ continue to be consistent with the standard Lambda-CDM model. 2010 – The first all-sky map from the Planck telescope is released. 2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales. 2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported. 2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way. 2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales. 2019 – Planck telescope analyses of the final 2018 data continue to be released. In popular culture In the Stargate Universe TV series (2009–2011), an ancient spaceship, Destiny, was built to study patterns in the CMBR, which in the series is a sentient message left over from the beginning of time. 
In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe. In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself. The 2017 issue of the Swiss 20 francs bill lists several astronomical objects with their distances – the CMB is mentioned with 430 · 1015 light-seconds. In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background. See also Notes References Further reading External links Student Friendly Intro to the CMB A pedagogic, step-by-step introduction to the cosmic microwave background power spectrum analysis suitable for those with an undergraduate physics background. More in depth than typical online sites. Less dense than cosmology texts. CMBR Theme on arxiv.org Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast. The Big Bang and Cosmic Microwave Background – October 2006 Visualization of the CMB data from the Planck mission Astronomical radio sources Astrophysics Cosmic background radiation B-modes Inflation (cosmology) Observational astronomy Physical cosmological concepts Radio astronomy
Cosmic microwave background
[ "Physics", "Astronomy" ]
9,697
[ "Physical cosmological concepts", "Astronomical radio sources", "Concepts in astrophysics", "Astronomical events", "Observational astronomy", "Astrophysics", "Radio astronomy", "Astronomical objects", "Astronomical sub-disciplines" ]
7,381
https://en.wikipedia.org/wiki/Cyberspace
Cyberspace is an interconnected digital environment. It is a type of virtual world popularized with the rise of the Internet. The term entered popular culture from science fiction and the arts but is now used by technology strategists, security professionals, governments, military and industry leaders and entrepreneurs to describe the domain of the global technology environment, commonly defined as standing for the global network of interdependent information technology infrastructures, telecommunications networks and computer processing systems. Others consider cyberspace to be just a notional environment in which communication over computer networks occurs. The word became popular in the 1990s when the use of the Internet, networking, and digital communication were all growing dramatically; the term cyberspace was able to represent the many new ideas and phenomena that were emerging. As a social experience, individuals can interact, exchange ideas, share information, provide social support, conduct business, direct actions, create artistic media, play games, engage in political discussion, and so on, using this global network. Cyberspace users are sometimes referred to as cybernauts. The term cyberspace has become a conventional means to describe anything associated with general computing, the Internet and the diverse Internet culture. The U.S. government recognizes the interdependent network of information technology infrastructures and cyber-physical systems operating across this medium as part of the US national critical infrastructure. Amongst individuals on cyberspace, there is believed to be a code of shared rules and ethics mutually beneficial for all to follow, referred to as cyberethics. Many view the right to privacy as most important to a functional code of cyberethics. Such moral responsibilities go hand in hand when working online with global networks, specifically when opinions are involved with online social experiences. According to Chip Morningstar and F. Randall Farmer, cyberspace is defined more by the social interactions involved rather than its technical implementation. In their view, the computational medium in cyberspace is an augmentation of the communication channel between real people; the core characteristic of cyberspace is that it offers an environment that consists of many participants with the ability to affect and influence each other. They derive this concept from the observation that people seek richness, complexity, and depth within a virtual world. Etymology The term cyberspace first appeared in the visual arts in the late 1960s, when Danish artist Susanne Ussing (1940–1998) and her partner architect Carsten Hoff (b. 1934) constituted themselves as Atelier Cyberspace. Under this name the two made a series of installations and images entitled "sensory spaces" that were based on the principle of open systems adaptable to various influences, such as human movement and the behaviour of new materials. Atelier Cyberspace worked at a time when the Internet did not exist and computers were more or less off-limit to artists and creative engagement. 
In a 2015 interview with Scandinavian art magazine Kunstkritikk, Carsten Hoff recollects that although Atelier Cyberspace did try to implement computers, they had no interest in the virtual space as such: In the same interview, Hoff continues: The works of Atelier Cyberspace were originally shown at a number of Copenhagen venues and have later been exhibited at The National Gallery of Denmark in Copenhagen as part of the exhibition "What's Happening?" The term cyberspace first appeared in fiction in the 1980s in the work of cyberpunk science fiction author William Gibson, first in his 1982 short story "Burning Chrome" and later in his 1984 novel Neuromancer. In the next few years, the word became prominently identified with online computer networks. The portion of Neuromancer cited in this respect is usually the following: Now widely used, the term has since been criticized by Gibson, who commented on the origin of the term in the 2000 documentary No Maps for These Territories: Metaphorical Don Slater uses a metaphor to define cyberspace, describing the "sense of a social setting that exists purely within a space of representation and communication ... it exists entirely within a computer space, distributed across increasingly complex and fluid networks." The term cyberspace started to become a de facto synonym for the Internet, and later the World Wide Web, during the 1990s, especially in academic circles and activist communities. Author Bruce Sterling, who popularized this meaning, credits John Perry Barlow as the first to use it to refer to "the present-day nexus of computer and telecommunications networks". Barlow describes it thus in his essay to announce the formation of the Electronic Frontier Foundation (note the spatial metaphor) in June 1990: As Barlow and the EFF continued public education efforts to promote the idea of "digital rights", the term was increasingly used during the Internet boom of the late 1990s. Virtual environments Although in the present-day, loose use of the term cyberspace no longer implies or suggests immersion in a virtual reality, current technology allows the integration of a number of capabilities (sensors, signals, connections, transmissions, processors, and controllers) sufficient to generate a virtual interactive experience that is accessible regardless of a geographic location. It is for these reasons cyberspace has been described as the ultimate tax haven. In 1989, Autodesk, an American multinational corporation that focuses on 2D and 3D design software, developed a virtual design system called Cyberspace. Recent definitions of Cyberspace Although several definitions of cyberspace can be found both in scientific literature and in official governmental sources, there is no fully agreed official definition yet. According to F. D. Kramer ,there are 28 different definitions of the term cyberspace. The most recent draft definition is the following: The Joint Chiefs of Staff of the United States Department of Defense define cyberspace as one of five interdependent domains, the remaining four being land, air, maritime, and space. See United States Cyber Command Cyberspace as an Internet metaphor While cyberspace should not be confused with the Internet, the term is often used to refer to objects and identities that exist largely within the communication network itself, so that a website, for example, might be metaphorically said to "exist in cyberspace". 
According to this interpretation, events taking place on the Internet are not happening in the locations where participants or servers are physically located, but "in cyberspace". The philosopher Michel Foucault used the term heterotopias to describe such spaces, which are simultaneously physical and mental. Firstly, cyberspace describes the flow of digital data through the network of interconnected computers: it is at once not "real", since one cannot spatially locate it as a tangible object, and clearly "real" in its effects. There have been several attempts to create a concise model of how cyberspace works, since it is not a physical thing that can be looked at. Secondly, cyberspace is the site of computer-mediated communication (CMC), in which online relationships and alternative forms of online identity are enacted, raising important questions about the social psychology of Internet use, the relationship between "online" and "offline" forms of life and interaction, and the relationship between the "real" and the virtual. Cyberspace draws attention to remediation of culture through new media technologies: it is not just a communication tool, but a social destination, and is culturally significant in its own right. Finally, cyberspace can be seen as providing new opportunities to reshape society and culture through "hidden" identities, or it can be seen as borderless communication and culture. The "space" in cyberspace has more in common with the abstract, mathematical meanings of the term (see space) than physical space. It does not have the duality of positive and negative volume (while in physical space, for example, a room has the negative volume of usable space delineated by the positive volume of walls, Internet users cannot enter the screen and explore the unknown part of the Internet as an extension of the space they are in), but spatial meaning can be attributed to the relationship between different pages (of books as well as web servers), considering the unturned pages to be somewhere "out there." The concept of cyberspace, therefore, refers not to the content being presented to the surfer, but rather to the possibility of surfing among different sites, with feedback loops between the user and the rest of the system creating the potential to always encounter something unknown or unexpected. Video games differ from text-based communication in that on-screen images are meant to be figures that actually occupy a space and the animation shows the movement of those figures. Images are supposed to form the positive volume that delineates the empty space. A game adopts the cyberspace metaphor by engaging more players in the game, and then figuratively representing them on the screen as avatars. Games do not have to stop at the avatar-player level, but current implementations aiming for more immersive playing space (e.g. laser tag) take the form of augmented reality rather than cyberspace, fully immersive virtual realities remaining impractical. Although the more radical consequences of the global communication network predicted by some cyberspace proponents (e.g. the diminishing of state influence envisioned by John Perry Barlow) failed to materialize and the word lost some of its novelty appeal, it remains current. 
Some virtual communities explicitly refer to the concept of cyberspace (for example, Linden Lab calling their customers "Residents" of Second Life), while all such communities can be positioned "in cyberspace" for explanatory and comparative purposes (as did Sterling in The Hacker Crackdown, followed by many journalists), integrating the metaphor into a wider cyber-culture. The metaphor has been useful in helping a new generation of thought leaders to reason through new military strategies around the world, led largely by the US Department of Defense (DoD). The use of cyberspace as a metaphor has had its limits, however, especially in areas where the metaphor becomes confused with physical infrastructure. It has also been critiqued as being unhelpful for falsely employing a spatial metaphor to describe what is inherently a network. Alternate realities in philosophy and art Predating computers A forerunner of the modern ideas of cyberspace is the Cartesian notion that people might be deceived by an evil demon that feeds them a false reality. This argument is the direct predecessor of modern ideas of a brain in a vat, and many popular conceptions of cyberspace take Descartes's ideas as their starting point. Visual arts have a tradition, stretching back to antiquity, of artifacts meant to fool the eye and be mistaken for reality. This questioning of reality occasionally led some philosophers and especially theologians to distrust art as deceiving people into entering a world which was not real (see Aniconism). The artistic challenge was resurrected with increasing ambition as art became more and more realistic with the invention of photography, film (see Arrival of a Train at La Ciotat), and immersive computer simulations. Influenced by computers Philosophy American counterculture exponents like William S. Burroughs (whose literary influence on Gibson and cyberpunk in general is widely acknowledged) and Timothy Leary were among the first to extol the potential of computers and computer networks for individual empowerment. Some contemporary philosophers and scientists (e.g. David Deutsch in The Fabric of Reality) employ virtual reality in various thought experiments. For example, Philip Zhai in Get Real: A Philosophical Adventure in Virtual Reality connects cyberspace to the Platonic tradition: Note that this brain-in-a-vat argument conflates cyberspace with reality, while the more common descriptions of cyberspace contrast it with the "real world". Cyber-Geography The “Geography of Notopia” (Papadimitriou, 2006) theorizes about the complex interplay of cyber-cultures and geographical space. This interplay has several philosophical and psychological facets (Papadimitriou, 2009). A New Communication Model The technological convergence of the mass media is the result of a long adaptation process of their communicative resources to the evolutionary changes of each historical moment. Thus, the new media became (plurally) an extension of the traditional media in cyberspace, allowing the public to access information on a wide range of digital devices. In other words, it is a cultural virtualization of human reality as a result of the migration from physical to virtual space (mediated by the ICTs), ruled by codes, signs and particular social relationships. From this arise instant ways of communication, interaction and quick access to information, in which we are no longer mere senders, but also producers, reproducers, co-workers and providers. 
New technologies also help to "connect" people from different cultures outside the virtual space, which was unthinkable fifty years ago. In this giant relationships web, we mutually absorb each other's beliefs, customs, values, laws and habits, cultural legacies perpetuated by a physical-virtual dynamics in constant metamorphosis (ibidem). In this sense, Professor Doctor Marcelo Mendonça Teixeira created, in 2013, a new model of communication to the virtual universe, based in Claude Elwood Shannon (1948) article "A Mathematical Theory of Communication". Art Having originated among writers, the concept of cyberspace remains most popular in literature and film. Although artists working with other media have expressed interest in the concept, such as Roy Ascott, "cyberspace" in digital art is mostly used as a synonym for immersive virtual reality and remains more discussed than enacted. Computer crime Cyberspace also brings together every service and facility imaginable to expedite money laundering. One can purchase anonymous credit cards, bank accounts, encrypted global mobile telephones, and false passports. From there one can pay professional advisors to set up IBCs (International Business Corporations, or corporations with anonymous ownership) or similar structures in OFCs (Offshore Financial Centers). Such advisors are loath to ask any penetrating questions about the wealth and activities of their clients, since the average fees criminals pay them to launder their money can be as much as 20 percent. 5-level model In 2010, a five-level model was designed in France. According to this model, cyberspace is composed of five layers based on information discoveries: 1) language, 2) writing, 3) printing, 4) Internet, 5) Etc., i.e. the rest, e.g. noosphere, artificial life, artificial intelligence, etc., etc. This original model links the world of information to telecommunication technologies. See also Further reading Branch, J. (2020). "What's in a Name? Metaphors and Cybersecurity." International Organization. References Sources Cyberculture, The key Concepts, edited by David Bell, Brian D.Loader, Nicholas Pleace and Douglas Schuler Christine Buci-Glucksmann, "L’art à l’époque virtuel", in Frontières esthétiques de l’art, Arts 8, Paris: L’Harmattan, 2004 William Gibson. Neuromancer:20th Anniversary Edition. New York:Ace Books, 2004. Oliver Grau: Virtual Art. From Illusion to Immersion, MIT-Press, Cambridge 2003. (4 Auflagen). David Koepsell, The Ontology of Cyberspace, Chicago: Open Court, 2000. Irvine, Martin. "Postmodern Science Fiction and Cyberpunk", retrieved 2006-07-19. Slater, Don 2002, 'Social Relationships and Identity Online and Offline', in L.Lievrouw and S.Livingston (eds), The Handbook of New Media, Sage, London, pp533–46. Sterling, Bruce. The Hacker Crackdown: Law and Disorder On the Electronic Frontier. Spectra Books, 1992. Zhai, Philip. Get Real: A Philosophical Adventure in Virtual Reality. New York: Rowman & Littlefield Publishers, 1998. Teixeira, Marcelo Mendonça (2012). Cyberculture: From Plato To The Virtual Universe. The Architecture of Collective Intelligence. Munich: Grin Verlag. 
External links A Declaration of the Independence of Cyberspace by John Perry Barlow Peculiarities of Cyberspace by Albert Benschop Sex, Religion and Cyberspace by Richard Thieme Brains in a vat philosophical argument against the idea that we could be in cyberspace and not know it by Hilary Putnam Cyberspace as a Domain In which the Air Force Flies and Fights, Speech by Secretary of the Air Force Michael Wynne Cyberpunk themes History of the Internet Hyperreality Information Age Virtual reality William Gibson 1980s neologisms
Cyberspace
[ "Technology" ]
3,371
[ "Information Age", "Cyberspace", "Science and technology studies", "Information technology", "Computing and society", "Hyperreality" ]
7,392
https://en.wikipedia.org/wiki/Class%20%28computer%20programming%29
In object-oriented programming, a class defines the shared aspects of objects created from the class. The capabilities of a class differ between programming languages, but generally the shared aspects consist of state (variables) and behavior (methods) that are each either associated with a particular object or with all objects of that class. Object state can differ between each instance of the class whereas the class state is shared by all of them. The object methods include access to the object state (via an implicit or explicit parameter that references the object) whereas class methods do not. If the language supports inheritance, a class can be defined based on another class with all of its state and behavior plus additional state and behavior that further specializes the class. The specialized class is a sub-class, and the class it is based on is its superclass. Attributes Object lifecycle As an instance of a class, an object is constructed from a class via instantiation. Memory is allocated and initialized for the object state and a reference to the object is provided to consuming code. The object is usable until it is destroyed and its state memory is de-allocated. Most languages allow for custom logic at lifecycle events via a constructor and a destructor. Type An object expresses its data type as an interface: the type of each member variable and the signature of each member function (method). A class defines an implementation of an interface, and instantiating the class results in an object that exposes the implementation via the interface. In terms of type theory, a class is an implementation (a concrete data structure and collection of subroutines) while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on the type system). For example, the type (interface) might be implemented by one class that is fast for small stacks but scales poorly and by another that scales well but has high overhead for small stacks. Structure A class contains data field descriptions (or properties, fields, data members, or attributes). These are usually field types and names that will be associated with state variables at program run time; these state variables either belong to the class or to specific instances of the class. In most languages, the structure defined by the class determines the layout of the memory used by its instances. Other implementations are possible: for example, objects in Python use associative key-value containers. Some programming languages such as Eiffel support specification of invariants as part of the definition of the class, and enforce them through the type system. Encapsulation of state is necessary for being able to enforce the invariants of the class. Behavior The behavior of a class or its instances is defined using methods. Methods are subroutines with the ability to operate on objects or classes. These operations may alter the state of an object or simply provide ways of accessing it. Many kinds of methods exist, but support for them varies across languages. Some types of methods are created and called by programmer code, while other special methods—such as constructors, destructors, and conversion operators—are created and called by compiler-generated code. A language may also allow the programmer to define and call these special methods. Class interface Every class implements (or realizes) an interface by providing structure and behavior. 
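A minimal sketch in Java of the type-versus-class distinction and the interface idea described above. The names (IntStack, ArrayStack, LinkedStack) are invented for illustration and are not the classes elided in the text; the point is only that two concrete classes can implement the same abstract type with different trade-offs, while client code depends only on the interface.

// One abstract type (an interface) ...
interface IntStack {
    void push(int value);
    int pop();
    boolean isEmpty();
}

// ... implemented by a fixed-size array: cheap for small stacks, but limited capacity.
class ArrayStack implements IntStack {
    private final int[] items = new int[16];
    private int size;

    public void push(int value) { items[size++] = value; }
    public int pop()            { return items[--size]; }
    public boolean isEmpty()    { return size == 0; }
}

// ... and by a linked list: grows without bound, at the cost of per-node overhead.
class LinkedStack implements IntStack {
    private static final class Node {
        final int value;
        final Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }
    private Node top;

    public void push(int value) { top = new Node(value, top); }
    public int pop()            { int v = top.value; top = top.next; return v; }
    public boolean isEmpty()    { return top == null; }
}

class StackDemo {
    public static void main(String[] args) {
        IntStack s = new LinkedStack(); // client code depends only on the IntStack type
        s.push(1);
        s.push(2);
        System.out.println(s.pop()); // 2
    }
}

Swapping LinkedStack for ArrayStack changes the performance characteristics but not the type that the client code sees.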
Structure consists of data and state, and behavior consists of code that specifies how methods are implemented. There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing an implementation. Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from. For example, if "class A" inherits from "class B" and if "class B" implements the interface "interface B" then "class A" also inherits the functionality (constant and method declarations) provided by "interface B". In languages that support access specifiers, the interface of a class is considered to be the set of public members of the class, including both methods and attributes (via implicit getter and setter methods); any private members or internal data structures are not intended to be depended on by external code and thus are not part of the interface. Object-oriented programming methodology dictates that the operations of any interface of a class are to be independent of each other. It results in a layered design where clients of an interface use the methods declared in the interface. An interface places no requirements for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client has access to the object. Class interface example The buttons on the front of your television set are the interface between you and the electrical wiring on the other side of its plastic casing. You press the "power" button to toggle the television on and off. In this example, your particular television is the instance, each method is represented by a button, and all the buttons together compose the interface (other television sets that are the same model as yours would have the same interface). In its most common form, an interface is a specification of a group of related methods without any associated implementation of the methods. A television set also has a myriad of attributes, such as size and whether it supports color, which together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface). Getting the total number of televisions manufactured could be a static method of the television class. This method is associated with the class, yet is outside the domain of each instance of the class. A static method that finds a particular instance out of the set of all television objects is another example. Member accessibility The following is a common set of access specifiers: Private (or class-private) restricts access to the class itself. Only methods that are part of the same class can access private members. Protected (or class-protected) allows the class itself and all its subclasses to access the member. Public means that any code can access the member by its name. Although many object-oriented languages support the above access specifiers, their semantics may differ. 
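The semantics do differ in practice. Python, for instance, has no compiler-enforced access specifiers and instead approximates private and protected members through naming conventions and name mangling; the sketch below is illustrative only, and the Account class and its fields are invented for the example.

class Account:
    def __init__(self, owner: str, balance: float) -> None:
        self.owner = owner          # public: part of the class's interface
        self._ledger = []           # "protected" by convention: meant for the class and its subclasses
        self.__balance = balance    # "private": name-mangled to _Account__balance

    def deposit(self, amount: float) -> None:
        # Public method: the supported way to alter the private state,
        # which lets the class enforce its invariant (no non-positive deposits).
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount
        self._ledger.append(amount)

    @property
    def balance(self) -> float:
        # Public read-only accessor for the private field.
        return self.__balance

acct = Account("Ada", 100.0)
acct.deposit(50.0)
print(acct.balance)    # 150.0, obtained through the public interface
# acct.__balance       # would raise AttributeError; the mangled name discourages direct access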
Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants—constraints on the state of the objects. A common usage of access specifiers is to separate the internal data of a class from its interface: the internal structure is made private, while public accessor methods can be used to inspect or alter such private data. Access specifiers do not necessarily control visibility, in that even private members may be visible to external client code. In some languages, an inaccessible but visible member may be referred to at runtime (for example, by a pointer returned from a member function), but an attempt to use it by referring to the name of the member from the client code will be prevented by the type checker. The various object-oriented programming languages enforce member accessibility and visibility to various degrees, and depending on the language's type system and compilation policies, enforced at either compile time or runtime. For example, the Java language does not allow client code that accesses the private data of a class to compile. In the C++ language, private methods are visible, but not accessible in the interface; however, they may be made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class. Some languages feature other accessibility schemes: Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class. Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected. Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class in order to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned. Inheritance Conceptually, a superclass is a superset of its subclasses. For example, a class representing rectangles could be a superclass of a class representing squares. These are subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares. A common conceptual error is to mistake a part-of relation for a subclass relation. For example, a car and truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model an engine or body as a subclass of a car. In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the car class would have a property for its parts, typed to hold a collection of objects such as engine and body instances. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. 
This information can be utilized by developer tools to generate additional code besides the basic data definitions for the objects, such as error checking on get and set methods. One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets, it would be rare to find sets that did not intersect with more than one other set. However, while some systems such as Flavors and CLOS provide a capability for more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly this feature can introduce some of the same system complexity and ambiguity classes were designed to avoid. Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation. However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses, that the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility. Although many class-based languages support inheritance, inheritance is not an intrinsic aspect of classes. An object-based language (e.g., Classic Visual Basic) supports classes yet does not support inheritance. Inter-class relationships A programming language may support various class relationship features. Compositional Classes can be composed of other classes, thereby establishing a compositional relationship between the enclosing class and its embedded classes. A compositional relationship between classes is also commonly known as a has-a relationship. For example, a class "Car" could be composed of and contain a class "Engine". Therefore, a Car has an Engine. One aspect of composition is containment, which is the enclosure of component instances by the instance that has them. If an enclosing object contains component instances by value, the components and their enclosing object have a similar lifetime. If the components are contained by reference, they may not have a similar lifetime. For example, in Objective-C 2.0: @interface Car : NSObject @property NSString *name; @property Engine *engine; @property NSArray *tires; @end This class has an instance of NSString (a string object), an Engine, and an NSArray (an array object). 
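A roughly equivalent composition in Python might look like the following sketch; the Engine and Tire component classes are stubbed here purely for illustration.

class Engine:
    # Stub component class for the sketch.
    pass

class Tire:
    # Stub component class for the sketch.
    pass

class Car:
    # A Car has an Engine and tires: a compositional (has-a) relationship,
    # not an inheritance (is-a) relationship.
    def __init__(self, name: str) -> None:
        self.name = name
        self.engine = Engine()                    # contained component
        self.tires = [Tire() for _ in range(4)]   # collection of components

car = Car("roadster")
print(car.name, type(car.engine).__name__, len(car.tires))   # roadster Engine 4

Because the components are created inside the constructor, they live and die with the enclosing Car object, matching the containment-by-value case described above; holding references to externally created components would correspond to containment by reference.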
Hierarchical Classes can be derived from one or more existing classes, thereby establishing a hierarchical relationship between the derived-from classes (base classes, parent classes or superclasses) and the derived class (child class or subclass). The relationship of the derived class to the derived-from classes is commonly known as an is-a relationship. For example, a class 'Button' could be derived from a class 'Control'. Therefore, a Button is a Control. Structural and behavioral members of the parent classes are inherited by the child class. Derived classes can define additional structural members (data fields) and behavioral members (methods) in addition to those that they inherit and are therefore specializations of their superclasses. Also, derived classes can override inherited methods if the language allows. Not all languages support multiple inheritance. For example, Java allows a class to implement multiple interfaces, but only inherit from one class. If multiple inheritance is allowed, the hierarchy is a directed acyclic graph (or DAG for short), otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. Classes in the same level are more likely to be associated than classes in different levels. The levels of this hierarchy are called layers or levels of abstraction. Example (Simplified Objective-C 2.0 code, from iPhone SDK): @interface UIResponder : NSObject //... @interface UIView : UIResponder //... @interface UIScrollView : UIView //... @interface UITableView : UIScrollView //... In this example, a UITableView is a UIScrollView is a UIView is a UIResponder is an NSObject. Modeling In object-oriented analysis and in the Unified Modeling Language (UML), an association between two classes represents a collaboration between the classes or their corresponding instances. Associations have direction; for example, a bi-directional association between two classes indicates that both of the classes are aware of their relationship. Associations may be labeled according to their name or purpose. An association role is given to each end of an association and describes the role of the corresponding class. For example, a "subscriber" role describes the way instances of the class "Person" participate in a "subscribes-to" association with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role in the same association. Association role multiplicity describes how many instances correspond to each instance of the other class of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances. Taxonomy There are many categories of classes, some of which overlap. Abstract and concrete In a language that supports inheritance, an abstract class, or abstract base class (ABC), is a class that cannot be directly instantiated. By contrast, a concrete class is a class that can be directly instantiated. Instantiation of an abstract class can occur only indirectly, via a concrete class. An abstract class is either labeled as such explicitly or it may simply specify abstract methods (or virtual methods). An abstract class may provide implementations of some methods, and may also specify virtual methods via signatures that are to be implemented by direct or indirect descendants of the abstract class. Before a class derived from an abstract class can be instantiated, all abstract methods of its parent classes must be implemented by some class in the derivation chain. 
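As a minimal Python illustration of the abstract/concrete distinction, reusing the Button/Control example from above (the method names are invented for the sketch), the abc module lets a class declare abstract methods so that it cannot be instantiated until a subclass implements them.

from abc import ABC, abstractmethod

class Control(ABC):
    # Abstract class: declares an interface and cannot be instantiated directly.
    @abstractmethod
    def draw(self) -> str:
        ...

    def describe(self) -> str:
        # An abstract class may still provide concrete method implementations.
        return f"{type(self).__name__}: {self.draw()}"

class Button(Control):
    # Concrete class: implements every inherited abstract method.
    def draw(self) -> str:
        return "[ OK ]"

print(Button().describe())    # Button: [ OK ]
# Control()                   # would raise TypeError: can't instantiate abstract class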
Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java, C# and PHP, the keyword abstract is used. In C++, an abstract class is a class having at least one abstract method given by the appropriate syntax in that language (a pure virtual function in C++ parlance). A class consisting of only pure virtual methods is called a pure abstract base class (or pure ABC) in C++ and is also known as an interface by users of the language. Other languages, notably Java and C#, support a variant of abstract classes called an interface via a keyword in the language. In these languages, multiple inheritance is not allowed, but a class can implement multiple interfaces. Such a class can only contain abstract publicly accessible methods. Local and inner In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes. An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data types or nested types, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations). A local class is a class defined within a procedure or function. Such structure limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods to access local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables. Metaclass A metaclass is a class whose instances are classes. A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks. In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language. The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses. Sealed A sealed class cannot be subclassed. It is basically the opposite of an abstract class, which must be derived to be used. A sealed class is implicitly concrete. A class is declared as sealed via the keyword sealed in C# or final in Java and PHP. For example, Java's String class is marked as final. Sealed classes may allow a compiler to perform optimizations that are not available for classes that can be subclassed. Open An open class can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterward. 
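For example, in Python every class is itself an object produced by the built-in metaclass type, and classes stay open to modification at run time; the Point class below is invented for the sketch.

# Creating a class at run time: type(name, bases, namespace) is the built-in metaclass.
Point = type("Point", (), {"x": 0, "y": 0})
p = Point()
print(isinstance(Point, type))    # True: the class is an instance of its metaclass

# Modifying an existing ("open") class afterward: all instances see the new method.
def distance_from_origin(self):
    return (self.x ** 2 + self.y ** 2) ** 0.5

Point.distance = distance_from_origin
p.x, p.y = 3, 4
print(p.distance())               # 5.0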
Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code. Mixin Some languages have special support for mixins, though, in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a mixin class might provide a utility method that, when the mixin is included in two otherwise unrelated classes, becomes available in both even though they do not share a common parent. Partial In languages supporting the feature, a partial class is a class whose definition may be split into multiple pieces, within a single source-code file or across multiple files. The pieces are merged at compile time, making compiler output the same as for a non-partial class. The primary motivation for the introduction of partial classes is to facilitate the implementation of code generators, such as visual designers. It is otherwise a challenge or compromise to develop code generators that can manage the generated code when it is interleaved within developer-written code. Using partial classes, a code generator can process a separate file or coarse-grained partial class within a file, and is thus alleviated from intricately interjecting generated code via extensive parsing, increasing compiler efficiency and eliminating the potential risk of corrupting developer code. In a simple implementation of partial classes, the compiler can perform a phase of precompilation where it "unifies" all the parts of a partial class. Then, compilation can proceed as usual. Other benefits and effects of the partial class feature include: Enables separation of a class's interface and implementation code in a unique way. Eases navigation through large classes within an editor. Enables separation of concerns, in a way similar to aspect-oriented programming but without using any extra tools. Enables multiple developers to work on a single class concurrently without the need to merge individual code into one file at a later time. Partial classes have existed in Smalltalk under the name of Class Extensions for a considerable time. With the arrival of the .NET Framework 2.0, Microsoft introduced partial classes, supported in both C# 2.0 and Visual Basic 2005. WinRT also supports partial classes. Uninstantiable Uninstantiable classes allow programmers to group together per-class fields and methods that are accessible at runtime without an instance of the class. Indeed, instantiation is prohibited for this kind of class. For example, in C#, a class marked "static" cannot be instantiated, can only have static members (fields, methods, other), may not have instance constructors, and is sealed. Unnamed An unnamed class or anonymous class is not bound to a name or identifier upon definition. This is analogous to named versus unnamed functions. Benefits The benefits of organizing software into object classes fall into three categories: Rapid development Ease of maintenance Reuse of code and designs Object classes facilitate rapid development because they lessen the semantic gap between the code and the users. System analysts can talk to both developers and users using essentially the same vocabulary, talking about accounts, customers, bills, etc. Object classes often facilitate rapid development because most object-oriented environments come with powerful debugging and testing tools. Instances of classes can be inspected at run time to verify that the system is performing as expected. 
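For instance, a reflective language such as Python lets a developer inspect an object's class, fields, and methods directly at run time; the Invoice class below is invented for the sketch.

class Invoice:
    def __init__(self, number: int, total: float) -> None:
        self.number = number
        self.total = total

    def is_paid(self) -> bool:
        return False

inv = Invoice(42, 99.95)
print(type(inv).__name__)                                  # Invoice
print(vars(inv))                                           # {'number': 42, 'total': 99.95}
print([m for m in dir(inv) if not m.startswith("_")])      # ['is_paid', 'number', 'total']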
Also, rather than get dumps of core memory, most object-oriented environments have interpreted debugging capabilities so that the developer can analyze exactly where in the program the error occurred and can see which methods were called and with what arguments. Object classes facilitate ease of maintenance via encapsulation. When developers need to change the behavior of an object they can localize the change to just that object and its component parts. This reduces the potential for unwanted side effects from maintenance enhancements. Software reuse is also a major benefit of using object classes. Classes facilitate re-use via inheritance and interfaces. When a new behavior is required it can often be achieved by creating a new class and having that class inherit the default behaviors and data of its superclass and then tailoring some aspect of the behavior or data accordingly. Re-use via interfaces (also known as methods) occurs when another object wants to invoke (rather than create a new kind of) some object class. This method for re-use removes many of the common errors that can make their way into software when one program re-uses code from another. Runtime representation As a data type, a class is usually considered as a compile-time construct. A language or library may also support prototype or factory metaobjects that represent runtime information about classes, or even represent metadata that provides access to reflective programming (reflection) facilities and the ability to manipulate data structure formats at runtime. Many languages distinguish this kind of run-time type information about classes from a class on the basis that the information is not needed at runtime. Some dynamic languages do not make strict distinctions between runtime and compile-time constructs, and therefore may not distinguish between metaobjects and classes. For example, if Human is a metaobject representing the class Person, then instances of class Person can be created by using the facilities of the Human metaobject. Prototype-based programming In contrast to creating an object from a class, some programming contexts support object creation by copying (cloning) a prototype object.
Class (computer programming)
[ "Engineering" ]
5,289
[ "Software engineering", "Programming language topics" ]
7,398
https://en.wikipedia.org/wiki/Computer%20security
Computer security (also cybersecurity, digital security, or information technology (IT) security) is the protection of computer software, systems and networks from threats that can lead to unauthorized information disclosure, theft or damage to hardware, software, or data, as well as from the disruption or misdirection of the services they provide. The significance of the field stems from the expanded reliance on computer systems, the Internet, and wireless network standards. Its importance is further amplified by the growth of smart devices, including smartphones, televisions, and the various devices that constitute the Internet of things (IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity of information systems and the societies they support. Security is particularly crucial for systems that govern large-scale operations with far-reaching physical effects, such as power distribution, elections, and finance. Although many aspects of computer security involve digital security, such as electronic passwords and encryption, physical security measures such as metal locks are still used to prevent unauthorized tampering. IT security is not a perfect subset of information security and therefore does not completely align with the security convergence schema. Vulnerabilities and attacks A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database. An exploitable vulnerability is one for which at least one working attack or exploit exists. Actors maliciously seeking vulnerabilities are known as threats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited using automated tools or customized scripts. Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others. In April 2023, the United Kingdom Department for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months. They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)." Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade, small and midsize businesses (SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business." SMBs are most likely to be affected by malware, ransomware, phishing, man-in-the-middle attacks, and denial-of-service (DoS) attacks. Normal internet users are most likely to be affected by untargeted cyberattacks. These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. These strategies mostly include phishing, ransomware, water holing and scanning. 
To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories: Backdoor A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration. Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system. Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer." Backdoors can be very hard to detect and are usually discovered by someone who has access to the application source code or intimate knowledge of the operating system of the computer. Denial-of-service attack Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users. Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim. With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. Physical access attacks A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information. Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct service attackers are related in concept to direct memory attacks which allow an attacker to gain direct access to a computer's memory. The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly." Eavesdropping Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. 
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited. Data transmitted across an open network allows an attacker to exploit a vulnerability and intercept it via various methods. Unlike malware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice. In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time." Using a virtual private network (VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as using HTTPS instead of an unencrypted HTTP. Programs such as Carnivore and NarusInSight have been used by the Federal Bureau of Investigation (FBI) and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware. TEMPEST is a specification by the NSA referring to these attacks. Malware Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users." Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently. Another type of malware is ransomware, which is when "malware installs itself onto a victim's machine, encrypts their files, and then turns around and demands a ransom (usually in Bitcoin) to return that data to the user." Types of malware include some of the following: Viruses are a specific type of malware, and are normally a malicious code that hijacks software with the intention to "do damage and spread copies of itself." Copies are made with the aim to spread to other programs on a computer. Worms are similar to viruses, however viruses can only function when a user runs (opens) a compromised program. Worms are self-replicating malware that spread between programs, apps and devices without the need for human interaction. Trojan horses are programs that pretend to be helpful or hide themselves within desired or legitimate software to "trick users into installing them." Once installed, a RAT (remote access trojan) can create a secret backdoor on the affected device to cause damage. Spyware is a type of malware that secretly gathers information from an infected computer and transmits the sensitive information back to the attacker. One of the most common forms of spyware are keyloggers, which record all of a user's keyboard inputs/keystrokes, to "allow hackers to harvest usernames, passwords, bank account and credit card numbers." 
Scareware, as the name suggests, is a form of malware which uses social engineering (manipulation) to scare, shock, trigger anxiety, or suggest the perception of a threat in order to manipulate users into buying or installing unwanted software. These attacks often begin with a "sudden pop-up with an urgent message, usually warning the user that they've broken the law or their device has a virus." Man-in-the-middle attacks Man-in-the-middle attacks (MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between. Types of MITM attacks include: IP address spoofing is where the attacker hijacks routing protocols to reroute the target's traffic to a vulnerable network node for traffic interception or injection. Message spoofing (via email, SMS or OTT messaging) is where the attacker spoofs the identity or carrier service while the target is using messaging protocols like email, SMS or OTT (IP-based) messaging apps. The attacker can then monitor conversations, launch social attacks or trigger zero-day vulnerabilities to allow for further attacks. WiFi SSID spoofing is where the attacker simulates a WiFi base station SSID to capture and modify internet traffic and transactions. The attacker can also use local network addressing and reduced network defenses to penetrate the target's firewall by breaching known vulnerabilities. Sometimes known as a Pineapple attack thanks to a popular device. See also Malicious association. DNS spoofing is where attackers hijack domain name assignments to redirect traffic to systems under the attacker's control, in order to surveil traffic or launch other attacks. SSL hijacking, typically coupled with another media-level MITM attack, is where the attacker spoofs the SSL authentication and encryption protocol by way of Certificate Authority injection in order to decrypt, surveil and modify traffic. See also TLS interception. Multi-vector, polymorphic attacks Surfacing in 2017, a new class of multi-vector, polymorphic cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic. Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications)." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network." The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences." Phishing Phishing is the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users. Phishing is typically carried out by email spoofing, instant messaging, text message, or on a phone call. They often direct users to enter details at a fake website whose look and feel are almost identical to the legitimate one. The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form of social engineering. 
Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices to individuals showing that they recently purchased music, apps, or other goods, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing, which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts. Privilege escalation Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level. For example, a standard computer user may be able to exploit a vulnerability in the system to gain access to restricted data; or even become root and have full unrestricted access to a system. The severity of attacks can range from simply sending an unsolicited email to a ransomware attack on large amounts of data. Privilege escalation usually starts with social engineering techniques, often phishing. Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation: Horizontal escalation (or account takeover) is where an attacker gains access to a normal user account that has relatively low-level privileges. This may be through stealing the user's username and password. Once they have access, they have gained a foothold, and using this foothold the attacker then may move around the network of users at this same lower level, gaining access to information of this similar privilege. Vertical escalation, however, targets people higher up in a company and often with more administrative power, such as an employee in IT with a higher privilege. Using this privileged account will then enable the attacker to invade other accounts. Side-channel attack Any computational system affects its environment in some form. This effect it has on its environment can range from electromagnetic radiation, to residual effects on RAM cells which as a consequence make a cold boot attack possible, to hardware implementation faults that allow for access or guessing of other values that normally should be inaccessible. In side-channel attack scenarios, the attacker would gather such information about a system or network to guess its internal state and as a result access the information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals. Social engineering Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer. This generally involves exploiting people's trust, and relying on their cognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering is phishing attacks. In early 2016, the FBI reported that such business email compromise (BEC) scams had cost US businesses more than $2 billion in about two years. 
In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms. Spoofing Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. Spoofing is closely related to phishing. There are several types of spoofing, including: Email spoofing, where an attacker forges the sending (From, or source) address of an email. IP address spoofing, where an attacker alters the source IP address in a network packet to hide their identity or impersonate another computing system. MAC spoofing, where an attacker modifies the Media Access Control (MAC) address of their network interface controller to obscure their identity, or to pose as another. Biometric spoofing, where an attacker produces a fake biometric sample to pose as another user. Address Resolution Protocol (ARP) spoofing, where an attacker sends spoofed ARP messages onto a local area network to associate their Media Access Control address with a different host's IP address. This causes data to be sent to the attacker rather than the intended host. In 2018, the cybersecurity firm Trellix published research on the life-threatening risk of spoofing in the healthcare industry. Tampering Tampering describes a malicious modification or alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-called Evil Maid attacks and the planting of surveillance capability into routers by security services are examples. HTML smuggling HTML smuggling allows an attacker to smuggle malicious code inside a particular HTML file or web page. HTML files can carry payloads concealed as benign, inert data in order to defeat content filters. These payloads can be reconstructed on the other side of the filter. When a target user opens the HTML, the malicious code is activated; the web browser then decodes the script, which then unleashes the malware onto the target's device. Information security practices Employee behavior can have a big impact on information security in organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds." Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes. Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company. Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation. 
Pre-evaluation: To identify the awareness of information security within employees and to analyze the current security policies. Strategic planning: To come up with a better awareness program, clear targets need to be set. Assembling a team of skilled professionals is helpful to achieve it. Operative planning: A good security culture can be established based on internal communication, management buy-in, security awareness and a training program. Implementation: Four stages should be used to implement the information security culture. They are: Commitment of the management Communication with organizational members Courses for all organizational members Commitment of the employees Post-evaluation: To assess the success of the planning and implementation, and to identify unresolved areas of concern. Computer protection (countermeasures) In computer security, a countermeasure is an action, device, procedure or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken. Some common countermeasures are listed in the following sections: Security by design Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. The UK government's National Cyber Security Centre separates secure cyber design principles into five sections: Before a secure system is created or updated, companies should ensure they understand the fundamentals and the context around the system they are trying to create and identify any weaknesses in the system. Companies should design and centre their security around techniques and defences which make attacking their data or systems inherently more challenging for attackers. Companies should ensure that their core services that rely on technology are protected so that the systems are essentially never down. Although systems can be created which are safe against a multitude of attacks, that does not mean that attacks will not be attempted. Despite one's security, all companies' systems should aim to be able to detect and spot attacks as soon as they occur to ensure the most effective response to them. Companies should create secure systems designed so that any attack that is successful has minimal severity. These design principles of security by design can include some of the following techniques: The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way, even if an attacker gains access to that part, they only have limited access to the whole system. Automated theorem proving to prove the correctness of crucial software subsystems. Code reviews and unit testing, approaches to make modules more secure where formal correctness proofs are not possible. Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Default secure settings, and design to fail secure rather than fail insecure (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure. Audit trails track system activity so that when a security breach occurs, the mechanism and extent of the breach can be determined. 
Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Full disclosure of all vulnerabilities, to ensure that the window of vulnerability is kept as short as possible when bugs are discovered. Security architecture Security architecture can be defined as the "practice of designing computer systems to achieve security goals." These goals overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise." In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization. Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are: the relationship of different components and how they depend on each other. determination of controls based on risk assessment, good practices, finances, and legal matters. the standardization of controls. Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. Security measures A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Limiting the access of individuals using user account access controls and using cryptography can protect systems files and data, respectively. Firewalls are by far the most common prevention systems from a network security perspective as they can (if properly configured) shield access to internal network services and block certain kinds of attacks through packet filtering. Firewalls can be both hardware and software-based. Firewalls monitor and control incoming and outgoing traffic of a computer network and establish a barrier between a trusted network and an untrusted network. Intrusion Detection System (IDS) products are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems. Response is necessarily defined by the assessed security requirements of an individual system and may cover the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, the complete destruction of the compromised system is favored, as it may happen that not all the compromised resources are detected. Cyber security awareness training to cope with cyber threats and attacks. Forward web proxy solutions can prevent clients from visiting malicious web pages and inspect the content before it is downloaded to the client machines. Today, computer security consists mainly of preventive measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet. 
They can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats. In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation of information security. To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known. Vulnerability management Vulnerability management is the cycle of identifying, fixing or mitigating vulnerabilities, especially in software and firmware. Vulnerability management is integral to computer security and network security. Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, such as open ports, insecure software configuration, and susceptibility to malware. In order for these tools to be effective, they must be kept up to date with every new update the vendor releases. Typically, these updates will scan for the new vulnerabilities that were introduced recently. Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities. In some sectors, this is a contractual requirement. Reducing vulnerabilities The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to as information technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. While formal verification of the correctness of computer systems is possible, it is not yet common. Operating systems formally verified include seL4, and SYSGO's PikeOS – but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans." Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss or damage can also be reduced by careful backing up and insurance. Outside of formal assessments, there are various methods of reducing vulnerabilities. Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information. It requires something you know: a password or PIN, and something you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access. 
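The "something you have" factor is often a device that generates short-lived one-time codes. As a hedged illustration (a minimal sketch of the widely used time-based one-time password scheme in the style of RFC 6238, not a description of any particular product), both the server and the user's device can derive the same code from a shared secret and the current time:

import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Time-based one-time password: the code changes every `interval` seconds.
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval                    # both sides compute the same counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The secret is shared once (for example via a QR code); afterwards the device
# proves possession by producing the matching six-digit code alongside the password.
example_secret = "JBSWY3DPEHPK3PXP"    # illustrative base32 value, not a real credential
print(totp(example_secret))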
Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats. However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived from inoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts. Hardware protection mechanisms Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. USB dongles are typically used in software licensing schemes to unlock software capabilities, but they can also be seen as a way to prevent unauthorized access to a computer or other device's software. The dongle, or key, essentially creates a secure encrypted tunnel between the software application and the key. The principle is that an encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides a stronger measure of security since it is harder to hack and replicate the dongle than to simply copy the native software to another machine and use it. Another security application for dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer. Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access. Computer case intrusion detection refers to a device, typically a push-button switch, which detects when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time. Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well. Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks. Disconnecting or disabling peripheral devices (like camera, GPS, removable storage, etc.), that are not in use. Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. 
Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), near-field communication (NFC) on non-iOS devices and biometric validation such as thumbprint readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings. IOMMUs allow for hardware-based sandboxing of components in mobile and desktop computers by utilizing direct memory access protections. Physical Unclonable Functions (PUFs) can be used as a digital fingerprint or a unique identifier for integrated circuits and hardware, providing users with the ability to secure the hardware supply chains going into their systems. Secure operating systems One use of the term computer security refers to technology that is used to implement secure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization; the most widely used evaluation standard is the Common Criteria (CC). Secure coding In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are secure by design. Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; this is important for cryptographic protocols, for example. Capabilities and access control lists Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC). An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users, used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC). A further approach, capability-based security, has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is the E language. User security training The end-user is widely recognized as the weakest link in the security chain, and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user ID and password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.
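Returning to the access-control models described above, the sketch below shows, in minimal form, how an access-control list attached to an object can be consulted before an operation is permitted. The class, principal, and object names are hypothetical and stand in for whatever a real file system or application framework would use.

```python
from dataclasses import dataclass, field

@dataclass
class AccessControlList:
    # Maps each principal (user or process) to the set of operations it may perform.
    entries: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, principal: str, operation: str) -> None:
        self.entries.setdefault(principal, set()).add(operation)

    def is_allowed(self, principal: str, operation: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return operation in self.entries.get(principal, set())

# Hypothetical ACL protecting a single file object.
payroll_acl = AccessControlList()
payroll_acl.grant("alice", "read")
payroll_acl.grant("alice", "write")
payroll_acl.grant("backup-service", "read")

print(payroll_acl.is_allowed("alice", "write"))    # True
print(payroll_acl.is_allowed("mallory", "read"))   # False
```

A role-based scheme differs mainly in that permissions are attached to roles rather than to individual principals, and users acquire permissions by being assigned roles.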
As the human component of cyber risk is particularly relevant in determining the global cyber risk an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Digital hygiene Related to end-user training, digital hygiene or cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks. Cyber hygiene should also not be mistaken for proactive cyber defence, a military term. The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls. As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline or education. It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses, the term cyber hygiene is a much later invention, perhaps as late as 2000 by Internet pioneer Vint Cerf. It has since been adopted by the Congress and Senate of the United States, the FBI, EU institutions and heads of state. Difficulty of responding to breaches Responding to attempted security breaches is often very difficult for a variety of reasons, including: Identifying attackers is difficult, as they may operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymizing procedures which make back-tracing difficult – and are often located in another jurisdiction. If they successfully breach security, they have also often gained enough administrative access to enable them to delete logs to cover their tracks. The sheer number of attempted attacks, often by automated vulnerability scanners and computer worms, is so large that organizations cannot spend time pursuing each. Law enforcement officers often lack the skills, interest or budget to pursue attackers. Furthermore, identifying attackers across a network may necessitate collecting logs from multiple locations within the network and across various countries, a process that can be both difficult and time-consuming. 
Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws. Types of security and privacy Technologies and practices in this area include access control, anti-keyloggers, anti-malware, anti-spyware, anti-subversion software, anti-tamper software, anti-theft measures, antivirus software, cryptographic software, computer-aided dispatch (CAD), data loss prevention software, firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), log management software, parental controls, records management, sandboxes, security information management, security information and event management (SIEM), software and operating system updating, and vulnerability management. Systems at risk The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk. Financial systems The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains. Websites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market. In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs. The UCLA Internet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned about credit card security. The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security); together with identity management and authentication services and domain name services, these allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code and to look for and report vulnerabilities. The credit card companies Visa and MasterCard cooperated to develop the secure EMV chip which is embedded in credit cards. Further developments include the Chip Authentication Program where banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shopping mall kiosks acting on behalf of banks to issue on-the-spot credit cards to interested customers. Utilities and industrial equipment Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable.
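To make the TLS protection described above for browser-to-website traffic concrete, the short sketch below opens an authenticated, encrypted connection using Python's standard ssl module. The hostname example.com is a placeholder; any HTTPS-enabled server would behave similarly, and real clients usually rely on an HTTP library that performs these steps internally.

```python
import socket
import ssl

hostname = "example.com"  # placeholder host; substitute any HTTPS server

# create_default_context() loads the system's trusted CA certificates and
# enables certificate and hostname verification by default.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        cert = tls_sock.getpeercert()
        print("Certificate subject:", cert.get("subject"))
```

If the server's certificate cannot be validated against a trusted certificate authority, the handshake fails with an error instead of silently falling back to an unauthenticated connection.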
In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies. Aviation The aviation industry is very reliant on a series of complex systems which could be attacked. A simple power outage at one airport can cause repercussions worldwide, much of the system relies on radio transmissions which could be disrupted, and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. There is also potential for attack from within an aircraft. Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity. In Europe, with the Pan-European Network Service (PENS) and NewPENS, and in the US with the NextGen program, air navigation service providers are moving to create their own dedicated networks. Many modern passports are now biometric passports, containing an embedded microchip that stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries are introducing facial recognition technology to reduce identity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing. Plans are under way in the US, the UK, and Australia to introduce SmartGate kiosks with both retina and fingerprint recognition technology. The airline industry is moving from the use of traditional paper tickets towards the use of electronic tickets (e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity, air traffic control outages, loss of aircraft, and even loss of life. Consumer devices Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target. Smartphones, tablet computers, smart watches, and other mobile devices such as quantified self devices like activity trackers have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach. The increasing number of home automation devices such as the Nest thermostat are also potential targets. Healthcare Today many healthcare providers and health insurance companies use the internet to provide enhanced products and services. Examples are the use of tele-health to potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums. Hospitals increasingly use interconnected devices within their networks. This is called the Internet of Things (IoT). Connecting multiple devices within the hospital introduces various benefits, such as automated detection of patient parameters, electronic dose adjustments, and decision support for clinicians.
However, as these devices serve as potential access points to the hospital network, security threats increase, and hospitals have to introduce adequate security measures which, for example, comply with the Health Insurance Portability and Accountability Act (HIPAA). The health care company Humana partners with WebMD, Oracle Corporation, EDS and Microsoft to enable its members to access their health care records, as well as to provide an overview of health care plans. Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space. Large corporations Large corporations are common targets. In many cases attacks are aimed at financial gain through identity theft and involve data breaches. Examples include the loss of millions of clients' credit card and financial details by Home Depot, Staples, Target Corporation, and Equifax. Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale. Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015. Not all attacks are financially motivated, however: security firm HBGary Federal suffered a serious series of attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group, and Sony Pictures was hacked in 2014 with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers. Automobiles Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network. Self-driving cars are expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention. Simple examples of risk include a malicious compact disc being used as an attack vector, and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater – and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch. Manufacturers are reacting in numerous ways, with Tesla in 2016 pushing out some security fixes over the air into its cars' computer systems. In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards and called for states to come up with uniform policies. Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses in the city of Monterrey, in the state of Nuevo León. Shipping Shipping companies have adopted RFID (Radio Frequency Identification) technology as an efficient, digitally secure tracking device. Unlike a barcode, RFID can be read up to 20 feet away. RFID is used by FedEx and UPS. Government Government and military computer systems are commonly attacked by activists and foreign powers.
Local and regional government infrastructure, such as traffic light controls, police and intelligence agency communications, personnel records, and student records, is also at risk. The FBI, CIA, and Pentagon all utilize secure controlled-access technology for their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled-access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording. Internet of things and physical vulnerabilities The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data. Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved. While the IoT creates opportunities for more direct integration of the physical world into computer-based systems, it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat. If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks. An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. Medical systems Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment and implanted devices including pacemakers and insulin pumps. There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices, but provided no structure for enforcement. Energy sector In distributed generation systems, the risk of a cyber attack is real, according to Daily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility, Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."
Telecommunications Perhaps the most widely known digitally secure telecommunication device is the SIM (Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to an HTTP server in a smart card. Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. Combination SIM/DVD devices are being developed through Smart Video Card technology which embeds a DVD-compliant optical disc into the card body of a regular SIM card. Other telecommunication developments involving digital security include mobile signatures, which use the embedded SIM card to generate a legally binding electronic signature. Cost and impact of security breaches Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal." However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach). Attacker motivation As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg. Attackers' motivations can vary for all types of attacks, from pleasure to political goals. For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by having its website crash. High capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers, and such attacks are more likely to be serious. An example of a more serious attack was the 2015 Ukraine power grid hack, which reportedly utilised spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack. Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas. The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also in the risk to environments that are deemed vital to operations.
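As a worked illustration of the Gordon-Loeb result mentioned above, the sketch below uses entirely hypothetical figures; the often-cited form of the result bounds the economically sensible security investment at roughly 1/e (about 37%) of the expected loss, and in practice the optimal amount is usually well below that bound.

```python
import math

# Hypothetical figures for a single information asset.
potential_loss = 5_000_000        # monetary loss if the asset is breached
breach_probability = 0.10         # estimated chance of a breach with current controls

expected_loss = potential_loss * breach_probability    # 500,000
investment_upper_bound = expected_loss / math.e        # roughly 183,940

print(f"Expected loss:                 {expected_loss:,.0f}")
print(f"Upper bound on security spend: {investment_upper_bound:,.0f}")
```

The point of the model is not the exact numbers but the discipline of comparing security spending to the expected loss it is meant to reduce.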
All critical targeted environments are susceptible to compromise and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between hacker motivation and that of nation-state actors seeking to attack based on an ideological preference. A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies. Computer security incident management Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses. Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan: Preparation: Preparing stakeholders on the procedures for handling computer security incidents or compromises Detection and analysis: Identifying and investigating suspicious activity to confirm a security incident, prioritizing the response based on impact and coordinating notification of the incident Containment, eradication and recovery: Isolating affected systems to prevent escalation and limit impact, pinpointing the genesis of the incident, removing malware, affected systems and bad actors from the environment and restoring systems and data when a threat no longer remains Post incident activity: Post mortem analysis of the incident, its root cause and the organization's response with the intent of improving the incident response plan and future response efforts. Notable attacks and breaches Some illustrative examples of different types of computer security breaches are given below. Robert Morris and the first computer worm In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm. The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said he wanted to count how many machines were connected to the Internet.
Rome Laboratory In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user. TJX customer credit card details In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions. Stuxnet attack In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges. It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program – although neither has publicly admitted this. Global surveillance disclosures In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption. This standard was later withdrawn due to widespread criticism. The NSA was additionally revealed to have tapped the links between Google's data centers. Target and Home Depot breaches A Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit card numbers, and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers. Warnings were delivered at both corporations, but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and Federal United States authorities and the investigation is ongoing. Office of Personnel Management data breach In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office. The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States. Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check. It is believed the hack was perpetrated by Chinese hackers.
Ashley Madison breach In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point and threatened to dump customer data unless the website was taken down permanently. When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained in operation. Colonial Pipeline ransomware attack In May 2021, a ransomware attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast. Legal issues and global regulation International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals, and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." The use of techniques such as dynamic DNS, fast flux and bulletproof servers adds to the difficulty of investigation and enforcement. Role of government The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the national power grid. The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions. Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to efficiently solve the cybersecurity problem. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through." On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order. On 22 May 2020, the UN Security Council held its second-ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.
International actions Many different teams and organizations exist, including: The Forum of Incident Response and Security Teams (FIRST) is the global association of CSIRTs. The US-CERT, AT&T, Apple, Cisco, McAfee, Microsoft are all members of this international team. The Council of Europe helps protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime. The purpose of the Messaging Anti-Abuse Working Group (MAAWG) is to bring the messaging industry together to work collaboratively and to successfully address the various forms of messaging abuse, such as spam, viruses, denial-of-service attacks and other messaging exploitations. France Telecom, Facebook, AT&T, Apple, Cisco, Sprint are some of the members of the MAAWG. ENISA: The European Network and Information Security Agency (ENISA) is an agency of the European Union with the objective of improving network and information security in the European Union. Europe On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data. The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO). National actions Computer emergency response teams Most countries have their own computer emergency response team to protect network security. Canada Since 2010, Canada has had a cybersecurity strategy. This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online. There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident. The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and online tools for members of Canada's critical infrastructure sectors. It posts regular cybersecurity bulletins and operates an online reporting tool where individuals and organizations can report a cyber incident. To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations, and launched the Cyber Security Cooperation Program. They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October. Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015. Australia The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the 2023-2030 Australian Cyber Security Strategy, which at the time of the announcement was slated for release within the week.
Of this, $7.2 million is earmarked for establishing a voluntary cyber health check program that lets businesses conduct a comprehensive and tailored self-assessment of their cybersecurity posture. The health check serves as a diagnostic tool, enabling enterprises to ascertain the robustness of their own cyber security arrangements. It also gives them access to a repository of educational resources and materials to help them build the skills needed for a stronger cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins. India Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000, updated in 2013. The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. South Korea Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations. United States Cyber plan With the release of its National Cyber Strategy, the United States has its first fully formed cyber plan in 15 years. In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US to stop malicious uses of computer tools, working with friends and partners to do this; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet. The new U.S. cyber strategy seeks to allay some of those concerns by promoting responsible behavior in cyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into the U.S. Office of Personnel Management (OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure. Legislation The Computer Fraud and Abuse Act of 1986 is the key legislation. It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030.
Although various other measures have been proposed, none have succeeded. In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework. In response to the Colonial Pipeline ransomware attack, President Joe Biden signed Executive Order 14028 on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. Standardized government testing services The General Services Administration (GSA) has standardized the penetration test service as a pre-vetted support service, to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS). Agencies The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States called the National Cyber Security Division. The division is home to US-CERT operations and the National Cyber Alert System. The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure. The third priority of the FBI is to "Protect the United States against cyber-based attacks and high-technology crimes", and the FBI, along with the National White Collar Crime Center (NW3C) and the Bureau of Justice Assistance (BJA), is part of the multi-agency task force The Internet Crime Complaint Center, also known as IC3. In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard. The Computer Crime and Intellectual Property Section (CCIPS) operates in the United States Department of Justice Criminal Division. The CCIPS is in charge of investigating computer crime and intellectual property crime and is specialized in the search and seizure of digital evidence in computers and networks. In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)." The United States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners." It has no role in the protection of civilian networks. The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services. The Food and Drug Administration has issued guidance for medical devices, and the National Highway Traffic Safety Administration is concerned with automotive cybersecurity.
After being criticized by the Government Accountability Office, and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System. Concerns have also been raised about the future Next Generation Air Transportation System. The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through (ISC)²'s CISSP, etc. Computer emergency readiness team Computer emergency response team is a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. US-CERT: part of the National Cyber Security Division of the United States Department of Homeland Security. CERT/CC: created by the Defense Advanced Research Projects Agency (DARPA) and run by the Software Engineering Institute (SEI). U.S. NRC, 10 CFR 73.54 Cybersecurity In the context of U.S. nuclear power plants, the U.S. Nuclear Regulatory Commission (NRC) outlines cybersecurity requirements under 10 CFR Part 73, specifically in §73.54. NEI 08-09: Cybersecurity Plan for Nuclear Power Plants The Nuclear Energy Institute's NEI 08-09 document, Cyber Security Plan for Nuclear Power Reactors, outlines a comprehensive framework for cybersecurity in the nuclear power industry. Drafted with input from the U.S. NRC, this guideline is instrumental in aiding licensees to comply with the Code of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats. Modern warfare There is growing concern that cyberspace will become the next theater of warfare, as Mark Clayton of The Christian Science Monitor described in a 2015 article titled "The New Cyber Arms Race". This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009 and many other countries have similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be. Careers Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches. According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015. Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail. However, the use of the term cybersecurity is more prevalent in government job descriptions.
Typical cybersecurity job titles and descriptions include: Security analyst Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks), investigates using available tools and countermeasures to remedy the detected vulnerabilities and recommends solutions and best practices. Analyzes and assesses damage to the data/infrastructure as a result of security incidents, examines available recovery tools and processes, and recommends solutions. Tests for compliance with security policies and procedures. May assist in the creation, implementation, or management of security solutions. Security engineer Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect security incidents, and mount the incident response. Investigates and utilizes new technologies and processes to enhance security capabilities and implement improvements. May also review code or perform other security engineering methodologies. Security architect Designs a security system or major components of a security system, and may head a security design team building a new security system. Chief Information Security Officer (CISO) A high-level management position responsible for the entire information security division/staff. The position may include hands-on technical work. Chief Security Officer (CSO) A high-level management position responsible for the entire security division/staff. This newer position is increasingly deemed necessary as security risks grow. Data Protection Officer (DPO) A DPO is tasked with monitoring compliance with data protection laws (such as GDPR), data protection policies, awareness-raising, training, and audits. Security Consultant/Specialist/Intelligence Broad titles that encompass any one or all of the other roles or titles tasked with protecting computers, networks, software, data or information systems against viruses, worms, spyware, malware, intrusions, unauthorized access, denial-of-service attacks, and an ever-increasing list of attacks by hackers acting as individuals or as part of organized crime or foreign governments. Student programs are also available for people interested in beginning a career in cybersecurity. Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts. A wide range of certified courses are also available. In the United Kingdom, a nationwide set of cybersecurity forums, known as the U.K. Cyber Security Forum, were established, supported by the Government's cybersecurity strategy, in order to encourage start-ups and innovation and to address the skills gap identified by the U.K. Government. In Singapore, the Cyber Security Agency has issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by the Infocomm Media Development Authority (IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities. Terminology The following terms used with regard to computer security are explained below: Access authorization restricts access to a computer to a group of users through the use of authentication systems.
These systems can protect either the whole computer, such as through an interactive login screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, smart cards, and biometric systems. Anti-virus software consists of computer programs that attempt to identify, thwart, and eliminate computer viruses and other malicious software (malware). Applications are executable code, so general corporate practice is to restrict or block users' ability to install them; to install them only when there is a demonstrated need (e.g. software needed to perform assignments); to install only those which are known to be reputable (preferably with access to the computer code used to create the application); and to reduce the attack surface by installing as few as possible. They are typically run with least privilege, with a robust process in place to identify, test and install any released security patches or updates for them. For example, programs can be installed into an individual user's account, which limits the program's potential access, as well as being a means to control which users have specific exceptions to policy. In Linux, FreeBSD, OpenBSD, and other Unix-like operating systems there is an option to further restrict an application using chroot or other means of restricting the application to its own 'sandbox'. For example, Linux provides namespaces and cgroups to further restrict the access of an application to system resources. Generalized security frameworks such as SELinux or AppArmor help administrators control access. Java and other languages which compile to Java byte code and run in the Java virtual machine can have their access to other applications controlled at the virtual machine level. Some software can be run in software containers which can even provide their own set of system libraries, limiting the access of the software, or of anyone controlling it, to the server's versions of the libraries. Authentication techniques can be used to ensure that communication end-points are who they say they are. Automated theorem proving and other verification tools can be used to enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications. Backups are one or more copies kept of important computer files. Typically, multiple copies will be kept at different locations so that if a copy is stolen or damaged, other copies will still exist. Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. Capabilities vs. ACLs discusses their use. Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers. Confidentiality is the nondisclosure of information except to another authorized person. Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that the data exchange between systems can be intercepted or modified. Cyber attribution is the attribution of a cybercrime, i.e., finding who perpetrated a cyberattack. Cyberwarfare is an Internet-based conflict that involves politically motivated attacks on information and information systems. Such attacks can, for example, disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems.
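As a minimal illustration of the sandboxing techniques mentioned in the application entry above, the Unix-only sketch below confines a process with chroot and then drops root privileges; the directory path and numeric IDs are placeholders, and real deployments typically layer on namespaces, cgroups, or a framework such as SELinux or AppArmor rather than relying on chroot alone.

```python
import os

def confine_process(jail_dir: str, unprivileged_uid: int, unprivileged_gid: int) -> None:
    """Restrict the current process before it runs untrusted code.

    Must be started as root, because chroot() and setuid() are privileged calls.
    """
    os.chroot(jail_dir)            # limit the visible filesystem to jail_dir
    os.chdir("/")                  # make sure the working directory is inside the jail
    os.setgid(unprivileged_gid)    # drop group privileges first...
    os.setuid(unprivileged_uid)    # ...then drop user privileges permanently
    # From this point on, the process sees only files under jail_dir and no
    # longer runs with root privileges, limiting the damage a compromise can do.

# Hypothetical usage (requires root):
# confine_process("/var/jails/app", unprivileged_uid=1000, unprivileged_gid=1000)
```

Dropping the group ID before the user ID matters: once the user ID is no longer root, the process can no longer change its group memberships.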
Data integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record. Encryption is used to protect the confidentiality of a message. Cryptographically secure ciphers are designed to make any practical attempt of breaking them infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance. Endpoint security software aids networks in preventing malware infection and data theft at network entry points made vulnerable by the prevalence of potentially infected devices such as laptops, mobile devices, and USB drives. Firewalls serve as a gatekeeper system between networks, allowing only traffic that matches defined rules. They often include detailed logging, and may include intrusion detection and intrusion prevention features. They are near-universal between company local area networks and the Internet, but can also be used internally to impose traffic rules between networks if network segmentation is configured. A hacker is someone who seeks to breach defenses and exploit weaknesses in a computer system or network. Honey pots are computers that are intentionally left vulnerable to attack by crackers. They can be used to catch crackers and to identify their techniques. Intrusion-detection systems are devices or software applications that monitor networks or systems for malicious activity or policy violations. A microkernel is an approach to operating system design which has only the near-minimum amount of code running at the most privileged level – and runs other elements of the operating system such as device drivers, protocol stacks and file systems, in the safer, less privileged user space. Pinging. The standard ping application can be used to test if an IP address is in use. If it is, attackers may then try a port scan to detect which services are exposed. A port scan is used to probe an IP address for open ports to identify accessible network services and applications. A key logger is spyware that silently captures and stores each keystroke that a user types on the computer's keyboard. Social engineering is the use of deception to manipulate individuals to breach security. A logic bomb is a type of malware added to a legitimate program that lies dormant until it is triggered by a specific event. A unikernel is a computer program that runs on a minimalistic operating system where a single application is allowed to run (as opposed to a general purpose operating system where many applications can run at the same time). This approach to minimizing the attack surface is adopted mostly in cloud environments where software is deployed in virtual machines. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. History Since the Internet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change.
In the 1970s and 1980s, computer security was mainly limited to academia until the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such as distributed denial of service. This led to the formalization of cybersecurity as a professional discipline. The April 1967 session organized by Willis Ware at the Spring Joint Computer Conference, and the later publication of the Ware Report, were foundational moments in the history of the field of computer security. Ware's work straddled the intersection of material, cultural, political, and social concerns. A 1977 NIST publication introduced the CIA triad of confidentiality, integrity, and availability as a clear and simple way to describe key security goals. While still relevant, many more elaborate frameworks have since been proposed. However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not yet use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products. One of the earliest examples of an attack on a computer network was the computer worm Creeper written by Bob Thomas at BBN, which propagated through the ARPANET in 1971. The program was purely experimental in nature and carried no malicious payload. A later program, Reaper, was created by Ray Tomlinson in 1972 and used to destroy Creeper. Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage. The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led by Markus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 February 1990. In 1988, one of the first computer worms, called the Morris worm, was distributed via the Internet. It gained significant mainstream media attention. In 1993, Netscape started developing the protocol SSL, shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first widely adopted web browser, earlier that year. Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities. These weaknesses included replay attacks and a vulnerability that allowed hackers to alter unencrypted communications sent by users. However, in February 1995, Netscape launched Version 2.0. The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence. The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States. NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries. In 2016, the NSA's own hacking tools were hacked, and they have since been used by Russia and North Korea.
NSA's employees and contractors have been recruited at high salaries by adversaries, anxious to compete in cyberwarfare. In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.
Computer security
[ "Mathematics", "Technology", "Engineering" ]
18,420
[ "Malware", "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Computer security exploits" ]
7,403
https://en.wikipedia.org/wiki/Chemotaxis
Chemotaxis (from chemo- + taxis) is the movement of an organism or entity in response to a chemical stimulus. Somatic cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria to find food (e.g., glucose) by swimming toward the highest concentration of food molecules, or to flee from poisons (e.g., phenol). In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization) and to subsequent phases of development (e.g., migration of neurons or lymphocytes), as well as to normal function and health (e.g., migration of leukocytes during injury or infection). In addition, it has been recognized that mechanisms that allow chemotaxis in animals can be subverted during cancer metastasis, and that aberrant changes in the overall properties of the networks that control chemotaxis can lead to carcinogenesis. The aberrant chemotaxis of leukocytes and lymphocytes also contributes to inflammatory diseases such as atherosclerosis, asthma, and arthritis. Sub-cellular components, such as the polarity patch generated by mating yeast, may also display chemotactic behavior. Positive chemotaxis occurs if the movement is toward a higher concentration of the chemical in question; negative chemotaxis occurs if the movement is in the opposite direction. Chemically prompted kinesis (randomly directed or nondirectional) can be called chemokinesis.

History of chemotaxis research
Although migration of cells was detected from the early days of the development of microscopy by Leeuwenhoek, a Caltech lecture regarding chemotaxis propounds that 'erudite description of chemotaxis was only first made by T. W. Engelmann (1881) and W. F. Pfeffer (1884) in bacteria, and H. S. Jennings (1906) in ciliates'. The Nobel Prize laureate I. Metchnikoff also contributed to the study of the field during 1882 to 1886, with investigations of the process as an initial step of phagocytosis. The significance of chemotaxis in biology and clinical pathology was widely accepted in the 1930s, and the most fundamental definitions underlying the phenomenon were drafted by this time. The most important aspects in quality control of chemotaxis assays were described by H. Harris in the 1950s. In the 1960s and 1970s, the revolution of modern cell biology and biochemistry provided a series of novel techniques that became available to investigate the migratory responder cells and subcellular fractions responsible for chemotactic activity. The availability of this technology led to the discovery of C5a, a major chemotactic factor involved in acute inflammation. The pioneering works of J. Adler modernized Pfeffer's capillary assay and represented a significant turning point in understanding the whole process of intracellular signal transduction of bacteria.

Bacterial chemotaxis—general characteristics
Some bacteria, such as E. coli, have several flagella per cell (4–10 typically). These can rotate in two ways: counter-clockwise rotation aligns the flagella into a single rotating bundle, causing the bacterium to swim in a straight line, while clockwise rotation breaks the flagella bundle apart such that each flagellum points in a different direction, causing the bacterium to tumble in place. The directions of rotation are given for an observer outside the cell looking down the flagella toward the cell.
Behavior The overall movement of a bacterium is the result of alternating tumble and swim phases, called run-and-tumble motion. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria such as E. coli are unable to choose the direction in which they swim, and are unable to swim in a straight line for more than a few seconds due to rotational diffusion; in other words, bacteria "forget" the direction in which they are going. By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations. In the presence of a chemical gradient bacteria will chemotax, or direct their overall motion based on the gradient. If the bacterium senses that it is moving in the correct direction (toward attractant/away from repellent), it will keep swimming in a straight line for a longer time before tumbling; however, if it is moving in the wrong direction, it will tumble sooner. Bacteria like E. coli use temporal sensing to decide whether their situation is improving or not, and in this way, find the location with the highest concentration of attractant, detecting even small differences in concentration. This biased random walk is a result of simply choosing between two methods of random movement; namely tumbling and straight swimming. The helical nature of the individual flagellar filament is critical for this movement to occur. The protein structure that makes up the flagellar filament, flagellin, is conserved among all flagellated bacteria. Vertebrates seem to have taken advantage of this fact by possessing an immune receptor (TLR5) designed to recognize this conserved protein. As in many instances in biology, there are bacteria that do not follow this rule. Many bacteria, such as Vibrio, are monoflagellated and have a single flagellum at one pole of the cell. Their method of chemotaxis is different. Others possess a single flagellum that is kept inside the cell wall. These bacteria move by spinning the whole cell, which is shaped like a corkscrew. Signal transduction Chemical gradients are sensed through multiple transmembrane receptors, called methyl-accepting chemotaxis proteins (MCPs), which vary in the molecules that they detect. Thousands of MCP receptors are known to be encoded across the bacterial kingdom. These receptors may bind attractants or repellents directly or indirectly through interaction with proteins of periplasmatic space. The signals from these receptors are transmitted across the plasma membrane into the cytosol, where Che proteins are activated. The Che proteins alter the tumbling frequency, and alter the receptors. Flagellum regulation The proteins CheW and CheA bind to the receptor. The absence of receptor activation results in autophosphorylation in the histidine kinase, CheA, at a single highly conserved histidine residue. CheA, in turn, transfers phosphoryl groups to conserved aspartate residues in the response regulators CheB and CheY; CheA is a histidine kinase and it does not actively transfer the phosphoryl group, rather, the response regulator CheB takes the phosphoryl group from CheA. This mechanism of signal transduction is called a two-component system, and it is a common form of signal transduction in bacteria. 
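The run-and-tumble strategy and the temporal comparison it relies on can be illustrated with a toy simulation. The sketch below is a deliberately simplified model (unit step lengths, arbitrary tumble probabilities, and a linear attractant profile are all assumptions), but it shows how merely tumbling less often when the sensed concentration is rising biases an otherwise random walk up the gradient.

```python
import math
import random

def concentration(x: float) -> float:
    """Attractant concentration increasing linearly along the x axis (arbitrary units)."""
    return x

def run_and_tumble(steps: int = 5000, biased: bool = True) -> float:
    """Return the net x-displacement of a walker that tumbles less when conditions improve."""
    x = y = 0.0
    angle = random.uniform(0.0, 2.0 * math.pi)
    previous = concentration(x)
    for _ in range(steps):
        x += math.cos(angle)
        y += math.sin(angle)
        current = concentration(x)
        improving = biased and current > previous       # temporal comparison ("memory")
        tumble_probability = 0.05 if improving else 0.30
        if random.random() < tumble_probability:
            angle = random.uniform(0.0, 2.0 * math.pi)  # tumble: pick a new direction at random
        previous = current
    return x

random.seed(1)
print("biased walk net displacement:  ", round(run_and_tumble(biased=True), 1))
print("unbiased walk net displacement:", round(run_and_tumble(biased=False), 1))
```

Over many steps the biased walker drifts toward higher concentrations while the unbiased control wanders near its starting point, which is the essence of the biased random walk described above.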
CheY induces tumbling by interacting with the flagellar switch protein FliM, inducing a change from counter-clockwise to clockwise rotation of the flagellum. A change in the rotation state of a single flagellum can disrupt the entire flagella bundle and cause a tumble.

Receptor regulation
CheB, when activated by CheA, acts as a methylesterase, removing methyl groups from glutamate residues on the cytosolic side of the receptor; it works antagonistically with CheR, a methyltransferase, which adds methyl residues to the same glutamate residues. If the level of an attractant remains high, the level of phosphorylation of CheA (and, therefore, CheY and CheB) will remain low, the cell will swim smoothly, and the level of methylation of the MCPs will increase (because CheB-P is not present to demethylate). The MCPs no longer respond to the attractant when they are fully methylated; therefore, even though the level of attractant might remain high, the level of CheA-P (and CheB-P) increases and the cell begins to tumble. The MCPs can be demethylated by CheB-P, and, when this happens, the receptors can once again respond to attractants. The situation is the opposite with regard to repellents: fully methylated MCPs respond best to repellents, while least-methylated MCPs respond worst to repellents. This regulation allows the bacterium to 'remember' chemical concentrations from the recent past (a few seconds) and compare them to those it is currently experiencing, and thus 'know' whether it is traveling up or down a gradient. Beyond the sensitivity that bacteria have to chemical gradients, other mechanisms are involved in increasing the absolute value of the sensitivity on a given background. Well-established examples are the ultra-sensitive response of the motor to the CheY-P signal, and the clustering of chemoreceptors.

Chemoattractants and chemorepellents
Chemoattractants and chemorepellents are inorganic or organic substances possessing a chemotaxis-inducing effect in motile cells. These chemotactic ligands create chemical concentration gradients that organisms, prokaryotic and eukaryotic, move toward or away from, respectively. Effects of chemoattractants are elicited via chemoreceptors such as methyl-accepting chemotaxis proteins (MCPs). MCPs in E. coli include Tar, Tsr, Trg and Tap. Chemoattractants recognized by Trg include ribose and galactose, with phenol acting as a chemorepellent. Tap and Tsr recognize dipeptides and serine as chemoattractants, respectively. Chemoattractants or chemorepellents bind an MCP at its extracellular domain; an intracellular signaling domain relays the changes in concentration of these chemotactic ligands to downstream proteins such as CheA, which then relays this signal to the flagellar motors via phosphorylated CheY (CheY-P). CheY-P can then control flagellar rotation, influencing the direction of cell motility. For E. coli, S. meliloti, and R. sphaeroides, the binding of chemoattractants to MCPs inhibits CheA and therefore CheY-P activity, resulting in smooth runs, but for B. subtilis, CheA activity increases. Methylation events in E. coli cause MCPs to have lower affinity for chemoattractants, which causes increased activity of CheA and CheY-P, resulting in tumbles. In this way cells are able to adapt to the immediate chemoattractant concentration and detect further changes to modulate cell motility. Chemoattractants in eukaryotes are well characterized for immune cells. Formyl peptides, such as fMLF, attract leukocytes such as neutrophils and macrophages, causing movement toward infection sites.
Non-acylated methioninyl peptides do not act as chemoattractants to neutrophils and macrophages. Leukocytes also move toward the chemoattractant C5a, a complement component, and toward pathogen-specific ligands on bacteria. Mechanisms concerning chemorepellents are less well understood than those concerning chemoattractants. Although chemorepellents work to confer an avoidance response in organisms, Tetrahymena thermophila adapts to a chemorepellent, the Netrin-1 peptide, within 10 minutes of exposure; however, exposure to chemorepellents such as GTP, PACAP-38, and nociceptin shows no such adaptation. GTP and ATP are chemorepellents in micro-molar concentrations to both Tetrahymena and Paramecium. These organisms avoid these molecules by producing avoidance reactions to re-orient themselves away from the gradient.

Eukaryotic chemotaxis
The mechanism of chemotaxis that eukaryotic cells employ is quite different from that in the bacterium E. coli; however, sensing of chemical gradients is still a crucial step in the process. Due to their small size and other biophysical constraints, E. coli cannot directly detect a concentration gradient. Instead, they employ temporal gradient sensing, where they move over distances several times their own width and measure the rate at which the perceived chemical concentration changes. Eukaryotic cells are much larger than prokaryotes and have receptors embedded uniformly throughout the cell membrane. Eukaryotic chemotaxis involves detecting a concentration gradient spatially by comparing the asymmetric activation of these receptors at the different ends of the cell. Activation of these receptors results in migration towards chemoattractants, or away from chemorepellents. In mating yeast, which are non-motile, patches of polarity proteins on the cell cortex can relocate in a chemotactic fashion up pheromone gradients. It has also been shown that both prokaryotic and eukaryotic cells are capable of chemotactic memory. In prokaryotes, this mechanism involves the methylation of receptors called methyl-accepting chemotaxis proteins (MCPs). This results in their desensitization and allows prokaryotes to "remember" and adapt to a chemical gradient. In contrast, chemotactic memory in eukaryotes can be explained by the Local Excitation Global Inhibition (LEGI) model. LEGI involves the balance between a fast excitation and delayed inhibition which controls downstream signaling such as Ras activation and PIP3 production. Levels of receptors, intracellular signalling pathways and the effector mechanisms all represent diverse, eukaryotic-type components. In eukaryotic unicellular cells, amoeboid movement and the cilium or the eukaryotic flagellum are the main effectors (e.g., Amoeba or Tetrahymena). Some eukaryotic cells of higher vertebrate origin, such as immune cells, also move to where they need to be. Besides immune competent cells (granulocytes, monocytes, lymphocytes), a large group of cells—considered previously to be fixed into tissues—are also motile in special physiological conditions (e.g., mast cells, fibroblasts, endothelial cells) or pathological conditions (e.g., metastases). Chemotaxis has high significance in the early phases of embryogenesis, as development of the germ layers is guided by gradients of signal molecules.
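The LEGI idea mentioned above, a fast local excitation balanced against a slower global inhibition, can be sketched with a toy two-compartment model. The rate constants, compartment structure, and readout below are illustrative assumptions rather than a published parameterization; the point is only that a uniform stimulus produces a response that adapts away, while a gradient produces a persistent response at the cell's front.

```python
def simulate_legi(signal_front: float, signal_back: float, steps: int = 2000, dt: float = 0.01,
                  k_excite: float = 5.0, k_inhibit: float = 0.5) -> dict:
    """Toy LEGI model: excitation tracks the local signal quickly, inhibition tracks the
    cell-wide average slowly, and the response in each compartment is the excess E - I."""
    excitation = {"front": 0.0, "back": 0.0}
    inhibition = 0.0
    signals = {"front": signal_front, "back": signal_back}
    mean_signal = (signal_front + signal_back) / 2.0
    for _ in range(steps):
        inhibition += dt * k_inhibit * (mean_signal - inhibition)                    # slow, global
        for side in excitation:
            excitation[side] += dt * k_excite * (signals[side] - excitation[side])   # fast, local
    return {side: max(0.0, excitation[side] - inhibition) for side in excitation}

# Uniform stimulus: both compartments adapt, so the steady-state response is near zero.
print(simulate_legi(signal_front=1.0, signal_back=1.0))
# Gradient: the front keeps a persistent positive response, the back does not.
print(simulate_legi(signal_front=1.5, signal_back=0.5))
```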
Detection of a gradient of chemoattractant
The specific molecules that allow a eukaryotic cell to detect a gradient of chemoattractant ligands (that is, the molecular compass that senses the direction of a chemoattractant) seem to change depending on the cell and the chemoattractant receptor involved, or even on the concentration of the chemoattractant. However, these molecules apparently are activated independently of the motility of the cell. That is, even an immobilized cell is still able to detect the direction of a chemoattractant. There appear to be mechanisms by which an external chemotactic gradient is sensed and turned into intracellular Ras and PIP3 gradients, which results in the activation of a signaling pathway culminating in the polymerisation of actin filaments. The growing distal end of actin filaments develops connections with the internal surface of the plasma membrane via different sets of peptides and results in the formation of anterior pseudopods and posterior uropods. Cilia of eukaryotic cells can also produce chemotaxis; in this case, it is mainly a Ca2+-dependent induction of the microtubular system of the basal body and the beat of the 9 + 2 microtubules within cilia. The orchestrated beating of hundreds of cilia is synchronized by a submembranous system built between basal bodies. The details of the signaling pathways are still not totally clear.

Chemotaxis-related migratory responses
Chemotaxis refers to the directional migration of cells in response to chemical gradients; several variations of chemical-induced migration exist, as listed below. Chemokinesis refers to an increase in cellular motility in response to chemicals in the surrounding environment. Unlike chemotaxis, the migration stimulated by chemokinesis lacks directionality, and instead increases environmental scanning behaviors. In haptotaxis the gradient of the chemoattractant is expressed or bound on a surface, in contrast to the classical model of chemotaxis, in which the gradient develops in a soluble fluid. The most common biologically active haptotactic surface is the extracellular matrix (ECM); the presence of bound ligands is responsible for induction of transendothelial migration and angiogenesis. Necrotaxis embodies a special type of chemotaxis when the chemoattractant molecules are released from necrotic or apoptotic cells. Depending on the chemical character of the released substances, necrotaxis can accumulate or repel cells, which underlines the pathophysiological significance of this phenomenon.

Receptors
In general, eukaryotic cells sense the presence of chemotactic stimuli through the use of 7-transmembrane (or serpentine) heterotrimeric G-protein-coupled receptors, a class representing a significant portion of the genome. Some members of this gene superfamily are used in eyesight (rhodopsins) as well as in olfaction (smelling). The main classes of chemotaxis receptors are triggered by: formyl peptides - formyl peptide receptors (FPR), chemokines - chemokine receptors (CCR or CXCR), and leukotrienes - leukotriene receptors (BLT). However, induction of a wide set of membrane receptors (e.g., by cyclic nucleotides, amino acids, insulin, or vasoactive peptides) also elicits migration of the cell.

Chemotactic selection
While some chemotaxis receptors are expressed in the surface membrane with long-term characteristics, as they are determined genetically, others have short-term dynamics, as they are assembled ad hoc in the presence of the ligand.
The diverse features of the chemotaxis receptors and ligands allow for the possibility of selecting chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection, we can determine whether a still-uncharacterized molecule acts via the long- or the short-term receptor pathway. The term chemotactic selection is also used to designate a technique that separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands.

Chemotactic ligands
The number of molecules capable of eliciting chemotactic responses is relatively high, and we can distinguish primary and secondary chemotactic molecules. The main groups of the primary ligands are as follows: Formyl peptides are di-, tri-, or tetrapeptides of bacterial origin, formylated on the N-terminus of the peptide. They are released from bacteria in vivo or after decomposition of the cell. A typical member of this group is N-formylmethionyl-leucyl-phenylalanine (abbreviated fMLF or fMLP). Bacterial fMLF, a key component of inflammation, has characteristic chemoattractant effects on neutrophil granulocytes and monocytes. The chemotactic factor ligands and receptors related to formyl peptides are summarized in the related article, Formyl peptide receptors. Complement 3a (C3a) and complement 5a (C5a) are intermediate products of the complement cascade. Their synthesis is coupled to the three pathways (classical, lectin-dependent, and alternative) of complement activation by a convertase enzyme. The main target cells of these derivatives are neutrophil granulocytes and monocytes as well. Chemokines belong to a special class of cytokines; not only do their groups (C, CC, CXC, CX3C chemokines) represent structurally related molecules with a special arrangement of disulfide bridges, but their target cell specificity is also diverse. CC chemokines act on monocytes (e.g., RANTES), and CXC chemokines are neutrophil granulocyte-specific (e.g., IL-8). Investigations of the three-dimensional structures of chemokines provided evidence that a characteristic composition of beta-sheets and an alpha helix provides expression of sequences required for interaction with the chemokine receptors. Formation of dimers and their increased biological activity was demonstrated by crystallography of several chemokines, e.g. IL-8.

Metabolites of polyunsaturated fatty acids
Leukotrienes are eicosanoid lipid mediators made by the metabolism of arachidonic acid by ALOX5 (also termed 5-lipoxygenase). Their most prominent member with chemotactic factor activity is leukotriene B4, which elicits adhesion, chemotaxis, and aggregation of leukocytes. The chemoattractant action of LTB4 is induced via either of two G protein–coupled receptors, BLT1 and BLT2, which are highly expressed in cells involved in inflammation and allergy. The family of 5-Hydroxyicosatetraenoic acid eicosanoids are arachidonic acid metabolites also formed by ALOX5. Three members of the family form naturally and have prominent chemotactic activity. These, listed in order of decreasing potency, are: 5-oxo-eicosatetraenoic acid, 5-oxo-15-hydroxy-eicosatetraenoic acid, and 5-Hydroxyeicosatetraenoic acid. This family of agonists stimulates chemotactic responses in human eosinophils, neutrophils, and monocytes by binding to the Oxoeicosanoid receptor 1, which, like the receptors for leukotriene B4, is a G protein-coupled receptor. Aside from the skin, neutrophils are the body's first line of defense against bacterial infections.
After leaving nearby blood vessels, these cells recognize chemicals produced by bacteria in a cut or scratch and migrate "toward the smell". 5-Hydroxyeicosatrienoic acid and 5-oxoeicosatrienoic acid are metabolites of Mead acid (5Z,8Z,11Z-eicosatrienoic acid); they stimulate leukocyte chemotaxis through the oxoeicosanoid receptor 1, with 5-oxoeicosatrienoic acid being as potent as its arachidonic acid-derived analog, 5-oxo-eicosatetraenoic acid, in stimulating human blood eosinophil and neutrophil chemotaxis. 12-Hydroxyeicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX12 which stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. Prostaglandin D2 is an eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates chemotaxis through the Prostaglandin DP2 receptor. It elicits chemotactic responses in eosinophils, basophils, and T helper cells of the Th2 subtype. 12-Hydroxyheptadecatrienoic acid is a non-eicosanoid metabolite of arachidonic acid made by cyclooxygenase 1 or cyclooxygenase 2 that stimulates leukocyte chemotaxis through the leukotriene B4 receptor, BLT2. 15-oxo-eicosatetraenoic acid is an eicosanoid metabolite of arachidonic acid made by ALOX15; it has weak chemotactic activity for human monocytes (see 15-Hydroxyeicosatetraenoic acid#15-oxo-ETE). The receptor or other mechanism by which this metabolite stimulates chemotaxis has not been elucidated.

Chemotactic range fitting
Chemotactic responses elicited by ligand-receptor interactions vary with the concentration of the ligand. Investigations of ligand families (e.g. amino acids or oligopeptides) demonstrate that chemoattractant activity occurs over a wide concentration range, while chemorepellent activities have narrow ranges.

Clinical significance
A changed migratory potential of cells has relatively high importance in the development of several clinical symptoms and syndromes. Altered chemotactic activity of extracellular (e.g., Escherichia coli) or intracellular (e.g., Listeria monocytogenes) pathogens itself represents a significant clinical target. Modification of the endogenous chemotactic ability of these microorganisms by pharmaceutical agents can decrease or inhibit the rate of infections or the spreading of infectious diseases. Apart from infections, there are some other diseases wherein impaired chemotaxis is the primary etiological factor, as in Chédiak–Higashi syndrome, where giant intracellular vesicles inhibit normal migration of cells.

Mathematical models
Several mathematical models of chemotaxis have been developed, depending on: the type of migration (e.g., basic differences between bacterial swimming, movement of unicellular eukaryotes with cilia/flagella, and amoeboid migration); the physico-chemical characteristics of the chemicals working as ligands (e.g., diffusion); the biological characteristics of the ligands (attractant, neutral, and repellent molecules); the assay systems applied to evaluate chemotaxis (see incubation times, development, and stability of concentration gradients); and other environmental effects possessing a direct or indirect influence on the migration (lighting, temperature, magnetic fields, etc.). Although interactions of the factors listed above make the behavior of the solutions of mathematical models of chemotaxis rather complex, it is possible to describe the basic phenomenon of chemotaxis-driven motion in a straightforward way.
Indeed, let us denote by C the spatially non-uniform concentration of the chemo-attractant and by ∇C its gradient. Then the chemotactic cellular flow (also called current) J_chemo that is generated by the chemotaxis is linked to the above gradient by the law:

J_chemo = χ(C) ρ ∇C

where ρ is the spatial density of the cells and χ(C) is the so-called 'chemotactic coefficient' – χ is often not constant, but rather a decreasing function of the chemo-attractant concentration. For some quantity q that is subject to a total flux J and a generation/destruction term S, it is possible to formulate a continuity equation:

∂q/∂t + ∇·J = S

where ∇· is the divergence. This general equation applies to both the cell density and the chemo-attractant. Therefore, incorporating a diffusion flux into the total flux term, the interactions between these quantities are governed by a set of coupled reaction-diffusion partial differential equations describing the change in ρ and C:

∂ρ/∂t = f(ρ) + ∇·(D_ρ ∇ρ) − ∇·(χ(C) ρ ∇C)
∂C/∂t = k(ρ, C) + ∇·(D_C ∇C)

where f(ρ) describes the growth in cell density, k(ρ, C) is the kinetics/source term for the chemo-attractant, and the diffusion coefficients for cell density and the chemo-attractant are respectively D_ρ and D_C. Spatial ecology of soil microorganisms is a function of their chemotactic sensitivities towards substrate and fellow organisms. The chemotactic behavior of the bacteria was proven to lead to non-trivial population patterns even in the absence of environmental heterogeneities. The presence of structural pore-scale heterogeneities has an extra impact on the emerging bacterial patterns.

Measurement of chemotaxis
A wide range of techniques is available to evaluate the chemotactic activity of cells or the chemoattractant and chemorepellent character of ligands. The basic requirements of the measurement are as follows: concentration gradients can develop relatively quickly and persist for a long time in the system; chemotactic and chemokinetic activities are distinguished; migration of cells is free toward and away along the axis of the concentration gradient; and detected responses are the results of the active migration of cells. Despite the fact that an ideal chemotaxis assay is still not available, there are several protocols and pieces of equipment that offer good correspondence with the conditions described above.

Artificial chemotactic systems
Chemical robots that use artificial chemotaxis to navigate autonomously have been designed. Applications include targeted delivery of drugs in the body. More recently, enzyme molecules have also shown positive chemotactic behavior in the gradient of their substrates. The thermodynamically favorable binding of enzymes to their specific substrates is recognized as the origin of enzymatic chemotaxis. Additionally, enzymes in cascades have also shown substrate-driven chemotactic aggregation. Apart from active enzymes, non-reacting molecules also show chemotactic behavior. This has been demonstrated by using dye molecules that move directionally in gradients of polymer solution through favorable hydrophobic interactions.

See also
McCutcheon index, Tropism, Durotaxis, Haptotaxis, Mechanotaxis, Plithotaxis, Thin layers (oceanography)

External links
Chemotaxis, Neutrophil Chemotaxis, Cell Migration Gateway, Downloadable Matlab chemotaxis simulator, Bacterial Chemotaxis Interactive Simulator (web-app)
Chemotaxis
[ "Physics", "Chemistry", "Engineering" ]
6,263
[ "Transport phenomena", "Transmembrane receptors", "Physical phenomena", "Chemical engineering", "Signal transduction" ]
7,431
https://en.wikipedia.org/wiki/Counter-Strike%20%28video%20game%29
Counter-Strike (also known as Half-Life: Counter-Strike or Counter-Strike 1.6) is a tactical first-person shooter game developed by Valve. It was initially developed and released as a Half-Life modification by Minh "Gooseman" Le and Jess Cliffe in 1999, before Le and Cliffe were hired and the game's intellectual property acquired. Counter-Strike was released by Valve for Microsoft Windows in November 2000, and is the first installment in the Counter-Strike series. Several remakes and ports were released on Xbox, as well as OS X and Linux. Set in various locations around the globe, players assume the roles of counter-terrorist forces and terrorist militants opposing them. During each round of gameplay, the two teams are tasked with defeating the other by means of either achieving the map's objectives or eliminating all of the enemy combatants. Each player may customize their arsenal of weapons and accessories at the beginning of every match, with currency being earned after the end of each round.

Gameplay
Counter-Strike is a team-based multiplayer first-person shooter video game in which players can join either the terrorists (T) or the counter-terrorists (CT). If one team has more players than the other, the server settings may automatically balance the teams. Each round begins with both teams spawning simultaneously, with each player appearing as one of eight possible default character models (four each for counter-terrorists and terrorists). Each player begins with $800, two magazines of ammo, a knife, and a handgun: a Heckler & Koch USP for the counter-terrorists or a Glock 18c for the terrorists. Players are usually allowed a few seconds before the round starts, known as freeze time, during which they can purchase equipment but not move. Players may purchase equipment whenever they are in a buy zone for their team (some buy zones can be shared by both sides) and the round has not been in session for more than a certain duration, which is 90 seconds by default. Surviving players keep their equipment for the following round, while those who die start again with a handgun and knife. The scoreboard displays team results as well as information about each player, including their name, score, deaths, and ping/latency (ms). It also displays whether each player on the map is dead, carrying a bomb (in bomb defusal maps), or a VIP (in assassination maps), although a player must be killed during the round to see this information about opposing team members. Players that are killed become "ghosts" for the rest of the round; they are unable to alter their names or receive chat/voice messages from live players, unless the console command sv_alltalk is set to 1. They may typically watch the rest of the round from a variety of chosen observer modes (free-look mode, locked chasecam and free chase chasecam), but some servers limit some of these views to prevent dead players from conveying information about surviving players to their teammates via alternate media (most notably voice in Internet cafés). Many players believe the practice known as "ghosting" to be cheating. Players receive standard bonuses, such as $3500 for winning a round, $1500 for losing one, and $300 for killing an enemy. They can hold up to $16000 earned in this way, and can be fined (e.g. killing a teammate fines the perpetrator $3300). Currently, there are three objectives depending on the map: Bomb defusal: The terrorist team has a bomb when the round starts. The goal of the terrorists is to plant the bomb at a bomb site—usually called Bombsite A or Bombsite B on the map—and make sure it explodes.
The counter-terrorist team wins if they are able to defuse the bomb within a set time limit. If either team is eliminated before the bomb is planted, the other team wins. Bomb defusal maps start with the prefix "de_" (e.g. de_dust2). Hostage rescue: Four hostages are often located close to the terrorist base on the map. The goal of the Counter-Terrorists is to lead the captives to a location on the map where they are rescued. A team also wins if every member of the opposing team has been eliminated. The Counter-Terrorists win and get $2400 for each captive that survives, provided that the number of rescued hostages is at least half of the original hostage count. The terrorists win if the round time runs out before enough hostages have been rescued. Maps with this objective start with the prefix "cs_" (e.g. cs_office). Assassination: In this objective, one Counter-Terrorist team member becomes the VIP, equipped with 200 units of Kevlar armor and nothing more than the counter-terrorist standard-issue USP handgun and one additional magazine. Except for their own handgun, the VIP is not permitted to retrieve dropped firearms. The VIP's goal is to reach an extraction zone (typically there is one), at which point the counter-terrorists win. The terrorists win if the VIP dies. The counter-terrorists also win if every terrorist dies, while the terrorists also win when time runs out. A VIP cannot expect to escape without the team's help because of the handgun's limited ammunition, but the heavy armor and the pistol together offer sufficient protection. Formerly, there was a fourth objective called Escape. In this scenario, the terrorist team must "escape" to one of the designated escape points after beginning the round in a protected area, while the counter-terrorist team tries to kill them before they can flee. Once half of the terrorist team has managed to escape, the terrorists win the round. After eight rounds of play, the two sides trade roles. Either team can also win the round by eliminating the opposing team. Three categories exist for weapons: Melee (knife), Secondary (handguns), and Primary (rifles, shotguns, machine and submachine guns). There is a separate category for equipment like defusing kits and hand grenades. With the exception of the equipment category, which may hold many items at once, players are only allowed to carry one item in each of these categories at a time.
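The in-game economy described above amounts to simple bookkeeping. The following minimal sketch uses only the dollar figures quoted earlier (starting money, round win and loss bonuses, the kill reward, the team-kill fine, and the $16000 cap); it is an illustration of that bookkeeping, not the game's actual code, and it ignores details such as how loss bonuses and fines are computed in practice.

```python
# Figures quoted above: $800 starting money, $3500 for a round win, $1500 for a loss,
# $300 per enemy kill, a $3300 team-kill fine, and a $16000 cap on money held.
MONEY_CAP = 16000

def apply_award(money: int, amount: int) -> int:
    """Add a bonus (or subtract a fine) and clamp the result to the allowed range."""
    return max(0, min(MONEY_CAP, money + amount))

money = 800                           # starting money
money = apply_award(money, 2 * 300)   # two enemy kills during the round
money = apply_award(money, 3500)      # round won
money = apply_award(money, -3300)     # fined for killing a teammate
print(money)                          # 1600
```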
The initial few Betas, released starting in June 1999, had limited audiences but by the fifth one, interest in the project dramatically grew. The interest in the game drew numerous players to the website, which helped Le and Cliffe to make revenue from ads hosted on the site. Around 2000 at the time of Beta 5's release, the two were approached by Valve, offering to buy the Counter-Strike intellectual property and offering both jobs to continue its development. Both accepted the offer, and by September 2000, Valve released the first non-beta version of the game. While Cliffe stayed with Valve, Le did some additional work towards a Counter-Strike 2.0 based on Valve's upcoming Source engine, but left to start his own studio after Valve opted to shelve the sequel. Counter-Strike itself is a mod, and it has developed its own community of script writers and mod creators. Some mods add bots, while others remove features of the game, and others create different modes of play. Some mods, often called "admin plugins", give server administrators more flexible and efficient control over their servers. There are some mods which affect gameplay heavily, such as Gun Game, where players start with a basic pistol and must score kills to receive better weapons, and Zombie Mod, where one team consists of zombies and must "spread the infection" by killing the other team (using only the knife). There are also Superhero mods which mix the first-person gameplay of Counter-Strike with an experience system, allowing a player to become more powerful as they continue to play. The game is highly customizable on the player's end, allowing the user to install or even create their own custom skins, HUDs, spray graphics, sprites, and sound effects, given the proper tools. Valve Anti-Cheat Counter-Strike has been a target for cheating in online games since its release. In-game, cheating is often referred to as "hacking" in reference to programs or "hacks" executed by the client. Valve has implemented an anti-cheat system called Valve Anti-Cheat (VAC). Players cheating on a VAC-enabled server risk having their account permanently banned from all VAC-secured servers. With the first version of VAC, a ban took hold almost instantly after being detected and the cheater had to wait two years to have the account unbanned. Since VAC's second version, cheaters are not banned automatically. With the second version, Valve instituted a policy of 'delayed bans', the theory being that if a new hack is developed which circumvents the VAC system, it will spread amongst the 'cheating' community. By delaying the initial ban, Valve hopes to identify and ban as many cheaters as possible. Like any software detection system, some cheats are not detected by VAC. To remedy this, some servers implement a voting system, in which case players can call for a vote to kick or ban the accused cheater. VAC's success at identifying cheats and banning those who use them has also provided a boost in the purchasing of private cheats. These cheats are updated frequently to minimize the risk of detection, and are generally only available to a trusted list of recipients who collectively promise not to reveal the underlying design. Even with private cheats however, some servers have alternative anticheats to coincide with VAC itself. This can help with detecting some cheaters, but most paid for cheats are designed to bypass these alternative server-based anticheats. 
Release When Counter-Strike was published by Sierra Studios, it was bundled with Team Fortress Classic, Opposing Force multiplayer, and the Wanted, Half-Life: Absolute Redemption and Firearms mods. On March 24, 1999, Planet Half-Life opened its Counter-Strike section. Within two weeks, the site had received 10,000 hits. On June 19, 1999, the first public beta of Counter-Strike was released, followed by numerous further "beta" releases. On April 12, 2000, Valve announced that the Counter-Strike developers and Valve had teamed up. In January 2013, Valve began testing a version of Counter-Strike for OS X and Linux, eventually releasing the update to all users in April 2013. An unofficial browser version was released in 2023 on a Russian website. Reception Upon its retail release, Counter-Strike received highly favorable reviews. In 2003, Counter-Strike was inducted into GameSpot's list of the greatest games of all time. The New York Times reported that E-Sports Entertainment ESEA League started the first professional fantasy e-sports league in 2004 with the game Counter-Strike. Some credit the move into professional competitive team play with prizes as a major factor in Counter-Strike longevity and success. Global retail sales of Counter-Strike surpassed 250,000 units by July 2001. The game sold 1.5 million by February 2003 and generated $40 million in revenue. In the United States, its retail version sold 550,000 copies and earned $15.7 million by August 2006, after its release in November 2000. It was the country's 22nd best-selling PC game between January 2000 and August 2006. The Xbox version sold 1.5 million copies in total. Brazilian sale ban On January 17, 2008, a Brazilian federal court order prohibiting all sales of Counter-Strike and EverQuest began to be enforced. The federal Brazilian judge Carlos Alberto Simões de Tomaz ordered the ban in October 2007 because, as argued by the judge, the games "bring imminent stimulus to the subversion of the social order, attempting against the democratic state and the law and against public security." As of June 18, 2009, a regional federal court order lifting the prohibition on the sale of Counter-Strike was published. The game is now being sold again in Brazil. Competitive play The original Counter-Strike has been played in tournaments since 2000 with the first major being hosted in 2001 at the Cyberathlete Professional League Winter Championship. The first official sequel was Counter-Strike: Source, released on November 1, 2004. The game was criticized by the competitive community, who believed the game's skill ceiling was significantly lower than that of CS 1.6. This caused a divide in the competitive community as to which game to play competitively. Sequels Following the success of the first Counter-Strike, Valve went on to make multiple sequels to the game. Counter-Strike: Condition Zero, a game using Counter-Strikes GoldSrc engine, was released in 2004. Counter-Strike: Source, a remake of the original Counter-Strike, was the first in the series to use Valve's Source engine and was also released in 2004, eight months after the release of Counter-Strike: Condition Zero. The next game in the Counter-Strike series to be developed primarily by Valve was Counter-Strike: Global Offensive, released for Windows, OS X, Linux, PlayStation 3, and Xbox 360 in 2012. Counter-Strike 2, an updated version of Global Offensive, was released in 2023. The game spawned multiple spin-offs for the Asian gaming market. 
The first, Counter-Strike Neo, was an arcade game developed by Namco and released in Japan in 2003. In 2008, Nexon Corporation released Counter-Strike Online, a free-to-play instalment in the series monetized via microtransactions. Counter-Strike Online was followed by Counter-Strike Online 2 in 2013. In 2014, Nexon released Counter-Strike Nexon: Zombies worldwide via Steam. See also List of video games derived from modifications
Counter-Strike (video game)
[ "Physics" ]
3,187
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
7,439
https://en.wikipedia.org/wiki/Constructible%20number
In geometry and algebra, a real number r is constructible if and only if, given a line segment of unit length, a line segment of length |r| can be constructed with compass and straightedge in a finite number of steps. Equivalently, r is constructible if and only if there is a closed-form expression for r using only integers and the operations for addition, subtraction, multiplication, division, and square roots. The geometric definition of constructible numbers motivates a corresponding definition of constructible points, which can again be described either geometrically or algebraically. A point is constructible if it can be produced as one of the points of a compass and straightedge construction (an endpoint of a line segment or crossing point of two lines or circles), starting from a given unit length segment. Alternatively and equivalently, taking the two endpoints of the given segment to be the points (0, 0) and (1, 0) of a Cartesian coordinate system, a point is constructible if and only if its Cartesian coordinates are both constructible numbers. Constructible numbers and points have also been called ruler and compass numbers and ruler and compass points, to distinguish them from numbers and points that may be constructed using other processes. The set of constructible numbers forms a field: applying any of the four basic arithmetic operations to members of this set produces another constructible number. This field is a field extension of the rational numbers and in turn is contained in the field of algebraic numbers. It is the Euclidean closure of the rational numbers, the smallest field extension of the rationals that includes the square roots of all of its positive numbers. The proof of the equivalence between the algebraic and geometric definitions of constructible numbers has the effect of transforming geometric questions about compass and straightedge constructions into algebra, including several famous problems from ancient Greek mathematics. The algebraic formulation of these questions led to proofs that their solutions are not constructible, after the geometric formulation of the same problems previously defied centuries of attack.

Geometric definitions

Geometrically constructible points
Let A and B be two given distinct points in the Euclidean plane, and define S to be the set of points that can be constructed with compass and straightedge starting with A and B. Then the points of S are called constructible points. A and B are, by definition, elements of S. To more precisely describe the remaining elements of S, make the following two definitions: a line segment whose endpoints are in S is called a constructed segment, and a circle whose center is in S and which passes through a point of S (alternatively, whose radius is the distance between some pair of distinct points of S) is called a constructed circle. Then, the points of S, besides A and B, are: the intersection of two non-parallel constructed segments, or lines through constructed segments; the intersection points of a constructed circle and a constructed segment, or line through a constructed segment; or the intersection points of two distinct constructed circles. As an example, the midpoint of the constructed segment AB is a constructible point. One construction for it is to construct two circles with AB as radius, and the line through the two crossing points of these two circles. Then the midpoint of the segment AB is the point where this segment is crossed by the constructed line.
Geometrically constructible numbers
The starting information for the geometric formulation can be used to define a Cartesian coordinate system in which the point A is associated to the origin, having coordinates (0, 0), and in which the point B is associated with the coordinates (1, 0). The points of S may now be used to link the geometry and algebra by defining a constructible number to be a coordinate of a constructible point. Equivalent definitions are that a constructible number is the x-coordinate of a constructible point or the length of a constructible line segment. In one direction of this equivalence, if a constructible point has coordinates (x, y), then the point (x, 0) can be constructed as its perpendicular projection onto the x-axis, and the segment from the origin to this point has length |x|. In the reverse direction, if x is the length of a constructible line segment, then intersecting the x-axis with a circle centered at (0, 0) with radius x gives the point (x, 0). It follows from this equivalence that every point whose Cartesian coordinates are geometrically constructible numbers is itself a geometrically constructible point. For, when x and y are geometrically constructible numbers, the point (x, y) can be constructed as the intersection of lines through (x, 0) and (0, y), perpendicular to the coordinate axes.

Algebraic definitions

Algebraically constructible numbers
The algebraically constructible real numbers are the subset of the real numbers that can be described by formulas that combine integers using the operations of addition, subtraction, multiplication, multiplicative inverse, and square roots of positive numbers. Even more simply, at the expense of making these formulas longer, the integers in these formulas can be restricted to be only 0 and 1. For instance, the square root of 2 is constructible, because it can be described by the formulas √2 or √(1 + 1). Analogously, the algebraically constructible complex numbers are the subset of complex numbers that have formulas of the same type, using a more general version of the square root that is not restricted to positive numbers but can instead take arbitrary complex numbers as its argument, and produces the principal square root of its argument. Alternatively, the same system of complex numbers may be defined as the complex numbers whose real and imaginary parts are both constructible real numbers. For instance, the imaginary unit i has the formulas √(−1) or √(0 − 1), and its real and imaginary parts are the constructible numbers 0 and 1 respectively. These two definitions of the constructible complex numbers are equivalent. In one direction, if z = x + iy is a complex number whose real part x and imaginary part y are both constructible real numbers, then replacing x and y by their formulas within the larger formula x + y√(−1) produces a formula for z as a complex number. In the other direction, any formula for an algebraically constructible complex number can be transformed into formulas for its real and imaginary parts, by recursively expanding each operation in the formula into operations on the real and imaginary parts of its arguments, using the standard expansions for the real and imaginary parts of sums, products, quotients, and principal square roots of complex numbers.

Algebraically constructible points
The algebraically constructible points may be defined as the points whose two real Cartesian coordinates are both algebraically constructible real numbers. Alternatively, they may be defined as the points in the complex plane given by algebraically constructible complex numbers.
By the equivalence between the two definitions for algebraically constructible complex numbers, these two definitions of algebraically constructible points are also equivalent.

Equivalence of algebraic and geometric definitions
If p and q are the non-zero lengths of geometrically constructed segments, then elementary compass and straightedge constructions can be used to obtain constructed segments of lengths p + q, |p − q|, pq, and p/q. The latter two can be done with a construction based on the intercept theorem. A slightly less elementary construction using these tools is based on the geometric mean theorem and will construct a segment of length √p from a constructed segment of length p. It follows that every algebraically constructible number is geometrically constructible, by using these techniques to translate a formula for the number into a construction for the number. In the other direction, a set of geometric objects may be specified by algebraically constructible real numbers: coordinates for points, slope and y-intercept for lines, and center and radius for circles. It is possible (but tedious) to develop formulas in terms of these values, using only arithmetic and square roots, for each additional object that might be added in a single step of a compass-and-straightedge construction. It follows from these formulas that every geometrically constructible number is algebraically constructible.

Algebraic properties
The definition of algebraically constructible numbers includes the sum, difference, product, and multiplicative inverse of any of these numbers, the same operations that define a field in abstract algebra. Thus, the constructible numbers (defined in any of the above ways) form a field. More specifically, the constructible real numbers form a Euclidean field, an ordered field containing a square root of each of its positive elements. Examining the properties of this field and its subfields leads to necessary conditions on a number to be constructible, that can be used to show that specific numbers arising in classical geometric construction problems are not constructible. It is convenient to consider, in place of the whole field of constructible numbers, the subfield Q(γ) generated by any given constructible number γ, and to use the algebraic construction of γ to decompose this field. If γ is a constructible real number, then the values occurring within a formula constructing it can be used to produce a finite sequence of real numbers α1, ..., αn = γ such that, for each i, Q(α1, ..., αi) is an extension of Q(α1, ..., αi−1) of degree 2. Using slightly different terminology, a real number is constructible if and only if it lies in a field at the top of a finite tower of real quadratic extensions, starting with the rational field Q: that is, a tower Q = K0 ⊆ K1 ⊆ ... ⊆ Kn, where γ is in Kn and, for all 0 < j ≤ n, [Kj : Kj−1] = 2. It follows from this decomposition that the degree of the field extension [Q(γ) : Q] is 2^r, where r counts the number of quadratic extension steps. Analogously to the real case, a complex number is constructible if and only if it lies in a field at the top of a finite tower of complex quadratic extensions. More precisely, γ is constructible if and only if there exists a tower of fields Q = F0 ⊆ F1 ⊆ ... ⊆ Fn, where γ is in Fn, and, for all 0 < j ≤ n, [Fj : Fj−1] = 2. The difference between this characterization and that of the real constructible numbers is only that the fields in this tower are not restricted to being real. Consequently, if a complex number γ is constructible, then the above characterization implies that [Q(γ) : Q] is a power of two.
However, this condition is not sufficient: there exist field extensions whose degree is a power of two, but which cannot be factored into a sequence of quadratic extensions. To obtain a sufficient condition for constructibility, one must instead consider the splitting field obtained by adjoining all roots of the minimal polynomial of γ. If the degree of this extension is a power of two, then its Galois group is a 2-group, and thus admits a descending sequence of subgroups, each of index two in the one before, ending with the trivial group. By the fundamental theorem of Galois theory, there is a corresponding tower of quadratic extensions whose topmost field contains γ, and from this it follows that γ is constructible. The fields that can be generated from towers of quadratic extensions of Q are called iterated quadratic extensions of Q. The fields of real and complex constructible numbers are the unions of all real or complex iterated quadratic extensions of Q.

Trigonometric numbers
Trigonometric numbers are the cosines or sines of angles that are rational multiples of π. These numbers are always algebraic, but they may not be constructible. The cosine or sine of the angle 2π/n is constructible only for certain special numbers n:
The powers of two
The Fermat primes, prime numbers that are one plus a power of two
The products of powers of two and any number of distinct Fermat primes.
Thus, for example, cos(2π/15) is constructible because 15 is the product of the Fermat primes 3 and 5; but cos(2π/9) is not constructible (9 not being a product of distinct Fermat primes) and neither is cos(2π/7) (7 being a prime but not a Fermat prime).

Impossible constructions
The ancient Greeks thought that certain problems of straightedge and compass construction they could not solve were simply obstinate, not unsolvable. However, the non-constructibility of certain numbers proves that these constructions are logically impossible to perform. (The problems themselves, however, are solvable using methods that go beyond the constraint of working only with straightedge and compass, and the Greeks knew how to solve them in this way. One such example is Archimedes' neusis construction solution of the problem of angle trisection.) In particular, the algebraic formulation of constructible numbers leads to a proof of the impossibility of the following construction problems:

Doubling the cube
The problem of doubling the unit square is solved by the construction of another square on the diagonal of the first one, with side length √2 and area 2. Analogously, the problem of doubling the cube asks for the construction of the length ∛2 of the side of a cube with volume 2. This length is not constructible, because its minimal polynomial, x³ - 2, has degree 3 over Q. As a cubic polynomial whose only real root is irrational, this polynomial must be irreducible: its root is not rational, and if the root were instead a quadratic irrational of the form p + √q with p and q rational, then its quadratic conjugate p - √q would provide a second real root of x³ - 2, which has only one.

Angle trisection
In this problem, from a given angle θ, one should construct an angle θ/3. Algebraically, angles can be represented by their trigonometric functions, such as their sines or cosines, which give the Cartesian coordinates of the endpoint of a line segment forming the given angle with the initial segment. Thus, an angle θ is constructible when cos θ is a constructible number, and the problem of trisecting the angle can be formulated as one of constructing cos(θ/3). For example, the angle θ = π/3 (60°) of an equilateral triangle can be constructed by compass and straightedge, with cos(π/3) = 1/2. However, its trisection π/9 (20°) cannot be constructed, because cos(π/9) has minimal polynomial 8x³ - 6x - 1, of degree 3 over Q.
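The degree computation behind this argument can be reproduced with a computer algebra system; a minimal sketch, again assuming SymPy:

```python
from sympy import cos, minimal_polynomial, pi, symbols

x = symbols("x")

# cos(pi/3) = 1/2 is rational, so the 60 degree angle is constructible.
print(cos(pi / 3))                                 # 1/2

# cos(pi/9), the cosine of 20 degrees, has minimal polynomial 8*x**3 - 6*x - 1
# of degree 3 over the rationals, so the 20 degree angle is not constructible.
p = minimal_polynomial(cos(pi / 9), x, polys=True)
print(p.as_expr())                                 # 8*x**3 - 6*x - 1
print(p.degree())                                  # 3
```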
Because this specific instance of the trisection problem cannot be solved by compass and straightedge, the general problem also cannot be solved.

Squaring the circle
A square with area π, the same area as a unit circle, would have side length √π, a transcendental number. Therefore, this square and its side length are not constructible, because √π is not algebraic over Q.

Regular polygons
If a regular n-gon is constructed with its center at the origin, the angles between the segments from the center to consecutive vertices are 2π/n. The polygon can be constructed only when the cosine of this angle is a constructible number. Thus, for instance, a regular 15-gon is constructible, but the regular heptagon is not constructible, because 7 is prime but not a Fermat prime. For a more direct proof of its non-constructibility, represent the vertices of a regular heptagon as the complex roots of the polynomial x⁷ - 1. Removing the factor x - 1, dividing by x³, and substituting y = x + 1/x gives the simpler polynomial y³ + y² - 2y - 1, an irreducible cubic with three real roots, each two times the real part of a complex-number vertex. Its roots are not constructible, so the heptagon is also not constructible.

Alhazen's problem
If two points and a circular mirror are given, where on the circle does one of the given points see the reflected image of the other? Geometrically, the lines from each given point to the point of reflection meet the circle at equal angles and in equal-length chords. However, it is impossible in general to construct such a point of reflection using a compass and straightedge. In particular, for a unit circle with two suitably chosen points inside it, the solution has coordinates that are roots of an irreducible degree-four polynomial. Although its degree is a power of two, the splitting field of this polynomial has degree divisible by three, so it does not come from an iterated quadratic extension and Alhazen's problem has no compass and straightedge solution.

History
The birth of the concept of constructible numbers is inextricably linked with the history of the three impossible compass and straightedge constructions: doubling the cube, trisecting an angle, and squaring the circle. The restriction of using only compass and straightedge in geometric constructions is often credited to Plato due to a passage in Plutarch. According to Plutarch, Plato gave the duplication of the cube (Delian) problem to Eudoxus, Archytas, and Menaechmus, who solved the problem using mechanical means, earning a rebuke from Plato for not solving the problem using pure geometry. However, this attribution is challenged, due, in part, to the existence of another version of the story (attributed to Eratosthenes by Eutocius of Ascalon) that says that all three found solutions but they were too abstract to be of practical value. Proclus, citing Eudemus of Rhodes, credited Oenopides (circa 450 BCE) with two ruler and compass constructions, leading some authors to hypothesize that Oenopides originated the restriction. The restriction to compass and straightedge is essential to the impossibility of the classic construction problems. Angle trisection, for instance, can be done in many ways, several known to the ancient Greeks. The Quadratrix of Hippias of Elis, the conics of Menaechmus, or the marked straightedge (neusis) construction of Archimedes have all been used, as has a more modern approach via paper folding. Although not one of the classic three construction problems, the problem of constructing regular polygons with straightedge and compass is often treated alongside them.
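The criterion for constructible regular polygons that this history leads up to (the Gauss–Wantzel condition stated below) amounts to simple integer arithmetic. A minimal sketch, assuming that the five currently known Fermat primes are the only ones relevant for inputs of ordinary size; the function name is hypothetical:

```python
# Gauss-Wantzel condition (stated below): a regular n-gon is constructible
# exactly when n is a power of two times a product of distinct Fermat primes.
# Only five Fermat primes are known; they suffice for any n of ordinary size.
KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)

def regular_ngon_is_constructible(n: int) -> bool:
    if n < 3:
        return False
    while n % 2 == 0:        # remove the power-of-two factor
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:       # each Fermat prime may be used at most once
            n //= p
    return n == 1

for n in (7, 9, 15, 17, 60, 257):
    print(n, regular_ngon_is_constructible(n))
# 7 False, 9 False, 15 True, 17 True, 60 True, 257 True
```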
The Greeks knew how to construct regular n-gons with n = 2^h (for any integer h ≥ 2), 3, 5, or the product of any two or three of these numbers, but other regular n-gons eluded them. In 1796 Carl Friedrich Gauss, then an eighteen-year-old student, announced in a newspaper that he had constructed a regular 17-gon with straightedge and compass. Gauss's treatment was algebraic rather than geometric; in fact, he did not actually construct the polygon, but rather showed that the cosine of a central angle was a constructible number. The argument was generalized in his 1801 book Disquisitiones Arithmeticae, giving the sufficient condition for the construction of a regular n-gon. Gauss claimed, but did not prove, that the condition was also necessary, and several authors, notably Felix Klein, attributed this part of the proof to him as well. Alhazen's problem is also not one of the classic three problems, but despite being named after Ibn al-Haytham (Alhazen), a medieval Islamic mathematician, it already appears in Ptolemy's work on optics from the second century. Pierre Wantzel proved algebraically, in 1837, that the problems of doubling the cube and trisecting the angle are impossible to solve using only compass and straightedge. In the same paper he also solved the problem of determining which regular polygons are constructible: a regular polygon is constructible if and only if the number of its sides is the product of a power of two and any number of distinct Fermat primes (i.e., the sufficient conditions given by Gauss are also necessary). An attempted proof of the impossibility of squaring the circle was given by James Gregory in Vera Circuli et Hyperbolae Quadratura (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of π. It was not until 1882 that Ferdinand von Lindemann rigorously proved its impossibility, by extending the work of Charles Hermite and proving that π is a transcendental number. Alhazen's problem was not proved impossible to solve by compass and straightedge until the work of Jack Elkin. The study of constructible numbers, per se, was initiated by René Descartes in La Géométrie, an appendix to his book Discourse on the Method published in 1637. Descartes associated numbers to geometrical line segments in order to display the power of his philosophical method by solving an ancient straightedge and compass construction problem put forth by Pappus.
See also
Computable number
Definable real number
Notes
References
External links
Constructible Numbers at Cut-the-knot
Euclidean plane geometry Algebraic numbers
Constructible number
[ "Mathematics" ]
3,929
[ "Euclidean plane geometry", "Mathematical objects", "Algebraic numbers", "Planes (geometry)", "Numbers" ]
7,445
https://en.wikipedia.org/wiki/Classification%20of%20finite%20simple%20groups
In mathematics, the classification of finite simple groups (popularly called the enormous theorem) is a result of group theory stating that every finite simple group is either cyclic, or alternating, or belongs to a broad infinite class called the groups of Lie type, or else it is one of twenty-six exceptions, called sporadic (the Tits group is sometimes regarded as a sporadic group because it is not strictly a group of Lie type, in which case there would be 27 sporadic groups). The proof consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004. Simple groups can be seen as the basic building blocks of all finite groups, reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference from integer factorization is that such "building blocks" do not necessarily determine a unique group, since there might be many non-isomorphic groups with the same composition series or, put in another way, the extension problem does not have a unique solution. Daniel Gorenstein (1923–1992), Richard Lyons, and Ronald Solomon are gradually publishing a simplified and revised version of the proof.

Statement of the classification theorem
The classification theorem has applications in many branches of mathematics, as questions about the structure of finite groups (and their action on other mathematical objects) can sometimes be reduced to questions about finite simple groups. Thanks to the classification theorem, such questions can sometimes be answered by checking each family of simple groups and each sporadic group. Daniel Gorenstein announced in 1983 that the finite simple groups had all been classified, but this was premature as he had been misinformed about the proof of the classification of quasithin groups. The completed proof of the classification was announced by Aschbacher in 2004, after Aschbacher and Smith published a 1221-page proof for the missing quasithin case.

Overview of the proof of the classification theorem
Gorenstein wrote two volumes outlining the low rank and odd characteristic part of the proof, and Aschbacher, Lyons, Smith, and Solomon wrote a third volume covering the remaining characteristic 2 case. The proof can be broken up into several major pieces as follows:

Groups of small 2-rank
The simple groups of low 2-rank are mostly groups of Lie type of small rank over fields of odd characteristic, together with five alternating and seven characteristic 2 type and nine sporadic groups. The simple groups of small 2-rank include:
Groups of 2-rank 0, in other words groups of odd order, which are all solvable by the Feit–Thompson theorem.
Groups of 2-rank 1. The Sylow 2-subgroups are either cyclic, which is easy to handle using the transfer map, or generalized quaternion, which are handled with the Brauer–Suzuki theorem: in particular there are no simple groups of 2-rank 1 except for the cyclic group of order two.
Groups of 2-rank 2. Alperin showed that the Sylow subgroup must be dihedral, quasidihedral, wreathed, or a Sylow 2-subgroup of U3(4).
The first case was done by the Gorenstein–Walter theorem which showed that the only simple groups are isomorphic to L2(q) for q odd or A7, the second and third cases were done by the Alperin–Brauer–Gorenstein theorem which implies that the only simple groups are isomorphic to L3(q) or U3(q) for q odd or M11, and the last case was done by Lyons who showed that U3(4) is the only simple possibility. Groups of sectional 2-rank at most 4, classified by the Gorenstein–Harada theorem. The classification of groups of small 2-rank, especially ranks at most 2, makes heavy use of ordinary and modular character theory, which is almost never directly used elsewhere in the classification. All groups not of small 2 rank can be split into two major classes: groups of component type and groups of characteristic 2 type. This is because if a group has sectional 2-rank at least 5 then MacWilliams showed that its Sylow 2-subgroups are connected, and the balance theorem implies that any simple group with connected Sylow 2-subgroups is either of component type or characteristic 2 type. (For groups of low 2-rank the proof of this breaks down, because theorems such as the signalizer functor theorem only work for groups with elementary abelian subgroups of rank at least 3.) Groups of component type A group is said to be of component type if for some centralizer C of an involution, C/O(C) has a component (where O(C) is the core of C, the maximal normal subgroup of odd order). These are more or less the groups of Lie type of odd characteristic of large rank, and alternating groups, together with some sporadic groups. A major step in this case is to eliminate the obstruction of the core of an involution. This is accomplished by the B-theorem, which states that every component of C/O(C) is the image of a component of C. The idea is that these groups have a centralizer of an involution with a component that is a smaller quasisimple group, which can be assumed to be already known by induction. So to classify these groups one takes every central extension of every known finite simple group, and finds all simple groups with a centralizer of involution with this as a component. This gives a rather large number of different cases to check: there are not only 26 sporadic groups and 16 families of groups of Lie type and the alternating groups, but also many of the groups of small rank or over small fields behave differently from the general case and have to be treated separately, and the groups of Lie type of even and odd characteristic are also quite different. Groups of characteristic 2 type A group is of characteristic 2 type if the generalized Fitting subgroup F*(Y) of every 2-local subgroup Y is a 2-group. As the name suggests these are roughly the groups of Lie type over fields of characteristic 2, plus a handful of others that are alternating or sporadic or of odd characteristic. Their classification is divided into the small and large rank cases, where the rank is the largest rank of an odd abelian subgroup normalizing a nontrivial 2-subgroup, which is often (but not always) the same as the rank of a Cartan subalgebra when the group is a group of Lie type in characteristic 2. The rank 1 groups are the thin groups, classified by Aschbacher, and the rank 2 ones are the notorious quasithin groups, classified by Aschbacher and Smith. These correspond roughly to groups of Lie type of ranks 1 or 2 over fields of characteristic 2. 
Groups of rank at least 3 are further subdivided into 3 classes by the trichotomy theorem, proved by Aschbacher for rank 3 and by Gorenstein and Lyons for rank at least 4. The three classes are groups of GF(2) type (classified mainly by Timmesfeld), groups of "standard type" for some odd prime (classified by the Gilman–Griess theorem and work by several others), and groups of uniqueness type, where a result of Aschbacher implies that there are no simple groups. The general higher rank case consists mostly of the groups of Lie type over fields of characteristic 2 of rank at least 3 or 4.

Existence and uniqueness of the simple groups
The main part of the classification produces a characterization of each simple group. It is then necessary to check that there exists a simple group for each characterization and that it is unique. This gives a large number of separate problems; for example, the original proofs of existence and uniqueness of the monster group totaled about 200 pages, and the identification of the Ree groups by Thompson and Bombieri was one of the hardest parts of the classification. Many of the existence proofs and some of the uniqueness proofs for the sporadic groups originally used computer calculations, most of which have since been replaced by shorter hand proofs.

History of the proof
Gorenstein's program
In 1972 Gorenstein announced a program for completing the classification of finite simple groups, consisting of the following 16 steps:
1. Groups of low 2-rank. This was essentially done by Gorenstein and Harada, who classified the groups with sectional 2-rank at most 4. Most of the cases of 2-rank at most 2 had been done by the time Gorenstein announced his program.
2. The semisimplicity of 2-layers. The problem is to prove that the 2-layer of the centralizer of an involution in a simple group is semisimple.
3. Standard form in odd characteristic. If a group has an involution with a 2-component that is a group of Lie type of odd characteristic, the goal is to show that it has a centralizer of involution in "standard form", meaning that a centralizer of involution has a component that is of Lie type in odd characteristic and also has a centralizer of 2-rank 1.
4. Classification of groups of odd type. The problem is to show that if a group has a centralizer of involution in "standard form" then it is a group of Lie type of odd characteristic. This was solved by Aschbacher's classical involution theorem.
5. Quasi-standard form.
6. Central involutions.
7. Classification of alternating groups.
8. Some sporadic groups.
9. Thin groups. The simple thin finite groups, those with 2-local p-rank at most 1 for odd primes p, were classified by Aschbacher in 1978.
10. Groups with a strongly p-embedded subgroup for p odd.
11. The signalizer functor method for odd primes. The main problem is to prove a signalizer functor theorem for nonsolvable signalizer functors. This was solved by McBride in 1982.
12. Groups of characteristic p type. This is the problem of groups with a strongly p-embedded 2-local subgroup with p odd, which was handled by Aschbacher.
13. Quasithin groups. A quasithin group is one whose 2-local subgroups have p-rank at most 2 for all odd primes p, and the problem is to classify the simple ones of characteristic 2 type. This was completed by Aschbacher and Smith in 2004.
14. Groups of low 2-local 3-rank. This was essentially solved by Aschbacher's trichotomy theorem for groups with e(G)=3. The main change is that 2-local 3-rank is replaced by 2-local p-rank for odd primes.
15. Centralizers of 3-elements in standard form.
This was essentially done by the trichotomy theorem.
16. Classification of simple groups of characteristic 2 type. This was handled by the Gilman–Griess theorem, with 3-elements replaced by p-elements for odd primes.

Timeline of the proof
The date given for each item in the timeline is usually the publication date of the complete proof of a result, which is sometimes several years later than the proof or first announcement of the result, so some of the items appear in the "wrong" order.

Second-generation classification
The proof of the theorem, as it stood around 1985 or so, can be called first generation. Because of the extreme length of the first generation proof, much effort has been devoted to finding a simpler proof, called a second-generation classification proof. This effort, called "revisionism", was originally led by Daniel Gorenstein. As of 2023, ten volumes of the second generation proof have been published (Gorenstein, Lyons & Solomon 1994, 1996, 1998, 1999, 2002, 2005, 2018a, 2018b; with Capdeboscq, 2021, 2023). In 2012 Solomon estimated that the project would need another 5 volumes, but said that progress on them was slow. It is estimated that the new proof will eventually fill approximately 5,000 pages. (This length stems in part from the second generation proof being written in a more relaxed style.) However, with the publication of volume 9 of the GLS series, and including the Aschbacher–Smith contribution, this estimate was already reached, with several more volumes still in preparation (the rest of what was originally intended for volume 9, plus projected volumes 10 and 11). Aschbacher and Smith wrote their two volumes devoted to the quasithin case in such a way that those volumes can be part of the second generation proof.
Gorenstein and his collaborators have given several reasons why a simpler proof is possible.
The most important thing is that the correct, final statement of the theorem is now known. Simpler techniques can be applied that are known to be adequate for the types of groups we know to be finite simple. In contrast, those who worked on the first generation proof did not know how many sporadic groups there were, and in fact some of the sporadic groups (e.g., the Janko groups) were discovered while proving other cases of the classification theorem. As a result, many of the pieces of the theorem were proved using techniques that were overly general.
Because the conclusion was unknown, the first generation proof consists of many stand-alone theorems, dealing with important special cases. Much of the work of proving these theorems was devoted to the analysis of numerous special cases. Given a larger, orchestrated proof, dealing with many of these special cases can be postponed until the most powerful assumptions can be applied. The price paid under this revised strategy is that these first generation theorems no longer have comparatively short proofs, but instead rely on the complete classification.
Many first generation theorems overlap, and so divide the possible cases in inefficient ways. As a result, families and subfamilies of finite simple groups were identified multiple times. The revised proof eliminates these redundancies by relying on a different subdivision of cases.
Finite group theorists have more experience at this sort of exercise, and have new techniques at their disposal.
The work on the classification problem by Ulrich Meierfrankenfeld, Bernd Stellmacher, Gernot Stroth, and a few others has been called a third generation program.
One goal of this is to treat all groups in characteristic 2 uniformly using the amalgam method.

Length of proof
Gorenstein has discussed some of the reasons why there might not be a short proof of the classification similar to the classification of compact Lie groups. The most obvious reason is that the list of simple groups is quite complicated: with 26 sporadic groups there are likely to be many special cases that have to be considered in any proof. So far no one has yet found a clean uniform description of the finite simple groups similar to the parameterization of the compact Lie groups by Dynkin diagrams. Atiyah and others have suggested that the classification ought to be simplified by constructing some geometric object that the groups act on and then classifying these geometric structures. The problem is that no one has been able to suggest an easy way to find such a geometric structure associated with a simple group. In some sense, the classification does work by finding geometric structures such as BN-pairs, but this only comes at the end of a very long and difficult analysis of the structure of a finite simple group. Another suggestion for simplifying the proof is to make greater use of representation theory. The problem here is that representation theory seems to require very tight control over the subgroups of a group in order to work well. For groups of small rank, one has such control and representation theory works very well, but for groups of larger rank no one has succeeded in using it to simplify the classification. In the early days of the classification, there was a considerable effort made to use representation theory, but this never achieved much success in the higher rank case.

Consequences of the classification
This section lists some results that have been proved using the classification of finite simple groups.
The Schreier conjecture
The signalizer functor theorem
The B conjecture
The Schur–Zassenhaus theorem for all groups (though this only uses the Feit–Thompson theorem)
A transitive permutation group on a finite set with more than 1 element has a fixed-point-free element of prime power order
The classification of 2-transitive permutation groups
The classification of rank 3 permutation groups
The Sims conjecture
Frobenius's conjecture on the number of solutions of x^n = 1

See also
O'Nan–Scott theorem
Notes
Citations
References
Daniel Gorenstein (1985), "The Enormous Theorem", Scientific American, December 1, 1985, vol. 253, no. 6, pp. 104–115.
Mark Ronan, Symmetry and the Monster, Oxford University Press, 2006. (Concise introduction for the lay reader)
Marcus du Sautoy, Finding Moonshine, Fourth Estate, 2008. (Another introduction for the lay reader; American edition published in 2009 as Symmetry: A Journey into the Patterns of Nature)
Ron Solomon (1995), "On Finite Simple Groups and their Classification," Notices of the American Mathematical Society. (Not too technical and good on history) – article won the Levi L. Conant prize for exposition
External links
ATLAS of Finite Group Representations. Searchable database of representations and other data for many finite simple groups.
Elwes, Richard, "An enormous theorem: the classification of finite simple groups," Plus Magazine, Issue 41, December 2006. For laypeople.
Madore, David (2003) Orders of nonabelian simple groups. Includes a list of all nonabelian simple groups up to order 10^10.
In what sense is the classification of all finite groups “impossible”? (Last updated in February 2024) Group theory Finite groups Theorems in algebra 2004 in science History of mathematics Mathematical classification systems
Classification of finite simple groups
[ "Mathematics" ]
3,679
[ "Mathematical theorems", "Mathematical structures", "Theorems in algebra", "Finite groups", "Group theory", "Fields of abstract algebra", "Algebraic structures", "nan", "Mathematical problems", "Algebra" ]
7,450
https://en.wikipedia.org/wiki/Context%20menu
A context menu (also called contextual, shortcut, and pop up or pop-up menu) is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click mouse operation. A context menu offers a limited set of choices that are available in the current state, or context, of the operating system or application to which the menu belongs. Usually the available choices are actions related to the selected object. From a technical point of view, such a context menu is a graphical control element.

History
Context menus first appeared in the Smalltalk environment on the Xerox Alto computer, where they were called pop-up menus; they were invented by Dan Ingalls in the mid-1970s. Microsoft Office v3.0 introduced the context menu for copy and paste functionality in 1990. Borland demonstrated extensive use of the context menu in 1991 at the Second Paradox Conference in Phoenix, Arizona. Lotus 1-2-3/G for OS/2 v1.0 added additional formatting options in 1991. Borland Quattro Pro for Windows v1.0 introduced the Properties context menu option in 1992.

Implementation
Context menus are opened via various forms of user interaction that target a region of the GUI that supports context menus. The specific form of user interaction and the means by which a region is targeted vary:
On a computer running Microsoft Windows, macOS, or Unix running the X Window System, clicking the secondary mouse button (usually the right button) opens a context menu for the region that is under the mouse pointer. For quickness, implementations may additionally support hold-and-release selection, in which the pointer is held down and dragged, then released over the desired menu entry.
On systems that support one-button mice, context menus are typically opened by pressing and holding the primary mouse button (this works on the icons in the Dock on macOS) or by pressing a keyboard/mouse button combination (e.g. Ctrl-mouse click in Classic Mac OS and macOS). A keyboard alternative for macOS is to enable Mouse keys in Universal Access. Then, depending on whether a laptop, compact, or extended keyboard type is used, the shortcut is a modifier-key combination ending in 5 (on the numeric keypad) or i (on a laptop keyboard).
On systems with a multi-touch interface such as MacBook or Surface, the context menu can be opened by pressing or tapping with two fingers instead of just one.
Some smartphone cameras, for example, recognize a QR code when a picture is taken; a pop-up then appears offering to "open" the QR content, which could be anything from a website to a prompt to configure the phone to connect to Wi-Fi.
On some user interfaces, context menu items are accompanied by icons for quicker recognition upon navigation. Context menus can also have a top row of icons only, for quick access to the most frequently used options.
Windows mouse click behavior is such that the context menu does not open while the mouse button is pressed, but only when the button is released, so the user has to click again to select a context menu item. This behavior differs from that of macOS and most free software GUIs.
In Microsoft Windows, pressing the Application key or Shift+F10 opens a context menu for the region that has focus.
Context menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure.
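As a concrete illustration of the interactions described above, the following minimal sketch uses Python's standard tkinter toolkit to bind a secondary-button click to a context menu containing a hierarchical submenu. It is illustrative only: the menu contents are hypothetical, and which button number represents the secondary button varies by platform.

```python
import tkinter as tk

root = tk.Tk()
text = tk.Text(root, width=40, height=10)
text.pack()

# Build the context menu, including one hierarchical submenu ("Paste Special").
menu = tk.Menu(root, tearoff=0)
menu.add_command(label="Cut", command=lambda: text.event_generate("<<Cut>>"))
menu.add_command(label="Copy", command=lambda: text.event_generate("<<Copy>>"))
menu.add_command(label="Paste", command=lambda: text.event_generate("<<Paste>>"))

paste_special = tk.Menu(menu, tearoff=0)
paste_special.add_command(label="Paste as plain text")
paste_special.add_command(label="Paste without formatting")
menu.add_cascade(label="Paste Special", menu=paste_special)

def show_context_menu(event):
    # Pop the menu up at the pointer position; grab_release lets a click
    # elsewhere dismiss it.
    try:
        menu.tk_popup(event.x_root, event.y_root)
    finally:
        menu.grab_release()

# Button-3 is the secondary (right) button on Windows and X11; on macOS the
# secondary button is typically Button-2, so a real application would bind both.
text.bind("<Button-3>", show_context_menu)

root.mainloop()
```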
The implementations differ: Microsoft Word was one of the first applications to only show sub-entries of some menu entries after clicking an arrow icon on the context menu, otherwise executing an action associated with the parent entry. This makes it possible to quickly repeat an action with the parameters of the previous execution, and to better separate options from actions.

X Window Managers
The following window managers provide context menu functionality:
Awesome
IceWM (middle-click and right-click context menus on desktop, menu bar, title bars, and title icon)
olwm
openbox
sawfish

Usability
Context menus have received some criticism from usability analysts when improperly used, as some applications make certain features only available in context menus, which may confuse even experienced users (especially when the context menus can only be activated in a limited area of the application's client window). Context menus usually open in a fixed position under the pointer, but when the pointer is near a screen edge the menu will be displaced, reducing consistency and impeding use of muscle memory. If the context menu is triggered by keyboard, such as by using Shift+F10, the context menu appears near the focused widget instead of at the position of the pointer, to reduce the effort of locating it.

In documentation
Microsoft's guidelines call for always using the term context menu, and explicitly deprecate shortcut menu.

See also
Menu key
Pie menu
Screen hotspot
References
External links
Graphical control elements Graphical user interface elements Macintosh operating systems user interface Windows administration
Context menu
[ "Technology" ]
1,013
[ "Components", "Graphical user interface elements" ]
7,455
https://en.wikipedia.org/wiki/Chaparral
Chaparral is a shrubland plant community found primarily in California, in southern Oregon, and in the northern portion of the Baja California Peninsula in Mexico. It is shaped by a Mediterranean climate (mild wet winters and hot dry summers) and infrequent, high-intensity crown fires. Many chaparral shrubs have hard sclerophyllous evergreen leaves, as contrasted with the associated soft-leaved, drought-deciduous, scrub community of coastal sage scrub, found often on drier, southern facing slopes. Three other closely related chaparral shrubland systems occur in southern Arizona, western Texas, and along the eastern side of central Mexico's mountain chains, all having summer rains in contrast to the Mediterranean climate of other chaparral formations. Chaparral comprises 9% of California's wildland vegetation and contains 20% of its plant species.

Etymology
The name comes from the Spanish word chaparral, which translates to "place of the scrub oak".

Introduction
In its natural state, chaparral is characterized by infrequent fires, with natural fire return intervals ranging between 30 years and over 150 years. Mature chaparral (at least 60 years since time of last fire) is characterized by nearly impenetrable, dense thickets (except the more open desert chaparral). These plants are flammable during the late summer and autumn months when conditions are characteristically hot and dry. They grow as woody shrubs with thick, leathery, and often small leaves, contain green leaves all year (are evergreen), and are typically drought resistant (with some exceptions). After the first rains following a fire, the landscape is dominated by small flowering herbaceous plants, known as fire followers, which die back with the summer dry period.
Similar plant communities are found in the four other Mediterranean climate regions around the world, including the Mediterranean Basin (where it is known as maquis), central Chile (where it is called matorral), the South African Cape Region (known there as fynbos), and in Western and Southern Australia (as kwongan and mallee). According to the California Academy of Sciences, Mediterranean shrubland contains more than 20 percent of the world's plant diversity. The word chaparral is a loanword from Spanish, meaning place of the scrub oak, which itself derives from a Basque word with the same meaning. Conservation International and other conservation organizations consider chaparral to be a biodiversity hotspot – a biological community with a large number of different species – that is under threat by human activity.

California chaparral
California chaparral and woodlands ecoregion
The California chaparral and woodlands ecoregion, of the Mediterranean forests, woodlands, and scrub biome, has three sub-ecoregions with ecosystem–plant community subdivisions:
California coastal sage and chaparral: In coastal Southern California and northwestern coastal Baja California, as well as all of the Channel Islands off California and Guadalupe Island (Mexico).
California montane chaparral and woodlands: In southern and central coast adjacent and inland California regions, including covering some of the mountains of the California Coast Ranges, the Transverse Ranges, and the western slopes of the northern Peninsular Ranges.
California interior chaparral and woodlands: In central interior California surrounding the Central Valley, covering the foothills and lower slopes of the northeastern Transverse Ranges and the western Sierra Nevada range.
Chaparral and woodlands biota For the numerous individual plant and animal species found within the California chaparral and woodlands ecoregion, see: Flora of the California chaparral and woodlands Fauna of the California chaparral and woodlands. Some of the indicator plants of the California chaparral and woodlands ecoregion include: Quercus species – oaks: Quercus agrifolia – coast live oak Quercus berberidifolia – scrub oak Quercus chrysolepis – canyon live oak Quercus douglasii – blue oak Quercus wislizeni – interior live oak Artemisia species – sagebrush: Artemisia californica – California sagebrush, coastal sage brush Arctostaphylos species – manzanitas: Arctostaphylos glauca – bigberry manzanita Arctostaphylos manzanita – common manzanita Ceanothus species – California lilacs: Ceanothus cuneatus – buckbrush Ceanothus megacarpus – bigpod ceanothus Rhus species – sumacs: Rhus integrifolia – lemonade berry Rhus ovata – sugar bush Eriogonum species – buckwheats: Eriogonum fasciculatum – California buckwheat Salvia species – sages: Salvia mellifera – Californian black sage Chaparral soils and nutrient composition Chaparral characteristically is found in areas with steep topography and shallow stony soils, while adjacent areas with clay soils, even where steep, tend to be colonized by annual plants and grasses. Some chaparral species are adapted to nutrient-poor soils developed over serpentine and other ultramafic rock, with a high ratio of magnesium and iron to calcium and potassium, that are also generally low in essential nutrients such as nitrogen. California cismontane and transmontane chaparral subdivisions Another phytogeography system uses two California chaparral and woodlands subdivisions: the cismontane chaparral and the transmontane (desert) chaparral. California cismontane chaparral Cismontane chaparral ("this side of the mountain") refers to the chaparral ecosystem in the Mediterranean forests, woodlands, and scrub biome in California, growing on the western (and coastal) sides of large mountain range systems, such as the western slopes of the Sierra Nevada in the San Joaquin Valley foothills, western slopes of the Peninsular Ranges and California Coast Ranges, and south-southwest slopes of the Transverse Ranges in the Central Coast and Southern California regions. Cismontane chaparral plant species In Central and Southern California chaparral forms a dominant habitat. Members of the chaparral biota native to California, all of which tend to regrow quickly after fires, include: Adenostoma fasciculatum, chamise Adenostoma sparsifolium, redshanks Arctostaphylos spp., manzanita Ceanothus spp., ceanothus Cercocarpus spp., mountain mahogany Cneoridium dumosum, bush rue Eriogonum fasciculatum, California buckwheat Garrya spp., silk-tassel bush Hesperoyucca whipplei, yucca Heteromeles arbutifolia, toyon Acmispon glaber, deerweed Malosma laurina, laurel sumac Marah macrocarpus, wild cucumber Mimulus aurantiacus, bush monkeyflower Pickeringia montana, chaparral pea Prunus ilicifolia, islay or hollyleaf cherry Quercus berberidifolia, scrub oak Q. dumosa, scrub oak Q. wislizenii var. frutescens Rhamnus californica, California coffeeberry Rhus integrifolia, lemonade berry Rhus ovata, sugar bush Salvia apiana, Californian white sage Salvia mellifera, Californian black sage Xylococcus bicolor, mission manzanita Cismontane chaparral bird species The complex ecology of chaparral habitats supports a very large number of animal species. 
The following is a short list of birds which are an integral part of the cismontane chaparral ecosystems. Characteristic chaparral bird species include: Wrentit (Chamaea fasciata) California thrasher (Toxostoma redivivum) California towhee (Melozone crissalis) Spotted towhee (Pipilo maculatus) California scrub jay (Aphelocoma californica) Other common chaparral bird species include: Anna's hummingbird (Calypte anna) Bewick's wren (Thryomanes bewickii) Bushtit (Psaltriparus minimus) Costa's hummingbird (Calypte costae) Greater roadrunner (Geococcyx californianus) California transmontane (desert) chaparral Transmontane chaparral or desert chaparral—transmontane ("the other side of the mountain") chaparral—refers to the desert shrubland habitat and chaparral plant community growing in the rainshadow of these ranges. Transmontane chaparral features xeric desert climate, not Mediterranean climate habitats, and is also referred to as desert chaparral. Desert chaparral is a regional ecosystem subset of the deserts and xeric shrublands biome, with some plant species from the California chaparral and woodlands ecoregion. Unlike cismontane chaparral, which forms dense, impenetrable stands of plants, desert chaparral is often open, with only about 50 percent of the ground covered. Individual shrubs can reach up to in height. Transmontane chaparral or desert chaparral is found on the eastern slopes of major mountain range systems on the western sides of the deserts of California. The mountain systems include the southeastern Transverse Ranges (the San Bernardino and San Gabriel Mountains) in the Mojave Desert north and northeast of the Los Angeles basin and Inland Empire; and the northern Peninsular Ranges (San Jacinto, Santa Rosa, and Laguna Mountains), which separate the Colorado Desert (western Sonoran Desert) from lower coastal Southern California. It is distinguished from the cismontane chaparral found on the coastal side of the mountains, which experiences higher winter rainfall. Naturally, desert chaparral experiences less winter rainfall than cismontane chaparral. Plants in this community are characterized by small, hard (sclerophyllic) evergreen (non-deciduous) leaves. Desert chaparral grows above California's desert cactus scrub plant community and below the pinyon-juniper woodland. It is further distinguished from the deciduous sub-alpine scrub above the pinyon-juniper woodlands on the same side of the Peninsular ranges. Due to the lower annual rainfall (resulting in slower plant growth rates) when compared to cismontane chaparral, desert chaparral is more vulnerable to biodiversity loss and the invasion of non-native weeds and grasses if disturbed by human activity and frequent fire. Transmontane chaparral distribution Transmontane (desert) chaparral typically grows on the lower ( elevation) northern slopes of the southern Transverse Ranges (running east to west in San Bernardino and Los Angeles counties) and on the lower () eastern slopes of the Peninsular Ranges (running south to north from lower Baja California to Riverside and Orange counties and the Transverse Ranges). It can also be found in higher-elevation sky islands in the interior of the deserts, such as in the upper New York Mountains within the Mojave National Preserve in the Mojave Desert. 
The California transmontane (desert) chaparral is found in the rain shadow deserts of the following: Sierra Nevada creating the Great Basin Desert and northern Mojave Desert Transverse Ranges creating the western through eastern Mojave Desert Peninsular Ranges creating the Colorado Desert and Yuha Desert. Transmontane chaparral plants Adenostoma fasciculatum, chamise (a low shrub common to most chaparral with clusters of tiny needle like leaves or fascicles; similar in appearance to coastal Eriogonum fasciculatum) Agave deserti, desert agave Arctostaphylos glauca, bigberry manzanita (smooth red bark with large edible berries; glauca means blue-green, the color of its leaves) Ceanothus greggii, desert ceanothus, California lilac (a nitrogen fixer, has hair on both sides of leaves for heat dissipation) Cercocarpus ledifolius, curl leaf mountain mahogany, a nitrogen fixer important food source for desert bighorn sheep Dendromecon rigida, bush poppy (a fire follower with four petaled yellow flowers) Ephedra spp., Mormon teas Fremontodendron californicum, California flannel bush (lobed leaves with fine coating of hair, covered with yellow blossoms in spring) Opuntia acanthocarpa, buckhorn cholla (branches resemble antlers of a deer) Opuntia echinocarpa, silver or golden cholla (depending on color of the spines) Opuntia phaeacantha, desert prickly pear (fruit is important food source for animals) Purshia tridentata, buckbrush, antelope bitterbrush (Rosaceae family) Prunus fremontii, desert apricot Prunus fasciculata, desert almond (commonly infested with tent caterpillars of Malacosoma spp.) Prunus ilicifolia, holly-leaf cherry Quercus cornelius-mulleri, desert scrub oak or Muller's oak Rhus ovata, sugar bush Simmondsia chinensis, jojoba Yucca schidigera, Mojave yucca Hesperoyucca whipplei (syn. Yucca whipplei), foothill yucca – our lord's candle. Transmontane chaparral animals There is overlap of animals with those of the adjacent desert and pinyon-juniper communities. Canis latrans, coyote Lynx rufus, bobcat Neotoma sp., desert pack rat Odocoileus hemionus, mule deer Peromyscus truei, pinyon mouse Puma concolor, mountain lion Stagmomantis californica, California mantis Fire Chaparral is a coastal biome with hot, dry summers and mild, rainy winters. The chaparral area receives about of precipitation a year. This makes the chaparral most vulnerable to fire in the late summer and fall. The chaparral ecosystem as a whole is adapted to be able to recover from naturally infrequent, high-intensity fire (fires occurring between 30 and 150 years or more apart); indeed, chaparral regions are known culturally and historically for their impressive fires. (This does create a conflict with human development adjacent to and expanding into chaparral systems.) Additionally, Native Americans burned chaparral near villages on the coastal plain to promote plant species for textiles and food. Before a major fire, typical chaparral plant communities are dominated by manzanita, chamise Adenostoma fasciculatum and Ceanothus species, toyon (which can sometimes be interspersed with scrub oaks), and other drought-resistant shrubs with hard (sclerophyllous) leaves; these plants resprout (see resprouter) from underground burls after a fire. Plants that are long-lived in the seed bank or serotinous with induced germination after fire include chamise, Ceanothus, and fiddleneck. 
Some chaparral plant communities may grow so dense and tall that it becomes difficult for large animals and humans to penetrate, but may be teeming with smaller fauna in the understory. The seeds of many chaparral plant species are stimulated to germinate by some fire cue (heat or the chemicals from smoke or charred wood). During the time shortly after a fire, chaparral communities may contain soft-leaved herbaceous, fire following annual wildflowers and short-lived perennials that dominate the community for the first few years – until the burl resprouts and seedlings of chaparral shrub species create a mature, dense overstory. Seeds of annuals and shrubs lie dormant until the next fire creates the conditions needed for germination. Several shrub species such as Ceanothus fix nitrogen, increasing the availability of nitrogen compounds in the soil. Because of the hot, dry conditions that exist in the California summer and fall, chaparral is one of the most fire-prone plant communities in North America. Some fires are caused by lightning, but these are usually during periods of high humidity and low winds and are easily controlled. Nearly all of the very large wildfires are caused by human activity during periods of hot, dry easterly Santa Ana winds. These human-caused fires are commonly ignited by power line failures, vehicle fires and collisions, sparks from machinery, arson, or campfires. Threatened by high fire frequency Though adapted to infrequent fires, chaparral plant communities can be eliminated by frequent fires. A high frequency of fire (less than 10-15 years apart) will result in the loss of obligate seeding shrub species such as Manzanita spp. This high frequency disallows seeder plants to reach their reproductive size before the next fire and the community shifts to a sprouter-dominance. If high frequency fires continue over time, obligate resprouting shrub species can also be eliminated by exhausting their energy reserves below-ground. Today, frequent accidental ignitions can convert chaparral from a native shrubland to non-native annual grassland and drastically reduce species diversity, especially under drought brought about by climate change. Wildfire debate There are two older hypotheses relating to California chaparral fire regimes that caused considerable debate in the past within the fields of wildfire ecology and land management. Research over the past two decades have rejected these hypotheses: That older stands of chaparral become "senescent" or "decadent", thus implying that fire is necessary for the plants to remain healthy, That wildfire suppression policies have allowed dead chaparral to accumulate unnaturally, creating ample fuel for large fires. The perspective that older chaparral is unhealthy or unproductive may have originated during the 1940s when studies were conducted measuring the amount of forage available to deer populations in chaparral stands. However, according to recent studies, California chaparral is extraordinarily resilient to very long periods without fire and continues to maintain productive growth throughout pre-fire conditions. Seeds of many chaparral plants actually require 30 years or more worth of accumulated leaf litter before they will successfully germinate (e.g., scrub oak, Quercus berberidifolia; toyon, Heteromeles arbutifolia; and holly-leafed cherry, Prunus ilicifolia). 
When intervals between fires drop below 10 to 15 years, many chaparral species are eliminated and the system is typically replaced by non-native, invasive, weedy grassland. The idea that older chaparral is responsible for causing large fires was originally proposed in the 1980s by comparing wildfires in Baja California and southern California. It was suggested that fire suppression activities in southern California allowed more fuel to accumulate, which in turn led to larger fires. This is similar to the observation that fire suppression and other human-caused disturbances in dry, ponderosa pine forests in the Southwest of the United States has unnaturally increased forest density. Historically, mixed-severity fires likely burned through these forests every decade or so, burning understory plants, small trees, and downed logs at low-severity, and patches of trees at high-severity. However, chaparral has a high-intensity crown-fire regime, meaning that fires consume nearly all the above ground growth whenever they burn, with a historical frequency of 30 to 150 years or more. A detailed analysis of historical fire data concluded that fire suppression activities have been ineffective at excluding fire from southern California chaparral, unlike in ponderosa pine forests. In addition, the number of fires is increasing in step with population growth and exacerbated by climate change. Chaparral stand age does not have a significant correlation to its tendency to burn. Large, infrequent, high-intensity wildfires are part of the natural fire regime for California chaparral. Extreme weather conditions (low humidity, high temperature, high winds), drought, and low fuel moisture are the primary factors in determining how large a chaparral fire becomes. See also California Chaparral Institute California chaparral and woodlands ecoregion California coastal sage and chaparral California montane chaparral and woodlands California interior chaparral and woodlands Heath (habitat) Fire ecology Keystone species reintroduction: (sufficient) native keystone grazing species in grasslands will promote tree growth, reducing wildfire likelihood Garrigue International Association of Wildland Fire References Bibliography Haidinger, T.L., and J.E. Keeley. 1993. Role of high fire frequency in destruction of mixed chaparral. Madrono 40: 141–147. Halsey, R.W. 2008. Fire, Chaparral, and Survival in Southern California. Second Edition. Sunbelt Publications, San Diego, CA. 232 p. Hanes, T. L. 1971. Succession after fire in the chaparral of southern California. Ecol. Monographs 41: 27–52. Hubbard, R.F. 1986. Stand age and growth dynamics in chamise chaparral. Master's thesis, San Diego State University, San Diego, California. Keeley, J. E., C. J. Fotheringham, and M. Morais. 1999. Reexamining fire suppression impacts on brushland fire regimes. Science 284:1829–1832. Keeley, J.E. 1995. Future of California floristics and systematics: wildfire threats to the California flora. Madrono 42: 175–179. Keeley, J.E., A.H. Pfaff, and H.D. Stafford. 2005. Fire suppression impacts on postfire recovery of Sierra Nevada chaparral shrublands. International Journal of Wildland Fire 14: 255–265. Larigauderie, A., T.W. Hubbard, and J. Kummerow. 1990. Growth dynamics of two chaparral shrub species with time after fire. Madrono 37: 225–236. Minnich, R. A. 1983. Fire mosaics in southern California and northern Baja California. Science 219:1287–1294. Moritz, M.A., J.E. Keeley, E.A. Johnson, and A.A. Schaffner. 2004. 
Testing a basic assumption of shrubland fire management: How important is fuel age? Frontiers in Ecology and the Environment 2:67–72. Pratt, R. B., A. L. Jacobsen, A. R. Ramirez, A. M. Helms, C. A. Traugh, M. F. Tobin, M. S. Heffner, and S. D. Davis. 2013. Mortality of resprouting chaparral shrubs after a fire and during a record drought: physiological mechanisms and demographic consequences. Global Change Biology 20:893–907. Syphard, A. D., V. C. Radeloff, J. E. Keeley, T. J. Hawbaker, M. K. Clayton, S. I. Stewart, and R. B. Hammer. 2007. Human influence on California fire regimes. Ecological Applications 17:1388–1402. Vale, T. R. 2002. Fire, Native Peoples, and the Natural Landscape. Island Press, Washington, DC, USA. Venturas, M. D., E. D. MacKinnon, H. L. Dario, A. L. Jacobsen, R. B. Pratt, and S. D. Davis. 2016. Chaparral shrub hydraulic traits, size, and life history types relate to species mortality during California's historic drought of 2014. PLoS ONE 11(7): p.e0159145. Zedler, P.H. 1995. Fire frequency in southern California shrublands: biological effects and management options, pp. 101–112 in J.E. Keeley and T. Scott (eds.), Brushfires in California wildlands: ecology and resource management. International Association of Wildland Fire, Fairfield, Wash. External links The California Chaparral Institute website Mediterranean forests, woodlands, and scrub in the United States Plant communities of California Plants by habitat . . . San Bernardino Mountains San Gabriel Mountains Santa Susana Mountains Santa Ana Mountains Ecology of the Sierra Nevada (United States) Wildfire ecology Nearctic ecoregions Sclerophyll forests
Chaparral
[ "Biology" ]
5,017
[ "Plants by habitat", "Organisms by habitat", "Plants" ]
7,463
https://en.wikipedia.org/wiki/Cold%20fusion
Cold fusion is a hypothesized type of nuclear reaction that would occur at, or near, room temperature. It would contrast starkly with the "hot" fusion that is known to take place naturally within stars and artificially in hydrogen bombs and prototype fusion reactors under immense pressure and at temperatures of millions of degrees, and be distinguished from muon-catalyzed fusion. There is currently no accepted theoretical model that would allow cold fusion to occur. In 1989, two electrochemists at the University of Utah, Martin Fleischmann and Stanley Pons, reported that their apparatus had produced anomalous heat ("excess heat") of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode. The reported results received wide media attention and raised hopes of a cheap and abundant source of energy. Many scientists tried to replicate the experiment with the few details available. Expectations diminished as a result of numerous failed replications, the retraction of several previously reported positive replications, the identification of methodological flaws and experimental errors in the original study, and, ultimately, the confirmation that Fleischmann and Pons had not observed the expected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion. A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion. Presently, since articles about cold fusion are rarely published in peer-reviewed mainstream scientific journals, they do not attract the level of scrutiny expected for mainstream scientific publications. Nevertheless, some interest in cold fusion has continued through the decades—for example, a Google-funded failed replication attempt was published in a 2019 issue of Nature. A small community of researchers continues to investigate it, often under the alternative designations low-energy nuclear reactions (LENR) or condensed matter nuclear science (CMNS). History Nuclear fusion is normally understood to occur at temperatures in the tens of millions of degrees. This is called "thermonuclear fusion". Since the 1920s, there has been speculation that nuclear fusion might be possible at much lower temperatures by catalytically fusing hydrogen absorbed in a metal catalyst. In 1989, a claim by Stanley Pons and Martin Fleischmann (then one of the world's leading electrochemists) that such cold fusion had been observed caused a brief media sensation before the majority of scientists criticized their claim as incorrect after many found they could not replicate the excess heat. Since the initial announcement, cold fusion research has continued by a small community of researchers who believe that such reactions happen and hope to gain wider recognition for their experimental evidence. Early research The ability of palladium to absorb hydrogen was recognized as early as the nineteenth century by Thomas Graham. 
In the late 1920s, two Austrian-born scientists, Friedrich Paneth and Kurt Peters, originally reported the transformation of hydrogen into helium by nuclear catalysis when hydrogen was absorbed by finely divided palladium at room temperature. However, the authors later retracted that report, saying that the helium they measured was due to background from the air. In 1927, Swedish scientist John Tandberg reported that he had fused hydrogen into helium in an electrolytic cell with palladium electrodes. On the basis of his work, he applied for a Swedish patent for "a method to produce helium and useful reaction energy". Due to Paneth and Peters's retraction and his inability to explain the physical process, his patent application was denied. After deuterium was discovered in 1932, Tandberg continued his experiments with heavy water. The final experiments made by Tandberg with heavy water were similar to the original experiment by Fleischmann and Pons. Fleischmann and Pons were not aware of Tandberg's work. The term "cold fusion" was used as early as 1956 in an article in The New York Times about Luis Alvarez's work on muon-catalyzed fusion. Paul Palmer and then Steven Jones of Brigham Young University used the term "cold fusion" in 1986 in an investigation of "geo-fusion", the possible existence of fusion involving hydrogen isotopes in a planetary core. In his original paper on this subject with Clinton Van Siclen, submitted in 1985, Jones had coined the term "piezonuclear fusion". Fleischmann–Pons experiment The most famous cold fusion claims were made by Stanley Pons and Martin Fleischmann in 1989. After a brief period of interest by the wider scientific community, their reports were called into question by nuclear physicists. Pons and Fleischmann never retracted their claims, but moved their research program from the US to France after the controversy erupted. Events preceding announcement Martin Fleischmann of the University of Southampton and Stanley Pons of the University of Utah hypothesized that the high compression ratio and mobility of deuterium that could be achieved within palladium metal using electrolysis might result in nuclear fusion. To investigate, they conducted electrolysis experiments using a palladium cathode and heavy water within a calorimeter, an insulated vessel designed to measure process heat. Current was applied continuously for many weeks, with the heavy water being renewed at intervals. Some deuterium was thought to be accumulating within the cathode, but most was allowed to bubble out of the cell, joining oxygen produced at the anode. For most of the time, the power input to the cell was equal to the calculated power leaving the cell within measurement accuracy, and the cell temperature was stable at around 30 °C. But then, at some point (in some of the experiments), the temperature rose suddenly to about 50 °C without changes in the input power. These high temperature phases would last for two days or more and would repeat several times in any given experiment once they had occurred. The calculated power leaving the cell was significantly higher than the input power during these high temperature phases. Eventually the high temperature phases would no longer occur within a particular cell. In 1988, Fleischmann and Pons applied to the United States Department of Energy for funding towards a larger series of experiments. Up to this point they had been funding their experiments using a small device built with $100,000 out-of-pocket. 
The grant proposal was turned over for peer review, and one of the reviewers was Steven Jones of Brigham Young University. Jones had worked for some time on muon-catalyzed fusion, a known method of inducing nuclear fusion without high temperatures, and had written an article on the topic entitled "Cold nuclear fusion" that had been published in Scientific American in July 1987. Fleischmann and Pons and co-workers met with Jones and co-workers on occasion in Utah to share research and techniques. During this time, Fleischmann and Pons described their experiments as generating considerable "excess energy", in the sense that it could not be explained by chemical reactions alone. They felt that such a discovery could bear significant commercial value and would be entitled to patent protection. Jones, however, was measuring neutron flux, which was not of commercial interest. To avoid future problems, the teams appeared to agree to publish their results simultaneously, though their accounts of their 6 March meeting differ. Announcement In mid-March 1989, both research teams were ready to publish their findings, and Fleischmann and Jones had agreed to meet at an airport on 24 March to send their papers to Nature via FedEx. Fleischmann and Pons, however, pressured by the University of Utah, which wanted to establish priority on the discovery, broke their apparent agreement, disclosing their work at a press conference on 23 March (they claimed in the press release that it would be published in Nature but instead submitted their paper to the Journal of Electroanalytical Chemistry). Jones, upset, faxed in his paper to Nature after the press conference. Fleischmann and Pons' announcement drew wide media attention, as well as attention from the scientific community. The 1986 discovery of high-temperature superconductivity had made scientists more open to revelations of unexpected but potentially momentous scientific results that could be replicated reliably even if they could not be explained by established theories. Many scientists were also reminded of the Mössbauer effect, a process involving nuclear transitions in a solid. Its discovery 30 years earlier had also been unexpected, though it was quickly replicated and explained within the existing physics framework. The announcement of a new purported clean source of energy came at a crucial time: adults still remembered the 1973 oil crisis and the problems caused by oil dependence, anthropogenic global warming was starting to become notorious, the anti-nuclear movement was labeling nuclear power plants as dangerous and getting them closed, people had in mind the consequences of strip mining, acid rain, the greenhouse effect and the Exxon Valdez oil spill, which happened the day after the announcement. In the press conference, Chase N. Peterson, Fleischmann and Pons, backed by the solidity of their scientific credentials, repeatedly assured the journalists that cold fusion would solve environmental problems, and would provide a limitless inexhaustible source of clean energy, using only seawater as fuel. They said the results had been confirmed dozens of times and they had no doubts about them. 
In the accompanying press release Fleischmann was quoted saying: "What we have done is to open the door of a new research area, our indications are that the discovery will be relatively easy to make into a usable technology for generating heat and power, but continued work is needed, first, to further understand the science and secondly, to determine its value to energy economics." Response and fallout Although the experimental protocol had not been published, physicists in several countries attempted, and failed, to replicate the excess heat phenomenon. The first paper submitted to Nature reproducing excess heat, although it passed peer review, was rejected because most similar experiments were negative and there were no theories that could explain a positive result; this paper was later accepted for publication by the journal Fusion Technology. Nathan Lewis, professor of chemistry at the California Institute of Technology, led one of the most ambitious validation efforts, trying many variations on the experiment without success, while CERN physicist Douglas R. O. Morrison said that "essentially all" attempts in Western Europe had failed. Even those reporting success had difficulty reproducing Fleischmann and Pons' results. On 10 April 1989, a group at Texas A&M University published results of excess heat and later that day a group at the Georgia Institute of Technology announced neutron production—the strongest replication announced up to that point due to the detection of neutrons and the reputation of the lab. On 12 April Pons was acclaimed at an ACS meeting. But Georgia Tech retracted their announcement on 13 April, explaining that their neutron detectors gave false positives when exposed to heat. Another attempt at independent replication, headed by Robert Huggins at Stanford University, which also reported early success with a light water control, became the only scientific support for cold fusion in 26 April US Congress hearings. But when he finally presented his results he reported an excess heat of only one degree Celsius, a result that could be explained by chemical differences between heavy and light water in the presence of lithium. He had not tried to measure any radiation and his research was derided by scientists who saw it later. For the next six weeks, competing claims, counterclaims, and suggested explanations kept what was referred to as "cold fusion" or "fusion confusion" in the news. In April 1989, Fleischmann and Pons published a "preliminary note" in the Journal of Electroanalytical Chemistry. This paper notably showed a gamma peak without its corresponding Compton edge, which indicated they had made a mistake in claiming evidence of fusion byproducts. Fleischmann and Pons replied to this critique, but the only thing left clear was that no gamma ray had been registered and that Fleischmann refused to recognize any mistakes in the data. A much longer paper published a year later went into details of calorimetry but did not include any nuclear measurements. Nevertheless, Fleischmann and Pons and a number of other researchers who found positive results remained convinced of their findings. The University of Utah asked Congress to provide $25 million to pursue the research, and Pons was scheduled to meet with representatives of President Bush in early May. On 30 April 1989, cold fusion was declared dead by The New York Times. The Times called it a circus the same day, and the Boston Herald attacked cold fusion the following day. 
On 1 May 1989, the American Physical Society held a session on cold fusion in Baltimore, including many reports of experiments that failed to produce evidence of cold fusion. At the end of the session, eight of the nine leading speakers stated that they considered the initial Fleischmann and Pons claim dead, with the ninth, Johann Rafelski, abstaining. Steven E. Koonin of Caltech called the Utah report a result of "the incompetence and delusion of Pons and Fleischmann," which was met with a standing ovation. Douglas R. O. Morrison, a physicist representing CERN, was the first to call the episode an example of pathological science. On 4 May, due to all this new criticism, the meetings with various representatives from Washington were cancelled. From 8 May, only the A&M tritium results kept cold fusion afloat. In July and November 1989, Nature published papers critical of cold fusion claims. Negative results were also published in several other scientific journals including Science, Physical Review Letters, and Physical Review C (nuclear physics). In August 1989, in spite of this trend, the state of Utah invested $4.5 million to create the National Cold Fusion Institute. The United States Department of Energy organized a special panel to review cold fusion theory and research. The panel issued its report in November 1989, concluding that results as of that date did not present convincing evidence that useful sources of energy would result from the phenomena attributed to cold fusion. The panel noted the large number of failures to replicate excess heat and the greater inconsistency of reports of nuclear reaction byproducts expected by established conjecture. Nuclear fusion of the type postulated would be inconsistent with current understanding and, if verified, would require established conjecture, perhaps even theory itself, to be extended in an unexpected way. The panel was against special funding for cold fusion research, but supported modest funding of "focused experiments within the general funding system". Cold fusion supporters continued to argue that the evidence for excess heat was strong, and in September 1990 the National Cold Fusion Institute listed 92 groups of researchers from 10 countries that had reported corroborating evidence of excess heat, but they refused to provide any evidence of their own arguing that it could endanger their patents. However, no further DOE nor NSF funding resulted from the panel's recommendation. By this point, however, academic consensus had moved decidedly toward labeling cold fusion as a kind of "pathological science". In March 1990, Michael H. Salamon, a physicist from the University of Utah, and nine co-authors reported negative results. University faculty were then "stunned" when a lawyer representing Pons and Fleischmann demanded the Salamon paper be retracted under threat of a lawsuit. The lawyer later apologized; Fleischmann defended the threat as a legitimate reaction to alleged bias displayed by cold-fusion critics. In early May 1990, one of the two A&M researchers, Kevin Wolf, acknowledged the possibility of spiking, but said that the most likely explanation was tritium contamination in the palladium electrodes or simply contamination due to sloppy work. In June 1990 an article in Science by science writer Gary Taubes destroyed the public credibility of the A&M tritium results when it accused its group leader John Bockris and one of his graduate students of spiking the cells with tritium. 
In October 1990 Wolf finally said that the results were explained by tritium contamination in the rods. An A&M cold fusion review panel found that the tritium evidence was not convincing and that, while they couldn't rule out spiking, contamination and measurements problems were more likely explanations, and Bockris never got support from his faculty to resume his research. On 30 June 1991, the National Cold Fusion Institute closed after it ran out of funds; it found no excess heat, and its reports of tritium production were met with indifference. On 1 January 1991, Pons left the University of Utah and went to Europe. In 1992, Pons and Fleischmann resumed research with Toyota Motor Corporation's IMRA lab in France. Fleischmann left for England in 1995, and the contract with Pons was not renewed in 1998 after spending $40 million with no tangible results. The IMRA laboratory stopped cold fusion research in 1998 after spending £12 million. Pons has made no public declarations since, and only Fleischmann continued giving talks and publishing papers. Mostly in the 1990s, several books were published that were critical of cold fusion research methods and the conduct of cold fusion researchers. Over the years, several books have appeared that defended them. Around 1998, the University of Utah had already dropped its research after spending over $1 million, and in the summer of 1997, Japan cut off research and closed its own lab after spending $20 million. Later research A 1991 review by a cold fusion proponent had calculated "about 600 scientists" were still conducting research. After 1991, cold fusion research only continued in relative obscurity, conducted by groups that had increasing difficulty securing public funding and keeping programs open. These small but committed groups of cold fusion researchers have continued to conduct experiments using Fleischmann and Pons electrolysis setups in spite of the rejection by the mainstream community. The Boston Globe estimated in 2004 that there were only 100 to 200 researchers working in the field, most suffering damage to their reputation and career. Since the main controversy over Pons and Fleischmann had ended, cold fusion research has been funded by private and small governmental scientific investment funds in the United States, Italy, Japan, and India. For example, it was reported in Nature, in May, 2019, that Google had spent approximately $10 million on cold fusion research. A group of scientists at well-known research labs (e.g., MIT, Lawrence Berkeley National Lab, and others) worked for several years to establish experimental protocols and measurement techniques in an effort to re-evaluate cold fusion to a high standard of scientific rigor. Their reported conclusion: no cold fusion. In 2021, following Nature's 2019 publication of anomalous findings that might only be explained by some localized fusion, scientists at the Naval Surface Warfare Center, Indian Head Division announced that they had assembled a group of scientists from the Navy, Army and National Institute of Standards and Technology to undertake a new, coordinated study. With few exceptions, researchers have had difficulty publishing in mainstream journals. 
The remaining researchers often term their field Low Energy Nuclear Reactions (LENR), Chemically Assisted Nuclear Reactions (CANR), Lattice Assisted Nuclear Reactions (LANR), Condensed Matter Nuclear Science (CMNS) or Lattice Enabled Nuclear Reactions; one of the reasons being to avoid the negative connotations associated with "cold fusion". The new names avoid making bold implications, like implying that fusion is actually occurring. The researchers who continue their investigations acknowledge that the flaws in the original announcement are the main cause of the subject's marginalization, and they complain of a chronic lack of funding and no possibilities of getting their work published in the highest impact journals. University researchers are often unwilling to investigate cold fusion because they would be ridiculed by their colleagues and their professional careers would be at risk. In 1994, David Goodstein, a professor of physics at Caltech, advocated increased attention from mainstream researchers and described cold fusion as: United States United States Navy researchers at the Space and Naval Warfare Systems Center (SPAWAR) in San Diego have been studying cold fusion since 1989. In 2002 they released a two-volume report, "Thermal and nuclear aspects of the Pd/D2O system", with a plea for funding. This and other published papers prompted a 2004 Department of Energy (DOE) review. 2004 DOE panel In August 2003, the U.S. Secretary of Energy, Spencer Abraham, ordered the DOE to organize a second review of the field. This was thanks to an April 2003 letter sent by MIT's Peter L. Hagelstein, and the publication of many new papers, including the Italian ENEA and other researchers in the 2003 International Cold Fusion Conference, and a two-volume book by U.S. SPAWAR in 2002. Cold fusion researchers were asked to present a review document of all the evidence since the 1989 review. The report was released in 2004. The reviewers were "split approximately evenly" on whether the experiments had produced energy in the form of heat, but "most reviewers, even those who accepted the evidence for excess power production, 'stated that the effects are not repeatable, the magnitude of the effect has not increased in over a decade of work, and that many of the reported experiments were not well documented'". In summary, reviewers found that cold fusion evidence was still not convincing 15 years later, and they did not recommend a federal research program. They only recommended that agencies consider funding individual well-thought studies in specific areas where research "could be helpful in resolving some of the controversies in the field". They summarized its conclusions thus: Cold fusion researchers placed a "rosier spin" on the report, noting that they were finally being treated like normal scientists, and that the report had increased interest in the field and caused "a huge upswing in interest in funding cold fusion research". However, in a 2009 BBC article on an American Chemical Society's meeting on cold fusion, particle physicist Frank Close was quoted stating that the problems that plagued the original cold fusion announcement were still happening: results from studies are still not being independently verified and inexplicable phenomena encountered are being labelled as "cold fusion" even if they are not, in order to attract the attention of journalists. 
In February 2012, millionaire Sidney Kimmel, convinced that cold fusion was worth investing in by a 19 April 2009 interview with physicist Robert Duncan on the US news show 60 Minutes, made a grant of $5.5 million to the University of Missouri to establish the Sidney Kimmel Institute for Nuclear Renaissance (SKINR). The grant was intended to support research into the interactions of hydrogen with palladium, nickel or platinum under extreme conditions. In March 2013 Graham K. Hubler, a nuclear physicist who worked for the Naval Research Laboratory for 40 years, was named director. One of the SKINR projects is to replicate a 1991 experiment in which a professor associated with the project, Mark Prelas, says bursts of millions of neutrons a second were recorded, which was stopped because "his research account had been frozen". He claims that the new experiment has already seen "neutron emissions at similar levels to the 1991 observation". In May 2016, the United States House Committee on Armed Services, in its report on the 2017 National Defense Authorization Act, directed the Secretary of Defense to "provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016". Italy Since the Fleischmann and Pons announcement, the Italian national agency for new technologies, energy and sustainable economic development (ENEA) has funded Franco Scaramuzzi's research into whether excess heat can be measured from metals loaded with deuterium gas. Such research is distributed across ENEA departments, CNR laboratories, INFN, universities and industrial laboratories in Italy, where the group continues to try to achieve reliable reproducibility (i.e. getting the phenomenon to happen in every cell, and inside a certain frame of time). In 2006–2007, the ENEA started a research program which claimed to have found excess power of up to 500 percent, and in 2009, ENEA hosted the 15th cold fusion conference. Japan Between 1992 and 1997, Japan's Ministry of International Trade and Industry sponsored a "New Hydrogen Energy (NHE)" program of US$20 million to research cold fusion. Announcing the end of the program in 1997, the director and one-time proponent of cold fusion research Hideo Ikegami stated "We couldn't achieve what was first claimed in terms of cold fusion. (...) We can't find any reason to propose more money for the coming year or for the future." In 1999 the Japan C-F Research Society was established to promote the independent research into cold fusion that continued in Japan. The society holds annual meetings. Perhaps the most famous Japanese cold fusion researcher was Yoshiaki Arata, from Osaka University, who claimed in a demonstration to produce excess heat when deuterium gas was introduced into a cell containing a mixture of palladium and zirconium oxide, a claim supported by fellow Japanese researcher Akira Kitamura of Kobe University and Michael McKubre at SRI. India In the 1990s, India stopped its research in cold fusion at the Bhabha Atomic Research Centre because of the lack of consensus among mainstream scientists and the US denunciation of the research. Yet, in 2008, the National Institute of Advanced Studies recommended that the Indian government revive this research. Projects were commenced at Chennai's Indian Institute of Technology, the Bhabha Atomic Research Centre and the Indira Gandhi Centre for Atomic Research. 
However, there is still skepticism among scientists and, for all practical purposes, research has stalled since the 1990s. A special section in the Indian multidisciplinary journal Current Science published 33 cold fusion papers in 2015 by major cold fusion researchers including several Indian researchers. Reported results A cold fusion experiment usually includes: a metal, such as palladium or nickel, in bulk, thin films or powder; and deuterium, hydrogen, or both, in the form of water, gas or plasma. Electrolysis cells can be either open cell or closed cell. In open cell systems, the electrolysis products, which are gaseous, are allowed to leave the cell. In closed cell experiments, the products are captured, for example by catalytically recombining the products in a separate part of the experimental system. These experiments generally strive for a steady state condition, with the electrolyte being replaced periodically. There are also "heat-after-death" experiments, where the evolution of heat is monitored after the electric current is turned off. The most basic setup of a cold fusion cell consists of two electrodes submerged in a solution containing palladium and heavy water. The electrodes are then connected to a power source to transmit electricity from one electrode to the other through the solution. Even when anomalous heat is reported, it can take weeks for it to begin to appear—this is known as the "loading time," the time required to saturate the palladium electrode with hydrogen (see "Loading ratio" section). The Fleischmann and Pons early findings regarding helium, neutron radiation and tritium were never replicated satisfactorily, and its levels were too low for the claimed heat production and inconsistent with each other. Neutron radiation has been reported in cold fusion experiments at very low levels using different kinds of detectors, but levels were too low, close to background, and found too infrequently to provide useful information about possible nuclear processes. Excess heat and energy production An excess heat observation is based on an energy balance. Various sources of energy input and output are continuously measured. Under normal conditions, the energy input can be matched to the energy output to within experimental error. In experiments such as those run by Fleischmann and Pons, an electrolysis cell operating steadily at one temperature transitions to operating at a higher temperature with no increase in applied current. If the higher temperatures were real, and not an experimental artifact, the energy balance would show an unaccounted term. In the Fleischmann and Pons experiments, the rate of inferred excess heat generation was in the range of 10–20% of total input, though this could not be reliably replicated by most researchers. Researcher Nathan Lewis discovered that the excess heat in Fleischmann and Pons's original paper was not measured, but estimated from measurements that didn't have any excess heat. Unable to produce excess heat or neutrons, and with positive experiments being plagued by errors and giving disparate results, most researchers declared that heat production was not a real effect and ceased working on the experiments. In 1993, after their original report, Fleischmann reported "heat-after-death" experiments—where excess heat was measured after the electric current supplied to the electrolytic cell was turned off. This type of report has also become part of subsequent cold fusion claims. 
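The energy-balance bookkeeping described above can be made concrete with a small sketch. The following Python fragment is a minimal illustration, not a reconstruction of any group's actual calorimetry: the thermoneutral potential of roughly 1.54 V assumed for heavy-water electrolysis and all of the example numbers are illustrative assumptions, chosen only to show how the input-power correction works in an open cell and how unaccounted recombination of the evolved gases can masquerade as excess heat (a point taken up again under "Calorimetry errors" below).

    # Minimal sketch of an open-cell energy balance (illustrative assumptions only).
    E_TN_D2O = 1.54  # volts; approximate thermoneutral potential for heavy-water electrolysis

    def apparent_excess_power(current_a, cell_voltage_v, measured_heat_w,
                              recombination_fraction=0.0, e_tn=E_TN_D2O):
        """Apparent excess power (watts) in an open electrolysis cell.

        Only I*(V - E_tn) of the electrical input should appear as heat; the rest
        leaves the cell as the chemical enthalpy of the D2/O2 gas stream.  Any
        fraction of that gas which recombines inside the cell returns roughly
        I*E_tn*fraction as additional heat, mimicking "excess heat" if ignored.
        """
        heat_from_electricity = current_a * (cell_voltage_v - e_tn)
        heat_from_recombination = recombination_fraction * current_a * e_tn
        return measured_heat_w - heat_from_electricity - heat_from_recombination

    # 0.5 A at 4.0 V with 1.35 W of measured heat looks like ~0.12 W of "excess" power,
    # but assuming 15% internal recombination removes essentially all of the anomaly.
    print(apparent_excess_power(0.5, 4.0, 1.35))                                # ~0.12 W
    print(apparent_excess_power(0.5, 4.0, 1.35, recombination_fraction=0.15))   # ~0.005 W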
Helium, heavy elements, and neutrons Known instances of nuclear reactions, aside from producing energy, also produce nucleons and particles on readily observable ballistic trajectories. In support of their claim that nuclear reactions took place in their electrolytic cells, Fleischmann and Pons reported a neutron flux of 4,000 neutrons per second, as well as detection of tritium. The classical branching ratio for previously known fusion reactions that produce tritium would predict, with 1 watt of power, the production of 10¹² neutrons per second, levels that would have been fatal to the researchers. In 2009, Mosier-Boss et al. reported what they called the first scientific report of highly energetic neutrons, using CR-39 plastic radiation detectors, but the claims cannot be validated without a quantitative analysis of neutrons. Several medium and heavy elements such as calcium, titanium, chromium, manganese, iron, cobalt, copper and zinc have been reported as detected by several researchers, such as Tadahiko Mizuno and George Miley. The report presented to the United States Department of Energy (DOE) in 2004 indicated that deuterium-loaded foils could be used to detect fusion reaction products and, although the reviewers found the evidence presented to them inconclusive, they indicated that those experiments did not use state-of-the-art techniques. In response to doubts about the lack of nuclear products, cold fusion researchers have tried to capture and measure nuclear products correlated with excess heat. Considerable attention has been given to measuring ⁴He production. However, the reported levels are very near to background, so contamination by trace amounts of helium normally present in the air cannot be ruled out. In the report presented to the DOE in 2004, the reviewers' opinion was divided on the evidence for ⁴He, with the most negative reviews concluding that although the amounts detected were above background levels, they were very close to them and therefore could be caused by contamination from air. One of the main criticisms of cold fusion was that deuteron–deuteron fusion into helium was expected to result in the production of gamma rays—which were not observed then and have not been observed in subsequent cold fusion experiments. Cold fusion researchers have since claimed to find X-rays, helium, neutrons and nuclear transmutations. Some researchers also claim to have found them using only light water and nickel cathodes. The 2004 DOE panel expressed concerns about the poor quality of the theoretical framework cold fusion proponents presented to account for the lack of gamma rays. Proposed mechanisms Researchers in the field do not agree on a theory for cold fusion. One proposal considers that hydrogen and its isotopes can be absorbed in certain solids, including palladium hydride, at high densities. This creates a high partial pressure, reducing the average separation of hydrogen isotopes. However, the reduction in separation falls short, by roughly a factor of ten, of what would be needed to produce the fusion rates claimed in the original experiment. It was also proposed that a higher density of hydrogen inside the palladium and a lower potential barrier could raise the possibility of fusion at lower temperatures than expected from a simple application of Coulomb's law.
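For scale on the separation argument, the following minimal sketch compares approximate textbook distances between the deuterons in a D2 molecule and between interstitial sites in palladium deuteride; the round numbers are assumptions used only for illustration and are not measurements from any cold fusion experiment. The same comparison underlies the lattice-spacing objection discussed under "Repulsion forces" below.

    # Rough comparison of deuteron-deuteron separations (approximate textbook values).
    d2_bond_length_pm = 74.0        # D-D internuclear distance in a D2 molecule
    pd_lattice_constant_pm = 389.0  # face-centred cubic palladium lattice constant

    # In PdD, deuterium occupies octahedral interstitial sites; nearest-neighbour
    # octahedral sites are separated by a / sqrt(2).
    nearest_site_spacing_pm = pd_lattice_constant_pm / 2 ** 0.5

    print(f"D2 molecule:       {d2_bond_length_pm:.0f} pm")
    print(f"PdD lattice sites: {nearest_site_spacing_pm:.0f} pm")   # ~275 pm
    print(f"Lattice deuterons are ~{nearest_site_spacing_pm / d2_bond_length_pm:.1f}x farther apart")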
Electron screening of the positive hydrogen nuclei by the negative electrons in the palladium lattice was suggested to the 2004 DOE commission, but the panel found the theoretical explanations unconvincing and inconsistent with current physics theories. Criticism Criticism of cold fusion claims generally takes one of two forms: either pointing out the theoretical implausibility that fusion reactions have occurred in electrolysis setups or criticizing the excess heat measurements as being spurious, erroneous, or due to poor methodology or controls. There are several reasons why known fusion reactions are an unlikely explanation for the excess heat and associated cold fusion claims. Repulsion forces Because nuclei are all positively charged, they strongly repel one another. Normally, in the absence of a catalyst such as a muon, very high kinetic energies are required to overcome this charged repulsion. Extrapolating from known fusion rates, the rate for uncatalyzed fusion at room-temperature energy would be 50 orders of magnitude lower than needed to account for the reported excess heat. In muon-catalyzed fusion there are more fusions because the presence of the muon causes deuterium nuclei to be 207 times closer than in ordinary deuterium gas. But deuterium nuclei inside a palladium lattice are further apart than in deuterium gas, and there should be fewer fusion reactions, not more. Paneth and Peters in the 1920s already knew that palladium can absorb up to 900 times its own volume of hydrogen gas, storing it at several thousand times atmospheric pressure. This led them to believe that they could increase the nuclear fusion rate by simply loading palladium rods with hydrogen gas. Tandberg then tried the same experiment but used electrolysis to make palladium absorb more deuterium and force the deuterium further together inside the rods, thus anticipating the main elements of Fleischmann and Pons' experiment. They all hoped that pairs of hydrogen nuclei would fuse together to form helium, which at the time was needed in Germany to fill zeppelins, but no evidence of helium or of an increased fusion rate was ever found. This was also the belief of geologist Palmer, who convinced Steven Jones that the helium-3 occurring naturally in the Earth perhaps came from fusion involving hydrogen isotopes inside catalysts like nickel and palladium. This led their team in 1986 to independently make the same experimental setup as Fleischmann and Pons (a palladium cathode submerged in heavy water, absorbing deuterium via electrolysis). Fleischmann and Pons had much the same belief, but they calculated the equivalent pressure to be 10²⁷ atmospheres, whereas cold fusion experiments achieve a loading ratio of only one to one, which corresponds to only between 10,000 and 20,000 atmospheres. John R. Huizenga says they had misinterpreted the Nernst equation, leading them to believe that there was enough pressure to bring deuterons so close to each other that there would be spontaneous fusion. Lack of expected reaction products Conventional deuteron fusion is a two-step process in which an unstable high-energy intermediary is formed: ²H + ²H → ⁴He* + 24 MeV. Experiments have shown only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway.
The products formed via these decay pathways are:
⁴He* → n + ³He + 3.3 MeV (branching ratio ≈ 50%)
⁴He* → p + ³H + 4.0 MeV (branching ratio ≈ 50%)
⁴He* → ⁴He + γ + 24 MeV (branching ratio ≈ 10⁻⁶)
Only about one in a million of the intermediaries take the third pathway, making its products very rare compared to the other paths. If 1 watt (6.242 × 10¹⁸ eV/s) were produced from ~2.2575 × 10¹² deuteron fusions per second, with the known branching ratios, the resulting neutrons and tritium (³H) would be easily measured. Some researchers reported detecting ⁴He but without the expected neutron or tritium production; such a result would require branching ratios strongly favouring the third pathway, with the actual rates of the first two pathways lower by at least five orders of magnitude than observations from other experiments, directly contradicting both theoretically predicted and observed branching probabilities. Those reports of ⁴He production did not include detection of gamma rays, which would require the third pathway to have been changed somehow so that gamma rays are no longer emitted. The known rate of the decay process together with the inter-atomic spacing in a metallic crystal makes heat transfer of the 24 MeV excess energy into the host metal lattice prior to the intermediary's decay inexplicable by conventional understandings of momentum and energy transfer, and even then there would be measurable levels of radiation. Also, experiments indicate that the ratios of deuterium fusion remain constant at different energies. In general, pressure and chemical environment cause only small changes to fusion ratios. An early explanation invoked the Oppenheimer–Phillips process at low energies, but its magnitude was too small to explain the altered ratios. Setup of experiments Cold fusion setups utilize an input power source (to ostensibly provide activation energy), a platinum group electrode, a deuterium or hydrogen source, a calorimeter, and, at times, detectors to look for byproducts such as helium or neutrons. Critics have variously taken issue with each of these aspects and have asserted that there has not yet been a consistent reproduction of claimed cold fusion results in either energy output or byproducts. Some cold fusion researchers who claim that they can consistently measure an excess heat effect have argued that the apparent lack of reproducibility might be attributable to a lack of quality control in the electrode metal or the amount of hydrogen or deuterium loaded in the system. Critics have further taken issue with what they describe as mistakes or errors of interpretation that cold fusion researchers have made in calorimetry analyses and energy budgets. Reproducibility In 1989, after Fleischmann and Pons had made their claims, many research groups tried to reproduce the Fleischmann–Pons experiment, without success. A few other research groups, however, reported successful reproductions of cold fusion during this time. In July 1989, an Indian group from the Bhabha Atomic Research Centre (P. K. Iyengar and M. Srinivasan) and, in October 1989, John Bockris' group from Texas A&M University reported on the creation of tritium. In December 1990, Professor Richard Oriani of the University of Minnesota reported excess heat. Groups that did report successes found that some of their cells were producing the effect, while other cells that were built exactly the same and used the same materials were not producing the effect.
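Returning to the branching-ratio argument above, a minimal order-of-magnitude sketch shows why the absence of commensurate neutron and tritium signals is considered decisive. The branch energies and ratios are the approximate values quoted in the text; the exact number of fusions per watt depends on how the energy per fusion is counted, but the order of magnitude is the point.

    # Order-of-magnitude check: radiation expected if 1 W came from conventional d-d fusion.
    EV_PER_WATT_SECOND = 6.242e18  # 1 W = 6.242e18 eV/s

    branches = {
        "n + He-3":      {"ratio": 0.5,  "energy_mev": 3.3},
        "p + H-3":       {"ratio": 0.5,  "energy_mev": 4.0},
        "He-4 + gamma":  {"ratio": 1e-6, "energy_mev": 24.0},
    }

    # Branching-ratio-weighted mean energy released per fusion (the gamma branch is negligible).
    mean_energy_ev = sum(b["ratio"] * b["energy_mev"] * 1e6 for b in branches.values())
    fusions_per_second = EV_PER_WATT_SECOND / mean_energy_ev      # ~1.7e12

    neutrons_per_second = branches["n + He-3"]["ratio"] * fusions_per_second
    tritons_per_second = branches["p + H-3"]["ratio"] * fusions_per_second

    # Both rates come out near 1e12 per second, consistent with the ~10^12 neutrons/s
    # figure quoted earlier -- vastly above anything reported in the experiments.
    print(f"{fusions_per_second:.1e} fusions/s, {neutrons_per_second:.1e} n/s, {tritons_per_second:.1e} t/s")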
Researchers that continued to work on the topic have claimed that over the years many successful replications have been made, but still have problems getting reliable replications. Reproducibility is one of the main principles of the scientific method, and its lack led most physicists to believe that the few positive reports could be attributed to experimental error. The DOE 2004 report said among its conclusions and recommendations: Loading ratio Cold fusion researchers (McKubre since 1994, ENEA in 2011) have speculated that a cell that is loaded with a deuterium/palladium ratio lower than 100% (or 1:1) will not produce excess heat. Since most of the negative replications from 1989 to 1990 did not report their ratios, this has been proposed as an explanation for failed reproducibility. This loading ratio is hard to obtain, and some batches of palladium never reach it because the pressure causes cracks in the palladium, allowing the deuterium to escape. Fleischmann and Pons never disclosed the deuterium/palladium ratio achieved in their cells; there are no longer any batches of the palladium used by Fleischmann and Pons (because the supplier now uses a different manufacturing process), and researchers still have problems finding batches of palladium that achieve heat production reliably. Misinterpretation of data Some research groups initially reported that they had replicated the Fleischmann and Pons results but later retracted their reports and offered an alternative explanation for their original positive results. A group at Georgia Tech found problems with their neutron detector, and Texas A&M discovered bad wiring in their thermometers. These retractions, combined with negative results from some famous laboratories, led most scientists to conclude, as early as 1989, that no positive result should be attributed to cold fusion. Calorimetry errors The calculation of excess heat in electrochemical cells involves certain assumptions. Errors in these assumptions have been offered as non-nuclear explanations for excess heat. One assumption made by Fleischmann and Pons is that the efficiency of electrolysis is nearly 100%, meaning nearly all the electricity applied to the cell resulted in electrolysis of water, with negligible resistive heating and substantially all the electrolysis product leaving the cell unchanged. This assumption gives the amount of energy expended converting liquid D2O into gaseous D2 and O2. The efficiency of electrolysis is less than one if hydrogen and oxygen recombine to a significant extent within the calorimeter. Several researchers have described potential mechanisms by which this process could occur and thereby account for excess heat in electrolysis experiments. Another assumption is that heat loss from the calorimeter maintains the same relationship with measured temperature as found when calibrating the calorimeter. This assumption ceases to be accurate if the temperature distribution within the cell becomes significantly altered from the condition under which calibration measurements were made. This can happen, for example, if fluid circulation within the cell becomes significantly altered. Recombination of hydrogen and oxygen within the calorimeter would also alter the heat distribution and invalidate the calibration. Publications The ISI identified cold fusion as the scientific topic with the largest number of published papers in 1989, of all scientific disciplines. 
The Nobel Laureate Julian Schwinger declared himself a supporter of cold fusion in the fall of 1989, after much of the response to the initial reports had turned negative. He tried to publish his theoretical paper "Cold Fusion: A Hypothesis" in Physical Review Letters, but the peer reviewers rejected it so harshly that he felt deeply insulted, and he resigned from the American Physical Society (publisher of PRL) in protest. The number of papers sharply declined after 1990 because of two simultaneous phenomena: first, scientists abandoned the field; second, journal editors declined to review new papers. Consequently, cold fusion fell off the ISI charts. Researchers who got negative results turned their backs on the field; those who continued to publish were simply ignored. A 1993 paper in Physics Letters A was the last paper published by Fleischmann, and "one of the last reports [by Fleischmann] to be formally challenged on technical grounds by a cold fusion skeptic." The Journal of Fusion Technology (FT) established a permanent feature in 1990 for cold fusion papers, publishing over a dozen papers per year and giving a mainstream outlet for cold fusion researchers. When editor-in-chief George H. Miley retired in 2001, the journal stopped accepting new cold fusion papers. This has been cited as an example of the importance of sympathetic influential individuals to the publication of cold fusion papers in certain journals. The decline of publications in cold fusion has been described as a "failed information epidemic". The sudden surge of supporters until roughly 50% of scientists support the theory, followed by a decline until there is only a very small number of supporters, has been described as a characteristic of pathological science. The lack of a shared set of unifying concepts and techniques has prevented the creation of a dense network of collaboration in the field; researchers perform efforts in their own and in disparate directions, making the transition to "normal" science more difficult. Cold fusion reports continued to be published in a few journals like Journal of Electroanalytical Chemistry and Il Nuovo Cimento. Some papers also appeared in Journal of Physical Chemistry, Physics Letters A, International Journal of Hydrogen Energy, and a number of Japanese and Russian journals of physics, chemistry, and engineering. Since 2005, Naturwissenschaften has published cold fusion papers; in 2009, the journal named a cold fusion researcher to its editorial board. In 2015 the Indian multidisciplinary journal Current Science published a special section devoted entirely to cold fusion related papers. In the 1990s, the groups that continued to research cold fusion and their supporters established (non-peer-reviewed) periodicals such as Fusion Facts, Cold Fusion Magazine, Infinite Energy Magazine and New Energy Times to cover developments in cold fusion and other fringe claims in energy production that were ignored in other venues. The internet has also become a major means of communication and self-publication for CF researchers. Conferences Cold fusion researchers were for many years unable to get papers accepted at scientific meetings, prompting the creation of their own conferences. The International Conference on Cold Fusion (ICCF) was first held in 1990 and has met every 12 to 18 months since. 
Attendees at some of the early conferences were described as offering no criticism to papers and presentations for fear of giving ammunition to external critics, thus allowing the proliferation of crackpots and hampering the conduct of serious science. Critics and skeptics stopped attending these conferences, with the notable exception of Douglas Morrison, who died in 2001. With the founding in 2004 of the International Society for Condensed Matter Nuclear Science (ISCMNS), the conference was renamed the International Conference on Condensed Matter Nuclear Science—for reasons that are detailed in the subsequent research section above—but reverted to the old name in 2008. Cold fusion research is often referenced by proponents as "low-energy nuclear reactions", or LENR, but according to sociologist Bart Simon the "cold fusion" label continues to serve a social function in creating a collective identity for the field. Since 2006, the American Physical Society (APS) has included cold fusion sessions at their semiannual meetings, clarifying that this does not imply a softening of skepticism. Since 2007, the American Chemical Society (ACS) meetings also include "invited symposium(s)" on cold fusion. An ACS program chair, Gopal Coimbatore, said that without a proper forum the matter would never be discussed and, "with the world facing an energy crisis, it is worth exploring all possibilities." On 22–25 March 2009, the American Chemical Society meeting included a four-day symposium in conjunction with the 20th anniversary of the announcement of cold fusion. Researchers working at the U.S. Navy's Space and Naval Warfare Systems Center (SPAWAR) reported detection of energetic neutrons using a heavy water electrolysis setup and a CR-39 detector, a result previously published in Naturwissenschaften. The authors claim that these neutrons are indicative of nuclear reactions. Without quantitative analysis of the number, energy, and timing of the neutrons and exclusion of other potential sources, this interpretation is unlikely to find acceptance by the wider scientific community. Patents Although details have not surfaced, it appears that the University of Utah forced the 23 March 1989 Fleischmann and Pons announcement to establish priority over the discovery and its patents before the joint publication with Jones. The Massachusetts Institute of Technology (MIT) announced on 12 April 1989 that it had applied for its own patents based on theoretical work of one of its researchers, Peter L. Hagelstein, who had been sending papers to journals from 5 to 12 April. An MIT graduate student applied for a patent but was reportedly rejected by the USPTO in part by the citation of the "negative" MIT Plasma Fusion Center's cold fusion experiment of 1989. On 2 December 1993 the University of Utah licensed all its cold fusion patents to ENECO, a new company created to profit from cold fusion discoveries, and in March 1998 it said that it would no longer defend its patents. The U.S. Patent and Trademark Office (USPTO) now rejects patents claiming cold fusion. Esther Kepplinger, the deputy commissioner of patents in 2004, said that this was done using the same argument as with perpetual motion machines: that they do not work. Patent applications are required to show that the invention is "useful", and this utility is dependent on the invention's ability to function. 
In general USPTO rejections on the sole grounds of the invention's being "inoperative" are rare, since such rejections need to demonstrate "proof of total incapacity", and cases where those rejections are upheld in a Federal Court are even rarer: nevertheless, in 2000, a rejection of a cold fusion patent was appealed in a Federal Court and it was upheld, in part on the grounds that the inventor was unable to establish the utility of the invention. A U.S. patent might still be granted when given a different name to disassociate it from cold fusion, though this strategy has had little success in the US: the same claims that need to be patented can identify it with cold fusion, and most of these patents cannot avoid mentioning Fleischmann and Pons' research due to legal constraints, thus alerting the patent reviewer that it is a cold-fusion-related patent. David Voss said in 1999 that some patents that closely resemble cold fusion processes, and that use materials used in cold fusion, have been granted by the USPTO. The inventor of three such patents had his applications initially rejected when they were reviewed by experts in nuclear science; but then he rewrote the patents to focus more on the electrochemical parts so they would be reviewed instead by experts in electrochemistry, who approved them. When asked about the resemblance to cold fusion, the patent holder said that it used nuclear processes involving "new nuclear physics" unrelated to cold fusion. Melvin Miles was granted in 2004 a patent for a cold fusion device, and in 2007 he described his efforts to remove all instances of "cold fusion" from the patent description to avoid having it rejected outright. At least one patent related to cold fusion has been granted by the European Patent Office. A patent only legally prevents others from using or benefiting from one's invention. However, the general public perceives a patent as a stamp of approval, and a holder of three cold fusion patents said the patents were very valuable and had helped in getting investments. Cultural references A 1990 Michael Winner film Bullseye!, starring Michael Caine and Roger Moore, referenced the Fleischmann and Pons experiment. The film – a comedy – concerned conmen trying to steal scientists' purported findings. However, the film had a poor reception, described as "appallingly unfunny". In Undead Science, sociologist Bart Simon gives some examples of cold fusion in popular culture, saying that some scientists use cold fusion as a synonym for outrageous claims made with no supporting proof, and courses of ethics in science give it as an example of pathological science. It has appeared as a joke in Murphy Brown and The Simpsons. It was adopted as a software product name Adobe ColdFusion and a brand of protein bars (Cold Fusion Foods). It has also appeared in advertising as a synonym for impossible science, for example a 1995 advertisement for Pepsi Max. The plot of The Saint, a 1997 action-adventure film, parallels the story of Fleischmann and Pons, although with a different ending. In Undead Science, Simon posits that film might have affected the public perception of cold fusion, pushing it further into the science fiction realm. Similarly, the tenth episode of 2000 science fiction TV drama Life Force ("Paradise Island") is also based around cold fusion, specifically the efforts of eccentric scientist Hepzibah McKinley (Amanda Walker), who is convinced she has perfected it based on her father's incomplete research into the subject. 
The episode explores its potential benefits and viability within the ongoing post-apocalyptic global warming scenario of the series. In the 2023 video game Atomic Heart, cold fusion is responsible for nearly all of the technological advances.
See also
Bubble fusion
Cold fission
Energy Catalyzer (E-cat)
Faraday-efficiency effect
Incredible utility (patent concept)
Lattice confinement fusion
Muon-catalyzed fusion
Nuclear transmutation
Patterson power cell
Pyroelectric fusion
Widom–Larsen theory
External links
International Society for Condensed Matter Nuclear Science (iscmns.org), which organizes the ICCF conferences and publishes the Journal of Condensed Matter Nuclear Science; see its library of published papers and proceedings.
Low Energy Nuclear Reactions (LENR) Phenomena and Potential Applications: Naval Surface Warfare Center report NSWCDD-PN-15-0040 by Louis F. DeChiaro, PhD, 23 September 2015
Cold fusion
[ "Physics", "Chemistry" ]
11,002
[ "Nuclear physics", "Electrochemistry", "Cold fusion", "Electrolysis", "Nuclear fusion" ]
7,466
https://en.wikipedia.org/wiki/Coal%20tar
Coal tar is a thick dark liquid which is a by-product of the production of coke and coal gas from coal. It is a type of creosote. It has both medical and industrial uses. Medicinally it is a topical medication applied to skin to treat psoriasis and seborrheic dermatitis (dandruff). It may be used in combination with ultraviolet light therapy. Industrially it is a railroad tie preservative and used in the surfacing of roads. Coal tar was listed as a known human carcinogen in the first Report on Carcinogens from the U.S. Federal Government, issued in 1980. Coal tar was discovered circa 1665 and used for medical purposes as early as the 1800s. Circa 1850, the discovery that it could be used as the main raw material for the synthesis of dyes engendered an entire industry. It is on the World Health Organization's List of Essential Medicines. Coal tar is available as a generic medication and over the counter. Side effects include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby, and use during breastfeeding is not typically recommended. The exact mechanism of action is unknown. It is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. Composition Coal tar is produced through thermal destruction (pyrolysis) of coal. Its composition varies with the process and type of coal used – lignite, bituminous or anthracite. Coal tar is a mixture of approximately 10,000 chemicals, of which only about 50% have been identified. Most of the chemical compounds in coal tar are polycyclic aromatic hydrocarbons (PAHs): 4-ring PAHs (chrysene, fluoranthene, pyrene, triphenylene, naphthacene, benzanthracene), 5-ring (picene, benzo[a]pyrene, benzo[e]pyrene, benzofluoranthenes, perylene), 6-ring (dibenzopyrenes, dibenzofluoranthenes, benzoperylenes) and 7-ring (coronene) compounds, together with their methylated and polymethylated derivatives, mono- and polyhydroxylated derivatives, and heterocyclic compounds. Others include benzene, toluene, xylenes, cumenes, coumarone, indene, benzofuran, naphthalene and methyl-naphthalenes, acenaphthene, fluorene, phenol, cresols, pyridine, picolines, phenanthrene, carbazole, quinolines and fluoranthene. Many of these constituents are known carcinogens. Derivatives Various phenolic coal tar derivatives have analgesic (pain-killer) properties. These included acetanilide, phenacetin, and paracetamol, also known as acetaminophen. Paracetamol may be the only coal-tar-derived analgesic still in use today. Industrial phenol is now usually synthesized from crude oil rather than coal tar. Coal tar derivatives are contra-indicated for people with the inherited red blood cell disorder glucose-6-phosphate dehydrogenase deficiency (G6PD deficiency), as they can cause oxidative stress leading to red blood cell breakdown. Mechanism of action The exact mechanism of action is unknown. Coal tar is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds. It is a keratolytic agent, which reduces the growth rate of skin cells and softens the skin's keratin. Uses Medicinal Coal tar is on the World Health Organization's List of Essential Medicines, the most effective and safe medicines needed in a health system. Coal tar is generally available as a generic medication and over the counter. Coal tar is used in medicated shampoo, soap and ointment.
It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties. It may be applied topically as a treatment for dandruff and psoriasis, and to kill and repel head lice. It may be used in combination with ultraviolet light therapy. Coal tar may be used in two forms: crude coal tar or a coal tar solution, the latter also known as liquor carbonis detergens (LCD). Named brands include Denorex, Balnetar, Psoriasin, Tegrin, T/Gel, and Neutar. When used in the extemporaneous preparation of topical medications, it is supplied in the form of coal tar topical solution USP, which consists of a 20% w/v solution of coal tar in alcohol, with an additional 5% w/v of polysorbate 80 USP; this must then be diluted in an ointment base, such as petrolatum. Construction Coal tar was a component of the first sealed roads. In its original development by Edgar Purnell Hooley, tarmac was tar covered with granite chips. Later the filler used was industrial slag. Today, petroleum-derived binders and sealers are more commonly used. These sealers are used to extend the life and reduce the maintenance cost associated with asphalt pavements, primarily in asphalt road paving, car parks and walkways. Coal tar is incorporated into some parking-lot sealcoat products used to protect the structural integrity of the underlying pavement. Sealcoat products that are coal-tar based typically contain 20 to 35 percent coal-tar pitch. Research shows it is used throughout the United States of America; however, several areas have banned its use in sealcoat products, including the District of Columbia; the city of Austin, Texas; Dane County, Wisconsin; the state of Washington; and several municipalities in Minnesota and elsewhere. Industry In modern times, coal tar is mostly traded as a fuel and for applications such as roofing. The total value of the trade in coal tar is around US$20 billion each year. Its industrial uses include:
As a fuel.
In the manufacture of paints, synthetic dyes (notably tartrazine/Yellow #5), and photographic materials.
For heating or to fire boilers. Like most heavy oils, it must be heated before it will flow easily.
As a source of carbon black.
As a binder in manufacturing graphite; a considerable portion of the materials in "green blocks" is coke oven volatiles (COV). During the baking process of the green blocks as a part of commercial graphite production, most of the coal tar binders are vaporised and are generally burned in an incinerator to prevent release into the atmosphere, as COV and coal tar can be injurious to health.
As a main component of the electrode paste used in electric arc furnaces. Coal tar pitch acts as the binder for a solid filler that can be either coke or calcined anthracite, forming electrode paste, also widely known as Söderberg electrode paste.
As a feedstock for higher-value fractions, such as naphtha, creosote and pitch. In the coal gas era, companies distilled coal tar to separate these out, leading to the discovery of many industrial chemicals. Some British companies included Bonnington Chemical Works, British Tar Products, Lancashire Tar Distillers, Midland Tar Distillers, Newton, Chambers & Company (owners of the Izal brand disinfectant) and Sadlers Chemicals.
Safety Side effects of coal tar products include skin irritation, sun sensitivity, allergic reactions, and skin discoloration. It is unclear if use during pregnancy is safe for the baby, and use during breastfeeding is not typically recommended.
According to the National Psoriasis Foundation, coal tar is a valuable, safe and inexpensive treatment option for millions of people with psoriasis and other scalp or skin conditions. According to the FDA, coal tar concentrations between 0.5% and 5% are considered safe and effective for psoriasis. Cancer Long-term, consistent exposure to coal tar likely increases the risk of non-melanoma skin cancers. Evidence is inconclusive whether medical coal tar, which does not remain on the skin for the long periods seen in occupational exposure, causes cancer, because there is insufficient data to make a judgment. While coal tar consistently causes cancer in cohorts of workers with chronic occupational exposure, animal models, and mechanistic studies, the data on short-term use as medicine in humans has so far failed to show any consistently significant increase in rates of cancer. Coal tar contains many polycyclic aromatic hydrocarbons, and it is believed that their metabolites bind to DNA, damaging it. The PAHs found in coal tar and air pollution induce immunosenescence and cytotoxicity in epidermal cells. It's possible that the skin can repair itself from this damage after short-term exposure to PAHs but not after long-term exposure. Long-term skin exposure to these compounds can produce "tar warts", which can progress to squamous cell carcinoma. Coal tar was one of the first chemical substances proven to cause cancer from occupational exposure, during research in 1775 on the cause of chimney sweeps' carcinoma. Modern studies have shown that working with coal tar pitch, such as during the paving of roads or when working on roofs, increases the risk of cancer. The International Agency for Research on Cancer lists coal tars as Group 1 carcinogens, meaning they directly cause cancer. The U.S. Department of Health and Human Services lists coal tars as known human carcinogens. In response to public health concerns regarding the carcinogenicity of PAHs some municipalities, such as the city of Milwaukee, have banned the use of common coal tar-based road and driveway sealants citing concerns of elevated PAH content in groundwater. Other Coal tar causes increased sensitivity to sunlight, so skin treated with topical coal tar preparations should be protected from sunlight. The residue from the distillation of high-temperature coal tar, primarily a complex mixture of three or more membered condensed ring aromatic hydrocarbons, was listed on 13 January 2010 as a substance of very high concern by the European Chemicals Agency. Regulation Exposure to coal tar pitch volatiles can occur in the workplace by breathing, skin contact, or eye contact. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit) to 0.2 mg/m3 benzene-soluble fraction over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 mg/m3 cyclohexane-extractable fraction over an 8-hour workday. At levels of 80 mg/m3, coal tar pitch volatiles are immediately dangerous to life and health. When used as a medication in the United States, coal tar preparations are considered over-the-counter drug pharmaceuticals and are subject to regulation by the Food and Drug Administration (FDA). See also Coal oil Wood tar References External links Antipsoriatics Coal IARC Group 1 carcinogens Materials World Health Organization essential medicines Wikipedia medicine articles ready to translate Drugs with unknown mechanisms of action
Coal tar
[ "Physics" ]
2,371
[ "Materials", "Matter" ]
7,480
https://en.wikipedia.org/wiki/Cross%20section%20%28physics%29
In physics, the cross section is a measure of the probability that a specific process will take place in a collision of two particles. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. Cross section is typically denoted (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process. When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section (see detailed discussion below). When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section, and by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur. The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics. With light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). Although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section relative to some physical process. For example, plasmonic nanoparticles can have light scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas. 
Collision among gas particles In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by where is the cross section of a two-particle collision (SI unit: m2), is the mean free path between collisions (SI unit: m), is the number density of the target particles (SI unit: m−3). If the particles in the gas can be treated as hard spheres of radius that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles. Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with well-defined mean free path between collisions. Attenuation of a beam of particles If a beam of particles enters a thin layer of material of thickness , the flux of the beam will decrease by according to where is the total cross section of all events, including scattering, absorption, or transformation to another species. The volumetric number density of scattering centers is designated by . Solving this equation exhibits the exponential attenuation of the beam intensity: where is the initial flux, and is the total thickness of the material. For light, this is called the Beer–Lambert law. Differential cross section Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the axis of this coordinate system aligned with the incident beam. The angle is the scattering angle, measured between the incident beam and the scattered beam, and the is the azimuthal angle. The impact parameter is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle . For a given interaction (coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. . The differential angular range of the scattered particle at angle is the solid angle element . The differential cross section is the quotient of these quantities, . It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection. In cylindrically symmetric situations (about the beam axis), the azimuthal angle is not changed by the scattering process, and the differential cross section can be written as . 
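Several displayed formulas in this passage appear to have been lost when the article text was extracted. The following are hedged reconstructions of the standard textbook relations that match the verbal definitions above (σ the cross section, λ the mean free path, n the number density of scatterers, r the hard-sphere radius, Φ the beam flux, x the slab thickness, b the impact parameter, θ the scattering angle, φ the azimuthal angle); they are standard forms, not text recovered from the source.

```latex
% Mean free path of a particle in a gas of scatterers
\lambda = \frac{1}{n\sigma}

% Hard spheres of radius r colliding on contact (centres approach within 2r)
\sigma = \pi (2r)^2

% Attenuation of a beam crossing a thin slab of thickness dx, and its solution
d\Phi = -\Phi\, n\,\sigma\, dx, \qquad \Phi(x) = \Phi_0\, e^{-n\sigma x}

% Differential cross section from the impact-parameter geometry
d\sigma = b\, db\, d\varphi, \qquad d\Omega = \sin\theta\, d\theta\, d\varphi, \qquad
\frac{d\sigma}{d\Omega} = \frac{b}{\sin\theta}\left|\frac{db}{d\theta}\right|
```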
In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle. For scattering of particles of incident flux off a stationary target consisting of many particles, the differential cross section at an angle is related to the flux of scattered particle detection in particles per unit time by Here is the finite angular size of the detector (SI unit: sr), is the number density of the target particles (SI unit: m−3), and is the thickness of the stationary target (SI unit: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle. The total cross section may be recovered by integrating the differential cross section over the full solid angle ( steradians): It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, may be referred to as the integral cross section or total cross section. The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events. The differential cross section is extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus. Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections. Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime. Quantum scattering In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum : where and are the relative coordinates between the projectile and the target. The arrow indicates that this only describes the asymptotic behavior of the wave function when the projectile and target are too far apart for the interaction to have any effect. After scattering takes place it is expected that the wave function takes on the following asymptotic form: where is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions. The full wave function of the system behaves asymptotically as the sum The differential cross section is related to the scattering amplitude: This has the simple interpretation as the probability density for finding the scattered projectile at a given angle. A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. 
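As above, the displayed equations seem to be missing from the extracted text; the standard forms consistent with the surrounding definitions (incident flux F_inc, detector solid angle ΔΩ, target number density n and thickness t, scattering amplitude f) are reconstructed below as a best-effort sketch rather than quoted from the source.

```latex
% Counting rate into a detector of solid angle \Delta\Omega at angles (\theta,\varphi)
F_{\mathrm{out}}(\theta,\varphi) = F_{\mathrm{inc}}\; n\, t\; \Delta\Omega\;
  \frac{d\sigma}{d\Omega}(\theta,\varphi)

% Total cross section as the integral over the full 4\pi steradians
\sigma = \oint_{4\pi} \frac{d\sigma}{d\Omega}\, d\Omega
       = \int_0^{2\pi}\! d\varphi \int_0^{\pi} \frac{d\sigma}{d\Omega}\,\sin\theta\, d\theta

% Time-independent quantum scattering: asymptotic incident and scattered waves
\psi_{\mathrm{in}}(\mathbf{r}) \to e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad
\psi_{\mathrm{sc}}(\mathbf{r}) \to f(\theta,\varphi)\,\frac{e^{ikr}}{r}, \qquad
\frac{d\sigma}{d\Omega} = \left|f(\theta,\varphi)\right|^2
```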
The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles ) depends only on the number of incident particles per unit of time (current of incident particles ), the characteristics of target (for example the number of particles per unit of surface ), and the type of interaction. For we have Relation to the S-matrix If the reduced masses and momenta of the colliding system are , and , before and after the collision respectively, the differential cross section is given by where the on-shell matrix is defined by in terms of the S-matrix. Here is the Dirac delta function. The computation of the S-matrix is the main goal of the scattering theory. Units Although the SI unit of total cross sections is m2, a smaller unit is usually used in practice. In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2. Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr. When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 μm in diameter: as such, it is widely used in meteorology and in the measurement of atmospheric pollution. The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = = 108 b. The sum of the scattering, photoelectric, and pair-production cross-sections (in barns) is charted as the "atomic attenuation coefficient" (narrow-beam), in barns. Scattering of light For light, as in other settings, the scattering cross section for particles is generally different from the geometrical cross section of the particle, and it depends upon the wavelength of light and the permittivity, shape, and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present. In the interaction of light with particles, many processes occur, each with their own cross sections, including absorption, scattering, and photoluminescence. The sum of the absorption and scattering cross sections is sometimes referred to as the attenuation or extinction cross section. The total extinction cross section is related to the attenuation of the light intensity through the Beer–Lambert law, which says that attenuation is proportional to particle concentration: where is the attenuation at a given wavelength , is the particle concentration as a number density, and is the path length. The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Combining the scattering and absorption cross sections in this manner is often necessitated by the inability to distinguish them experimentally, and much research effort has been put into developing models that allow them to be distinguished, the Kubelka-Munk theory being one of the most important in this area. 
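To make the unit conversion and attenuation law above concrete, here is a small Python sketch that converts a cross section quoted in barns to SI, applies the exponential (Beer–Lambert-type) attenuation, and reports the transmittance and natural-log absorbance. The numerical inputs in the example are illustrative, not taken from the text.

```python
import math

BARN_TO_M2 = 1e-28   # 1 b = 1e-28 m^2, as stated in the Units discussion

def transmittance(sigma_barn: float, n_per_m3: float, path_m: float) -> float:
    """Fraction of the beam surviving a slab: exp(-n * sigma * x)."""
    sigma_m2 = sigma_barn * BARN_TO_M2
    return math.exp(-n_per_m3 * sigma_m2 * path_m)

# Illustrative numbers only: a 5 b total cross section, 8.5e28 scatterers per m^3
# (roughly the atomic number density of iron), and 1 cm of material.
T = transmittance(sigma_barn=5.0, n_per_m3=8.5e28, path_m=0.01)
A = -math.log(T)   # natural-log absorbance, i.e. the optical depth n*sigma*x
print(f"transmittance = {T:.3f}, absorbance = {A:.3f}")
# -> transmittance = 0.654, absorbance = 0.425
```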
Cross section and Mie theory Cross sections commonly calculated using Mie theory include efficiency coefficients for extinction , scattering , and Absorption cross sections. These are normalized by the geometrical cross sections of the particle as The cross section is defined by where is the energy flow through the surrounding surface, and is the intensity of the incident wave. For a plane wave the intensity is going to be , where is the impedance of the host medium. The main approach is based on the following. Firstly, we construct an imaginary sphere of radius (surface ) around the particle (the scatterer). The net rate of electromagnetic energy crosses the surface is where is the time averaged Poynting vector. If energy is absorbed within the sphere, otherwise energy is being created within the sphere. We will not consider this case here. If the host medium is non-absorbing, the energy must be absorbed by the particle. We decompose the total field into incident and scattered parts , and the same for the magnetic field . Thus, we can decompose into the three terms , where where , , and . All the field can be decomposed into the series of vector spherical harmonics (VSH). After that, all the integrals can be taken. In the case of a uniform sphere of radius , permittivity , and permeability , the problem has a precise solution. The scattering and extinction coefficients are Where . These are connected as Dipole approximation for the scattering cross section Let us assume that a particle supports only electric and magnetic dipole modes with polarizabilities and (here we use the notation of magnetic polarizability in the manner of Bekshaev et al. rather than the notation of Nieto-Vesperinas et al.) expressed through the Mie coefficients as Then the cross sections are given by and, finally, the electric and magnetic absorption cross sections are and For the case of a no-inside-gain particle, i.e. no energy is emitted by the particle internally (), we have a particular case of the Optical theorem Equality occurs for non-absorbing particles, i.e. for . Scattering of light on extended bodies In the context of scattering light on extended bodies, the scattering cross section, , describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present. In terms of area, the total cross section () is the sum of the cross sections due to absorption, scattering, and luminescence: The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to concentration: , where is the absorbance at a given wavelength , is the concentration as a number density, and is the path length. The extinction or absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance : Relation to physical size There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of radiation used. 
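The efficiency factors introduced in the Mie discussion above quantify exactly this ratio between the optical and geometric cross sections. Since the inline math did not survive extraction, the standard normalizations and Mie series are sketched below in the usual notation (sphere radius a, size parameter x = ka, Mie coefficients a_n and b_n); these are textbook forms, not text recovered from the source.

```latex
% Efficiency factors: cross sections normalised by the geometric cross section
Q_{\mathrm{ext}} = \frac{\sigma_{\mathrm{ext}}}{\pi a^2}, \qquad
Q_{\mathrm{sca}} = \frac{\sigma_{\mathrm{sca}}}{\pi a^2}, \qquad
Q_{\mathrm{abs}} = Q_{\mathrm{ext}} - Q_{\mathrm{sca}}

% Mie series for a uniform sphere, with size parameter x = ka
Q_{\mathrm{sca}} = \frac{2}{x^2}\sum_{n=1}^{\infty}(2n+1)\left(|a_n|^2 + |b_n|^2\right), \qquad
Q_{\mathrm{ext}} = \frac{2}{x^2}\sum_{n=1}^{\infty}(2n+1)\,\operatorname{Re}(a_n + b_n)
```

The wavelength dependence carried by these expressions is what the halo example below illustrates.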
This can be seen when looking at a halo surrounding the Moon on a decently foggy evening: Red light photons experience a larger cross sectional area of water droplets than photons of higher energy. The halo around the Moon thus has a perimeter of red light due to lower energy photons being scattering further from the center of the Moon. Photons from the rest of the visible spectrum are left within the center of the halo and perceived as white light. Meteorological range The scattering cross section is related to the meteorological range : The quantity is sometimes denoted , the scattering coefficient per unit length. Examples Elastic collision of two hard spheres The following equations apply to two hard spheres that undergo a perfectly elastic collision. Let and denote the radii of the scattering center and scattered sphere, respectively. The differential cross section is and the total cross section is In other words, the total scattering cross section is equal to the area of the circle (with radius ) within which the center of mass of the incoming sphere has to arrive for it to be deflected. Rutherford scattering In Rutherford scattering, an incident particle with charge and energy scatters off a fixed particle with charge . The differential cross section is where is the vacuum permittivity. The total cross section is infinite unless a cutoff for small scattering angles is applied. This is due to the long range of the Coulomb potential. Scattering from a 2D circular mirror The following example deals with a beam of light scattering off a circle with radius and a perfectly reflecting boundary. The beam consists of a uniform density of parallel rays, and the beam-circle interaction is modeled within the framework of geometric optics. Because the problem is genuinely two-dimensional, the cross section has unit of length (e.g., metre). Let be the angle between the light ray and the radius joining the reflection point of the ray with the center point of the mirror. Then the increase of the length element perpendicular to the beam is The reflection angle of this ray with respect to the incoming ray is , and the scattering angle is The differential relationship between incident and reflected intensity is The differential cross section is therefore () Its maximum at corresponds to backward scattering, and its minimum at corresponds to scattering from the edge of the circle directly forward. This expression confirms the intuitive expectations that the mirror circle acts like a diverging lens. The total cross section is equal to the diameter of the circle: Scattering from a 3D spherical mirror The result from the previous example can be used to solve the analogous problem in three dimensions, i.e., scattering from a perfectly reflecting sphere of radius . The plane perpendicular to the incoming light beam can be parameterized by cylindrical coordinates and . In any plane of the incoming and the reflected ray we can write (from the previous example): while the impact area element is In spherical coordinates, Together with the trigonometric identity we obtain The total cross section is See also Cross section (geometry) Flow velocity Luminosity (scattering theory) Linear attenuation coefficient Mass attenuation coefficient Neutron cross section Nuclear cross section Gamma ray cross section Partial wave analysis Particle detector Radar cross-section Rutherford scattering Scattering amplitude References Bibliography J. D. Bjorken, S. D. Drell, Relativistic Quantum Mechanics, 1964 P. 
Roman, Introduction to Quantum Theory, 1969 W. Greiner, J. Reinhardt, Quantum Electrodynamics, 1994 R. G. Newton. Scattering Theory of Waves and Particles. McGraw Hill, 1966. External links Nuclear Cross Section Scattering Cross Section IAEA – Nuclear Data Services BNL – National Nuclear Data Center Particle Data Group – The Review of Particle Physics IUPAC Goldbook – Definition: Reaction Cross Section IUPAC Goldbook – Definition: Collision Cross Section ShimPlotWell cross section plotter for nuclear data Atomic physics Physical quantities Dimensional analysis Experimental particle physics Measurement Nuclear physics Particle physics Scattering theory Scattering, absorption and radiative transfer (optics) Scattering Spectroscopy
Cross section (physics)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,140
[ "Physical phenomena", "Physical quantities", "Quantum mechanics", "Spectroscopy", "Instrumental analysis", "Measurement", "Scattering", "Particle physics", " molecular", "Nuclear physics", " and optical physics", "Molecular physics", "Spectrum (physical sciences)", "Quantity", "Size", ...
7,492
https://en.wikipedia.org/wiki/Capability%20Maturity%20Model
The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes. The model's aim is to improve existing software development processes, but it can also be applied to other processes. In 2006, the Software Engineering Institute at Carnegie Mellon University developed the Capability Maturity Model Integration, which has largely superseded the CMM and addresses some of its drawbacks. Overview The Capability Maturity Model was originally developed as a tool for objectively assessing the ability of government contractors' processes to implement a contracted software project. The model is based on the process maturity framework first described in IEEE Software and, later, in the 1989 book Managing the Software Process by Watts Humphrey. It was later published as an article in 1993 and as a book by the same authors in 1994. Though the model comes from the field of software development, it is also used as a model to aid in business processes generally, and has also been used extensively worldwide in government offices, commerce, and industry. History Prior need for software processes In the 1980s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or "best practice" approaches defined. As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and the ambitions for project scale and complexity exceeded the market capability to deliver adequate products within a planned budget. Individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software-development processes. In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the Software Engineering Institute (SEI). Precursor The first application of a staged maturity model to IT was not by CMU/SEI, but rather by Richard L. Nolan, who, in 1973 published the stages of growth model for IT organizations. Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM. Development at Software Engineering Institute Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986 when Humphrey joined the Software Engineering Institute located at Carnegie Mellon University in Pittsburgh, Pennsylvania after retiring from IBM. At the request of the U.S. Air Force he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts. The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. 
Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free". Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMMI has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance. Watts Humphrey's Capability Maturity Model (CMM) was published in 1988 and as a book in 1989, in Managing the Software Process. Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute. The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being published in July 1993. The CMM was published as a book in 1994 by the same authors Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis. Capability Maturity Model Integration The CMMI model's application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models for software development processes, thus the CMMI model has superseded the CMM model, though the CMM model continues to be a general theoretical process capability model used in the public domain. In 2016, the responsibility for CMMI was transferred to the Information Systems Audit and Control Association (ISACA). ISACA subsequently released CMMI v2.0 in 2021. It was upgraded again to CMMI v3.0 in 2023. CMMI now places a greater emphasis on the process architecture which is typically realized as a process diagram. Copies of CMMI are available now only by subscription. Adapted to other processes The CMMI was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of process (e.g., IT service management processes) in IS/IT (and other) organizations. Model topics Maturity models A maturity model can be viewed as a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce required outcomes. A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes. 
Structure The model involves five aspects: Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement. Key Process Areas: a Key Process Area identifies a cluster of related activities that, when performed together, achieve a set of goals considered important. Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area. Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: commitment to perform, ability to perform, activities performed, measurement and analysis, and verifying implementation. Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the area. Levels There are five levels defined along the continuum of the model and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief". Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or undocumented repeat process. Repeatable - the process is at least documented sufficiently such that repeating the same steps may be attempted. Defined - the process is defined/confirmed as a standard business process Capable - the process is quantitatively managed in accordance with agreed-upon metrics. Efficient - process management includes deliberate process optimization/improvement. Within each of these maturity levels are Key Process Areas which characterise that level, and for each such area there are five factors: goals, commitment, ability, measurement, and verification. These are not necessarily unique to CMMI, representing — as they do — the stages that organizations must go through on the way to becoming mature. The model provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible. Level 1 - Initial It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. (Example - a surgeon performing a new operation a small number of times - the levels of negative outcome are not known). Level 2 - Repeatable It is characteristic of this level of maturity that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress. Level 3 - Defined It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place. 
The processes may not yet have been used systematically or repeatedly enough for the users to become competent or for the process to be validated in a range of situations. This could be considered a developmental stage: with use in a wider range of conditions and growing user competence, the process can develop to the next level of maturity. Level 4 - Managed (Capable) It is characteristic of processes at this level that, using process metrics, effective achievement of the process objectives can be evidenced across a range of operational conditions. The suitability of the process in multiple environments has been tested and the process refined and adapted. Process users have experienced the process in multiple and varied conditions, and are able to demonstrate competence. The process maturity enables adaptations to particular projects without measurable losses of quality or deviations from specifications. Process capability is established from this level. (Example - a surgeon performing an operation hundreds of times with levels of negative outcome approaching zero). Level 5 - Optimizing (Efficient) It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes and improvements. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives. Between 2008 and 2019, about 12% of appraisals given were at maturity levels 4 and 5. Critique The model was originally intended to evaluate the ability of government contractors to perform a software project. It has been used for and may be suited to that purpose, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development. Software process framework The documented software process framework is intended to guide those wishing to assess an organization's or project's consistency with the Key Process Areas. For each maturity level there are five checklist types: Policy: describes the policy contents and KPA goals recommended by the Key Process Areas. Standard: describes the recommended content of select work products described in the Key Process Areas. Process: describes the process information content recommended by the Key Process Areas; these are refined into checklists for roles, entry criteria, inputs, activities, outputs, exit criteria, reviews and audits, work products managed and controlled, measurements, documented procedures, training, and tools. Procedure: describes the recommended content of documented procedures described in the Key Process Areas. Level overview: provides an overview of an entire maturity level.
These are further refined into checklists for: Key Process Areas purposes, goals, policies, and standards; process descriptions; procedures; training; tools; reviews and audits; work products; and measurements. See also Capability Immaturity Model Capability Maturity Model Integration People Capability Maturity Model Testing Maturity Model References External links CMMI Institute Architecture Maturity Models at The Open Group ISACA Software development process Maturity models Information technology management 1986 introductions
Capability Maturity Model
[ "Technology" ]
2,552
[ "Information technology", "Information technology management" ]
7,499
https://en.wikipedia.org/wiki/RDX
RDX (abbreviation of "Research Department eXplosive" or Royal Demolition eXplosive) or hexogen, among other names, is an organic compound with the formula (CH2N2O2)3. It is white, odorless, and tasteless, widely used as an explosive. Chemically, it is classified as a nitroamine alongside HMX, which is a more energetic explosive than TNT. It was used widely in World War II and remains common in military applications. RDX is often used in mixtures with other explosives and plasticizers or phlegmatizers (desensitizers); it is the explosive agent in C-4 plastic explosive and a key ingredient in Semtex. It is stable in storage and is considered one of the most energetic and brisant of the military high explosives, with a relative effectiveness factor of 1.60. Name RDX is also less commonly known as cyclonite, hexogen (particularly in Russian, French and German-influenced languages), T4, and, chemically, as cyclotrimethylene trinitramine. In the 1930s, the Royal Arsenal, Woolwich, started investigating cyclonite to use against German U-boats that were being built with thicker hulls. The goal was to develop an explosive more energetic than TNT. For security reasons, Britain termed cyclonite "Research Department Explosive" (R.D.X.). The term RDX appeared in the United States in 1946. The first public reference in the United Kingdom to the name RDX, or R.D.X., to use the official title, appeared in 1948; its authors were the managing chemist, ROF Bridgwater, the chemical research and development department, Woolwich, and the director of Royal Ordnance Factories, Explosives. Usage RDX was widely used during World War II, often in explosive mixtures with TNT such as Torpex, Composition B, Cyclotols, and H6. RDX was used in one of the first plastic explosives. The bouncing bomb depth charges used in the "Dambusters Raid" each contained of Torpex; The Tallboy and Grand Slam bombs designed by Barnes Wallis also used Torpex. RDX is believed to have been used in many bomb plots, including terrorist plots. RDX is the base for a number of common military explosives: Composition A: Granular explosive consisting of RDX and plasticizing wax, such as composition A-3 (91% RDX coated with 9% wax) and composition A-5 (98.5 to 99.1% RDX coated with 0.95 to 1.54% stearic acid). Composition B: Castable mixtures of 59.5% RDX and 39.4% TNT with 1% wax as desensitizer. Composition C: The original composition C was used in World War II, but there have been subsequent variations including C-2, C-3, and C-4. C-4 consists of RDX (91%); a plasticizer, dioctyl sebacate (5.3%); and a binder, which is usually polyisobutylene (2.1%); and oil (1.6%). 
Composition CH-6: 97.5% RDX, 1.5% calcium stearate, 0.5% polyisobutylene, and 0.5% graphite DBX (Depth Bomb Explosive): Castable mixture consisting of 21% RDX, 21% ammonium nitrate, 40% TNT, and 18% powdered aluminium, developed during World War II, it was to be used in underwater munitions as a substitute for Torpex employing only half the amount of then-scarce RDX, as the supply of RDX became more adequate, however, the mixture was shelved Cyclotol: Castable mixture of RDX (50–80%) with TNT (20–50%) designated by the amount of RDX/TNT, such as Cyclotol 70/30 HBX: Castable mixtures of RDX, TNT, powdered aluminium, and D-2 wax with calcium chloride H-6: Castable mixture of RDX, TNT, powdered aluminum, and paraffin wax (used as a phlegmatizing agent) PBX: RDX is also used as a major component of many polymer-bonded explosives (PBX); RDX-based PBXs typically consist of RDX and at least thirteen different polymer/co-polymer binders. Examples of RDX-based PBX formulations include, but are not limited to: PBX-9007, PBX-9010, PBX-9205, PBX-9407, PBX-9604, PBXN-106, PBXN-3, PBXN-6, PBXN-10, PBXN-201, PBX-0280, PBX Type I, PBXC-116, PBXAF-108, etc. Semtex (trade name): Plastic demolition explosive containing RDX and PETN as major energetic components Torpex: 42% RDX, 40% TNT, and 18% powdered aluminium; the mixture was designed during World War II and used mainly in underwater ordnance Outside military applications, RDX is also used in controlled demolition to raze structures. The demolition of the Jamestown Bridge in the U.S. state of Rhode Island was one instance where RDX shaped charges were used to remove the span. Synthesis RDX is classified by chemists as a hexahydro-1,3,5-triazine derivative. In laboratory settings (industrial routes are described below separately) it is obtained by treating hexamine with white fuming nitric acid. This nitrolysis reaction also produces methylene dinitrate, ammonium nitrate, and water as by-products. The overall reaction is: C6H12N4 + 10 HNO3 → C3H6N6O6 + 3 CH2(ONO2)2 + NH4NO3 + 3 H2O The conventional cheap nitration agent, called "mixed acid", cannot be used for RDX synthesis because concentrated sulfuric acid conventionally used to stimulate the nitronium ion formation decomposes hexamine into formaldehyde and ammonia. Modern syntheses employ hexahydro triacyl triazine as it avoids formation of HMX. History RDX was used by both sides in World War II. The US produced about per month during WWII and Germany about per month. RDX had the major advantages of possessing greater explosive force than TNT and required no additional raw materials for its manufacture. Thus, it was also extensively used in World War I Germany RDX was reported in 1898 by Georg Friedrich Henning (1863-1945), who obtained a German patent for its manufacture by nitrolysis of hexamine (hexamethylenetetramine) with concentrated nitric acid. In this patent, only the medical properties of RDX were mentioned. During WWI, Heinrich Brunswig (1865-1946) at the private military-industrial laboratory (Center for Scientific-Technical Research) in Neubabelsberg studied the compound more closely and in June 1916 filed two patent applications, one for its use in smokeless propellants and another for its use as an explosive, noting its excellent characteristics. The German military hadn't considered its adoption during the war due to the expense of production but started investigating its use in 1920, referring to it as hexogen. 
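Returning briefly to the synthesis chemistry: the overall nitrolysis equation quoted above is atom-balanced, and the short Python sketch below verifies this mechanically using only the species and coefficients given in the text.

```python
from collections import Counter

# Species in the quoted nitrolysis equation, written as element counts.
hexamine    = Counter({"C": 6, "H": 12, "N": 4})         # C6H12N4
nitric_acid = Counter({"H": 1, "N": 1, "O": 3})          # HNO3
rdx         = Counter({"C": 3, "H": 6, "N": 6, "O": 6})  # C3H6N6O6
mdn         = Counter({"C": 1, "H": 2, "N": 2, "O": 6})  # CH2(ONO2)2, methylene dinitrate
amm_nitrate = Counter({"N": 2, "H": 4, "O": 3})          # NH4NO3
water       = Counter({"H": 2, "O": 1})                  # H2O

def totals(side):
    """Sum element counts over (stoichiometric coefficient, species) pairs."""
    out = Counter()
    for coeff, species in side:
        for elem, n in species.items():
            out[elem] += coeff * n
    return out

lhs = totals([(1, hexamine), (10, nitric_acid)])
rhs = totals([(1, rdx), (3, mdn), (1, amm_nitrate), (3, water)])
print(lhs == rhs, dict(lhs))
# -> True {'C': 6, 'H': 22, 'N': 14, 'O': 30}
```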
Research and development findings were not published further until Edmund von Herz, described as an Austrian and later a German citizen, rediscovered the explosive properties of RDX and applied for an Austrian patent in 1919, obtaining a British one in 1921 and an American one in 1922. All patents described the synthesis of the compound by nitrating hexamethylenetetramine. The British patent claims included the manufacture of RDX by nitration, its use with or without other explosives, its use as a bursting charge and as an initiator. The US patent claim was for the use of a hollow explosive device containing RDX and a detonator cap containing it. Herz was also the first to identify the cyclic nature of the molecule. In the 1930s, Germany developed improved production methods. During World War II, Germany used the code names W Salt, SH Salt, K-method, the E-method, and the KA-method. These names represented the identities of the developers of the various chemical routes to RDX. The W-method was developed by Wolfram in 1934 and gave RDX the code name "W-Salz". It used sulfamic acid, formaldehyde, and nitric acid. SH-Salz (SH salt) was from Schnurr, who developed a batch-process in 1937–38 based on nitrolysis of hexamine. The K-method, from Knöffler, involved addition of ammonium nitrate to the hexamine/nitric acid process. The E-method, developed by Ebele, proved to be identical to the Ross and Schiessler process described below. The KA-method, also developed by Knöffler, turned out to be identical to the Bachmann process described below. The explosive shells fired by the MK 108 cannon and the warhead of the R4M rocket, both used in Luftwaffe fighter aircraft as offensive armament, both used hexogen as their explosive base. UK In the United Kingdom (UK), RDX was manufactured from 1933 by the research department in a pilot plant at the Royal Arsenal in Woolwich, London, a larger pilot plant being built at the RGPF Waltham Abbey just outside London in 1939. In 1939 a twin-unit industrial-scale plant was designed to be installed at a new site, ROF Bridgwater, away from London and production of RDX started at Bridgwater on one unit in August 1941. The ROF Bridgwater plant brought in ammonia and methanol as raw materials: the methanol was converted to formaldehyde and some of the ammonia converted to nitric acid, which was concentrated for RDX production. The rest of the ammonia was reacted with formaldehyde to produce hexamine. The hexamine plant was supplied by Imperial Chemical Industries. It incorporated some features based on data obtained from the United States (US). RDX was produced by continually adding hexamine and concentrated nitric acid to a cooled mixture of hexamine and nitric acid in the nitrator. The RDX was purified and processed for its intended use; recovery and reuse of some methanol and nitric acid also was carried out. The hexamine-nitration and RDX purification plants were duplicated (i.e. twin-unit) to provide some insurance against loss of production due to fire, explosion, or air attack. The United Kingdom and British Empire were fighting without allies against Nazi Germany until the middle of 1941 and had to be self-sufficient. At that time (1941), the UK had the capacity to produce (160,000 lb) of RDX per week; both Canada, an allied country and self-governing dominion within the British Empire, and the US were looked upon to supply ammunition and explosives, including RDX. 
By 1942 the Royal Air Force's annual requirement was forecast to be of RDX, much of which came from North America (Canada and the US). Canada A different method of production to the Woolwich process was found and used in Canada, possibly at the McGill University department of chemistry. This was based on reacting paraformaldehyde and ammonium nitrate in acetic anhydride. A UK patent application was made by Robert Walter Schiessler (Pennsylvania State University) and James Hamilton Ross (McGill, Canada) in May 1942; the UK patent was issued in December 1947. Gilman states that the same method of production had been independently discovered by Ebele in Germany prior to Schiessler and Ross, but that this was not known by the Allies. Urbański provides details of five methods of production, and he refers to this method as the (German) E-method. UK, US, and Canadian production and development At the beginning of the 1940s, the major US explosive manufacturers, E. I. du Pont de Nemours & Company and Hercules, had several decades of experience of manufacturing trinitrotoluene (TNT) and had no wish to experiment with new explosives. US Army Ordnance held the same viewpoint and wanted to continue using TNT. RDX had been tested by Picatinny Arsenal in 1929, and it was regarded as too expensive and too sensitive. The Navy proposed to continue using ammonium picrate. In contrast, the National Defense Research Committee (NDRC), who had visited The Royal Arsenal, Woolwich, thought new explosives were necessary. James B. Conant, chairman of Division B, wished to involve academic research into this area. Conant therefore set up an experimental explosives research laboratory at the Bureau of Mines, Bruceton, Pennsylvania, using Office of Scientific Research and Development (OSRD) funding. Woolwich method In 1941, the UK's Tizard Mission visited the US Army and Navy departments and part of the information handed over included details of the "Woolwich" method of manufacture of RDX and its stabilisation by mixing it with beeswax. The UK was asking that the US and Canada, combined, supply (440,000 lb) of RDX per day. A decision was taken by William H. P. Blandy, chief of the Bureau of Ordnance, to adopt RDX for use in mines and torpedoes. Given the immediate need for RDX, the US Army Ordnance, at Blandy's request, built a plant that copied the equipment and process used at Woolwich. The result was the Wabash River Ordnance Works run by E. I. du Pont de Nemours & Company. At that time, this works had the largest nitric acid plant in the world. The Woolwich process was expensive: it needed of strong nitric acid for every pound of RDX. By early 1941, the NDRC was researching new processes. The Woolwich or direct nitration process has at least two serious disadvantages: (1) it used large amounts of nitric acid and (2) at least one-half of the formaldehyde is lost. One mole of hexamethylenetetramine could produce at most one mole of RDX. At least three laboratories with no previous explosive experience were instructed to develop better production methods for RDX; they were based at Cornell, Michigan, and Pennsylvania State universities. Werner Emmanuel Bachmann, from Michigan, successfully developed the "combination process" by combining the Ross and Schiessler process used in Canada (aka the German E-method) with direct nitration. The combination process required large quantities of acetic anhydride instead of nitric acid in the old British "Woolwich process". 
Ideally, the combination process could produce two moles of RDX from each mole of hexamethylenetetramine. The expanded production of RDX could not continue to rely on the use of natural beeswax to desensitize the explosive as in the original British composition (RDX/BWK-91/9). A substitute stabilizer based on petroleum was developed at the Bruceton Explosives Research Laboratory in Pennsylvania, with the resulting explosive designated Composition A-3. Bachmann process The National Defence Research Committee (NDRC) instructed three companies to develop pilot plants. They were the Western Cartridge Company, E. I. du Pont de Nemours & Company, and Tennessee Eastman Company, part of Eastman Kodak. At the Eastman Chemical Company (TEC), a leading manufacturer of acetic anhydride, Werner Emmanuel Bachmann developed a continuous-flow process for RDX utilizing an ammonium nitrate/nitric acid mixture as a nitrating agent in a medium of acetic acid and acetic anhydride. RDX was crucial to the war effort and the current batch-production process was too slow. In February 1942, TEC began producing small amounts of RDX at its Wexler Bend pilot plant, which led to the US government authorizing TEC to design and build Holston Ordnance Works (H.O.W.) in June 1942. By April 1943, RDX was being manufactured there. At the end of 1944, the Holston plant and the Wabash River Ordnance Works, which used the Woolwich process, were producing (50 million pounds) of Composition B per month. The Bachmann process yields both RDX and HMX, with the major product determined by the specific reaction conditions. Military compositions The United Kingdom's intention in World War II was to use "desensitised" RDX. In the original Woolwich process, RDX was phlegmatized with beeswax, but later paraffin wax was used, based on the work carried out at Bruceton. In the event the UK was unable to obtain sufficient RDX to meet its needs, some of the shortfall was met by substituting amatol, a mixture of ammonium nitrate and TNT. Karl Dönitz was reputed to have claimed that "an aircraft can no more kill a U-boat than a crow can kill a mole". Nonetheless, by May 1942 Wellington bombers began to deploy depth charges containing Torpex, a mixture of RDX, TNT, and aluminium, which had up to 50 percent more destructive power than TNT-filled depth charges. Considerable quantities of the RDX–TNT mixture were produced at the Holston Ordnance Works, with Tennessee Eastman developing an automated mixing and cooling process based around the use of stainless steel conveyor belts. Terrorism A Semtex bomb was used in the Pan Am Flight 103 (known also as the Lockerbie) bombing in 1988. A belt laden with of RDX explosives tucked under the dress of the assassin was used in the assassination of former Indian prime minister Rajiv Gandhi in 1991. The 1993 Bombay bombings used RDX placed into several vehicles as bombs. RDX was the main component used for the 2006 Mumbai train bombings and the Jaipur bombings in 2008. It also is believed to be the explosive used in the 2010 Moscow Metro bombings. Traces of RDX were found on pieces of wreckage from 1999 Russian apartment bombings and 2004 Russian aircraft bombings. FSB reports on the bombs used in the 1999 apartment bombings indicated that while RDX was not a part of the main charge, each bomb contained plastic explosive used as a booster charge. 
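As a trivial worked example of the Composition B proportions quoted earlier (59.5% RDX, 39.4% TNT, 1% wax desensitizer), the Python sketch below scales them to an arbitrary batch mass; the 1,000 kg batch size is an illustrative assumption, not a figure from the source.

```python
# Illustrative only: scale the Composition B proportions quoted earlier
# (59.5% RDX, 39.4% TNT, 1% wax) to an arbitrary batch mass.
COMPOSITION_B = {"RDX": 0.595, "TNT": 0.394, "wax": 0.01}

def batch_masses(total_kg: float) -> dict:
    return {component: round(total_kg * fraction, 1)
            for component, fraction in COMPOSITION_B.items()}

print(batch_masses(1000.0))
# -> {'RDX': 595.0, 'TNT': 394.0, 'wax': 10.0}
# (The published figures sum to 99.9%, so a real batch sheet would renormalize.)
```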
Ahmed Ressam, the al-Qaeda Millennium Bomber, used a small quantity of RDX as one of the components in the bomb that he prepared to detonate in Los Angeles International Airport on New Year's Eve 1999–2000; the bomb could have produced a blast forty times greater than that of a devastating car bomb. In July 2012, the Kenyan government arrested two Iranian nationals and charged them with illegal possession of of RDX. According to the Kenyan Police, the Iranians planned to use the RDX for "attacks on Israeli, US, UK and Saudi Arabian targets". RDX was used in the assassination of Lebanese Prime Minister Rafic Hariri on February 14, 2005. In the 2019 Pulwama attack in India, 250 kg of high-grade RDX was used by Jaish-e-Mohammed. The attack resulted in the deaths of 44 Central Reserve Police Force (CRPF) personnel as well as the attacker. Two letter bombs sent to journalists in Ecuador were disguised as USB flash drives which contained RDX that would detonate when plugged in. Stability RDX has a high nitrogen content and a high oxygen to carbon ratio, (O:C ratio), both of which indicate its explosive potential for formation of N2 and CO2. RDX undergoes a deflagration to detonation transition (DDT) in confinement and certain circumstances. The velocity of detonation of RDX at a density of 1.80 g/cm3 is 8750 m/s. It starts to decompose at approximately 170 °C and melts at 204 °C. At room temperature, it is very stable. It burns rather than explodes. It detonates only with a detonator, being unaffected even by small arms fire. This property makes it a useful military explosive. It is less sensitive than pentaerythritol tetranitrate (PETN). Under normal conditions, RDX has a Figure of Insensitivity of exactly 80 (RDX defines the reference point). RDX sublimes in vacuum, which restricts or prevents its use in some applications. RDX, when exploded in air, has about 1.5 times the explosive energy of TNT per unit weight and about 2.0 times per unit volume. RDX is insoluble in water, with solubility 0.05975 g/L at temperature of 25 °C. Toxicity The substance's toxicity has been studied for many years. RDX has caused convulsions (seizures) in military field personnel ingesting it, and in munition workers inhaling its dust during manufacture. At least one fatality was attributed to RDX toxicity in a European munitions manufacturing plant. During the Vietnam War, at least 40 American soldiers were hospitalized with composition C-4 (which is 91% RDX) intoxication from December 1968 to December 1969. C-4 was frequently used by soldiers as a fuel to heat food, and the food was generally mixed by the same knife that was used to cut C-4 into small pieces prior to burning. Soldiers were exposed to C-4 either due to inhaling the fumes, or due to ingestion, made possible by many small particles adhering to the knife having been deposited into the cooked food. The symptom complex involved nausea, vomiting, generalized seizures, and prolonged postictal confusion and amnesia; which indicated toxic encephalopathy. Oral toxicity of RDX depends on its physical form; in rats, the LD50 was found to be 100 mg/kg for finely powdered RDX, and 300 mg/kg for coarse, granular RDX. A case has been reported of a human child hospitalized in status epilepticus following the ingestion of 84.82 mg/kg dose of RDX (or 1.23 g for the patient's body weight of 14.5 kg) in the "plastic explosive" form. The substance has low to moderate toxicity with a possible human carcinogen classification. 
Further research is ongoing, however, and this classification may be revised by the United States Environmental Protection Agency (EPA). Remediating RDX-contaminated water supplies has proven successful. RDX is known to be a kidney toxin in humans and is highly toxic to earthworms and plants; army testing ranges where RDX was used heavily may therefore need to undergo environmental remediation. Research published in late 2017 raised concerns that the issue has not been addressed adequately by U.S. officials. Civilian use RDX has been used as a rodenticide because of its toxicity. Biodegradation RDX is degraded by the organisms in sewage sludge as well as by the fungus Phanerochaete chrysosporium. Both wild and transgenic plants can phytoremediate explosives from soil and water. One by-product of the environmental decomposition is R-salt. Alternatives FOX-7 is considered to be approximately a 1-to-1 replacement for RDX in almost all applications.
RDX
[ "Chemistry", "Biology" ]
5,107
[ "Explosive chemicals", "Biocides", "Rodenticides" ]
7,512
https://en.wikipedia.org/wiki/Concentration
In chemistry, concentration is the abundance of a constituent divided by the total volume of a mixture. Several types of mathematical description can be distinguished: mass concentration, molar concentration, number concentration, and volume concentration. The concentration can refer to any kind of chemical mixture, but most frequently refers to solutes and solvents in solutions. The molar (amount) concentration has variants, such as normal concentration and osmotic concentration. Dilution is reduction of concentration, e.g. by adding solvent to a solution. The verb to concentrate means to increase concentration, the opposite of dilute. Etymology Concentration-, concentratio, action or an act of coming together at a single place, bringing to a common center, was used in post-classical Latin in 1550 or earlier, similar terms attested in Italian (1589), Spanish (1589), English (1606), French (1632). Qualitative description Often in informal, non-technical language, concentration is described in a qualitative way, through the use of adjectives such as "dilute" for solutions of relatively low concentration and "concentrated" for solutions of relatively high concentration. To concentrate a solution, one must add more solute (for example, alcohol), or reduce the amount of solvent (for example, water). By contrast, to dilute a solution, one must add more solvent, or reduce the amount of solute. Unless two substances are miscible, there exists a concentration at which no further solute will dissolve in a solution. At this point, the solution is said to be saturated. If additional solute is added to a saturated solution, it will not dissolve, except in certain circumstances, when supersaturation may occur. Instead, phase separation will occur, leading to coexisting phases, either completely separated or mixed as a suspension. The point of saturation depends on many variables, such as ambient temperature and the precise chemical nature of the solvent and solute. Concentrations are often called levels, reflecting the mental schema of levels on the vertical axis of a graph, which can be high or low (for example, "high serum levels of bilirubin" are concentrations of bilirubin in the blood serum that are greater than normal). Quantitative notation There are four quantities that describe concentration: Mass concentration The mass concentration is defined as the mass of a constituent divided by the volume of the mixture : The SI unit is kg/m3 (equal to g/L). Molar concentration The molar concentration is defined as the amount of a constituent (in moles) divided by the volume of the mixture : The SI unit is mol/m3. However, more commonly the unit mol/L (= mol/dm3) is used. Number concentration The number concentration is defined as the number of entities of a constituent in a mixture divided by the volume of the mixture : The SI unit is 1/m3. Volume concentration The volume concentration (not to be confused with volume fraction) is defined as the volume of a constituent divided by the volume of the mixture : Being dimensionless, it is expressed as a number, e.g., 0.18 or 18%. There seems to be no standard notation in the English literature. The letter used here is normative in German literature (see Volumenkonzentration). Related quantities Several other quantities can be used to describe the composition of a mixture. These should not be called concentrations. Normality Normality is defined as the molar concentration divided by an equivalence factor . 
Since the definition of the equivalence factor depends on context (which reaction is being studied), the International Union of Pure and Applied Chemistry and National Institute of Standards and Technology discourage the use of normality. Molality The molality of a solution is defined as the amount of a constituent (in moles) divided by the mass of the solvent (not the mass of the solution): The SI unit for molality is mol/kg. Mole fraction The mole fraction is defined as the amount of a constituent (in moles) divided by the total amount of all constituents in a mixture : The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole fractions. Mole ratio The mole ratio is defined as the amount of a constituent divided by the total amount of all other constituents in a mixture: If is much smaller than , the mole ratio is almost identical to the mole fraction. The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole ratios. Mass fraction The mass fraction is the fraction of one substance with mass to the mass of the total mixture , defined as: The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass fractions. Mass ratio The mass ratio is defined as the mass of a constituent divided by the total mass of all other constituents in a mixture: If is much smaller than , the mass ratio is almost identical to the mass fraction. The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass ratios. Dependence on volume and temperature Concentration depends on the variation of the volume of the solution with temperature, due mainly to thermal expansion. Table of concentrations and related quantities See also References External links
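To make the distinctions between these quantities concrete, the short sketch below computes several of them for a simple aqueous solution. It is an illustrative example only, not part of the original article; the dissolved mass, solution volume, solution mass, and the molar masses used (NaCl ≈ 58.44 g/mol, H2O ≈ 18.015 g/mol) are assumed values chosen for the calculation.

```python
# Illustrative sketch (assumed values): 5.00 g of NaCl dissolved in water,
# giving 0.500 L of solution with a total solution mass of 502.0 g.
M_NaCl = 58.44   # g/mol, molar mass of NaCl
M_H2O = 18.015   # g/mol, molar mass of water

mass_solute_g = 5.00        # g of NaCl (assumed)
volume_solution_L = 0.500   # L of solution (assumed)
mass_solution_g = 502.0     # g of solution (assumed)
mass_solvent_g = mass_solution_g - mass_solute_g

n_solute = mass_solute_g / M_NaCl    # amount of NaCl, mol
n_solvent = mass_solvent_g / M_H2O   # amount of water, mol

mass_concentration = mass_solute_g / volume_solution_L   # g/L (per volume of mixture)
molar_concentration = n_solute / volume_solution_L       # mol/L (per volume of mixture)
molality = n_solute / (mass_solvent_g / 1000.0)          # mol/kg (per mass of solvent)
mole_fraction = n_solute / (n_solute + n_solvent)        # dimensionless
mass_fraction = mass_solute_g / mass_solution_g          # dimensionless

print(f"mass concentration  = {mass_concentration:.2f} g/L")
print(f"molar concentration = {molar_concentration:.4f} mol/L")
print(f"molality            = {molality:.4f} mol/kg")
print(f"mole fraction       = {mole_fraction:.5f}")
print(f"mass fraction       = {mass_fraction:.5f}")
```

Note how the sketch keeps the distinction emphasized above: molar concentration divides by the volume of the whole mixture, whereas molality divides by the mass of the solvent alone.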
Concentration
[ "Chemistry" ]
1,112
[ "Concentration" ]
7,519
https://en.wikipedia.org/wiki/Convolution
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions ( and ) that produces a third function (). The term convolution refers to both the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other. Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution differs from cross-correlation () only in that either or is reflected about the y-axis in convolution; thus it is a cross-correlation of and , or and . For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations. The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures). For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at .) A discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing. Computing the inverse of the convolution operation is known as deconvolution. Definition The convolution of and is written , denoting the operator with the symbol . It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform: An equivalent definition is (see commutativity): While the symbol is used above, it need not represent the time domain. At each , the convolution formula can be described as the area under the function weighted by the function shifted by the amount . As changes, the weighting function emphasizes different parts of the input function ; If is a positive value, then is equal to that slides or is shifted along the -axis toward the right (toward ) by the amount of , while if is a negative value, then is equal to that slides or is shifted toward the left (toward ) by the amount of . For functions , supported on only (i.e., zero for negative arguments), the integration limits can be truncated, resulting in: For the multi-dimensional formulation of convolution, see domain of definition (below). Notation A common engineering notational convention is: which has to be interpreted carefully to avoid confusion. For instance, is equivalent to , but is in fact equivalent to . Relations with other transforms Given two functions and with bilateral Laplace transforms (two-sided Laplace transform) and respectively, the convolution operation can be defined as the inverse Laplace transform of the product of and . More precisely, Let , then Note that is the bilateral Laplace transform of . 
A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. Visual explanation Historical developments One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. Also, an expression of the type: is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of 3 volumes of the encyclopedic series: , Chez Courcier, Paris, 1797–1800. Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral. Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses. The operation: is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913. Circular convolution When a function is periodic, with period , then for functions, , such that exists, the convolution is also periodic and identical to: where is an arbitrary choice. The summation is called a periodic summation of the function . When is a periodic summation of another function, , then is known as a circular or cyclic convolution of and . And if the periodic summation above is replaced by , the operation is called a periodic convolution of and . Discrete convolution For complex-valued functions and defined on the set of integers, the discrete convolution of and is given by: or equivalently (see commutativity) by: The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. 
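As an illustration of the discrete convolution and the Cauchy-product statement just made, the minimal sketch below (assuming NumPy is available; the sequences themselves are arbitrary example values) convolves two short coefficient sequences and checks that the result equals the coefficients of the product of the corresponding polynomials.

```python
import numpy as np

# Coefficients of f(x) = 1 + 2x + 3x^2 and g(x) = 4 + 5x, lowest degree first.
f = np.array([1, 2, 3])
g = np.array([4, 5])

# Discrete (linear) convolution: (f * g)[n] = sum_k f[k] g[n - k].
conv = np.convolve(f, g)

# Coefficients of the polynomial product (1 + 2x + 3x^2)(4 + 5x),
# expanded by hand: 4 + 13x + 22x^2 + 15x^3.
product_coeffs = np.array([4, 13, 22, 15])

print(conv)                                   # [ 4 13 22 15]
print(np.array_equal(conv, product_coeffs))   # True
```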
Thus when has finite support in the set (representing, for instance, a finite impulse response), a finite summation may be used: Circular discrete convolution When a function is periodic, with period then for functions, such that exists, the convolution is also periodic and identical to: The summation on is called a periodic summation of the function If is a periodic summation of another function, then is known as a circular convolution of and When the non-zero durations of both and are limited to the interval   reduces to these common forms: The notation for cyclic convolution denotes convolution over the cyclic group of integers modulo . Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. Fast convolution algorithms In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (; ). requires arithmetic operations per output value and operations for outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O( log ) complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT. It significantly speeds up 1D, 2D, and 3D convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations. Domain of definition The convolution of two complex-valued functions on is itself a complex-valued function on , defined by: and is well-defined only if and decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in at infinity can be easily offset by sufficiently rapid decay in . The question of existence thus may involve different conditions on and : Compactly supported functions If and are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous . More generally, if either function (say ) is compactly supported and the other is locally integrable, then the convolution is well-defined and continuous. 
Convolution of and is also well defined when both functions are locally square integrable on and supported on an interval of the form (or both supported on ). Integrable functions The convolution of and exists if and are both Lebesgue integrable functions in (), and in this case is also integrable . This is a consequence of Tonelli's theorem. This is also true for functions in , under the discrete convolution, or more generally for the convolution on any group. Likewise, if ()  and  ()  where ,  then  (),  and In the particular case , this shows that is a Banach algebra under the convolution (and equality of the two sides holds if and are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable spaces. Specifically, if satisfy: then so that the convolution is a continuous bilinear mapping from to . The Young inequality for convolution is also true in other contexts (circle group, convolution on ). The preceding inequality is not sharp on the real line: when , there exists a constant such that: The optimal value of was discovered in 1975 and independently in 1976, see Brascamp–Lieb inequality. A stronger estimate is true provided : where is the weak norm. Convolution also defines a bilinear continuous map for , owing to the weak Young inequality: Functions of rapid decay In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f∗g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f∗g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution . Distributions If f is a smooth function that is compactly supported and g is a distribution, then f∗g is a smooth function defined by More generally, it is possible to extend the definition of the convolution in a unique way with the same as f above, so that the associative law remains valid in the case where f is a distribution, and g a compactly supported distribution . Measures The convolution of any two Borel measures μ and ν of bounded variation is the measure defined by In particular, where is a measurable set and is the indicator function of . This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions. Properties Algebraic properties The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity . Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras. 
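As a discrete analogue of the integrability results above, the following sketch (illustrative only, assuming NumPy) checks the p = 1 case of Young's inequality for two short sequences, ||f * g||_1 <= ||f||_1 ||g||_1, together with the remark that equality holds when both sequences are non-negative.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(5)
g = rng.standard_normal(7)

conv = np.convolve(f, g)

# Young's inequality in the l1 (sequence) setting: ||f * g||_1 <= ||f||_1 ||g||_1.
lhs = np.abs(conv).sum()
rhs = np.abs(f).sum() * np.abs(g).sum()
print(lhs <= rhs + 1e-12)   # True

# Equality when f and g are non-negative: sum of the convolution = product of the sums.
f_pos, g_pos = np.abs(f), np.abs(g)
print(np.isclose(np.convolve(f_pos, g_pos).sum(), f_pos.sum() * g_pos.sum()))   # True
```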
Commutativity Proof: By definition: Changing the variable of integration to the result follows. Associativity Proof: This follows from using Fubini's theorem (i.e., double integrals can be evaluated as iterated integrals in either order). Distributivity Proof: This follows from linearity of the integral. Associativity with scalar multiplication for any real (or complex) number . Multiplicative identity No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unitary impulse, centered at zero) or, at the very least (as is the case of L1) admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically, where δ is the delta distribution. Inverse element Some distributions S have an inverse element S−1 for the convolution which then must satisfy from which an explicit formula for S−1 may be obtained.The set of invertible distributions forms an abelian group under the convolution. Complex conjugation Time reversal If    then   Proof (using convolution theorem): Relationship with differentiation Proof: Relationship with integration If and then Integration If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals: This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem. Differentiation In the one-variable case, where is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative: A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total. These identities hold for example under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function, These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution. In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship: Convolution theorem The convolution theorem states that where denotes the Fourier transform of . Convolution in other types of transformations Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform. Convolution on matrices If is the Fourier transform matrix, then , where is face-splitting product, denotes Kronecker product, denotes Hadamard product (this result is an evolving of count sketch properties). This can be generalized for appropriate matrices : from the properties of the face-splitting product. 
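The convolution theorem stated above can be checked numerically in the circular, discrete setting, where the Fourier transform becomes the DFT; this is also the identity exploited by the FFT-based fast convolution algorithms mentioned earlier. A minimal sketch, assuming NumPy and using randomly generated example sequences, follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular (cyclic) convolution computed directly from the definition:
# (f circ g)[m] = sum_k f[k] g[(m - k) mod n].
circ = np.array([sum(f[k] * g[(m - k) % n] for k in range(n)) for m in range(n)])

# Convolution theorem for the DFT: DFT(f circ g) = DFT(f) * DFT(g) pointwise,
# so the circular convolution is recovered as the inverse DFT of the pointwise product.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(circ, via_fft))   # True
```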
Translational equivariance The convolution commutes with translations, meaning that where τxf is the translation of the function f by x defined by If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function τxf = f ∗ τx δ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution. Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τxf) = τx(Sf) for all x. Then S is given as convolution with a function (or distribution) gS; that is Sf = gS ∗ f. Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S. A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers. Convolutions on groups If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as . The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group: Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T): The operator T is compact. A direct calculation shows that its adjoint T* is convolution with By the commutativity property cited above, T is normal: T* T = TT* . Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above. 
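The simultaneous diagonalization just described has a concrete finite-dimensional counterpart in the cyclic-group case taken up next: cyclic convolution with a fixed kernel is multiplication by a circulant matrix, and the DFT matrix diagonalizes every such matrix, with the DFT of the kernel appearing on the diagonal. A minimal numerical check is sketched below; it assumes NumPy and SciPy are available and uses an arbitrary example kernel.

```python
import numpy as np
from scipy.linalg import circulant, dft

n = 6
rng = np.random.default_rng(1)
c = rng.standard_normal(n)

C = circulant(c)   # C[i, j] = c[(i - j) % n], so C @ x is the cyclic convolution c with x
F = dft(n)         # DFT matrix: F @ x gives the same result as np.fft.fft(x)

# Diagonalization: F C = diag(DFT(c)) F, i.e. the DFT turns the convolution operator
# into pointwise multiplication by the DFT of the kernel.
print(np.allclose(F @ C, np.diag(np.fft.fft(c)) @ F))   # True
```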
A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform. Convolution of measures Let G be a (multiplicatively written) topological group. If μ and ν are Radon measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as : for each measurable subset E of G. The convolution is also a Radon measure, whose total variation satisfies In the case when G is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to a λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution. If μ and ν are probability measures on the topological group then the convolution μ∗ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν. Infimal convolution In convex analysis, the infimal convolution of proper (not identically ) convex functions on is defined by: It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform is played instead by the Legendre transform: We have: Bialgebras Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X, then the convolution φ∗ψ is defined as the composition The convolution appears notably in the definition of Hopf algebras . A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that Applications Convolution and related operations are found in many applications in science, engineering and mathematics. Convolutional neural networks apply multiple cascaded convolution kernels with applications in machine vision and artificial intelligence. Though these are actually cross-correlations rather than convolutions in most cases. In non-neural-network-based image processing In digital image processing convolutional filtering plays an important role in many important algorithms in edge detection and related processes (see Kernel (image processing)) In optics, an out-of-focus photograph is a convolution of the sharp image with a lens function. The photographic term for this is bokeh. In image processing applications such as adding blurring. In digital data processing In analytical chemistry, Savitzky–Golay smoothing filters are used for the analysis of spectroscopic data. They can improve signal-to-noise ratio with minimal distortion of the spectra In statistics, a weighted moving average is a convolution. 
In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source. In digital signal processing, convolution is used to map the impulse response of a real room on a digital audio signal. In electronic music convolution is the imposition of a spectral or rhythmic structure on a sound. Often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other. In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred. In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance. For instance, in spectroscopy line broadening due to the Doppler effect on its own gives a Gaussian spectral line shape and collision broadening alone gives a Lorentzian line shape. When both effects are operative, the line shape is a convolution of Gaussian and Lorentzian, a Voigt function. In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse. In computational fluid dynamics, the large eddy simulation (LES) turbulence model uses the convolution operation to lower the range of length scales necessary in computation thereby reducing computational cost. In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions. In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian. In radiotherapy treatment planning systems, most part of all modern codes of calculation applies a convolution-superposition algorithm. In structural reliability, the reliability index can be defined based on the convolution theorem. The definition of reliability index for limit state functions with nonnormal distributions can be established corresponding to the joint distribution function. In fact, the joint distribution function can be obtained using the convolution theory. In Smoothed-particle hydrodynamics, simulations of fluid dynamics are calculated using particles, each with surrounding kernels. For any given particle , some physical quantity is calculated as a convolution of with a weighting function, where denotes the neighbors of particle : those that are located within its kernel. The convolution is approximated as a summation over each neighbor. In Fractional calculus convolution is instrumental in various definitions of fractional integral and fractional derivative. 
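One of the applications listed above — that the probability distribution of the sum of two independent random variables is the convolution of their individual distributions — can be illustrated with a small discrete example. The sketch below (assuming NumPy) convolves the probability mass function of a fair six-sided die with itself to obtain the distribution of the sum of two dice.

```python
import numpy as np

# PMF of one fair die: outcomes 1..6, each with probability 1/6.
die = np.full(6, 1 / 6)

# PMF of the sum of two independent dice = convolution of the two PMFs.
# Index 0 of the result corresponds to a sum of 2, index 10 to a sum of 12.
two_dice = np.convolve(die, die)

for total, p in zip(range(2, 13), two_dice):
    print(f"P(sum = {total:2d}) = {p:.4f}")   # peaks at 6/36 ~ 0.1667 for a sum of 7

print(two_dice.sum())   # 1.0, up to floating-point rounding
```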
See also Analog signal processing Circulant matrix Convolution for optical broad-beam responses in scattering media Convolution power Convolution quotient Dirichlet convolution Generalized signal averaging List of convolutions of probability distributions LTI system theory#Impulse response and convolution Multidimensional discrete convolution Scaled correlation Titchmarsh convolution theorem Toeplitz matrix (convolutions can be considered a Toeplitz matrix operation where each row is a shifted copy of the convolution kernel) Wavelet transform
Convolution
[ "Mathematics" ]
6,373
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
7,522
https://en.wikipedia.org/wiki/Calorimetry
In chemistry and thermodynamics, calorimetry () is the science or act of measuring changes in state variables of a body for the purpose of deriving the heat transfer associated with changes of its state due, for example, to chemical reactions, physical changes, or phase transitions under specified constraints. Calorimetry is performed with a calorimeter. Scottish physician and scientist Joseph Black, who was the first to recognize the distinction between heat and temperature, is said to be the founder of the science of calorimetry. Indirect calorimetry calculates heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones), or from their consumption of oxygen. Lavoisier noted in 1780 that heat production can be predicted from oxygen consumption this way, using multiple regression. The dynamic energy budget theory explains why this procedure is correct. Heat generated by living organisms may also be measured by direct calorimetry, in which the entire organism is placed inside the calorimeter for the measurement. A widely used modern instrument is the differential scanning calorimeter, a device which allows thermal data to be obtained on small amounts of material. It involves heating the sample at a controlled rate and recording the heat flow either into or from the specimen. Classical calorimetric calculation of heat Cases with differentiable equation of state for a one-component body Basic classical calculation with respect to volume Calorimetry requires that a reference material that changes temperature have known definite thermal constitutive properties. The classical rule, recognized by Clausius and Kelvin, is that the pressure exerted by the calorimetric material is fully and rapidly determined solely by its temperature and volume; this rule is for changes that do not involve phase change, such as melting of ice. There are many materials that do not comply with this rule, and for them, the present formula of classical calorimetry does not provide an adequate account. Here the classical rule is assumed to hold for the calorimetric material being used, and the propositions are mathematically written: The thermal response of the calorimetric material is fully described by its pressure as the value of its constitutive function of just the volume and the temperature . All increments are here required to be very small. This calculation refers to a domain of volume and temperature of the body in which no phase change occurs, and there is only one phase present. An important assumption here is continuity of property relations. A different analysis is needed for phase change When a small increment of heat is gained by a calorimetric body, with small increments, of its volume, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to volume, of the calorimetric material at constant controlled temperature . The surroundings' pressure on the material is instrumentally adjusted to impose a chosen volume change, with initial volume . To determine this latent heat, the volume change is effectively the independently instrumentally varied quantity. This latent heat is not one of the widely used ones, but is of theoretical or conceptual interest. 
denotes the heat capacity, of the calorimetric material at fixed constant volume , while the pressure of the material is allowed to vary freely, with initial temperature . The temperature is forced to change by exposure to a suitable heat bath. It is customary to write simply as , or even more briefly as . This latent heat is one of the two widely used ones. The latent heat with respect to volume is the heat required for unit increment in volume at constant temperature. It can be said to be 'measured along an isotherm', and the pressure the material exerts is allowed to vary freely, according to its constitutive law . For a given material, it can have a positive or negative sign or exceptionally it can be zero, and this can depend on the temperature, as it does for water about 4 C. The concept of latent heat with respect to volume was perhaps first recognized by Joseph Black in 1762. The term 'latent heat of expansion' is also used. The latent heat with respect to volume can also be called the 'latent energy with respect to volume'. For all of these usages of 'latent heat', a more systematic terminology uses 'latent heat capacity'. The heat capacity at constant volume is the heat required for unit increment in temperature at constant volume. It can be said to be 'measured along an isochor', and again, the pressure the material exerts is allowed to vary freely. It always has a positive sign. This means that for an increase in the temperature of a body without change of its volume, heat must be supplied to it. This is consistent with common experience. Quantities like are sometimes called 'curve differentials', because they are measured along curves in the surface. Classical theory for constant-volume (isochoric) calorimetry Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. Heat is still measured by the above-stated principle of calorimetry. This means that in a suitably constructed calorimeter, called a bomb calorimeter, the increment of volume can be made to vanish, . For constant-volume calorimetry: where denotes the increment in temperature and denotes the heat capacity at constant volume. Classical heat calculation with respect to pressure From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure. In a process of small increments, of its pressure, and of its temperature, the increment of heat, , gained by the body of calorimetric material, is given by where denotes the latent heat with respect to pressure, of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure and temperature ; denotes the heat capacity, of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure and temperature . It is customary to write simply as , or even more briefly as . The new quantities here are related to the previous ones: where denotes the partial derivative of with respect to evaluated for and denotes the partial derivative of with respect to evaluated for . The latent heats and are always of opposite sign. It is common to refer to the ratio of specific heats as often just written as . Calorimetry through phase change, equation of state shows one jump discontinuity An early calorimeter was that used by Laplace and Lavoisier, as shown in the figure above. It worked at constant temperature, and at atmospheric pressure. 
The latent heat involved was then not a latent heat with respect to volume or with respect to pressure, as in the above account for calorimetry without phase change. The latent heat involved in this calorimeter was with respect to phase change, naturally occurring at constant temperature. This kind of calorimeter worked by measurement of mass of water produced by the melting of ice, which is a phase change. Cumulation of heating For a time-dependent process of heating of the calorimetric material, defined by a continuous joint progression of and , starting at time and ending at time , there can be calculated an accumulated quantity of heat delivered, . This calculation is done by mathematical integration along the progression with respect to time. This is because increments of heat are 'additive'; but this does not mean that heat is a conservative quantity. The idea that heat was a conservative quantity was invented by Lavoisier, and is called the 'caloric theory'; by the middle of the nineteenth century it was recognized as mistaken. Written with the symbol , the quantity is not at all restricted to be an increment with very small values; this is in contrast with . One can write . This expression uses quantities such as which are defined in the section below headed 'Mathematical aspects of the above rules'. Mathematical aspects of the above rules The use of 'very small' quantities such as is related to the physical requirement for the quantity to be 'rapidly determined' by and ; such 'rapid determination' refers to a physical process. These 'very small' quantities are used in the Leibniz approach to the infinitesimal calculus. The Newton approach uses instead 'fluxions' such as , which makes it more obvious that must be 'rapidly determined'. In terms of fluxions, the above first rule of calculation can be written where denotes the time denotes the time rate of heating of the calorimetric material at time denotes the time rate of change of volume of the calorimetric material at time denotes the time rate of change of temperature of the calorimetric material. The increment and the fluxion are obtained for a particular time that determines the values of the quantities on the righthand sides of the above rules. But this is not a reason to expect that there should exist a mathematical function . For this reason, the increment is said to be an 'imperfect differential' or an 'inexact differential'. Some books indicate this by writing instead of . Also, the notation đQ is used in some books. Carelessness about this can lead to error. The quantity is properly said to be a functional of the continuous joint progression of and , but, in the mathematical definition of a function, is not a function of . Although the fluxion is defined here as a function of time , the symbols and respectively standing alone are not defined here. Physical scope of the above rules of calorimetry The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules. The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation. 
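The cumulation of heating described above — integrating the increments of heat along a joint progression of volume and temperature — can be sketched numerically. The example below is illustrative only: it assumes a monatomic ideal gas as the calorimetric material and uses, for the latent heat with respect to volume, the thermodynamic relation quoted later in the article (latent heat equals T times the partial derivative of pressure with respect to temperature at constant volume), so that along an isothermal path the accumulated heat reduces to the integral of the pressure over the volume change.

```python
import numpy as np

# Illustrative sketch with assumed values. Calorimetric material: monatomic ideal gas,
# p = nRT/V. For this gas the latent heat with respect to volume, T (dp/dT)_V, equals p,
# and C_V = (3/2) n R (C_V does not contribute here because dT = 0 on an isothermal path).
n_mol = 1.0            # mol (assumed)
R = 8.314              # J/(mol K)
T0 = 300.0             # K, held constant along this isothermal path
V1, V2 = 0.010, 0.020  # m^3, initial and final volumes (assumed)

# Accumulate Q = integral of lambda_V dV along the path (trapezoidal sum).
V = np.linspace(V1, V2, 100_001)
lambda_V = n_mol * R * T0 / V                      # = p(V, T0) for the ideal gas
dV = np.diff(V)
Q_numeric = np.sum(0.5 * (lambda_V[1:] + lambda_V[:-1]) * dV)

Q_exact = n_mol * R * T0 * np.log(V2 / V1)         # analytic result for comparison
print(Q_numeric, Q_exact)                          # both about 1728.85 J
```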
Experimentally conveniently measured coefficients Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions. Pressure increase at constant volume For measurements at experimentally controlled volume, one can use the assumption, stated above, that the pressure of the body of calorimetric material is can be expressed as a function of its volume and temperature. For measurement at constant experimentally controlled volume, the isochoric coefficient of pressure rise with temperature, is defined by Expansion at constant pressure For measurements at experimentally controlled pressure, it is assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure . This assumption is related to, but is not the same as, the above used assumption that the pressure of the body of calorimetric material is known as a function of its volume and temperature; anomalous behaviour of materials can affect this relation. The quantity that is conveniently measured at constant experimentally controlled pressure, the isobar volume expansion coefficient, is defined by Compressibility at constant temperature For measurements at experimentally controlled temperature, it is again assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure , with the same provisos as mentioned just above. The quantity that is conveniently measured at constant experimentally controlled temperature, the isothermal compressibility, is defined by Relation between classical calorimetric quantities Assuming that the rule is known, one can derive the function of that is used above in the classical heat calculation with respect to pressure. This function can be found experimentally from the coefficients and through the mathematically deducible relation . Connection between calorimetry and thermodynamics Thermodynamics developed gradually over the first half of the nineteenth century, building on the above theory of calorimetry which had been worked out before it, and on other discoveries. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..." According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories." In terms of thermodynamics, the internal energy of the calorimetric material can be considered as the value of a function of , with partial derivatives and . Then it can be shown that one can write a thermodynamic version of the above calorimetric rules: with and . Again, further in terms of thermodynamics, the internal energy of the calorimetric material can sometimes, depending on the calorimetric material, be considered as the value of a function of , with partial derivatives and , and with being expressible as the value of a function of , with partial derivatives and . Then, according to Adkins (1975), it can be shown that one can write a further thermodynamic version of the above calorimetric rules: with and . Beyond the calorimetric fact noted above that the latent heats and are always of opposite sign, it may be shown, using the thermodynamic concept of work, that also Special interest of thermodynamics in calorimetry: the isothermal segments of a Carnot cycle Calorimetry has a special benefit for thermodynamics. It tells about the heat absorbed or emitted in the isothermal segment of a Carnot cycle. 
A Carnot cycle is a special kind of cyclic process affecting a body composed of material suitable for use in a heat engine. Such a material is of the kind considered in calorimetry, as noted above, that exerts a pressure that is very rapidly determined just by temperature and volume. Such a body is said to change reversibly. A Carnot cycle consists of four successive stages or segments: (1) a change in volume from a volume to a volume at constant temperature so as to incur a flow of heat into the body (known as an isothermal change) (2) a change in volume from to a volume at a variable temperature just such as to incur no flow of heat (known as an adiabatic change) (3) another isothermal change in volume from to a volume at constant temperature such as to incur a flow or heat out of the body and just such as to precisely prepare for the following change (4) another adiabatic change of volume from back to just such as to return the body to its starting temperature . In isothermal segment (1), the heat that flows into the body is given by     and in isothermal segment (3) the heat that flows out of the body is given by . Because the segments (2) and (4) are adiabats, no heat flows into or out of the body during them, and consequently the net heat supplied to the body during the cycle is given by . This quantity is used by thermodynamics and is related in a special way to the net work done by the body during the Carnot cycle. The net change of the body's internal energy during the Carnot cycle, , is equal to zero, because the material of the working body has the special properties noted above. Special interest of calorimetry in thermodynamics: relations between classical calorimetric quantities Relation of latent heat with respect to volume, and the equation of state The quantity , the latent heat with respect to volume, belongs to classical calorimetry. It accounts for the occurrence of energy transfer by work in a process in which heat is also transferred; the quantity, however, was considered before the relation between heat and work transfers was clarified by the invention of thermodynamics. In the light of thermodynamics, the classical calorimetric quantity is revealed as being tightly linked to the calorimetric material's equation of state . Provided that the temperature is measured in the thermodynamic absolute scale, the relation is expressed in the formula . Difference of specific heats Advanced thermodynamics provides the relation . From this, further mathematical and thermodynamic reasoning leads to another relation between classical calorimetric quantities. The difference of specific heats is given by . Practical constant-volume calorimetry (bomb calorimetry) for thermodynamic studies Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. No work is performed in constant-volume calorimetry, so the heat measured equals the change in internal energy of the system. The heat capacity at constant volume is assumed to be independent of temperature. Heat is measured by the principle of calorimetry. where ΔU is change in internal energy, ΔT is change in temperature and CV is the heat capacity at constant volume. In constant-volume calorimetry the pressure is not held constant. If there is a pressure difference between initial and final states, the heat measured needs adjustment to provide the enthalpy change. One then has where ΔH is change in enthalpy and V is the unchanging volume of the sample chamber. 
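A short numerical sketch of the constant-volume relations just described follows. The calorimeter heat capacity, temperature rise, chamber volume, and pressure change are assumed example values, not data from the article; the internal-energy relation is the one stated above, and the enthalpy correction uses the fact that H = U + pV, so at fixed volume the change in enthalpy is the change in internal energy plus the volume times the pressure change.

```python
# Hedged sketch of constant-volume (bomb) calorimetry:
#   delta_U = C_V * delta_T      (no work at constant volume, so heat = change in internal energy)
#   delta_H = delta_U + V * delta_p   (correction when initial and final pressures differ)
# All numbers below are assumed example values.

C_V = 10.5e3      # J/K, heat capacity of the calorimeter and contents (assumed)
delta_T = 1.75    # K, measured temperature rise (assumed)
V = 0.300e-3      # m^3, fixed volume of the sample chamber (assumed)
delta_p = 2.0e5   # Pa, pressure change between initial and final states (assumed)

delta_U = C_V * delta_T
delta_H = delta_U + V * delta_p

print(f"delta_U = {delta_U:.1f} J")   # 18375.0 J
print(f"delta_H = {delta_H:.1f} J")   # 18435.0 J
```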
See also Isothermal microcalorimetry (IMC) Isothermal titration calorimetry Sorption calorimetry Reaction calorimeter
Calorimetry
[ "Physics", "Chemistry" ]
3,605
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
7,534
https://en.wikipedia.org/wiki/Centripetal%20force
A centripetal force (from Latin centrum, "center" and petere, "to seek") is a force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. Formula From the kinematics of curved motion it is known that an object moving at tangential speed v along a path with radius of curvature r accelerates toward the center of curvature at a rate Here, is the centripetal acceleration and is the difference between the velocity vectors at and . By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass m and its acceleration. The force, usually referred to as a centripetal force, has a magnitude and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. Derivation The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of and a leg length of , and the other a base of (position vector difference) and a leg length of : Therefore, can be substituted with : The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula so that Expressed using the orbital period T for one revolution of the circle, the equation becomes In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: where is the Lorentz factor. Thus the centripetal force is given by: which is the rate of change of relativistic momentum . Sources In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. 
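As a numerical illustration of the formulas above (not from the original article; the mass, speed, and radius are assumed example values), the sketch below computes the centripetal acceleration and force for uniform circular motion, checks the equivalent angular-velocity form, and evaluates the Lorentz factor that enters the relativistic expression quoted above.

```python
import math

# Assumed example values.
m = 2.0    # kg
v = 12.0   # m/s, tangential speed
r = 3.0    # m, radius of the circular path

a_c = v**2 / r               # centripetal acceleration, a = v^2 / r
F_c = m * a_c                # centripetal force,        F = m v^2 / r

omega = v / r                # angular velocity
F_omega = m * omega**2 * r   # equivalent form, F = m omega^2 r

print(a_c, F_c, F_omega)     # 48.0 96.0 96.0

# Relativistic case: for speed close to c the force becomes gamma * m * v^2 / r,
# with gamma = 1 / sqrt(1 - v^2/c^2).
c = 299_792_458.0
v_rel = 0.9 * c
gamma = 1.0 / math.sqrt(1.0 - (v_rel / c) ** 2)
print(gamma)                 # about 2.294
```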
When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Analysis of several cases Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Calculus derivation In two dimensions, the position vector , which has magnitude (length) and directed at an angle above the x-axis, can be expressed in Cartesian coordinates using the unit vectors and : The assumption of uniform circular motion requires three things: The object moves only on a circle. The radius of the circle does not change in time. The object moves with constant angular velocity around the circle. Therefore, where is time. The velocity and acceleration of the motion are the first and second derivatives of position with respect to time: The term in parentheses is the original expression of in Cartesian coordinates. Consequently, negative shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. Derivation using vectors The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path. Consequently, In other words, Differentiating with respect to time, Lagrange's formula states: Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times, In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius r of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. Example: The banked turn The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. 
Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity vertically downward through the center of mass of the ball, mg, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force exerted by the road at a right angle to the road surface, $m|\mathbf{a}_n|$. The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude $|\mathbf{F}_h| = m|\mathbf{a}_n| \sin\theta$. The vertical component of the force from the road must counteract the gravitational force: $|\mathbf{F}_v| = m|\mathbf{a}_n| \cos\theta = mg$, which implies $|\mathbf{a}_n| = g/\cos\theta$. Substituting into the above formula for $|\mathbf{F}_h|$ yields a horizontal force of $|\mathbf{F}_h| = mg \frac{\sin\theta}{\cos\theta} = mg \tan\theta$. On the other hand, at velocity |v| on a circular path of radius r, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude $|\mathbf{F}_c| = \frac{m|\mathbf{v}|^2}{r}$. Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition $mg \tan\theta = \frac{m|\mathbf{v}|^2}{r}$, or $\tan\theta = \frac{|\mathbf{v}|^2}{gr}$. As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for $|\mathbf{v}|^2/r$. In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller r) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. Nonuniform circular motion As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown in the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let $\mathbf{r}(t) = R\,\mathbf{u}_r(t)$, where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ, is perpendicular to ur and points in the direction of increasing θ.
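As a quick numerical check of the banked-turn condition $\tan\theta = |\mathbf{v}|^2/(gr)$ derived above, the following sketch computes the ideal bank angle for a few speeds. The radius and speeds are illustrative values chosen here, not figures from the article.

```python
import math

def ideal_bank_angle(v, r, g=9.81):
    """Bank angle (degrees) at which the horizontal component of the normal
    force alone supplies the centripetal force: tan(theta) = v**2 / (g * r)."""
    return math.degrees(math.atan(v**2 / (g * r)))

# Illustrative example: a 100 m radius curve taken at several speeds.
for v in (10.0, 20.0, 30.0):        # m/s
    print(f"v = {v:4.1f} m/s -> theta = {ideal_bank_angle(v, 100.0):4.1f} deg")
# Faster speeds need a steeper bank, matching the qualitative statement above.
```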
These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted and respectively: and One can differentiate to find velocity: where is the angular velocity . This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be . Differentiating again, and noting that we find that the acceleration, a is: Thus, the radial and tangential components of the acceleration are: and where is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. General planar motion Polar coordinates The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t). When the particle moves, its velocity is To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore, the change in uρ is or In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above): The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: To obtain the acceleration, another time differentiation is done: Substituting the derivatives of uρ and uθ, the acceleration of the particle is: As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and: where These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. 
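The general planar-motion components derived above can be evaluated symbolically for any path given in polar form, assuming the standard expressions $a_\rho = \ddot\rho - \rho\dot\theta^2$ and $a_\theta = \rho\ddot\theta + 2\dot\rho\dot\theta$ consistent with the derivation. The sketch below uses SymPy (if available) and a hypothetical Archimedean spiral as the example path; the path itself is an assumption made here for illustration.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
rho0, b, w = sp.symbols('rho0 b omega', positive=True)

# Hypothetical example path in polar form: an Archimedean spiral at constant angular rate.
rho = rho0 + b * t            # radial distance from the origin
theta = w * t                 # polar angle

# Radial and tangential acceleration components in polar coordinates:
a_radial = sp.diff(rho, t, 2) - rho * sp.diff(theta, t)**2            # rho'' - rho * theta'^2
a_tangential = rho * sp.diff(theta, t, 2) + 2 * sp.diff(rho, t) * sp.diff(theta, t)

print(sp.simplify(a_radial))       # -omega**2 * (b*t + rho0)
print(sp.simplify(a_tangential))   # 2*b*omega

# Specializing to a circle (b = 0) recovers the centripetal term -rho0 * omega**2.
print(sp.simplify(a_radial.subs(b, 0)))
```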
If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force. For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r(t)| = ρ. Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next. Local coordinates Local coordinates mean a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as intrinsic or path coordinates or nt-coordinates, for normal-tangential, referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms. Distance along the path of the particle is the arc length s, considered to be a known function of time. A center of curvature is defined at each position s located a distance ρ (the radius of curvature) from the curve on a line along the normal un (s). The required distance ρ(s) at arc length s is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is θ(s), then ρ(s) is defined by the derivative dθ/ds: The radius of curvature usually is taken as positive (that is, as an absolute value), while the curvature κ is a signed quantity. A geometric approach to finding the center of curvature and the radius of curvature uses a limiting process leading to the osculating circle. See image above. Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position s constitutes non-uniform circular motion at that position with radius ρ. The local value of the angular rate of rotation then is given by: with the local speed v given by: As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above): Consequently, the velocity and acceleration are: and using the chain-rule of differentiation: with the tangential acceleration In this local coordinate system, the acceleration resembles the expression for nonuniform circular motion with the local radius ρ(s), and the centripetal acceleration is identified as the second term. Extending this approach to three dimensional space curves leads to the Frenet–Serret formulas. Alternative approach Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between ρ(s) and ρ(s + ds) in computing the arc length as ds = ρ(s)dθ. Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature. 
To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length s, let the path be described as: Then an incremental displacement along the path ds is described by: where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that: (Eq. 1) This displacement is necessarily a tangent to the curve at s, showing that the unit vector tangent to the curve is: while the outward unit vector normal to the curve is Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle θ of the tangent to the curve is given by: and The radius of curvature is introduced completely formally (without need for geometric interpretation) as: The derivative of θ can be found from that for sinθ: Now: in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes: where the equivalence of the forms stems from differentiation of Eq. 1: With these results, the acceleration can be found: as can be verified by taking the dot product with the unit vectors ut(s) and un(s). This result for acceleration is the same as that for circular motion based on the radius ρ. Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius. This result for acceleration agrees with that found earlier. However, in this approach, the question of the change in radius of curvature with s is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in ρ. Example: circular motion To illustrate the above formulas, let x, y be given as: Then: which can be recognized as a circular path around the origin with radius α. The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism, the derivatives are needed: With these results, one can verify that: The unit vectors can also be found: which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found: To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t): where v(t) is the speed and t is time, and s(t = 0) = 0. Then: where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion. 
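The radius of curvature used in the local-coordinate analysis above can also be estimated numerically for a sampled path, using the standard planar-curvature formula $\kappa = |x'y'' - y'x''|/(x'^2 + y'^2)^{3/2}$ (equivalent to $\rho = 1/|d\theta/ds|$ for an arc-length parameterization). The sketch below is a minimal check in Python/NumPy on a hypothetical circle of radius 3, chosen here only so the expected answer is obvious.

```python
import numpy as np

def radius_of_curvature(x, y, t):
    """Local radius of curvature of a sampled planar path via
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    xp, yp = np.gradient(x, t), np.gradient(y, t)
    xpp, ypp = np.gradient(xp, t), np.gradient(yp, t)
    kappa = np.abs(xp * ypp - yp * xpp) / (xp**2 + yp**2)**1.5
    return 1.0 / kappa

# Hypothetical test path: a circle of radius 3, so rho should be ~3 everywhere.
t = np.linspace(0, 2 * np.pi, 2001)
x, y = 3 * np.cos(t), 3 * np.sin(t)
rho = radius_of_curvature(x, y, t)
print(rho[1000])     # ~3.0 away from the endpoints
```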
See also Analytical mechanics Applied mechanics Bertrand theorem Central force Centrifugal force Circular motion Classical mechanics Coriolis force Dynamics (physics) Eskimo yo-yo Example: circular motion Fictitious force Frenet-Serret formulas History of centrifugal and centripetal forces Kinematics Kinetics Orthogonal coordinates Reactive centrifugal force Statics Notes and references Further reading Centripetal force vs. Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District External links Notes from Physics and Astronomy HyperPhysics at Georgia State University Force Mechanics Kinematics Rotation Acceleration Articles containing video clips
Centripetal force
[ "Physics", "Mathematics", "Technology", "Engineering" ]
4,460
[ "Machines", "Force", "Kinematics", "Physical quantities", "Acceleration", "Physical phenomena", "Quantity", "Mass", "Classical mechanics", "Rotation", "Physical systems", "Motion (physics)", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities", ...
7,543
https://en.wikipedia.org/wiki/Computational%20complexity%20theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. Computational problems Problem instances A computational problem can be viewed as an infinite collection of instances together with a set (possibly empty) of solutions for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances. 
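To make the problem-versus-instance distinction above concrete, a decision problem can be modelled as a single yes/no function applied to many different instances. The sketch below is only an illustration in plain Python, using the primality example from the text; the function name is chosen here.

```python
def is_prime(n: int) -> bool:
    """Decision problem PRIMES: answer "yes" exactly when the instance n is prime."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# The problem is the abstract question; each number below is one concrete instance of it.
for instance in (2, 15, 97):
    print(instance, "yes" if is_prime(instance) else "no")   # 15 -> "no", as in the text
```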
Representing problem instances When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems as formal languages Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either yes or no (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. Function problems A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples such that the relation holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. Measuring the size of an instance To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with vertices compared to the time taken for a graph with vertices? 
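The recasting of a function problem as a decision problem described above can be shown directly: membership of a triple (a, b, c) in the "multiplication" language is a yes/no check. A minimal sketch in plain Python, with names chosen here for illustration:

```python
def in_multiplication_language(triple: tuple[int, int, int]) -> bool:
    """Decision version of multiplication: is the triple (a, b, c) such that a * b = c?"""
    a, b, c = triple
    return a * b == c

print(in_multiplication_language((6, 7, 42)))   # True: this instance is in the language
print(in_multiplication_language((6, 7, 41)))   # False: this instance is not
```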
If the input size is , the time taken can be expressed as a function of . Since the time taken on different inputs of the same size can be different, the worst-case time complexity is defined to be the maximum time taken over all inputs of size . If is a polynomial in , then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm. Machine models and complexity measures Turing machine A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others. A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm. Other machine models Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. 
For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems. Complexity measures For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n) if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. Best, worst and average case complexity The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities: Best-case complexity: This is the complexity of solving the problem for the best input of size n. Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n. Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm. Worst-case complexity: This is the complexity of solving the problem for the worst input of size n. The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst. For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(n²). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
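The quicksort behaviour described above can be observed by counting comparisons for a simple first-element-pivot implementation. The sketch below is an illustration in plain Python (input sizes chosen here): the already-sorted worst case grows roughly quadratically, while a random input stays near n log n.

```python
import random

def quicksort(xs, counter):
    """First-element-pivot quicksort; counter[0] accumulates element comparisons."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)                      # one comparison per remaining element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

def comparisons(xs):
    counter = [0]
    quicksort(xs, counter)
    return counter[0]

n = 300
print("sorted input:", comparisons(list(range(n))))             # ~n^2/2: pivot never splits the list
print("random input:", comparisons(random.sample(range(n), n))) # ~n log n on average
```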
Upper and lower bounds on the complexity of problems To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most . However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of for a problem requires showing that no algorithm can have time complexity lower than . Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if , in big O notation one would write . Complexity classes Defining complexity classes A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors: The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc. The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc. The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc. Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: The set of decision problems solvable by a deterministic Turing machine within time . (This complexity class is known as DTIME().) But bounding the computation time above by some concrete function often yields complexity classes that depend on the chosen machine model. For instance, the language can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" . This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. Important complexity classes Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following: Logarithmic-space classes do not account for the space required to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. 
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems. Hierarchy theorems For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME() is contained in DTIME(), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that . The space hierarchy theorem states that . The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE. Reduction Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem can be solved using an algorithm for , is no more difficult than , and we say that reduces to . There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication. This motivates the concept of a problem being hard for a complexity class. A problem is hard for a class of problems if every problem in can be reduced to . Thus no problem in is harder than , since an algorithm for allows us to solve any problem in . The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems. If a problem is in and hard for , then is said to be complete for . This means that is the hardest problem in . (Since many problems could be equally hard, one might say that is one of the hardest problems in .) 
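The squaring-to-multiplication reduction mentioned above is about as simple as reductions get: feed the same input to both arguments of a multiplication routine. A minimal sketch in plain Python, where `multiply` stands in for an arbitrary black-box multiplication algorithm (names chosen here):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any multiplication algorithm used as an oracle."""
    return a * b

def square(n: int) -> int:
    """Reduction: squaring is solved with a single call to the multiplication oracle,
    so squaring is no harder than multiplication (up to the cost of the reduction)."""
    return multiply(n, n)

print(square(12))   # 144
```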
Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, , to another problem, , would indicate that there is no known polynomial-time solution for . This is because a polynomial-time solution to would yield a polynomial-time solution to . Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP. Important open problems P versus NP problem The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. Problems in NP not known to be in P or NP-complete It was shown by Ladner that if then there exist problems in that are neither in nor -complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in or to be -complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in , -complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks has run time for graphs with vertices, although some recent work by Babai offers some potentially new perspectives on this. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than . No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in and in (and even in UP and co-UP). 
If the problem is -complete, the polynomial time hierarchy will collapse to its first level (i.e., will equal ). The best known algorithm for integer factorization is the general number field sieve, which takes time to factor an odd integer . However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes. Separations between other complexity classes Many known complexity classes are suspected to be unequal, but this has not been proved. For instance , but it is possible that . If is not equal to , then is not equal to either. Since there are many known complexity classes between and , such as , , , , , , etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of problems. It is believed that is not equal to ; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then is not equal to , since . Thus if we would have whence . Similarly, it is not known if (the set of all problems that can be solved in logarithmic space) is strictly contained in or equal to . Again, there are many complexity classes between the two, such as and , and it is not known if they are distinct or equal classes. It is suspected that and are equal. However, it is currently open if . Intractability A problem that can theoretically be solved, but requires impractical and finite resources (e.g., time) to do so, is known as an . Conversely, a problem that can be solved in practice is called a , literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions (, ); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If is not the same as , then NP-hard problems are also intractable in this sense. However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in , yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that makes operations before halting. 
For small , say 100, and assuming for the sake of example that the computer does operations each second, the program would run for about years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes operations is practical until gets relatively large. Similarly, a polynomial time algorithm is not always practical. If its running time is, say, , it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even or algorithms are often impractical on realistic sizes of problems. Continuous complexity theory Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information based complexity. Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems. History An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844. Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer. The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size. Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. As he remembers: In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete. 
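The back-of-the-envelope argument at the start of this passage can be sketched numerically. The specific figures were lost from the text, so the values below (2^n operations with n = 100, and 10^12 operations per second) are illustrative assumptions made here; they nonetheless land at tens of billions of years, the order of magnitude of the age of the universe.

```python
SECONDS_PER_YEAR = 3.156e7          # ~365.25 * 24 * 3600

def years_to_run(operations: float, ops_per_second: float = 1e12) -> float:
    """Wall-clock years for a machine performing ops_per_second operations per second."""
    return operations / ops_per_second / SECONDS_PER_YEAR

# An exponential-time algorithm on a modest instance already dwarfs the age of the
# universe (~1.4e10 years), while a quadratic algorithm on the same instance is trivial.
print(f"2**100 operations: {years_to_run(2.0 ** 100):.1e} years")
print(f"100**2 operations: {years_to_run(100 ** 2):.1e} years")
```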
See also Computational complexity Descriptive complexity theory Game complexity Leaf language Limits of computation List of complexity classes List of computability and complexity topics List of unsolved problems in computer science Parameterized complexity Proof complexity Quantum complexity theory Structural complexity theory Transcomputational problem Computational complexity of mathematical operations Works on complexity References Citations Textbooks Surveys External links The Complexity Zoo Scott Aaronson: Why Philosophers Should Care About Computational Complexity Computational fields of study
Computational complexity theory
[ "Technology" ]
6,194
[ "Computational fields of study", "Computing and society" ]
7,550
https://en.wikipedia.org/wiki/Craig%20Venter
John Craig Venter (born October 14, 1946) is an American scientist. He is known for leading one of the first draft sequences of the human genome and for leading the first team to transfect a cell with a synthetic chromosome. Venter founded Celera Genomics, the Institute for Genomic Research (TIGR) and the J. Craig Venter Institute (JCVI). He was the co-founder of Human Longevity Inc. and Synthetic Genomics. He was listed on Time magazine's 2007 and 2008 Time 100 list of the most influential people in the world. In 2010, the British magazine New Statesman listed Craig Venter at 14th in the list of "The World's 50 Most Influential Figures 2010". In 2012, Venter was honored with the Dan David Prize for his contribution to genome research. He was elected to the American Philosophical Society in 2013. He is a member of the USA Science and Engineering Festival's advisory board. Early life and education Venter was born in Salt Lake City, Utah, the son of Elisabeth and John Venter. His family moved to Millbrae, California, during his childhood. In his youth, he did not take his education seriously, preferring to spend his time on the water in boats or surfing. According to his biography, A Life Decoded, he was said never to be a terribly engaged student, having Cs and Ds on his eighth-grade report cards. Venter considered that his behavior in his adolescence was indicative of attention deficit hyperactivity disorder (ADHD), and later found ADHD-linked genetic variants in his own DNA. He graduated from Mills High School. His father died suddenly at age 59 from cardiac arrest, giving him a lifelong awareness of his own mortality. He quotes a saying: "If you want immortality, do something meaningful with your life." Although he opposed the Vietnam War, Venter was drafted and enlisted in the United States Navy where he worked as a hospital corpsman in the intensive-care ward of a field hospital. He served from 1967 to 1968 at the Naval Support Activity Danang in Vietnam. While in Vietnam, he attempted suicide by swimming out to sea, but changed his mind more than a mile out. Being confronted with severely injured and dying marines on a daily basis instilled in him a desire to study medicine, although he later switched to biomedical research. Venter began his college education in 1969 at a community college, College of San Mateo in California, and later transferred to the University of California, San Diego, where he studied under biochemist Nathan O. Kaplan. He received a Bachelor of Science in biochemistry in 1972 and a Doctor of Philosophy in physiology and pharmacology in 1975 from UCSD. Career After working as an associate professor, and later as full professor, at the State University of New York at Buffalo, he joined the National Institutes of Health in 1984. EST controversy While an employee of the NIH, Venter learned how to identify mRNA and began studying the mRNAs expressed in the human brain. He named the short cDNA sequence fragments he discovered by automated DNA sequencing expressed sequence tags, or ESTs. The NIH Office of Technology Transfer decided to file a patent application on the ESTs discovered by Venter, in effect seeking to patent the genes identified from studies of mRNA expression in the human brain. When Venter disclosed the NIH strategy during a Congressional hearing, a firestorm of controversy erupted. The NIH later stopped the effort and abandoned the patent applications it had filed, following public outcry.
Human Genome Project Venter was passionate about the power of genomics to transform healthcare radically. Venter believed that shotgun sequencing was the fastest and most effective way to get useful human genome data. The method was rejected by the Human Genome Project, however, since some geneticists felt it would not be accurate enough for a genome as complicated as that of humans, that it would be logistically more difficult, and that it would cost significantly more. Venter viewed the slow pace of progress in the Human Genome Project as an opportunity to pursue his shotgun sequencing method to speed up human genome sequencing, so when he was offered funding from a DNA sequencing company, he started Celera Genomics. The company planned to profit from their work by creating genomic data to which users could subscribe for a fee. This goal put pressure on the public genome program and spurred several groups to redouble their efforts to produce the full sequence. Venter's effort won him renown as he and his team at Celera Corporation shared credit for sequencing the first draft human genome with the publicly funded Human Genome Project. In 2000, Venter and Francis Collins of the National Institutes of Health and U.S. Public Genome Project jointly made the announcement of the mapping of the human genome, a full three years ahead of the expected end of the Public Genome Program. The announcement was made along with U.S. President Bill Clinton and UK Prime Minister Tony Blair. Venter and Collins thus shared an award for "Biography of the Year" from A&E Network. On February 15, 2001, the Human Genome Project consortium published the first Human Genome in the journal Nature, followed one day later by a Celera publication in Science. Despite some claims that shotgun sequencing was in some ways less accurate than the clone-by-clone method chosen by the Human Genome Project, the technique became widely accepted by the scientific community. Venter was fired by Celera in early 2002. According to his biography, Venter was fired because of a conflict with the main investor, Tony White, which included barring him from attending the White House ceremony celebrating the achievement of sequencing the human genome. Global Ocean Sampling Expedition The Global Ocean Sampling Expedition (GOS) is an ocean exploration genome project with the goal of assessing the genetic diversity in marine microbial communities and understanding their role in nature's fundamental processes. Begun as a Sargasso Sea pilot sampling project in August 2003, the full Expedition was announced by Venter on March 4, 2004. The project, which used Venter's personal yacht, Sorcerer II, started in Halifax, Canada, circumnavigated the globe and returned to the U.S. in January 2006. Synthetic Genomics In June 2005, Venter co-founded Synthetic Genomics, a firm dedicated to using modified microorganisms to produce clean fuels and biochemicals. In July 2009, ExxonMobil announced a $600 million collaboration with Synthetic Genomics to research and develop next-generation biofuels. Venter continues to work on the creation of engineered diatomic microalgae for the production of biofuels. Venter is seeking to patent the first partially synthetic species possibly to be named Mycoplasma laboratorium. There is speculation that this line of research could lead to producing bacteria that have been engineered to perform specific reactions, for example, produce fuels, make medicines, combat global warming, and so on.
In May 2010, a team of scientists led by Venter became the first to successfully create what was described as "synthetic life". This was done by synthesizing a very long DNA molecule containing an entire bacterial genome and introducing it into another cell, analogous to the accomplishment of Eckard Wimmer's group, who synthesized and ligated an RNA virus genome and "booted" it in cell lysate. The single-celled organism contains four "watermarks" written into its DNA to identify it as synthetic and to help trace its descendants. The watermarks include a code table for the entire alphabet with punctuation, the names of 46 contributing scientists, three quotations, and a secret email address for the cell. On March 25, 2016, Venter reported the creation of Syn 3.0, a synthetic genome having the fewest genes of any freely living organism (473 genes). Their aim was to strip away all nonessential genes, leaving only the minimal set necessary to support life. This stripped-down, fast-reproducing cell is expected to be a valuable tool for researchers in the field. In August 2018, Venter retired as chairman of the board, saying he wanted to focus on his work at the J. Craig Venter Institute. He will remain a scientific advisor to the board. J. Craig Venter Institute In 2006 Venter founded the J. Craig Venter Institute (JCVI), a nonprofit which conducts research in synthetic biology. It has facilities in La Jolla and in Rockville, Maryland and employs over 200 people. In April 2022 Venter sold the La Jolla JCVI facility to the University of California, San Diego for $25 million. Venter will continue to lead a separate nonprofit research group, also known as the J. Craig Venter Institute, and stressed that he is not retiring. The Venter Institute has outgrown its current building after making multiple new hires and will be moving into new space in 2025. Individual human genome On September 4, 2007, a team led by Sam Levy published one of the first genomes of an individual human: Venter's own DNA sequence. Some of the sequences in Venter's genome are associated with wet earwax, increased risk of antisocial behavior, Alzheimer's and cardiovascular diseases. The Human Reference Genome Browser is a web application for the navigation and analysis of Venter's recently published genome. The HuRef database consists of approximately 32 million DNA reads sequenced using microfluidic Sanger sequencing, assembled into 4,528 scaffolds, with 4.1 million DNA variations identified by genome analysis. These variants include single-nucleotide polymorphisms (SNPs), block substitutions, short and large indels, and structural variations like insertions, deletions, inversions and copy number changes. The browser enables scientists to navigate the HuRef genome assembly and sequence variations, and to compare it with the NCBI human build 36 assembly in the context of the NCBI and Ensembl annotations. The browser provides a comparative view between the NCBI and HuRef consensus sequences, the sequence multi-alignment of the HuRef assembly, Ensembl and dbSNP annotations, HuRef variants, and the underlying variant evidence and functional analysis. The interface also represents the haplotype blocks from which the diploid genome sequence can be inferred and the relation of variants to gene annotations. The display of variants and gene annotations is linked to external public resources including dbSNP, Ensembl, Online Mendelian Inheritance in Man (OMIM) and Gene Ontology (GO). 
Users can search the HuRef genome using HUGO gene names, Ensembl and dbSNP identifiers, HuRef contig or scaffold locations, or NCBI chromosome locations. Users can then easily and quickly browse any genomic region via the simple and intuitive pan and zoom controls; furthermore, data relevant to specific loci can be exported for further analysis. Human Longevity, Inc. On March 4, 2014, Venter and co-founders Peter Diamandis and Robert Hariri announced the formation of Human Longevity, Inc., a company focused on extending the healthy, "high performance" human lifespan. At the time of the announcement the company had already raised $70 million in venture financing, which was expected to last 18 months. Venter served as the chairman and chief executive officer (CEO) until May 2018, when he retired. The company said that it plans to sequence 40,000 genomes per year, with an initial focus on cancer genomes and the genomes of cancer patients. Human Longevity filed a lawsuit in 2018 against Venter, accusing him of stealing trade secrets. The suit alleged that Venter had departed with his company computer, which contained valuable information that could be used to start a competing business. The lawsuit was ultimately dismissed by a California judge on the basis that Human Longevity was unable to present a case that met the legal threshold required for a company, or individual, to sue when its trade secrets have been stolen. Human Longevity's mission is to extend the healthy human lifespan by the use of high-resolution big data diagnostics from genomics, metabolomics, microbiomics, and proteomics, and the use of stem cell therapy. Published books Venter is the author of three books, the first of which is an autobiography titled A Life Decoded. In Venter's second book, Life at the Speed of Light, he announced his theory that this is the generation in which the two previously separate fields of computer programming and the genetic programming of life by DNA sequencing appear to be dovetailing. He was applauded for his position on this by futurist Ray Kurzweil. Venter's most recent book, The Voyage of Sorcerer II: The Expedition that Unlocked the Secrets of the Ocean's Microbiome, co-authored with David Ewing Duncan, details the Global Ocean Sampling Expedition, which spanned a 15-year period during which microbes from the world's oceans were collected and their DNA sequenced. Personal life After a 12-year marriage to Barbara Rae-Venter, with whom he had a son, Christopher, he married Claire M. Fraser, to whom he remained married until 2005. In late 2008 he married Heather Kowalski. They live in the La Jolla neighborhood of San Diego, California. Venter is an atheist. Venter was 75 when he sold his main research building to UCSD in 2022. The institute had outgrown the space and will be moving to a new facility in 2025. The Venter Institute campus in Rockville, Maryland, also continues to expand. He said he has no intention of retiring. He has a home in La Jolla and a ranch in Borrego Springs, California, as well as homes in two small towns in Maine. He indulges in two passions: sailing and flying a Cirrus 22T plane, which he calls "the ultimate freedom". In popular culture Venter has been the subject of articles in several magazines, including Wired, The Economist, Australian science magazine Cosmos, and The Atlantic. Venter appears in the two-hour 2001 NOVA special, "Cracking the Code of Life". On May 16, 2004, Venter gave the commencement speech at Boston University. 
On December 4, 2007, Venter gave the Dimbleby lecture for the BBC in London. Venter gave the Distinguished Public Lecture during the 2007 Michaelmas Term at the James Martin 21st Century School at Oxford University. Its title was "Genomics – From humans to the environment". Venter delivered the 2008 convocation speech for Faculty of Science honours and specialization students at the University of Alberta. In February 2008, he gave a speech about his current work at the TED conference. Venter was featured in Time magazine's "The Top 10 Everything of 2008" article. Number three in 2008's Top 10 Scientific Discoveries was a piece outlining his work stitching together the 582,000 base pairs necessary to create the genetic information for a whole new bacterium. On May 20, 2010, Venter announced the creation of the first self-replicating semi-synthetic bacterial cell. In the June 2011 issue of Men's Journal, Venter was featured as the "Survival Skills" celebrity of the month. He shared various anecdotes and advice, including stories of his time in Vietnam, as well as mentioning a bout with melanoma on his back, which subsequently resulted in his "giving a pound of flesh" to surgery. In May 2011, Venter was the commencement speaker at the 157th commencement of Syracuse University. In May 2017, Venter was the guest of honor and keynote speaker at the inauguration ceremony of the Center for Systems Biology Dresden. Awards and nominations 1996: Golden Plate Award of the American Academy of Achievement 1999: Newcomb Cleveland Prize 2000: Jacob Heskel Gabbay Award in Biotechnology and Medicine 2001: Biotechnology Heritage Award with Francis Collins, from the Biotechnology Industry Organization (BIO) and the Chemical Heritage Foundation 2002: Association for Molecular Pathology Award for Excellence in Molecular Diagnostics 2007: On May 10, 2007, Venter was awarded an honorary doctorate from Arizona State University, and on October 24 of the same year, he received an honorary doctorate from Imperial College London. 2008: Double Helix Medal from Cold Spring Harbor Laboratory 2008: Kistler Prize from Foundation For the Future for genome research 2008: ENI award for Research & Environment 2008: National Medal of Science from President Obama 2010: On May 8, 2010, Venter received an honorary doctor of science degree from Clarkson University for his work on the human genome. 2011: On April 21, 2011, Venter received the 2011 Benjamin Rush Medal from William & Mary School of Law. 2011: Dickson Prize in Medicine 2020: Edogawa NICHE Prize for his contribution to research and development pertaining to the human genome Works Venter has authored over 200 publications in scientific journals. See also Artificial gene synthesis Full genome sequencing Genetic testing Genome: The Autobiography of a Species in 23 Chapters Personal genomics Pharmacogenomics Predictive medicine Synthetic Organism Designer References Further reading External links Human Longevity, Inc. HuRef Genome Browser J. 
Craig Venter Institute Sorcerer II Expedition Synthetic Genomics The Institute for Genomic Research (TIGR) Media Cracking the code to life, The Guardian, October 8, 2007 Craig Venter interview, Wired Science, December 2007 (video) Video of interview/discussion with Craig Venter by Carl Zimmer on Bloggingheads.tv – TED (Technology Entertainment Design) conference (video) Webcast of Venter talk 'Genomics: From humans to the environment' at The James Martin 21st Century School The Richard Dimbleby Lecture 2007 – Dr. J. Craig Venter – A DNA Driven World A short course on synthetic genomics. Edge Master Class 2009 1946 births Living people American atheists American chairpersons of corporations American geneticists American technology chief executives American technology company founders Biotechnologists Human Genome Project scientists Leeuwenhoek Medal winners Life extensionists Members of the United States National Academy of Sciences Military personnel from Salt Lake City Researchers of artificial life Scientists from Salt Lake City United States Navy corpsmen United States Navy personnel of the Vietnam War University at Buffalo faculty University of California, San Diego alumni Members of the National Academy of Medicine
Craig Venter
[ "Engineering" ]
3,762
[ "Human Genome Project scientists" ]
7,555
https://en.wikipedia.org/wiki/Casimir%20effect
In quantum field theory, the Casimir effect (or Casimir force) is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of a field. The term Casimir pressure is sometimes used when it is described in units of force per unit area. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948. In the same year Casimir, together with Dirk Polder, described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface which is called the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. The fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing. In 1997 a direct experiment by Steven K. Lamoreaux quantitatively measured the Casimir force to be within 5% of the value predicted by the theory. The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as electrical conductors and dielectrics, alter the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects. Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string as well as plates submerged in turbulent water or gas illustrate the Casimir force. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies. Physical properties The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that no field exists between the plates, and no force connects them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons that constitute the field, and generate a net force – either an attraction or a repulsion depending on the plates' specific arrangement. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization. The treatment of boundary conditions in these calculations is controversial. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields. Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is small. This force becomes so strong that it becomes the dominant force between uncharged conductors at submicron scales. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depends on surface geometry and other factors). 
History Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947; this special form is called the Casimir–Polder force. After a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948. This latter phenomenon is called the Casimir effect. Predictions of the force were later extended to finite-conductivity metals and dielectrics, while later calculations considered more general geometries. Experiments before 1997 observed the force qualitatively, and indirect validation of the predicted Casimir energy was made by measuring the thickness of liquid helium films. Finally, in 1997 Lamoreaux's direct experiment quantitatively measured the force to within 5% of the value predicted by the theory. Subsequent experiments approached an accuracy of a few percent. Possible causes Vacuum energy The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is $E = \tfrac{1}{2}\hbar\omega$, where $\omega$ is the oscillator's angular frequency. Summing over all possible oscillators at all points in space gives an infinite quantity. Since only differences in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process. When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. 
There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed. However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe. The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator $(-1)^F$, where it is known as the Witten index. Relativistic van der Waals force Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates." Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRaad, and Milton published a similar derivation for the Casimir effect between two parallel plates. More recently, Nikolic argued from first principles of quantum electrodynamics that the Casimir force does not originate from the vacuum energy of the electromagnetic field, and explained in simple terms why the fundamental microscopic origin of the Casimir force lies in van der Waals forces. Effects Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric. Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as, for example, a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the $n$th standing wave is $E_n$. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then $\langle E \rangle = \tfrac{1}{2} \sum_n E_n$, with the sum running over all possible values of $n$ enumerating the standing waves. The factor of $\tfrac{1}{2}$ is present because the zero-point energy of the $n$th mode is $\tfrac{1}{2} E_n$, where $E_n$ is the energy increment for the $n$th mode. (It is the same factor of $\tfrac{1}{2}$ as appears in the equation $E = \tfrac{1}{2}\hbar\omega$.) Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions. In particular, one may ask how the zero-point energy depends on the shape $s$ of the cavity. Each energy level depends on the shape, and so one should write $E_n(s)$ for the energy level, and $\langle E(s) \rangle$ for the vacuum expectation value. At this point comes an important observation: the force at point $p$ on the wall of the cavity is equal to the change in the vacuum energy if the shape $s$ of the wall is perturbed a little bit, say by $\delta s$, at $p$. That is, one has $F(p) = -\left.\frac{\delta \langle E(s) \rangle}{\delta s}\right|_p$. This value is finite in many practical calculations. Attraction between the plates can be easily understood by focusing on the one-dimensional situation. 
Suppose that a moveable conductive plate is positioned at a short distance $a$ from one of two widely separated plates (distance $L$ apart). With $a \ll L$, the states within the slot of width $a$ are highly constrained so that the energy $E$ of any one mode is widely separated from that of the next. This is not the case in the large region, where there is a large number of states (about $L/a$) with energy evenly spaced between $E$ and the next mode in the narrow slot, or in other words, all slightly larger than $E$. Now on shortening $a$ by an amount $da$ (which is negative), the mode in the narrow slot shrinks in wavelength and therefore increases in energy proportional to $-da/a$, whereas all the states that lie in the large region lengthen and correspondingly decrease their energy by an amount proportional to $da/L$ (note the different denominator). The two effects nearly cancel, but the net change is slightly negative, because the energy of all the modes in the large region are slightly larger than the single mode in the slot. Thus the force is attractive: it tends to make $a$ slightly smaller, the plates drawing each other closer, across the thin slot. Derivation of Casimir effect assuming zeta-regularization In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance $a$ apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the plates lie parallel to the $xy$-plane, the standing waves are $\psi_n(x,y,z;t) = e^{-i\omega_n t} e^{i k_x x + i k_y y} \sin(k_n z)$, where $\psi$ stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, $k_x$ and $k_y$ are the wavenumbers in directions parallel to the plates, and $k_n = n\pi/a$ is the wavenumber perpendicular to the plates. Here, $n$ is an integer, resulting from the requirement that $\psi$ vanish on the metal plates. The frequency of this wave is $\omega_n = c \sqrt{k_x^2 + k_y^2 + \frac{n^2 \pi^2}{a^2}}$, where $c$ is the speed of light. The vacuum energy is then the sum over all possible excitation modes. Since the area of the plates is large, we may sum by integrating over two of the dimensions in $k$-space. The assumption of periodic boundary conditions yields $\langle E \rangle = \frac{\hbar}{2} \cdot 2 \int \frac{A \, dk_x \, dk_y}{(2\pi)^2} \sum_{n=1}^{\infty} \omega_n$, where $A$ is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit-area of the plate is $\frac{\langle E(s) \rangle}{A} = \hbar \int \frac{dk_x \, dk_y}{(2\pi)^2} \sum_{n=1}^{\infty} \omega_n |\omega_n|^{-s}$. In the end, the limit $s \to 0$ is to be taken. Here $s$ is just a complex number, not to be confused with the shape discussed previously. This integral sum is finite for $s$ real and larger than 3. The sum has a pole at $s = 3$, but may be analytically continued to $s = 0$, where the expression is finite. The above expression simplifies to: $\frac{\langle E(s) \rangle}{A} = \frac{\hbar c^{1-s}}{4\pi^2} \sum_n \int_0^{\infty} 2\pi q \, dq \left( q^2 + \frac{\pi^2 n^2}{a^2} \right)^{\frac{1-s}{2}}$, where polar coordinates $q^2 = k_x^2 + k_y^2$ were introduced to turn the double integral into a single integral. The $q$ in front is the Jacobian, and the $2\pi$ comes from the angular integration. 
The integral converges if $\operatorname{Re}(s) > 3$, resulting in $\frac{\langle E(s) \rangle}{A} = -\frac{\hbar c^{1-s} \pi^{2-s}}{2 a^{3-s}} \frac{1}{3-s} \sum_n |n|^{3-s}$. The sum diverges at $s$ in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to $s = 0$ is assumed to make sense physically in some way, then one has $\frac{\langle E \rangle}{A} = \lim_{s \to 0} \frac{\langle E(s) \rangle}{A} = -\frac{\hbar c \pi^2}{6 a^3} \zeta(-3)$. But $\zeta(-3) = \frac{1}{120}$ and so one obtains $\frac{\langle E \rangle}{A} = -\frac{\hbar c \pi^2}{720 a^3}$. The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area $F_c/A$ for idealized, perfectly conducting plates with vacuum between them is $\frac{F_c}{A} = -\frac{d}{da}\frac{\langle E \rangle}{A} = -\frac{\hbar c \pi^2}{240 a^4}$, where $\hbar$ is the reduced Planck constant, $c$ is the speed of light, and $a$ is the distance between the two plates. The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of $\hbar$ shows that the Casimir force per unit area is very small, and that furthermore, the force is inherently of quantum-mechanical origin. By integrating the equation above it is possible to calculate the energy required to separate to infinity the two plates as: $U_E(a) = \int_a^{\infty} \frac{\hbar c \pi^2 A}{240\, a'^4} \, da' = \frac{\hbar c \pi^2 A}{720 a^3}$, where $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $A$ is the area of one of the plates, and $a$ is the distance between the two plates. In Casimir's original derivation, a moveable conductive plate is positioned at a short distance $a$ from one of two widely separated plates (distance $L$ apart). The zero-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as $|\omega_n|^{-s}$ in the above. More recent theory Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Evgeny Lifshitz and his students. Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz's theory for two metal plates reduces to Casimir's idealized force law for large separations $a$ much greater than the skin depth of the metal, and conversely reduces to the force law of the London dispersion force (with a coefficient called a Hamaker constant) for small $a$, with a more complicated dependence on $a$ for intermediate separations determined by the dispersion of the materials. Lifshitz's result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions. For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius $R$ is much larger than the separation $a$, in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate force (neglecting both skin-depth and higher-order curvature effects). 
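The closed-form result above is easy to check numerically. The short Python sketch below evaluates the ideal parallel-plate Casimir pressure $\pi^2 \hbar c / (240 a^4)$ and the separation energy per unit area $\pi^2 \hbar c / (720 a^3)$, and reproduces the order-of-magnitude claim made earlier that a 10 nm gap corresponds to roughly one atmosphere of pressure. The function names and the sample separations are illustrative choices for this sketch, not part of any established library, and the formulas assume perfectly conducting plates at zero temperature.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299_792_458.0       # speed of light, m/s

def casimir_pressure(a_m: float) -> float:
    """Magnitude of the ideal parallel-plate Casimir pressure, pi^2*hbar*c/(240 a^4), in pascals."""
    return math.pi**2 * HBAR * C / (240.0 * a_m**4)

def separation_energy_per_area(a_m: float) -> float:
    """Energy per unit area needed to pull the plates apart to infinity, pi^2*hbar*c/(720 a^3), in J/m^2."""
    return math.pi**2 * HBAR * C / (720.0 * a_m**3)

if __name__ == "__main__":
    ATM = 101_325.0  # one standard atmosphere, Pa
    for a_nm in (10, 100, 1000):
        a = a_nm * 1e-9
        p = casimir_pressure(a)
        print(f"a = {a_nm:>5} nm  pressure = {p:10.3e} Pa ({p / ATM:.2e} atm)  "
              f"U/A = {separation_energy_per_area(a):.3e} J/m^2")
```

Running it gives about 1.3e5 Pa at 10 nm, i.e. a bit over one atmosphere, and shows the steep $1/a^4$ falloff at larger separations.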
However, in the 2010s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes. Measurement One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven (Netherlands), in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory, but with large experimental errors. The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory, and by Umar Mohideen and Anushree Roy of the University of California, Riverside. In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a very large radius. In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators. Numerous variations of these experiments are summarized in the 2009 review by Klimchitskaya. In 2013, a conglomerate of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory demonstrated a compact integrated silicon chip that can measure the Casimir force. The integrated chip defined by electron-beam lithography does not need extra alignment, making it an ideal platform for measuring Casimir force between complex geometries. In 2017 and 2021, the same group from Hong Kong University of Science and Technology demonstrated the non-monotonic Casimir force and distance-independent Casimir force, respectively, using this on-chip platform. Regularization In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator. The heat kernel or exponentially regulated sum is $E(t) = \tfrac{1}{2} \sum_n \hbar |\omega_n| e^{-t|\omega_n|}$, where the limit $t \to 0^+$ is taken in the end. The divergence of the sum is typically manifested as a bulk contribution growing as $1/t^4$ for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant, which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex $s$ plane, with the bulk divergence at $s = 4$. This sum may be analytically continued past this pole, to obtain a finite part at $s = 0$. Not every cavity configuration necessarily leads to a finite part (the lack of a pole at $s = 0$) or shape-independent infinite parts. 
In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lifshitz, "Theory of Continuous Media".) Generalities The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles". More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the Van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects. In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon. A "pseudo-Casimir" effect can be found in liquid crystal systems, where the boundary conditions imposed through anchoring by rigid walls give rise to a long-range force, analogous to the force that arises between conducting plates. Dynamical Casimir effect The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s. In May 2011 an announcement was made by researchers at the Chalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed this would be the first experimental verification of the dynamical Casimir effect. In March 2013 an article appeared on the PNAS scientific journal describing an experiment that demonstrated the dynamical Casimir effect in a Josephson metamaterial. In July 2019 an article was published describing an experiment providing evidence of optical dynamical Casimir effect in a dispersion-oscillating fibre. In 2020, Frank Wilczek et al., proposed a resolution to the information loss paradox associated with the moving mirror model of the dynamical Casimir effect. 
Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect (moving mirror) has been used to help understand the Unruh effect. Repulsive forces There are a few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise. This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was carried out by Munday et al., who described it as "quantum levitation". Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers–Kronig relations). Casimir and Casimir–Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion, see Milton et al. A notable recent development on repulsive Casimir forces relies on using chiral materials. Q.-D. Jiang at Stockholm University and Nobel Laureate Frank Wilczek at MIT showed that a chiral "lubricant" can generate repulsive, enhanced, and tunable Casimir interactions. Timothy Boyer showed in his work published in 1968 that a conductor with spherical symmetry will also show this repulsive force, and the result is independent of radius. Further work shows that the repulsive force can be generated with materials of carefully chosen dielectrics. Speculative applications It has been suggested that the Casimir forces have application in nanotechnology, in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, and so-called Casimir oscillators. In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that the Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, the Casimir effect might be the critical factor in the stiction failure of MEMS. In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device. The researchers suspended a polysilicon plate from a torsional rod, a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations. The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, but the energy densities cannot be arbitrarily negative, as the theory breaks down at atomic distances. Prominent physicists such as Stephen Hawking and Kip Thorne have speculated that such effects might make it possible to stabilize a traversable wormhole. 
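Most of the measurements and MEMS experiments described above use the sphere-plate geometry rather than two parallel plates. A quick way to estimate the force in that geometry is the Derjaguin (proximity-force) approximation mentioned earlier, which multiplies the parallel-plate energy per unit area by the circumference factor $2\pi R$, giving $F \approx \pi^3 \hbar c R / (360\, a^3)$ for a sphere of radius $R$ at gap $a \ll R$. The Python sketch below implements that estimate under the stated idealizations (perfect conductors, zero temperature, no surface roughness); the function names and the sample radius and gaps are illustrative assumptions, not values taken from any specific experiment.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299_792_458.0       # speed of light, m/s

def casimir_energy_per_area(a_m: float) -> float:
    """Ideal parallel-plate Casimir energy per unit area, pi^2*hbar*c/(720 a^3), in J/m^2."""
    return math.pi**2 * HBAR * C / (720.0 * a_m**3)

def sphere_plate_force(radius_m: float, gap_m: float) -> float:
    """Proximity-force (Derjaguin) estimate of the sphere-plate Casimir force, valid for R >> a.

    F ~= 2*pi*R * U_pp(a) = pi^3*hbar*c*R / (360 a^3), returned in newtons (attractive magnitude).
    """
    return 2.0 * math.pi * radius_m * casimir_energy_per_area(gap_m)

if __name__ == "__main__":
    R = 100e-6  # 100 micrometre sphere, an illustrative size for torsional-oscillator experiments
    for gap_nm in (100, 200, 500):
        a = gap_nm * 1e-9
        print(f"gap = {gap_nm:>4} nm  F ~ {sphere_plate_force(R, a):.3e} N")
```

The output, a fraction of a nanonewton at a 100 nm gap for a 100 micrometre sphere, illustrates why such measurements require torsion balances or microresonators rather than ordinary force sensors.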
See also Negative energy Scharnhorst effect Van der Waals force Squeezed vacuum References Further reading Introductory readings Casimir effect description from University of California, Riverside's version of the Usenet physics FAQ. A. Lambrecht, The Casimir effect: a force from nothing, Physics World, September 2002. Papers, books and lectures (Includes discussion of French naval analogy.) (Also includes discussion of French naval analogy.) Patent No. PCT/RU2011/000847 Author Urmatskih. Temperature dependence Measurements Recast Usual View of Elusive Force from NIST External links Casimir effect article search on arxiv.org G. Lang, The Casimir Force web site, 2002 J. Babb, bibliography on the Casimir Effect web site, 2009 H. Nikolic, The origin of Casimir effect; Vacuum energy or van der Waals force? presentation slides, 2018 Quantum field theory Physical phenomena Force Levitation Articles containing video clips
Casimir effect
[ "Physics", "Mathematics" ]
5,604
[ "Quantum field theory", "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Quantum mechanics", "Levitation", "Motion (physics)", "Wikipedia categories named after physical quantities", "Matter" ]
7,591
https://en.wikipedia.org/wiki/Cholera
Cholera () is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea lasting a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure. Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, insufficient clean drinking water, and poverty. Cholera can be diagnosed by a stool test, or a rapid dipstick test, although the dipstick test is less accurate. Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months, and confer the added benefit of protecting against another type of diarrhea caused by E. coli. In 2017, the US Food and Drug Administration (FDA) approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least three years (the period tested). The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. In children, zinc supplementation has also been found to improve outcomes. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. The choice of antibiotic is aided by antibiotic sensitivity testing. Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. To date, seven cholera pandemics have occurred, with the most recent beginning in 1961, and continuing today. The illness is rare in high-income countries, and affects children most severely. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5%, given improved treatment, but may be as high as 50% without such access to treatment. Descriptions of cholera are found as early as the 5th century BCE in Sanskrit literature. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water, and a map of the same was the first recorded incidence of epidemiological tracking. Signs and symptoms The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. 
The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids. Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children. Cause Transmission Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in numbers of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. Selective pressures exist, however, in the aquatic environment that may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or more conservatively 'active but non-culturable' (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage. 
Susceptibility About 100million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to develop a severe case if they become infected. Any individual, even a healthy adult in middle age, can undergo a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider. The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection. Mechanism When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. 
cholerae carry a variant of a temperate bacteriophage called CTXφ. Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly. By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine." Genetic structure Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent. Antibiotic resistance In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies. Diagnosis A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started without or before confirmation by laboratory analysis, especially where cholera is a common problem. 
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States. Prevention The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas. Water, sanitation and hygiene Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted: Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents. Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets that do not contribute to water pollution, as they do not flush with water, may be an interesting alternative to flush toilets. Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use. Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. 
Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa. Surveillance Surveillance and prompt reporting allow cholera epidemics to be contained rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, enabling a coordinated response and assisting in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities. Vaccination Spanish physician Jaume Ferran i Clua developed the first successful cholera inoculation in 1885, the first to immunize humans against a bacterial disease. His vaccine and inoculation programme were controversial and were rejected by his peers and several investigation commissions, but they ultimately demonstrated their effectiveness and were recognized for it: of the roughly 30,000 people he vaccinated, only 54 died. Russian-Jewish bacteriologist Waldemar Haffkine also developed a human cholera vaccine in July 1892 and conducted a massive inoculation program in British India. Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is a live attenuated oral vaccine that is effective for adults aged 18–64 as a single dose. One injectable vaccine was found to be effective for two to three years; the protective efficacy was 28% lower in children less than five years old. However, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high. 
OCV has been recognized as an adjunct tool for prevention and control of cholera. The WHO has prequalified three bivalent cholera vaccines: Dukoral (SBL Vaccines), which contains a non-toxic B-subunit of cholera toxin and provides protection against V. cholerae O1, and two vaccines developed using the same transfer of technology, Shanchol (Shantha Biotec) and Euvichol (EuBiologics Co.), which are oral killed bivalent vaccines against both O1 and O139. Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists. Sari filtration Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. Used sari cloth is preferable, but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count; though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable. Treatment Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently." Fluids The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to be highly successful. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste (these quantities are illustrated in the rough sketch below). Electrolytes Because acidosis is frequently present at first, the potassium level may be normal even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This is best done with oral rehydration solution (ORS). 
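To make the quantities above concrete, the following minimal Python sketch turns the two rules of thumb quoted in this section (roughly ten percent of body weight in fluid over the first two to four hours in severe cases, and the home recipe of half a teaspoon of salt and six teaspoons of sugar per liter of boiled water) into a small calculation. The function names are invented for this illustration, and the output is not a substitute for clinical guidance.

# Rough illustration of the rehydration quantities mentioned above.
# Hypothetical helper names; illustrative only, not a clinical tool.
def initial_fluid_estimate_liters(body_weight_kg: float) -> float:
    """About 10% of body weight in fluid over the first 2-4 hours in severe cases."""
    return 0.10 * body_weight_kg  # 1 kg of water is roughly 1 liter

def home_ors_recipe(liters_of_boiled_water: float) -> dict:
    """Scale the home recipe: per liter, 1/2 teaspoon of salt and 6 teaspoons of sugar."""
    return {
        "boiled_water_l": liters_of_boiled_water,
        "salt_tsp": 0.5 * liters_of_boiled_water,
        "sugar_tsp": 6.0 * liters_of_boiled_water,
    }

# Example: a hypothetical 60 kg adult with severe dehydration.
print(initial_fluid_estimate_liters(60.0))  # about 6.0 liters in the first 2-4 hours
print(home_ors_recipe(1.0))                 # one liter of the home recipe

For a 60 kg adult this gives about 6 liters in the first few hours, which is consistent with the volumes of fluid loss described earlier in the article.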
Antibiotics Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes both in those who are severely dehydrated and in those who are not. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin. Zinc supplementation In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world. Prognosis If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill. Epidemiology Cholera affects an estimated 2.8 million people worldwide and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000). This occurs mainly in the developing world. In the early 1980s, the number of deaths is believed to have still been higher than three million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world. Recent major outbreaks include the 2010s Haiti cholera outbreak and the 2016–2022 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen; WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the reported 923,037 cholera cases were from Yemen (with 1,911 deaths reported). Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting by countries that report suspected cases (and not laboratory-confirmed cases), as well as under-reporting by countries that do not report official cases (such as Bangladesh, India and the Philippines). Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease. Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru. 
This was followed by the 2010s Haiti cholera outbreak and another outbreak of cholera in Haiti amid the 2018–2023 Haitian crisis. The disease is endemic in Africa and in some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas. History of outbreaks The word cholera is from Greek kholera, derived from χολή kholē, "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries. References to cholera appear in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave the first European descriptions of other diseases.) At the time, however, the word "cholera" was used by European physicians to refer to any gastrointestinal upset resulting in yellow diarrhea. De Bondt thus used a common word already in regular use to describe the new disease, a frequent practice of the time. It was not until the 1830s that the name for severe yellow diarrhea changed in English from "cholera" to "cholera morbus" to differentiate it from what was then known as "Asiatic cholera", that is, the disease associated with origins in India and the East. Early outbreaks in the Indian subcontinent are believed to have been the result of crowded, poor living conditions, as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by travelers along trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred since the early 19th century; the first one did not reach the Americas. The seventh pandemic originated in Indonesia in 1961. The first cholera pandemic occurred in the Bengal region of India, near Calcutta, beginning in 1817 and lasting until 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe. Advancements in transport and global trade, and increased human migration, including soldiers, meant that more people were carrying the disease more widely. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached North and South America. It was introduced to North America at Quebec, Canada, via Irish immigrants fleeing the Great Famine. In this pandemic, Brazil was affected for the first time. The fourth pandemic lasted from 1863 to 1875, spreading from India to Naples and Spain, and reaching the United States at New Orleans, Louisiana, in 1873. It spread throughout the Mississippi River system on the continent. The fifth pandemic was from 1881 to 1896. It started in India and spread to Europe, Asia, and South America. The sixth pandemic ran from 1899 to 1923. These epidemics caused fewer fatalities because physicians and researchers had a greater understanding of the cholera bacterium. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics. 
Other areas, such as Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910 to 1911, also had severe outbreaks. The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists in developing countries. This pandemic had initially subsided about 1975 and was thought to have ended, but, as noted, it has persisted; there was a rise in cases in the 1990s, and cases have continued since. Cholera became widespread in the 19th century. Since then it has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people died from the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera officially became the first reportable disease in the United States due to the significant effects it had on health. John Snow, in England, was in 1854 the first to identify the importance of contaminated water as its source of transmission. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but it still strongly affects populations in developing countries. In the past, vessels flew a yellow quarantine flag if any crew members or passengers had cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. Historically, many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory, that the disease was transmitted by bad air. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were included in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used, following Samuel Hahnemann, who had laid down three main remedies for the disease: camphor in early and simple cases, cuprum in later stages with excessive cramping, and veratrum album where evacuations and cold sweat were profuse. Homoeopaths around the world still refer to these as the cholera "trio" of remedies. T. J. Ritter's Mother's Remedies book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie. The first effective human vaccine was developed in 1885, and the first effective antibiotic was developed in 1948. Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. In the 19th century, the United States, for example, had a severe cholera problem similar to those in some developing countries. It had three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae spreading through interior waterways such as the Erie Canal and the extensive Mississippi River valley system, as well as the major ports along the Eastern Seaboard and their cities upriver. The island of Manhattan in New York City touches the Atlantic Ocean, and cholera collected from river waters and ship discharges just off the coast. At this time, New York City did not have as effective a sanitation system as it developed in the later 20th century, so cholera spread through the city's water supply. Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically to what is now defined as the disease of cholera. 
Research One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was increasingly seen as plausible as medical microbiology developed over the next 30 years or so. For his work on cholera, John Snow is often regarded as the "Father of Epidemiology". The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. In the same year, the Catalan Joaquim Balcells i Pascual discovered the bacterium. In 1856 António Augusto da Costa Simões and José Ferreira de Macedo Pinto, two Portuguese researchers, are believed to have done the same. Between the mid-1850s and the 1900s, cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease. Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salt (ORS) to treat diarrhea. In his 1953 paper, published in The Lancet, he states that promethazine can stop vomiting during cholera and then oral rehydration is possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose and 1000 ml of water. Indian medical scientist Sambhu Nath De discovered the cholera toxin, the animal model of cholera, and successfully demonstrated the method of transmission of cholera pathogen Vibrio cholerae. Robert Allan Phillips, working at US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques. He developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967. More recently, in 2002, Alam, et al., studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found the bacterium creates a hyperinfected state where genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes were induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection. Global Strategy In 2017, the WHO launched the "Ending Cholera: a global roadmap to 2030" strategy which aims to reduce cholera deaths by 90% by 2030. The strategy was developed by the Global Task Force on Cholera Control (GTFCC) which develops country-specific plans and monitors progress. 
The approach to achieving this goal combines surveillance, water sanitation, rehydration treatment and oral vaccines. Specifically, the control strategy focuses on three approaches: i) early detection of outbreaks and rapid response to contain them, ii) stopping cholera transmission through improved sanitation and vaccines in hotspots, and iii) a global framework for cholera control through the GTFCC. The WHO and the GTFCC do not consider global cholera eradication a viable goal. Even though humans are the only host of cholera, the bacterium can persist in the environment without a human host. While global eradication is not possible, elimination of human-to-human transmission may be possible. Local elimination is possible, and efforts have been under way most recently in response to the 2010s Haiti cholera outbreak; Haiti aims to achieve certification of elimination by 2022. The GTFCC targets 47 countries, 13 of which have established vaccination campaigns. Society and culture Health policy In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have a greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well. Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs; the wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers. According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated large numbers of Vibrio cholerae. Having an effective surveillance program contributes to a government's ability to prevent cholera from spreading. In the year 2000, in the state of Kerala in India, the Kottayam district was determined to be "Cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens through 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas and were not recognized by the government's surveillance program. This inhibited physicians' ability to detect cholera cases early. According to Colwell, the quality and inclusiveness of a country's health care system affect the control of cholera, as they did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. 
Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important. Beyond its direct role in maintaining (or neglecting) the public health care system and water sanitation, a government can also have indirect effects on cholera control and on the effectiveness of its response to cholera. A country's government can affect its ability to prevent disease and control its spread. A speedy government response backed by a fully functioning health care system and financial resources can prevent cholera's spread, limiting deaths and also the loss of schooling that occurs when children are kept out of school to minimize the risk of infection. Conversely, a poor government response can lead to civil unrest and cholera riots. Notable cases Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days before his death. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide. 2010s Haiti cholera outbreak: ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. This was the worst cholera outbreak in recent history, as well as the best-documented cholera outbreak in modern public health. Adam Mickiewicz, Polish poet and novelist, is thought to have died of cholera in Istanbul in 1855. Sadi Carnot, physicist, a pioneer of thermodynamics (d. 1832). Charles X, King of France (d. 1836). James K. Polk, eleventh president of the United States (d. 1849). Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831). Elliot Bovill, Chief Justice of the Straits Settlements (1893). Nikola Tesla, Serbian-American inventor, engineer and futurist known for his contributions to the design of the modern alternating current (AC) electricity supply system, contracted cholera in 1873 at the age of 17. He was bedridden for nine months and near death multiple times, but survived and fully recovered. In popular culture Unlike tuberculosis ("consumption"), which in literature and the arts was often romanticized as a disease of denizens of the demimonde or those with an artistic temperament, cholera is a disease which almost entirely affects the poor living in unsanitary conditions. This, and the unpleasant course of the disease – which includes voluminous "rice-water" diarrhea, the hemorrhaging of liquids from the mouth, and violent muscle contractions which continue even after death – has discouraged the disease from being romanticized, or even being factually presented in popular culture. The 1889 novel Mastro-don Gesualdo by Giovanni Verga presents the course of a cholera epidemic across the island of Sicily, but does not show the suffering of those affected. Cholera is a major plot device in The Painted Veil, a 1925 novel by W. Somerset Maugham. The story concerns a shy bacteriologist who discovers his young, pretty wife is having an adulterous affair. The doctor exacts revenge on his wife by inducing her to travel with him to mainland China, which is in the grip of a horrific cholera outbreak. 
The ravages of the disease are frankly described in the novel. In Thomas Mann's novella Death in Venice, first published in 1912 as Der Tod in Venedig, Mann "presented the disease as emblematic of the final 'bestial degradation' of the sexually transgressive author Gustav von Aschenbach." Contrary to the actual facts of how violently cholera kills, Mann has his protagonist die peacefully on a beach in a deck chair. Luchino Visconti's 1971 film version also hid from the audience the actual course of the disease. Mann's novella was also made into an opera by Benjamin Britten in 1973 (his last), and into a ballet by John Neumeier for his Hamburg Ballet company in December 2003. The Horseman on the Roof (orig. French Le Hussard sur le toit) is a 1951 adventure novel written by Jean Giono. It tells the story of Angelo Pardi, a young Italian carbonaro colonel of hussars, caught up in the 1832 cholera epidemic in Provence. In 1995, it was made into a film of the same name directed by Jean-Paul Rappeneau. In Gabriel García Márquez's 1985 novel Love in the Time of Cholera, cholera is "a looming background presence rather than a central figure requiring vile description." The novel was adapted in 2007 for the film of the same name directed by Mike Newell. In The Secret Garden, Mary Lennox's parents die from cholera. Country examples Zambia In Zambia, widespread cholera outbreaks have occurred since 1977, most commonly in the capital city of Lusaka. In 2017, an outbreak of cholera was declared in Zambia after laboratory confirmation of Vibrio cholerae O1, biotype El Tor, serotype Ogawa, from stool samples from two patients with acute watery diarrhea. There was a rapid increase in the number of cases, from several hundred in early December 2017 to approximately 2,000 by early January 2018. With intensification of the rains, new cases increased daily, reaching a peak in the first week of January 2018 with over 700 cases reported. In collaboration with partners, the Zambia Ministry of Health (MoH) launched a multifaceted public health response that included increased chlorination of the Lusaka municipal water supply, provision of emergency water supplies, water quality monitoring and testing, enhanced surveillance, epidemiologic investigations, a cholera vaccination campaign, aggressive case management and health care worker training, and laboratory testing of clinical samples. The Zambian Ministry of Health had earlier implemented a reactive one-dose oral cholera vaccine (OCV) campaign in April 2016 in three Lusaka compounds, followed by a pre-emptive second round in December. Nigeria In June 2024, the Nigeria Centre for Disease Control and Prevention (NCDC) announced a total of 1,141 suspected and 65 confirmed cases of cholera with 30 deaths from 96 Local Government Areas (LGAs) in 30 states of the country. NCDC, in its public health advisory, said Abia, Bayelsa, Bauchi, Cross River, Delta, Imo, Katsina, Lagos, Nasarawa and Zamfara were the 10 states that contributed 90 percent of the burden of cholera in the country at the time. India The city of Kolkata in the Indian state of West Bengal, in the Ganges delta, has been described as the "homeland of cholera", with regular outbreaks and pronounced seasonality. In India, where the disease is endemic, cholera outbreaks occur every year between the dry and rainy seasons. 
India is also characterized by high population density, unsafe drinking water, open drains, and poor sanitation, which together provide an optimal niche for the survival, sustenance and transmission of Vibrio cholerae. Democratic Republic of Congo In Goma in the Democratic Republic of Congo, cholera has left an enduring mark on human and medical history. Cholera pandemics in the 19th and 20th centuries led to the growth of epidemiology as a science, and in recent years the disease has continued to drive advances in the concepts of disease ecology, basic membrane biology, and transmembrane signaling, and in the use of scientific information in treatment design. Further reading Bilson, Geoffrey. A Darkened House: Cholera in Nineteenth-Century Canada (U of Toronto Press, 1980). Gilbert, Pamela K. Cholera and Nation: Doctoring the Social Body in Victorian England (SUNY Press, 2008). Snowden, Frank M. Naples in the Time of Cholera, 1884–1911 (Cambridge UP, 1995). Vinten-Johansen, Peter, ed. Investigating Cholera in Broad Street: A History in Documents (Broadview Press, 2020), regarding the 1850s in England. Vinten-Johansen, Peter, et al. Cholera, Chloroform, and the Science of Medicine: A Life of John Snow (2003). External links Prevention and control of cholera outbreaks: WHO policy and recommendations (World Health Organization); Cholera (World Health Organization); Cholera – Vibrio cholerae infection (Centers for Disease Control and Prevention) Diarrhea Foodborne illnesses Gastrointestinal tract disorders Intestinal infectious diseases Tropical diseases Epidemics Pandemics Sanitation Waterborne diseases Vaccine-preventable diseases
Cholera
[ "Biology" ]
10,365
[ "Vaccination", "Vaccine-preventable diseases" ]
7,593
https://en.wikipedia.org/wiki/Calculator
An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics. The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator company Busicom. Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most, and they became common in schools. In addition to general-purpose calculators, there are those designed for specific markets. For example, there are scientific calculators, which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher-dimensional Euclidean space. Basic calculators cost little, but scientific and graphing models tend to cost more. Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and interactive BASIC could be used to do calculations on most 1970s and 1980s home computers. Calculator functions are included in most smartphones, tablets, and personal digital assistant (PDA) type devices. With the very wide availability of smartphones and the like, dedicated hardware calculators, while still widely used, are less common than they once were. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. By 2007, this had diminished to less than 0.05%. Design Input Electronic calculators contain a keyboard with buttons for digits and arithmetical operations; some even contain "00" and "000" buttons to make larger or smaller numbers easier to enter. Most basic calculators assign only one digit or operation to each button; however, in more specific calculators, a button can perform multiple functions when used in key combinations. Display output Calculators usually have liquid-crystal displays (LCD) as output in place of historical light-emitting diode (LED) displays and vacuum fluorescent displays (VFD); details are provided in the section Technical improvements. Large figures are often used to improve readability, and a decimal separator (usually a point rather than a comma) is used instead of, or in addition to, vulgar fractions. Various symbols for function commands may also be shown on the display. Fractions such as 1/3 are displayed as decimal approximations, for example rounded to 0.33333333. Also, some fractions (such as 1/7, which is 0.14285714285714 to 14 significant figures) can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers. Memory Calculators also have the ability to save numbers into computer memory. Basic calculators usually store only one number at a time; more specific types are able to store many numbers represented in variables. Usually these variables are named ans or ans(0). The variables can also be used for constructing formulas. Some models have the ability to extend memory capacity to store more numbers; the extended memory address is termed an array index. 
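The memory behaviour just described can be pictured as one result register, one accumulating memory register, and an optional indexed array for extended memory. The small Python sketch below is purely illustrative of that idea; the class and method names are invented here and do not correspond to any particular calculator's firmware.

# Illustrative model of calculator memory: an "ans" result register,
# a single M+/M-/MR/MC memory register, and indexed extended memory.
class CalculatorMemory:
    def __init__(self, extended_slots: int = 0):
        self.ans = 0.0                      # last result, as in "ans" on many models
        self.m = 0.0                        # the single memory register of a basic calculator
        self.ext = [0.0] * extended_slots   # extended memory, addressed by an array index

    def memory_plus(self, value: float) -> None:
        self.m += value                     # M+

    def memory_minus(self, value: float) -> None:
        self.m -= value                     # M-

    def memory_recall(self) -> float:
        return self.m                       # MR

    def memory_clear(self) -> None:
        self.m = 0.0                        # MC

    def store(self, index: int, value: float) -> None:
        self.ext[index] = value             # store into an extended memory slot

    def recall(self, index: int) -> float:
        return self.ext[index]              # recall a slot by its array index

A basic calculator corresponds to extended_slots=0, while a model with extended memory simply exposes additional slots addressed by index.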
Power source Power sources of calculators are batteries, solar cells or mains electricity (for old models). They are turned on with a switch or button. Some models have no turn-off button but provide some other way to switch off (for example, after a period with no key presses, when the solar cell is covered, or when the lid is closed). Crank-powered calculators were also common in the early computer era. Key layout The following keys are common to most pocket calculators. While the arrangement of the digits is standard, the positions of other keys vary from model to model. The arrangement of digits on calculator and other numeric keypads, with the 7-8-9 keys two rows above the 1-2-3 keys, is derived from calculators and cash registers. It is notably different from the layout of telephone Touch-Tone keypads, which have the 1-2-3 keys on top and the 7-8-9 keys on the third row. Internal workings In general, a basic electronic calculator consists of the following components: Power source (mains electricity, battery and/or solar cell) Keypad (input device) – consists of keys used to input numbers and function commands (addition, multiplication, square-root, etc.) Display panel (output device) – displays input numbers, commands and results. Liquid-crystal displays (LCDs), vacuum fluorescent displays (VFDs), and light-emitting diode (LED) displays use seven segments to represent each digit in a basic calculator. Advanced calculators may use dot matrix displays. A printing calculator, in addition to a display panel, has a printing unit that prints results in ink onto a roll of paper, using a printing mechanism. Processor chip (microprocessor or central processing unit). Clock rate of a processor chip refers to the frequency at which the central processing unit (CPU) is running. It is used as an indicator of the processor's speed, and is measured in clock cycles per second or hertz (Hz). For basic calculators, the speed can vary from a few hundred hertz to the kilohertz range. Example A basic explanation as to how calculations are performed in a simple four-function calculator: To perform the calculation 25 + 9, one presses keys in the following sequence on most calculators: 2 5 + 9 =. When 25 is entered, it is picked up by the scanning unit; the number 25 is encoded and sent to the X register; Next, when the + key is pressed, the "addition" instruction is also encoded and sent to the flag or the status register; The second number, 9, is encoded and sent to the X register. This "pushes" (shifts) the first number out into the Y register; When the = key is pressed, a "message" (signal) from the flag or status register tells the permanent or non-volatile memory that the operation to be done is "addition"; The numbers in the X and Y registers are then loaded into the ALU and the calculation is carried out following instructions from the permanent or non-volatile memory; The answer, 34, is sent (shifted) back to the X register. From there, it is converted by the binary decoder unit into a decimal number (usually binary-coded decimal), and then shown on the display panel. Other functions are usually performed using repeated additions or subtractions. Numeric representation Most pocket calculators do all their calculations in binary-coded decimal (BCD) rather than binary. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. 
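As a rough illustration of the idea, binary-coded decimal stores each decimal digit in its own four-bit group (nibble), so a display driver can hand each group directly to one seven-segment digit. The helper functions below are a minimal Python sketch written for this article, not code from any actual calculator chip.

# Minimal BCD sketch: each decimal digit occupies its own 4-bit nibble.
def to_bcd(n: int) -> int:
    """Pack a non-negative integer into BCD, one decimal digit per nibble."""
    result, shift = 0, 0
    while n > 0:
        n, digit = divmod(n, 10)
        result |= digit << shift
        shift += 4
    return result

def from_bcd(bcd: int) -> int:
    """Unpack a BCD value back into an ordinary integer."""
    value, place = 0, 1
    while bcd > 0:
        value += (bcd & 0xF) * place
        bcd >>= 4
        place *= 10
    return value

# The example result 34 above becomes 0x34 in BCD: nibble 3 followed by nibble 4,
# and each nibble can drive one seven-segment digit directly.
assert hex(to_bcd(34)) == "0x34"
assert from_bcd(0x34) == 34

Converting the result register in this way is what lets a simple decoder map each nibble straight onto one digit of the display.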
By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware; a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. (For example, CDs keep the track number in BCD, limiting them to 99 tracks.) The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities. Where calculators have added functions (such as square root, or trigonometric functions), software algorithms are required to produce high-precision results. Sometimes significant design effort is needed to fit all the desired functions into the limited memory space available in the calculator chip, with acceptable calculation time. History Precursors to the electronic calculator The first known tools used to aid arithmetic calculations were bones (used to tally items), pebbles, counting boards, and the abacus, known to have been used by Sumerians and Egyptians before 2000 BC. Except for the Antikythera mechanism (an astronomical device seemingly far ahead of its time), development of computing tools arrived near the start of the 17th century: the geometric-military compass (by Galileo), logarithms and Napier's bones (by Napier), and the slide rule (by Edmund Gunter). The Renaissance saw the invention of the mechanical calculator by Wilhelm Schickard in 1623, and later by Blaise Pascal in 1642. Pascal's device was at times somewhat over-promoted as being able to perform all four arithmetic operations with minimal human intervention; in fact, his calculator could add and subtract two numbers directly and thus, if the tedium could be borne, multiply and divide by repetition. Schickard's machine, constructed several decades earlier, used a clever set of mechanised multiplication tables to ease the process of multiplication and division, with the adding machine as a means of completing this operation. There is a debate about whether Pascal or Schickard should be credited as the inventor of the calculating machine, owing to the differences (such as the different aims) of the two inventions. Schickard and Pascal were followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator, the stepped reckoner, inventing in the process his Leibniz wheel, but who was never able to produce a fully operational machine. There were also five unsuccessful attempts to design a calculating clock in the 17th century. The 18th century saw the arrival of some notable improvements, first by Poleni with the first fully functional calculating clock and four-operation machine, but these machines were almost always one of a kind. 
Luigi Torchi invented the first direct multiplication machine in 1834; this was also the second key-driven machine in the world, following that of James White (1822). It was not until the 19th century and the Industrial Revolution that real developments began to occur. Although machines capable of performing all four arithmetic functions existed prior to the 19th century, the refinement of manufacturing and fabrication processes on the eve of the Industrial Revolution made large-scale production of more compact and modern units possible. The Arithmometer, invented in 1820 as a four-operation mechanical calculator, was released to production in 1851 as an adding machine and became the first commercially successful unit; forty years later, by 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two arithmometer clone makers (Burkhardt, Germany, 1878, and Layton, UK, 1883), while Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers. It wasn't until 1902 that the familiar push-button user interface was developed, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States. In 1921, Edith Clarke invented the "Clarke calculator", a simple graph-based calculator for solving line equations involving hyperbolic functions. This allowed electrical engineers to simplify calculations for inductance and capacitance in power transmission lines. The Curta calculator was developed in 1948 and, although costly, became popular for its portability. This purely mechanical hand-held device could do addition, subtraction, multiplication and division. By the early 1970s electronic pocket calculators had ended the manufacture of mechanical calculators, although the Curta remains a popular collectable item. Development of electronic calculators The first mainframe computers, initially using vacuum tubes and later transistors in the logic circuits, appeared in the 1940s and 1950s. Electronic circuits developed for computers also had application to electronic calculators. The Casio Computer Company, in Japan, released the Model 14-A calculator in 1957, which was the world's first all-electric, relatively compact calculator. It did not use electronic logic but was based on relay technology, and was built into a desk. The IBM 608 plugboard programmable calculator was IBM's first all-transistor product, released in 1957; this was a console-type system, with input and output on punched cards, and replaced the earlier, larger, vacuum-tube IBM 603. In October 1961, the world's first all-electronic desktop calculator, the British Bell Punch/Sumlock Comptometer ANITA (A New Inspiration To Arithmetic/Accounting), was announced. This machine used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. Two models were displayed, the Mk VII for continental Europe and the Mk VIII for Britain and the rest of the world, both for delivery from early 1962. The Mk VII was a slightly earlier design with a more complicated mode of multiplication, and was soon dropped in favour of the simpler Mark VIII. The ANITA had a full keyboard, similar to mechanical comptometers of the time, a feature that was unique to it and the later Sharp CS-10A among electronic calculators. The ANITA was heavy due to its large tube system. 
Bell Punch had been producing key-driven mechanical calculators of the comptometer type under the names "Plus" and "Sumlock", and had realised in the mid-1950s that the future of calculators lay in electronics. They employed the young graduate Norbert Kitz, who had worked on the early British Pilot ACE computer project, to lead the development. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology of the ANITA was superseded in June 1963 by the U.S.-manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a cathode-ray tube (CRT), and introduced Reverse Polish Notation (RPN) to the calculator market, at a price of $2,200, about three times the cost of an electromechanical calculator of the time. Like Bell Punch, Friden was a manufacturer of mechanical calculators that had decided that the future lay in electronics. In 1964 more all-transistor electronic calculators were introduced: Sharp introduced the CS-10A, which cost 500,000 yen, and Industria Macchine Elettroniche of Italy introduced the IME 84, to which several extra keyboard and display units could be connected so that several people could make use of it (but apparently not at the same time). The Victor 3900 was the first to use integrated circuits in place of individual transistors, but production problems delayed sales until 1966. There followed a series of electronic calculator models from these and other manufacturers, including Canon, Mathatronics, Olivetti, SCM (Smith-Corona-Marchant), Sony, Toshiba, and Wang. The early calculators used hundreds of germanium transistors, which were cheaper than silicon transistors, on multiple circuit boards. Display types used were CRT, cold-cathode Nixie tubes, and filament lamps. Memory technology was usually based on delay-line memory or magnetic-core memory, though the Toshiba "Toscal" BC-1411 appears to have used an early form of dynamic RAM built from discrete components. Already there was a desire for smaller and less power-hungry machines. Bulgaria's ELKA 6521, introduced in 1965, was developed by the Central Institute for Calculation Technologies and built at the Elektronika factory in Sofia. The name derives from ELektronen KAlkulator. It was the first calculator in the world to include a square root function. Later that same year the ELKA 22 (with a luminescent display) and the ELKA 25, with a built-in printer, were released. Several other models were developed until the first pocket model, the ELKA 101, was released in 1974. The writing on it was in Roman script, and it was exported to western countries. 
A large, printing, desk-top unit, with an attached floor-standing logic tower, it could be programmed to perform many computer-like functions. However, the only branch instruction was an implied unconditional branch (GOTO) at the end of the operation stack, returning the program to its starting instruction. Thus, it was not possible to include any conditional branch (IF-THEN-ELSE) logic. During this era, the absence of the conditional branch was sometimes used to distinguish a programmable calculator from a computer. The first Soviet programmable desktop calculator ISKRA 123, powered by the power grid, was released at the start of the 1970s. 1970s to mid-1980s The electronic calculators of the mid-1960s were large and heavy desktop machines due to their use of hundreds of transistors on several circuit boards with a large power consumption that required an AC power supply. There were great efforts to put the logic required for a calculator into fewer and fewer integrated circuits (chips) and calculator electronics was one of the leading edges of semiconductor development. U.S. semiconductor manufacturers led the world in large scale integration (LSI) semiconductor development, squeezing more and more functions into individual integrated circuits. This led to alliances between Japanese calculator manufacturers and U.S. semiconductor companies: Canon Inc. with Texas Instruments, Hayakawa Electric (later renamed Sharp Corporation) with North-American Rockwell Microelectronics (later renamed Rockwell International), Busicom with Mostek and Intel, and General Instrument with Sanyo. Pocket calculators By 1970, a calculator could be made using just a few chips of low power consumption, allowing portable models powered from rechargeable batteries. The first handheld calculator was a 1967 prototype called Cal Tech, whose development was led by Jack Kilby at Texas Instruments in a research project to produce a portable calculator. It could add, multiply, subtract, and divide, and its output device was a paper tape. As a result of the "Cal-Tech" project, Texas Instruments was granted master patents on portable calculators. The first commercially produced portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 "Mini Calculator", the Canon Pocketronic, and the Sharp QT-8B "micro Compet". The Canon Pocketronic was a development from the "Cal-Tech" project. It had no traditional display; numerical output was on thermal paper tape. Sharp put in great efforts in size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed 1.59 pounds (721 grams), had a vacuum fluorescent display, rechargeable NiCad batteries, and initially sold for US$395. However, integrated circuit development efforts culminated in early 1971 with the introduction of the first "calculator on a chip", the MK6010 by Mostek, followed by Texas Instruments later in the year. Although these early hand-held calculators were very costly, these advances in electronics, together with developments in display technology (such as the vacuum fluorescent display, LED, and LCD), led within a few years to the cheap pocket calculator available to all. In 1971, Pico Electronics and General Instrument also introduced their first collaboration in ICs, a full single chip calculator IC for the Monroe Royal Digital III calculator. 
Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. Pico and GI went on to have significant success in the burgeoning handheld calculator market. The first truly pocket-sized electronic calculator was the Busicom LE-120A "HANDY", which was marketed early in 1971. Made in Japan, this was also the first calculator to use an LED display, the first hand-held calculator to use a single integrated circuit (then proclaimed as a "calculator on a chip"), the Mostek MK6010, and the first electronic calculator to run off replaceable batteries; it used four AA-size cells. The first European-made pocket-sized calculator, the DB 800, was made in May 1971 by Digitron in Buje, Croatia (former Yugoslavia), with four functions, an eight-digit display, and special characters for a negative number and for a warning that the calculation has too many digits to display. The first American-made pocket-sized calculator, the Bowmar 901B (popularly termed the Bowmar Brain), came out in the autumn of 1971, with four functions and an eight-digit red LED display, while in August 1972 the four-function Sinclair Executive became the first slimline pocket calculator. It retailed for around £79 at the time; by the end of the decade, similar calculators were priced at less than £5. Following protracted development over the course of two years, including a botched partnership with Texas Instruments, Eldorado Electrodata released five pocket calculators in 1972. One, called the Touch Magic, was "no bigger than a pack of cigarettes" according to Administrative Management. The first Soviet-made pocket-sized calculator, the Elektronika B3-04, was developed by the end of 1973 and sold at the start of 1974. One of the first low-cost calculators was the Sinclair Cambridge, launched in August 1973. It retailed for £29.95, or £5 less in kit form, and later models included some scientific functions. The Sinclair calculators were successful because they were far cheaper than the competition; however, their design led to slow and less accurate computations of transcendental functions (maximum three decimal places of accuracy). Scientific pocket calculators Meanwhile, Hewlett-Packard (HP) had been developing a pocket calculator. Launched in early 1972, it was unlike the other basic four-function pocket calculators then available in that it was the first pocket calculator with scientific functions that could replace a slide rule. The $395 HP-35, along with nearly all later HP engineering calculators, uses reverse Polish notation (RPN), also called postfix notation. A calculation like "8 plus 5" is, using RPN, performed by pressing 8, ENTER, 5, and +, instead of the algebraic infix sequence 8, +, 5, =. It had 35 buttons and was based on the Mostek MK6020 chip. The first Soviet scientific pocket-sized calculator, the B3-18, was completed by the end of 1975. In 1973, Texas Instruments (TI) introduced the SR-10 (SR signifying slide rule), an algebraic-entry pocket calculator using scientific notation, for $150. Shortly afterwards, the SR-11 featured an added key for entering pi (π). It was followed the next year by the SR-50, which added log and trig functions to compete with the HP-35, and in 1977 by the mass-marketed TI-30 line, which is still produced. In 1978, a new company, Calculated Industries, arose, focusing on specialized markets. 
Their first calculator, the Loan Arranger (1978), was a pocket calculator marketed to the real estate industry with preprogrammed functions to simplify the process of calculating payments and future values. In 1985, CI launched a calculator for the construction industry called the Construction Master, which came preprogrammed with common construction calculations (such as angles, stairs, roofing math, pitch, rise, run, and feet-inch fraction conversions). This would be the first in a line of construction-related calculators. Programmable pocket calculators The first programmable pocket calculator was the HP-65, in 1974; it had a capacity of 100 instructions, and could store and retrieve programs with a built-in magnetic card reader. Two years later the HP-25C introduced continuous memory, i.e., programs and data were retained in CMOS memory during power-off. In 1979, HP released the first alphanumeric, programmable, expandable calculator, the HP-41C. It could be expanded with random-access memory (RAM, for memory) and read-only memory (ROM, for software) modules, and peripherals like bar code readers, microcassette and floppy disk drives, paper-roll thermal printers, and miscellaneous communication interfaces (RS-232, HP-IL, HP-IB). The first Soviet pocket battery-powered programmable calculator, the Elektronika B3-21, was developed by the end of 1976 and released at the start of 1977. Its successor, the Elektronika B3-34, was not backward compatible with the B3-21, even though it kept the reverse Polish notation (RPN). The B3-34 thus defined a new command set, which was used in a series of later programmable Soviet calculators. Despite very limited abilities (98 bytes of instruction memory and about 19 stack and addressable registers), people managed to write all kinds of programs for them, including adventure games and libraries of calculus-related functions for engineers. Hundreds, perhaps thousands, of programs were written for these machines, from practical scientific and business software, which was used in real-life offices and labs, to fun games for children. The Elektronika MK-52 calculator (using the extended B3-34 command set, and featuring internal EEPROM memory for storing programs and an external interface for EEPROM cards and other peripherals) was used in the Soviet space program (for the Soyuz TM-7 flight) as a backup for the on-board computer. This series of calculators was also noted for a large number of highly counter-intuitive, mysterious undocumented features, somewhat similar to the "synthetic programming" of the American HP-41, which were exploited by applying normal arithmetic operations to error messages, jumping to nonexistent addresses, and other methods. A number of respected monthly publications, including the popular science magazine Nauka i Zhizn (Наука и жизнь, Science and Life), featured special columns dedicated to optimization methods for calculator programmers and updates on undocumented features for hackers, which grew into a whole esoteric discipline with many branches, named "yeggogology" ("еггогология"). The error messages on those calculators appear as the word "ЕГГОГ" ("YEGGOG"), which resembles the English word "Error". A similar hacker culture in the US revolved around the HP-41, which was also noted for a large number of undocumented features and was much more powerful than the B3-34. Technical improvements Through the 1970s the hand-held electronic calculator underwent rapid development.
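The reverse Polish (postfix) entry order used by the HP models and the Elektronika series described above maps directly onto a pushdown stack, which is why it needs neither parentheses nor operator-precedence rules. The following Python sketch illustrates only the entry order; it is not the firmware of any actual calculator:

```python
# Minimal sketch of RPN (postfix) evaluation with a stack.
# Illustration only, not the firmware of any real calculator.

def rpn_eval(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # operand entered second
            a = stack.pop()           # operand entered first
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # a number is simply pushed
    return stack[-1]

# "8 plus 5" in RPN: operands are keyed first, then the operator.
print(rpn_eval(["8", "5", "+"]))            # 13.0
# No parentheses are needed for (8 + 5) * 2:
print(rpn_eval(["8", "5", "+", "2", "*"]))  # 26.0
```

With algebraic (infix) entry, by contrast, the machine has to hold the pending operation until the second operand has been completed.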
The red LED and blue/green vacuum fluorescent displays consumed a lot of power and the calculators either had a short battery life (often measured in hours, so rechargeable nickel-cadmium batteries were common) or were large so that they could take larger, higher capacity batteries. In the early 1970s liquid-crystal displays (LCDs) were in their infancy and there was a great deal of concern that they only had a short operating lifetime. Busicom introduced the Busicom LE-120A "HANDY" calculator, the first pocket-sized calculator and the first with an LED display, and announced the Busicom LC with LCD. However, there were problems with this display and the calculator never went on sale. The first successful calculators with LCDs were manufactured by Rockwell International and sold from 1972 by other companies under such names as: Dataking LC-800, Harden DT/12, Ibico 086, Lloyds 40, Lloyds 100, Prismatic 500 (a.k.a. P500), Rapid Data Rapidman 1208LC. The LCDs were an early form using the Dynamic Scattering Mode DSM with the numbers appearing as bright against a dark background. To present a high-contrast display these models illuminated the LCD using a filament lamp and solid plastic light guide, which negated the low power consumption of the display. These models appear to have been sold only for a year or two. A more successful series of calculators using a reflective DSM-LCD was launched in 1972 by Sharp Inc with the Sharp EL-805, which was a slim pocket calculator. This, and another few similar models, used Sharp's Calculator On Substrate (COS) technology. An extension of one glass plate needed for the liquid crystal display was used as a substrate to mount the needed chips based on a new hybrid technology. The COS technology may have been too costly since it was only used in a few models before Sharp reverted to conventional circuit boards. In the mid-1970s the first calculators appeared with field-effect, twisted nematic (TN) LCDs with dark numerals against a grey background, though the early ones often had a yellow filter over them to cut out damaging ultraviolet rays. The advantage of LCDs is that they are passive light modulators reflecting light, which require much less power than light-emitting displays such as LEDs or VFDs. This led the way to the first credit-card-sized calculators, such as the Casio Mini Card LC-78 of 1978, which could run for months of normal use on button cells. There were also improvements to the electronics inside the calculators. All of the logic functions of a calculator had been squeezed into the first "calculator on a chip" integrated circuits (ICs) in 1971, but this was leading edge technology of the time and yields were low and costs were high. Many calculators continued to use two or more ICs, especially the scientific and the programmable ones, into the late 1970s. The power consumption of the integrated circuits was also reduced, especially with the introduction of CMOS technology. Appearing in the Sharp "EL-801" in 1972, the transistors in the logic cells of CMOS ICs only used any appreciable power when they changed state. The LED and VFD displays often required added driver transistors or ICs, whereas the LCDs were more amenable to being driven directly by the calculator IC itself. With this low power consumption came the possibility of using solar cells as the power source, realised around 1978 by calculators such as the Royal Solar 1, Sharp EL-8026, and Teal Photon. 
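The practical effect of these display and CMOS power reductions can be made concrete with a back-of-the-envelope comparison. The capacity and current figures in this sketch are assumptions chosen only to be of the right order of magnitude; they are not measurements of any particular model:

```python
# Rough, assumed figures (order of magnitude only) illustrating why LCD
# calculators could run for months on button cells while LED models needed
# rechargeable packs measured in hours per charge.
battery_mah = {"button cell": 150, "NiCd pack": 500}   # assumed capacities (mAh)
draw_ma = {"LED display": 30.0, "LCD display": 0.05}   # assumed current draw (mA)

for display, current in draw_ma.items():
    for battery, capacity in battery_mah.items():
        hours = capacity / current
        print(f"{display} on {battery}: ~{hours:,.0f} h (~{hours / 24:,.1f} days)")
# An LED model lasts on the order of hours per charge; an LCD model lasts
# thousands of hours, i.e. months of normal intermittent use.
```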
Mass-market phase At the start of the 1970s, hand-held electronic calculators were very costly, at two or three weeks' wages, and so were a luxury item. The high price was due to their construction requiring many mechanical and electronic components which were costly to produce, and production runs that were too small to exploit economies of scale. Many firms saw that there were good profits to be made in the calculator business with the margin on such high prices. However, the cost of calculators fell as components and their production methods improved, and the effect of economies of scale was felt. By 1976, the cost of the cheapest four-function pocket calculator had dropped to a few dollars, about 1/20 of the cost five years before. The results of this were that the pocket calculator was affordable, and that it was now difficult for the manufacturers to make a profit from calculators, leading to many firms dropping out of the business or closing. The firms that survived making calculators tended to be those with high outputs of higher quality calculators, or producing high-specification scientific and programmable calculators. Mid-1980s to present The first calculator capable of symbolic computing was the HP-28C, released in 1987. It could, for example, solve quadratic equations symbolically. The first graphing calculator was the Casio fx-7000G released in 1985. The two leading manufacturers, HP and TI, released increasingly feature-laden calculators during the 1980s and 1990s. At the turn of the millennium, the line between a graphing calculator and a handheld computer was not always clear, as some very advanced calculators such as the TI-89, the Voyage 200 and HP-49G could differentiate and integrate functions, solve differential equations, run word processing and PIM software, and connect by wire or IR to other calculators/computers. The HP 12c financial calculator is still produced. It was introduced in 1981 and is still being made with few changes. The HP 12c featured the reverse Polish notation mode of data entry. In 2003 several new models were released, including an improved version of the HP 12c, the "HP 12c platinum edition" which added more memory, more built-in functions, and the addition of the algebraic mode of data entry. Calculated Industries competed with the HP 12c in the mortgage and real estate markets by differentiating the key labeling; changing the "I", "PV", "FV" to easier labeling terms such as "Int", "Term", "Pmt", and not using the reverse Polish notation. However, CI's more successful calculators involved a line of construction calculators, which evolved and expanded in the 1990s to present. According to Mark Bollman, a mathematics and calculator historian and associate professor of mathematics at Albion College, the "Construction Master is the first in a long and profitable line of CI construction calculators" which carried them through the 1980s, 1990s, and to the present. Use in education In most countries, students use calculators for schoolwork. There was some initial resistance to the idea out of fear that basic or elementary arithmetic skills would suffer. There remains disagreement about the importance of the ability to perform calculations in the head, with some curricula restricting calculator use until a certain level of proficiency has been obtained, while others concentrate more on teaching estimation methods and problem-solving. 
Research suggests that inadequate guidance in the use of calculating tools can restrict the kind of mathematical thinking that students engage in. Others have argued that calculator use can even cause core mathematical skills to atrophy, or that such use can prevent understanding of advanced algebraic concepts. In December 2011 the UK's Minister of State for Schools, Nick Gibb, voiced concern that children can become "too dependent" on the use of calculators. As a result, the use of calculators is to be included as part of a review of the Curriculum. In the United States, many math educators and boards of education have enthusiastically endorsed the National Council of Teachers of Mathematics (NCTM) standards and actively promoted the use of classroom calculators from kindergarten through high school. Calculators may in some circumstances be used within school and college examinations. In the United Kingdom there are limitations on the type of calculator which may be used in an examination to avoid malpractice. Some calculators which offer additional functionality have an "exam mode" setting which makes them compliant with examination regulations. Personal computers Personal computers often come with a calculator utility program that emulates the appearance and functions of a calculator, using the graphical user interface to portray a calculator. Examples include the Windows Calculator, Apple's Calculator, and KDE's KCalc. Most personal data assistants (PDAs) and smartphones also have such a feature. Calculators compared to computers The fundamental difference between a calculator and computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions (such as addition, multiplication, and logarithms) built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functions, sometimes with support for programming languages (such as RPL or TI-BASIC). For instance, instead of a hardware multiplier, a calculator might implement floating point mathematics with code in read-only memory (ROM), and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require much multiplication. Bit serial logic designs are more common in calculators whereas bit parallel designs dominate general-purpose computers, because a bit serial design minimizes chip complexity, but takes many more clock cycles. This distinction blurs with high-end calculators, which use processor chips associated with computer and embedded systems design, more so the Z80, MC68000, and ARM architectures, and some custom designs specialized for the calculator market. See also Calculator spelling Comparison of HP graphing calculators Comparison of Texas Instruments graphing calculators Formula calculator HP calculators History of computing hardware Scientific calculator Software calculator Solar-powered calculator Photomath Notes References Sources Further reading – Complex computer – G. R. Stibitz, Bell Laboratories, 1954 (filed 1941, refiled 1944), electromechanical (relay) device that could calculate complex numbers, record, and print results. – Miniature electronic calculator – J. S. 
Kilby, Texas Instruments, 1974 (originally filed 1967), handheld () battery operated electronic device with thermal printer – Floating Point Calculator With RAM Shift Register – 1977 (originally filed GB March 1971, US July 1971), very early single chip calculator claim. – Extended Numerical Keyboard with Structured Data-Entry Capability – J. H. Redin, 1997 (originally filed 1996), Usage of Verbal Numerals as a way to enter a number. European Patent Office Database – Many patents about mechanical calculators are in classifications G06C15/04, G06C15/06, G06G3/02, G06G3/04 Collectors Guide to Pocket Calculators. by Guy Ball and Bruce Flamm, 1997, – includes an extensive history of early pocket calculators and highlights over 1,500 different models from the early 1970s. Book still in print. (64 pages) External links 30th Anniversary of the Calculator – From Sharp's web presentation of its history; including a picture of the CS-10A desktop calculator The Museum of HP calculators (Slide Rules and Mechanical Calculators section) Microprocessor and single chip calculator history; foundations in Glenrothes, Scotland HP-35 – A thorough analysis of the HP-35 firmware including the Cordic algorithms and the bugs in the early ROM Bell Punch Company and the development of the Anita calculator – The story of the first electronic desktop calculator Dentaku-Museum – Shows mainly Japanese calculators but also others. American inventions Mathematical tools Office equipment 20th-century inventions Electronic calculators
Calculator
[ "Mathematics", "Technology" ]
8,453
[ "Calculators", "Applied mathematics", "Mathematical tools", "nan", "History of computing" ]
7,594
https://en.wikipedia.org/wiki/Cash%20register
A cash register, sometimes called a till or automated money handling system, is a mechanical or electronic device for registering and calculating transactions at a point of sale. It is usually attached to a drawer for storing cash and other valuables. A modern cash register is usually attached to a printer that can print out receipts for record-keeping purposes. History An early mechanical cash register was invented by James Ritty and John Birch following the American Civil War. James was the owner of a saloon in Dayton, Ohio, US, and wanted to stop employees from pilfering his profits. The Ritty Model I was invented in 1879 after seeing a tool that counted the revolutions of the propeller on a steamship. With the help of James' brother John Ritty, they patented it in 1879. It was called Ritty's Incorruptible Cashier and it was invented to stop cashiers from pilfering and eliminate employee theft and embezzlement. Early mechanical registers were entirely mechanical, without receipts. The employee was required to ring up every transaction on the register, and when the total key was pushed, the drawer opened and a bell would ring, alerting the manager to a sale taking place. Those original machines were nothing but simple adding machines. For example, the Rittys’ patent application filed in 1879 for their “improved cash register” describes the device as follows: “The machine consists, essentially, of an inclosed case or frame provided with an index dial and indicator operated by a system of levers or keys and connected with a series of co-operating disks marked with numbers on their peripheries, a row of which numbers are disclosed by a transverse opening or openings in the case to show at a glance the sum-total of cash receipts.” Since the registration is done with the process of returning change, according to Bill Bryson odd pricing came about because by charging odd amounts like 49 and 99 cents (or 45 and 95 cents when nickels are more used than pennies), the cashier very probably had to open the till for the penny change and thus announce the sale. Shortly after the patent, Ritty became overwhelmed with the responsibilities of running two businesses, so he sold all of his interests in the cash register business to Jacob H. Eckert of Cincinnati, a china and glassware salesman, who formed the National Manufacturing Company. In 1884 Eckert sold the company to John H. Patterson, who renamed the company the National Cash Register Company and improved the cash register by adding a paper roll to record sales transactions, thereby creating the journal for internal bookkeeping purposes, and the receipt for external bookkeeping purposes. The original purpose of the receipt was enhanced fraud protection. The business owner could read the receipts to ensure that cashiers charged customers the correct amount for each transaction and did not embezzle the cash drawer. It also prevents a customer from defrauding the business by falsely claiming receipt of a lesser amount of change or a transaction that never happened in the first place. The first evidence of an actual cash register was used in Coalton, Ohio, at the old mining company. In 1906, while working at the National Cash Register company, inventor Charles F. Kettering designed a cash register with an electric motor. A leading designer, builder, manufacturer, seller and exporter of cash registers from the 1950s until the 1970s was London-based (and later Brighton-based) Gross Cash Registers Ltd., founded by brothers Sam and Henry Gross. 
Their cash registers were particularly popular around the time of decimalisation in Britain in early 1971, Henry having designed one of the few known models of cash register which could switch currencies from £sd to £p so that retailers could easily change from one to the other on or after Decimal Day. Sweda also had decimal-ready registers where the retailer used a special key on Decimal Day for the conversion. In current use In some jurisdictions the law also requires customers to collect the receipt and keep it at least for a short while after leaving the shop, again to check that the shop records sales, so that it cannot evade sales taxes. Often cash registers are attached to scales, barcode scanners, checkstands, and debit card or credit card terminals. Increasingly, dedicated cash registers are being replaced with general purpose computers with POS software. Today, point of sale systems scan the barcode (usually EAN or UPC) for each item, retrieve the price from a database, calculate deductions for items on sale (or, in British retail terminology, "special offer", "multibuy" or "buy one, get one free"), calculate the sales tax or VAT, calculate differential rates for preferred customers, actualize inventory, time and date stamp the transaction, record the transaction in detail including each item purchased, record the method of payment, keep totals for each product or type of product sold as well as total sales for specified periods, and do other tasks as well. These POS terminals will often also identify the cashier on the receipt, and carry additional information or offers. Currently, many cash registers are individual computers. They may be running traditionally in-house software or general purpose software such as DOS. Many of the newer ones have touch screens. They may be connected to computerized point of sale networks using any type of protocol. Such systems may be accessed remotely for the purpose of obtaining records or troubleshooting. Many businesses also use tablet computers as cash registers, utilizing the sale system as downloadable app-software. Cash drawer A cash drawer is usually a compartment underneath a cash register in which the cash from transactions is kept. The drawer typically contains a removable till. The till is usually a plastic or wooden tray divided into compartments used to store each denomination of bank notes and coins separately in order to make counting easier. The removable till allows money to be removed from the sales floor to a more secure location for counting and creating bank deposits. Some modern cash drawers are individual units separate from the rest of the cash register. A cash drawer is usually of strong construction and may be integral with the register or a separate piece that the register sits atop. It slides in and out of its lockable box and is secured by a spring-loaded catch. When a transaction that involves cash is completed, the register sends an electrical impulse to a solenoid to release the catch and open the drawer. Cash drawers that are integral to a stand-alone register often have a manual release catch underneath to open the drawer in the event of a power failure. More advanced cash drawers have eliminated the manual release in favor of a cylinder lock, requiring a key to manually open the drawer. The cylinder lock usually has several positions: locked, unlocked, online (will open if an impulse is given), and release. 
The release position is an intermittent position with a spring to push the cylinder back to the unlocked position. In the "locked" position, the drawer will remain latched even when an electric impulse is sent to the solenoid. Some cash drawers are designed to store notes upright & facing forward, instead of the traditional flat and front to back position. This allows more varieties of notes to be stored. Some cash drawers are flip top in design, where they flip open instead of sliding out like an ordinary drawer, resembling a cashbox instead. A cash register's drawer can only be opened by an instruction from the cash register except when using special keys, generally held by the owner and some employees (e.g. manager). This reduces the amount of contact most employees have with cash and other valuables. It also reduces risks of an employee taking money from the drawer without a record and the owner's consent, such as when a customer does not expressly ask for a receipt but still has to be given change (cash is more easily checked against recorded sales than inventory). Cash registers include a key labeled "No Sale", abbreviated "NS" on many modern electronic cash registers. Its function is to open the drawer, printing a receipt stating "No Sale" and recording in the register log that the register was opened. Some cash registers require a numeric password or physical key to be used when attempting to open the till. Management functions An often used non-sale function is the aforementioned "no sale". In case of needing to correct change given to the customer, or to make change from a neighboring register, this function will open the cash drawer of the register. Where non-management staff are given access, management can scrutinize the count of "no sales" in the log to look for suspicious patterns. Generally requiring a management key, besides programming prices into the register, are the report functions. An X-report will read the current sales figures from memory and produce a paper printout. A Z-report will act like an "X" report, except that counters will be reset to zero. Manual input Registers will typically feature a numerical pad, QWERTY or custom keyboard, touch screen interface, or a combination of these input methods for the cashier to enter products and fees by hand and access information necessary to complete the sale. For older registers as well as at restaurants and other establishments that do not sell barcoded items, the manual input may be the only method of interacting with the register. While customization was previously limited to larger chains that could afford to have physical keyboards custom-built for their needs, the customization of register inputs is now more widespread with the use of touch screens that can display a variety of point of sale software. Scanner Modern cash registers may be connected to a handheld or stationary barcode reader so that a customer's purchases can be more rapidly scanned than would be possible by keying numbers into the register by hand. The use of scanners should also help prevent errors that result from manually entering the product's barcode or pricing. At grocers, the register's scanner may be combined with a scale for measuring product that is sold by weight. Receipt printer Cashiers are often required to provide a receipt to the customer after a purchase has been made. Registers typically use thermal printers to print receipts, although older dot matrix printers are still in use at some retailers. 
Alternatively, retailers can forgo issuing paper receipts in some jurisdictions by instead asking the customer for an email address to which their receipt can be sent. The receipts of larger retailers tend to include unique barcodes or other information identifying the transaction so that the receipt can be scanned to facilitate returns or other customer services. Security deactivation In stores that use electronic article surveillance, a pad or other surface will be attached to the register that deactivates security devices embedded in or attached to the items being purchased. This will prevent a customer's purchase from setting off security alarms at the store's exit. Remote peripherals In settings like a restaurant, remote peripherals are sometimes used to speed up the processing of orders. These include printers or screens in the kitchen that show staff the incoming orders. Waiters often use mobile devices like phones or tablets connected to a central cash register to take orders, and can use small, mobile Bluetooth printers to print receipts directly at the table. Self-service cash register Some corporations and supermarkets have introduced self-checkout machines, where the customer is trusted to scan the barcodes (or manually identify uncoded items like fruit), and place the items into a bagging area. The bag is weighed, and the machine halts the checkout when the weight of something in the bag does not match the weight in the inventory database. Normally, an employee is watching over several such checkouts to prevent theft or exploitation of the machines' weaknesses (for example, intentional misidentification of expensive produce or dry goods). Payment on these machines is accepted by debit card/credit card, or cash via coin slot and bank note scanner. Store employees are also needed to authorize "age-restricted" purchases, such as alcohol, solvents or knives, which can either be done remotely by the employee observing the self-checkout, or by means of a "store login" which the operator has to enter. See also Credit card terminal EFTPOS Point of sale Point of sale display References Retail store elements 1884 introductions American inventions Cash 19th-century inventions
Cash register
[ "Technology" ]
2,494
[ "Components", "Retail store elements" ]
7,597
https://en.wikipedia.org/wiki/Processor%20design
Processor design is a subfield of computer science and computer engineering (fabrication) that deals with creating a processor, a key component of computer hardware. The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, this description is then manufactured employing some of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB). The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those that compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values, and control program flow. Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for semiconductor fabrication. Details Basics CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines). These datapaths are controlled through logic by control units. Memory components, such as register files and caches, retain information. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent, and a logic gate cell library is used to implement the logic. Logic gates are the foundation for processor design, as they are used to implement most of the processor's components. CPUs designed for high-performance markets might require custom (optimized or application-specific, see below) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals, whereas CPUs designed for lower-performance markets might lessen the implementation burden by purchasing some of these items as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and programmable logic arrays (common in the 1980s, but no longer common). Implementation logic Device types used to implement the logic include: Individual vacuum tubes, individual transistors and semiconductor diodes, and transistor-transistor logic small-scale integration logic chips – no longer used for CPUs Programmable array logic and programmable logic devices – no longer used for CPUs Emitter-coupled logic (ECL) gate arrays – no longer common CMOS gate arrays – no longer used for CPUs CMOS mass-produced ICs – the vast majority of CPUs by volume CMOS ASICs – only for a minority of special applications due to expense Field-programmable gate arrays (FPGA) – common for soft microprocessors, and more or less required for reconfigurable computing A CPU design project generally has these major tasks: Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures Architectural study and performance modeling in ANSI C/C++ or SystemC High-level synthesis (HLS) or register transfer level (RTL, e.g.
logic) implementation RTL verification Circuit design of speed critical components (caches, registers, ALUs) Logic synthesis or logic-gate-level design Timing analysis to confirm that all logic and circuits will run at the specified operating frequency Physical design including floorplanning, place and route of logic gates Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent Checks for signal integrity, chip manufacturability Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost. As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU. Key CPU architectural innovations include index register, cache, virtual memory, instruction pipelining, superscalar, CISC, RISC, virtual machine, emulators, microprogram, and stack. Micro-architectural concepts Research topics A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing. Performance analysis and benchmarking Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by Standard Performance Evaluation Corporation, and ConsumerMark developed by the Embedded Microprocessor Benchmark Consortium EEMBC. Some of the commonly used metrics include: Instructions per second - Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see Megahertz Myth). FLOPS - The number of floating point operations per second is often important in selecting computers for scientific computations. Performance per watt - System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself. Some system designers building parallel computers pick CPUs based on the speed per dollar. System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has deterministic response. (DSP) Computer programmers who program directly in assembly language want a CPU to support a full featured instruction set. Low power - For systems with limited power sources (e.g. solar, batteries, human power). Small size or low weight - for portable embedded systems, systems for spacecraft. Environmental impact - Minimizing environmental impact of computers during manufacturing and recycling as well during use. Reducing waste, reducing hazardous materials. (see Green computing). There may be tradeoffs in optimizing some of these metrics. 
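As a toy illustration of how the metrics listed above interact, consider two hypothetical designs; all figures below are invented for the example and do not describe any real processor:

```python
# Invented figures for two hypothetical CPU designs, used only to show how the
# common metrics relate; none of these numbers describe a real product.
designs = {
    "fast, power-hungry": {"clock_hz": 4.0e9, "ipc": 2.0, "watts": 95.0, "price": 400.0},
    "slow, frugal":       {"clock_hz": 1.0e9, "ipc": 1.0, "watts": 5.0,  "price": 20.0},
}

for name, d in designs.items():
    ips = d["clock_hz"] * d["ipc"]        # instructions per second
    perf_per_watt = ips / d["watts"]
    perf_per_dollar = ips / d["price"]
    print(f"{name}: {ips / 1e9:.1f} GIPS, "
          f"{perf_per_watt / 1e9:.3f} GIPS/W, "
          f"{perf_per_dollar / 1e9:.3f} GIPS/$")
# The faster design wins on raw instructions per second, but the frugal one
# wins on performance per watt and per dollar -- the trade-off noted above.
```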
In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa. Markets There are several different markets in which CPUs are used. Since each of these markets differ in their requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets. General purpose computing , in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel IA-32 and the 64-bit version x86-64 architecture dominate the market, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops. Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently has made these CPU designs among the more advanced technically, along with some disadvantages of being relatively costly, and having high power consumption. High-end processor economics In 1984, most high-performance CPUs required four to five years to develop. Scientific computing Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs. Embedded design As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in the volume of many billions of units per year, however, mostly at much lower price points than that of the general purpose processors. These single-function devices differ from the more familiar general-purpose CPUs in several ways: Low cost is of high importance. It is important to maintain a low power dissipation as embedded devices often have a limited battery life and it is often impractical to include cooling fans. To give lower system cost, peripherals are integrated with the processor on the same silicon chip. Keeping peripherals on-chip also reduces power consumption as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip. Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board. The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller. For many embedded applications, interrupt latency will be more critical than in some general-purpose processors. Embedded processor economics The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive. 
The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon. As of 2009, more CPUs are produced using the ARM architecture family instruction sets than any other 32-bit instruction set. The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time. The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 human years of work time. The 8-bit AVR architecture and first AVR microcontroller was conceived and designed by two students at the Norwegian Institute of Technology. The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people. Research and educational CPU design The 32-bit Berkeley RISC I and RISC II processors were mostly designed by a series of students as part of a four quarter sequence of graduate courses. This design became the basis of the commercial SPARC processor design. For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8 bit CPU out of 7400 series integrated circuits. One team of 4 students designed and built a simple 32 bit CPU during that semester. Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in a FPGA in a single 15-week semester. The MultiTitan CPU was designed with 2.5 man years of effort, which was considered "relatively little design effort" at the time. 24 people contributed to the 3.5 year MultiTitan research project, which included designing and building a prototype CPU. Soft microprocessor cores For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market. See also Amdahl's law Central processing unit Comparison of instruction set architectures Complex instruction set computer CPU cache Electronic design automation Heterogeneous computing High-level synthesis History of general-purpose CPUs Integrated circuit design Microarchitecture Microprocessor Minimal instruction set computer Moore's law Reduced instruction set computer System on a chip Network on a chip Process design kit – a set of documents created or accumulated for a semiconductor device production process Uncore References General references Processor Design: An Introduction Central processing unit Computer engineering Design engineering
Processor design
[ "Technology", "Engineering" ]
2,646
[ "Electrical engineering", "Design engineering", "Design", "Computer engineering" ]
7,607
https://en.wikipedia.org/wiki/Collagen%20helix
In molecular biology, the collagen triple helix or type-2 helix is the main secondary structure of various types of fibrous collagen, including type I collagen. In 1954, Ramachandran & Kartha advanced a structure for the collagen triple helix on the basis of fiber diffraction data. It consists of a triple helix made of the repeating amino acid sequence glycine-X-Y, where X and Y are frequently proline or hydroxyproline. Collagen folded into a triple helix is known as tropocollagen. Collagen triple helices are often bundled into fibrils, which themselves form larger fibres, as in tendons. Structure Glycine, proline, and hydroxyproline must be in their designated positions with the correct configuration. For example, hydroxyproline in the Y position increases the thermal stability of the triple helix, but not when it is located in the X position. The thermal stabilization is also hindered when the hydroxyl group has the wrong configuration. Because of its high glycine and proline content, collagen fails to form a regular α-helix or β-sheet structure. Three left-handed helical strands twist to form a right-handed triple helix. A collagen triple helix has 3.3 residues per turn. Each of the three chains is stabilized by the steric repulsion due to the pyrrolidine rings of proline and hydroxyproline residues. The pyrrolidine rings keep out of each other's way when the polypeptide chain assumes this extended helical form, which is much more open than the tightly coiled form of the alpha helix. The three chains are hydrogen bonded to each other. The hydrogen bond donors are the peptide NH groups of glycine residues. The hydrogen bond acceptors are the CO groups of residues on the other chains. The OH group of hydroxyproline does not participate in hydrogen bonding but stabilizes the trans isomer of proline by stereoelectronic effects, thereby stabilizing the entire triple helix. The rise of the collagen helix (superhelix) is 2.9 Å (0.29 nm) per residue. The center of the collagen triple helix is very small and hydrophobic, and every third residue of the helix must have contact with the center. Because the space at the center is so small and tight, only the small hydrogen atom of the glycine side chain can fit there; even a slightly larger amino acid residue in place of glycine would make this contact impossible. References Protein structural motifs Helices Protein folds
Collagen helix
[ "Biology" ]
566
[ "Protein tandem repeats", "Protein structural motifs", "Protein classification" ]
7,609
https://en.wikipedia.org/wiki/Cosmic%20censorship%20hypothesis
The weak and the strong cosmic censorship hypotheses are two mathematical conjectures about the structure of gravitational singularities arising in general relativity. Singularities that arise in the solutions of Einstein's equations are typically hidden within event horizons, and therefore cannot be observed from the rest of spacetime. Singularities that are not so hidden are called naked. The weak cosmic censorship hypothesis was conceived by Roger Penrose in 1969 and posits that no naked singularities exist in the universe. Basics Since the physical behavior of singularities is unknown, if singularities can be observed from the rest of spacetime, causality may break down, and physics may lose its predictive power. The issue cannot be avoided, since according to the Penrose–Hawking singularity theorems, singularities are inevitable in physically reasonable situations. Still, in the absence of naked singularities, the universe, as described by the general theory of relativity, is deterministic: it is possible to predict the entire evolution of the universe (possibly excluding some finite regions of space hidden inside event horizons of singularities), knowing only its condition at a certain moment of time (more precisely, everywhere on a spacelike three-dimensional hypersurface, called the Cauchy surface). Failure of the cosmic censorship hypothesis leads to the failure of determinism, because it is yet impossible to predict the behavior of spacetime in the causal future of a singularity. Cosmic censorship is not merely a problem of formal interest; some form of it is assumed whenever black hole event horizons are mentioned. The hypothesis was first formulated by Roger Penrose in 1969, and it is not stated in a completely formal way. In a sense it is more of a research program proposal: part of the research is to find a proper formal statement that is physically reasonable, falsifiable, and sufficiently general to be interesting. Because the statement is not a strictly formal one, there is sufficient latitude for (at least) two independent formulations: a weak form, and a strong form. Weak and strong cosmic censorship hypothesis The weak and the strong cosmic censorship hypotheses are two conjectures concerned with the global geometry of spacetimes. The weak cosmic censorship hypothesis asserts there can be no singularity visible from future null infinity. In other words, singularities need to be hidden from an observer at infinity by the event horizon of a black hole. Mathematically, the conjecture states that, for generic initial data, the causal structure is such that the maximal Cauchy development possesses a complete future null infinity. The strong cosmic censorship hypothesis asserts that, generically, general relativity is a deterministic theory, in the same sense that classical mechanics is a deterministic theory. In other words, the classical fate of all observers should be predictable from the initial data. Mathematically, the conjecture states that the maximal Cauchy development of generic compact or asymptotically flat initial data is locally inextendible as a regular Lorentzian manifold. Taken in its strongest sense, the conjecture suggests locally inextendibility of the maximal Cauchy development as a continuous Lorentzian manifold [very Strong Cosmic Censorship]. This strongest version was disproven in 2018 by Mihalis Dafermos and Jonathan Luk for the Cauchy horizon of an uncharged, rotating black hole. 
The two conjectures are mathematically independent, as there exist spacetimes for which weak cosmic censorship is valid but strong cosmic censorship is violated and, conversely, there exist spacetimes for which weak cosmic censorship is violated but strong cosmic censorship is valid. Example The Kerr metric, corresponding to a black hole of mass $M$ and angular momentum $J$, can be used to derive the effective potential for particle orbits restricted to the equator (as defined by rotation). This potential looks like: $$V_{\rm eff}(r,e,\ell)=-\frac{M}{r}+\frac{\ell^{2}-a^{2}(e^{2}-1)}{2r^{2}}-\frac{M(\ell-ae)^{2}}{r^{3}},\qquad a=\frac{J}{M},$$ where $r$ is the coordinate radius, and $e$ and $\ell$ are the test-particle's conserved energy and angular momentum per unit rest mass respectively (constructed from the Killing vectors). To preserve cosmic censorship, the black hole is restricted to the case of $a \leq M$. For there to exist an event horizon around the singularity, the requirement $M^{2}\geq a^{2}$ must be satisfied. This amounts to the angular momentum of the black hole being constrained to below a critical value, outside of which the horizon would disappear. The following thought experiment is reproduced from Hartle's Gravity: Problems with the concept There are a number of difficulties in formalizing the hypothesis: There are technical difficulties with properly formalizing the notion of a singularity. It is not difficult to construct spacetimes which have naked singularities, but which are not "physically reasonable"; the canonical example of such a spacetime is perhaps the "superextremal" Reissner–Nordström solution, which contains a singularity at $r=0$ that is not surrounded by a horizon. A formal statement needs some set of hypotheses which exclude these situations. Caustics may occur in simple models of gravitational collapse, and can appear to lead to singularities. These have more to do with the simplified models of bulk matter used, and in any case have nothing to do with general relativity, and need to be excluded. Computer models of gravitational collapse have shown that naked singularities can arise, but these models rely on very special circumstances (such as spherical symmetry). These special circumstances need to be excluded by some hypotheses. In 1991, John Preskill and Kip Thorne bet against Stephen Hawking that the hypothesis was false. Hawking conceded the bet in 1997, due to the discovery of the special situations just mentioned, which he characterized as "technicalities". Hawking later reformulated the bet to exclude those technicalities. The revised bet is still open (although Hawking died in 2018), the prize being "clothing to cover the winner's nakedness". Counter-example An exact solution to the scalar-Einstein equations which forms a counterexample to many formulations of the cosmic censorship hypothesis was found by Mark D. Roberts in 1985: where is a constant. See also Black hole information paradox Chronology protection conjecture Firewall (physics) Fuzzball (string theory) Thorne–Hawking–Preskill bet References Further reading External links The old bet (conceded in 1997) The new bet Black holes General relativity
Cosmic censorship hypothesis
[ "Physics", "Astronomy" ]
1,269
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "General relativity", "Density", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]
7,622
https://en.wikipedia.org/wiki/Complex%20instruction%20set%20computer
A complex instruction set computer (CISC) is a computer architecture in which single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC) and has therefore become something of an umbrella term for everything that is not RISC, where the typical differentiating characteristic is that most RISC designs use uniform instruction length for almost all instructions, and employ strictly separate load and store instructions. Examples of CISC architectures range from complex mainframe computers to simplistic microcontrollers in which memory load and store operations are not separated from arithmetic instructions. Specific instruction set architectures that have been retroactively labeled CISC are System/360 through z/Architecture, the PDP-11 and VAX architectures, and many others. Well-known microprocessors and microcontrollers that have also been labeled CISC in many academic publications include the Motorola 6800, 6809 and 68000 families; the Intel 8080, iAPX 432, x86 and 8051 families; the Zilog Z80, Z8 and Z8000 families; the National Semiconductor NS320xx family; the MOS Technology 6502 family; and others. Some designs have been regarded as borderline cases by some writers. For instance, the Microchip Technology PIC has been labeled RISC in some circles and CISC in others. Incitements and benefits Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e., to design instruction sets that directly support high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions are also typically highly encoded in order to further enhance the code density. The compact nature of such instruction sets results in smaller program sizes and fewer main memory accesses (which were often slow), which at the time (early 1960s and onwards) resulted in a tremendous saving on the cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high-level languages such as Fortran or Algol were not always available or appropriate. Indeed, microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications. New instructions In the 1970s, analysis of high-level languages indicated that compilers produced correspondingly complex machine language. It was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e., dynamic RAM today) remain slow compared to a (high-performance) CPU core.
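The code-density argument can be made concrete with a toy model. The two "instruction sets" below are invented purely for illustration and do not correspond to any real architecture:

```python
# Toy model of the same operation, mem[addr] = mem[addr] + r1, expressed in a
# CISC style (arithmetic may access memory) and a RISC (load-store) style.
# Both "ISAs" are invented for illustration; neither matches a real machine.
mem = {0x10: 7}
regs = {"r1": 5, "r2": 0}

def cisc_add_mem(addr, src):
    # One instruction: ADD [addr], src  (load, add and store in a single step)
    mem[addr] = mem[addr] + regs[src]

def risc_program(addr, src):
    # Three instructions: LOAD r2,[addr] ; ADD r2,r2,src ; STORE r2,[addr]
    regs["r2"] = mem[addr]
    regs["r2"] = regs["r2"] + regs[src]
    mem[addr] = regs["r2"]

cisc_add_mem(0x10, "r1")   # 1 instruction fetched and decoded
print(mem[0x10])           # 12
risc_program(0x10, "r1")   # 3 simpler, fixed-length instructions
print(mem[0x10])           # 17
# The CISC encoding is denser (fewer instruction fetches), while the RISC
# sequence exposes each memory access as a separate, easily pipelined step.
```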
Design issues While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not always the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by not using a complex instruction (such as a procedure call or enter instruction) but instead using a sequence of simpler instructions. One reason for this was that architects (microcode writers) sometimes "over-designed" assembly language instructions, including features that could not be implemented efficiently on the basic hardware available. There could, for instance, be "side effects" (above conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non-duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient. Even in balanced high-performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (typically) slower, solution based on decode tables and/or microcode sequencing was not appropriate. At a time when transistors and other components were a limited resource, this also left fewer components and less opportunity for other types of performance optimizations. The RISC idea The circuitry that performs the actions defined by the microcode in many (but not all) CISC processors is, in itself, a processor which in many ways is reminiscent in structure of very early CPU designs. In the early 1970s, this gave rise to ideas to return to simpler processor designs in order to make it more feasible to cope without (then relatively large and expensive) ROM tables and/or PLA structures for sequencing and/or decoding. An early (retroactively) RISC-labeled processor (the IBM 801, developed at IBM's Watson Research Center in the mid-1970s) was a tightly pipelined simple machine originally intended to be used as an internal microcode kernel, or engine, in CISC designs, but it also became the processor that introduced the RISC idea to a somewhat larger audience. Simplicity and regularity in the visible instruction set, too, would make it easier to implement overlapping processor stages (pipelining) at the machine code level (i.e. the level seen by compilers). However, pipelining at that level was already used in some high-performance CISC "supercomputers" in order to reduce the instruction cycle time (despite the complications of implementing it within the limited component count and wiring complexity feasible at the time). Internal microcode execution in CISC processors, on the other hand, could be more or less pipelined depending on the particular design, and therefore more or less akin to the basic structure of RISC processors. The CDC 6600 supercomputer, first delivered in 1965, has also been retroactively described as RISC. It had a load–store architecture which allowed up to five loads and two stores to be in progress simultaneously under programmer control. It also had multiple function units which could operate at the same time.
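The benefit of overlapping instruction stages can be sketched with a simple cycle count. The three-stage pipeline below is a textbook-style idealisation, not the structure of the IBM 801, the CDC 6600, or any other particular machine:

```python
# Cycle-count comparison of sequential vs. pipelined execution of n
# instructions through s stages (1 cycle per stage). An idealisation:
# no stalls, hazards or branches are modelled.
def sequential_cycles(n, s):
    return n * s            # each instruction finishes before the next starts

def pipelined_cycles(n, s):
    return s + (n - 1)      # fill the pipeline once, then one result per cycle

n, s = 100, 3               # 100 instructions; fetch, decode, execute stages
print(sequential_cycles(n, s))  # 300 cycles
print(pipelined_cycles(n, s))   # 102 cycles -> throughput approaches 1 per cycle
```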
Superscalar In a more modern context, the complex variable-length encoding used by some of the typical CISC architectures makes it complicated, but still feasible, to build a superscalar implementation of a CISC programming model directly; the in-order superscalar original Pentium and the out-of-order superscalar Cyrix 6x86 are well-known examples of this. The frequent memory accesses for operands of a typical CISC machine may limit the instruction-level parallelism that can be extracted from the code, although this is strongly mediated by the fast cache structures used in modern designs, as well as by other measures. Due to inherently compact and semantically rich instructions, the average amount of work performed per machine code unit (i.e. per byte or bit) is higher for a CISC than a RISC processor, which may give it a significant advantage in a modern cache-based implementation. Transistors for logic, PLAs, and microcode are no longer scarce resources; only large high-speed cache memories are limited by the maximum number of transistors today. Although complex, the transistor count of CISC decoders does not grow exponentially like the total number of transistors per processor (the majority typically used for caches). Together with better tools and enhanced technologies, this has led to new implementations of highly encoded and variable-length designs without load–store limitations (i.e. non-RISC). This applies to re-implementations of older architectures such as the ubiquitous x86 (see below) as well as new designs for microcontrollers for embedded systems, and similar uses. The superscalar complexity in the case of modern x86 was solved by converting instructions into one or more micro-operations and dynamically issuing those micro-operations, i.e. indirect and dynamic superscalar execution; the Pentium Pro and AMD K5 are early examples of this. It allows a fairly simple superscalar design to be located after the (fairly complex) decoders (and buffers), giving, so to speak, the best of both worlds in many respects. This technique is also used in IBM z196 and later z/Architecture microprocessors. CISC and RISC terms The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e., without typical RISC load–store limits). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally buffered micro-operations, which helps execute a larger subset of instructions in a pipelined (overlapping) fashion, and facilitates more advanced extraction of parallelism out of the code stream, for even higher performance. Contrary to popular simplifications (present also in some academic texts) not all CISCs are microcoded or have "complex" instructions. As CISC became a catch-all term meaning anything that's not a load–store (RISC) architecture, it's not the number of instructions, nor the complexity of the implementation or of the instructions, that defines CISC, but that arithmetic instructions also perform memory accesses. Compared to a small 8-bit CISC processor, a RISC floating-point instruction is complex. 
CISC does not even need to have complex addressing modes; 32- or 64-bit RISC processors may well have more complex addressing modes than small 8-bit CISC processors. A PDP-10, a PDP-8, an Intel 80386, an Intel 4004, a Motorola 68000, a System z mainframe, a Burroughs B5000, a VAX, a Zilog Z80000, and a MOS Technology 6502 all vary widely in the number, sizes, and formats of instructions, the number, types, and sizes of registers, and the available data types. Some have hardware support for operations like scanning for a substring, arbitrary-precision BCD arithmetic, or transcendental functions, while others have only 8-bit addition and subtraction. But they are all in the CISC category because they have "load-operate" instructions that load and/or store memory contents within the same instructions that perform the actual calculations. For instance, the PDP-8, having only 8 fixed-length instructions and no microcode at all, is a CISC because of how the instructions work; PowerPC, which has over 230 instructions (more than some VAXes) and complex internals like register renaming and a reorder buffer, is a RISC; while Minimal CISC has 8 instructions, but is clearly a CISC because it combines memory access and computation in the same instructions. See also Explicitly parallel instruction computing Minimal instruction set computer Reduced instruction set computer One-instruction set computer Zero instruction set computer Very long instruction word Microcode Comparison of instruction set architectures References General references Tanenbaum, Andrew S. (2006) Structured Computer Organization, Fifth Edition, Pearson Education, Inc. Upper Saddle River, NJ. Further reading Classes of computers
Complex instruction set computer
[ "Technology" ]
2,500
[ "Computers", "Computer systems", "Classes of computers" ]
7,647
https://en.wikipedia.org/wiki/Counter%20%28digital%29
In digital logic and computing, a counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter. A counter circuit is usually constructed of several flip-flops connected in a cascade. Counters are very widely used components in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits. Electronic counters An electronic counter is a sequential logic circuit that has a clock input signal and a group of output signals that represent an integer "counts" value. Upon each qualified clock edge, the circuit will increment (or decrement, depending on circuit design) the counts. When the counts have reached the end of the counting sequence (maximum counts when incrementing; zero counts when decrementing), the next clock will cause the counts to overflow or underflow, and the counting sequence will start over. Internally, counters use flip-flops to represent the current counts and to retain the counts between clocks. Depending on the type of counter, the output may be a direct representation of the counts (a binary number), or it may be encoded. Examples of the latter include ring counters and counters that output Gray codes. Many counters provide additional input signals to facilitate dynamic control of the counting sequence, such as: Reset – sets counts to zero. Some IC manufacturers name it "clear" or "master reset (MR)". Enable – allows or inhibits counting. Direction – determines whether counts will increment or decrement. Data – parallel input data which represents a particular counts value. Load – copies parallel input data to the counts. Some counters provide a Terminal Count output which indicates that the next clock will cause overflow or underflow. This is commonly used to implement counter cascading (combining two or more counters to create a single, larger counter) by connecting the Terminal Count output of one counter to the Enable input of the next counter. The modulus of a counter is the number of states in its count sequence. The maximum possible modulus is determined by the number of flip-flops. For example, a four-bit counter can have a modulus of up to 16 (2^4). Counters are generally classified as either synchronous or asynchronous. In synchronous counters, all flip-flops share a common clock and change state at the same time. In asynchronous counters, each flip-flop has a unique clock, and the flip-flop states change at different times. Counters are categorized in various ways. For example: Modulus counter – counts through a particular number of states. Decade counter – modulus ten counter (counts through ten states). Up/down counter – counts up and down, as directed by a control input, or by the use of separate "up" and "down" clocks. Ring counter – formed by a "circular" shift register. Johnson counter – a twisted ring counter. Gray-code counter – outputs a sequence of Gray codes. Shift register generator counter – based on a shift register with feedback. 
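The behaviour described above (modulus, reset, enable, direction, parallel load, and terminal count) can be summarized in a short behavioural model. The following Python sketch is purely illustrative and is not a gate-level design; the class name, method names, and parameters are invented for this example.

```python
# Behavioural sketch (not a gate-level design) of a modulo-N counter with the
# control inputs described above: reset, enable, direction, and parallel load.

class ModuloNCounter:
    def __init__(self, modulus=16):
        self.modulus = modulus   # number of states in the count sequence
        self.count = 0

    def clock(self, enable=True, up=True, reset=False, load=None):
        """Advance the counter state by one qualified clock edge."""
        if reset:                       # synchronous reset forces the count to zero
            self.count = 0
        elif load is not None:          # parallel load of a particular counts value
            self.count = load % self.modulus
        elif enable:                    # count only when enabled
            step = 1 if up else -1
            self.count = (self.count + step) % self.modulus  # wrap on overflow/underflow
        return self.count

    def terminal_count(self, up=True):
        """True when the next enabled clock would overflow or underflow."""
        return self.count == (self.modulus - 1 if up else 0)

decade = ModuloNCounter(modulus=10)          # a decade counter has ten states
print([decade.clock() for _ in range(12)])   # [1, 2, ..., 9, 0, 1, 2] -- wraps after 9
```

The terminal_count method mirrors the Terminal Count output mentioned above: cascading two counters amounts to enabling the more significant counter only when the less significant one reports terminal count.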
Counters are implemented in a variety of ways, including as dedicated MSI and LSI integrated circuits, as embedded counters within ASICs, as general-purpose counter and timer peripherals in microcontrollers, and as IP blocks in FPGAs. Asynchronous (ripple) counter An asynchronous (ripple) counter is a "chain" of toggle (T) flip-flops in which the least-significant flip-flop (bit 0) is clocked by an external signal (the counter input clock), and all other flip-flops are clocked by the output of the nearest, less significant flip-flop (e.g., bit 0 clocks the bit 1 flip-flop, bit 1 clocks the bit 2 flip-flop, etc.). The first flip-flop is clocked by rising edges; all other flip-flops in the chain are clocked by falling clock edges. Each flip-flop introduces a delay from clock edge to output toggle, thus causing the counter bits to change at different times and producing a ripple effect as the counter input clock propagates through the chain. When implemented with discrete flip-flops, ripple counters are commonly implemented with JK flip-flops, with each flip-flop configured to toggle when clocked (i.e., J and K are both connected to logic high). In the simplest case, a one-bit counter consists of a single flip-flop. This counter will increment (by toggling its output) once per clock cycle and will count from zero to one before overflowing (starting over at zero). Each output state corresponds to two clock cycles; consequently, the flip-flop output frequency is exactly half the frequency of the input clock. If this output is then used as the clock signal for a second flip-flop, the pair of flip-flops will form a two-bit ripple counter with the following state sequence: 00, 01, 10, 11, and then back to 00 on overflow. Additional flip-flops may be added to the chain to form counters of any arbitrary word size, with the output frequency of each bit equal to exactly half the frequency of the nearest, less significant bit. Ripple counters exhibit unstable output states while the input clock propagates through the circuit. The duration of this instability (the output settling time) is proportional to the number of flip-flops. This makes ripple counters unsuitable for use in synchronous circuits that require the counter to have a fast output settling time. Also, it is often impractical to use ripple counter output bits as clocks for external circuits because the ripple effect causes timing skew between the bits. Ripple counters are commonly used as general-purpose counters and clock frequency dividers in applications where the instantaneous count and timing skew are unimportant. Synchronous counter In a synchronous counter, the clock inputs of the flip-flops are connected, and the common clock simultaneously triggers all flip-flops. Consequently, all of the flip-flops change state at the same time (in parallel). For example, the circuit shown to the right is an ascending (up-counting) four-bit synchronous counter implemented with JK flip-flops. Each bit of this counter is allowed to toggle when all of the less significant bits are at a logic high state. Upon clock rising edge, bit 1 toggles if bit 0 is logic high; bit 2 toggles if bits 0 and 1 are both high; bit 3 toggles if bits 2, 1, and 0 are all high. Decade counter A decade counter counts in decimal digits, rather than binary. A decade counter may have each digit binary encoded (that is, it may count in binary-coded decimal, as the 7490 integrated circuit did) or use other binary encodings. A decade counter is a binary counter designed to count to 1001 (decimal 9). 
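The ripple counting sequence described above can be simulated at the behavioural level. The following Python sketch is purely illustrative (the function name and bit ordering are invented for this example): it models a chain of toggle flip-flops in which each stage is clocked by the falling edge of the stage below it, so each bit runs at half the frequency of the next less significant bit.

```python
# Illustrative behavioural simulation of a 4-bit ripple counter built from
# toggle flip-flops. Bit 0 toggles on every input clock pulse; each higher
# bit toggles when the bit below it makes a 1 -> 0 (falling-edge) transition.

def ripple_counter(num_bits, clock_pulses):
    bits = [0] * num_bits                 # flip-flop outputs, bit 0 first
    history = []
    for _ in range(clock_pulses):
        carry = True                      # the external clock pulse reaches bit 0
        for i in range(num_bits):
            if not carry:
                break
            old = bits[i]
            bits[i] ^= 1                  # toggle this flip-flop
            carry = (old == 1)            # a falling edge clocks the next stage
        history.append(sum(b << i for i, b in enumerate(bits)))
    return history

print(ripple_counter(4, 18))
# [1, 2, 3, ..., 15, 0, 1, 2] -- a 4-bit counter overflows after 16 pulses,
# and each output bit divides the clock frequency by a further factor of two.
```

This zero-delay model captures only the counting sequence; in a real ripple counter the bits change at slightly different times because of propagation delay, which is the source of the transient, unstable output states mentioned above.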
An ordinary four-stage counter can be easily modified to a decade counter by adding a NAND gate as in the schematic to the right. Notice that FF2 and FF4 provide the inputs to the NAND gate. The NAND gate output is connected to the CLR input of each of the FFs. It counts from 0 to 9 and then resets to zero. The counter output can be set to zero by pulsing the reset line low. The count then increments on each clock pulse until it reaches 1001 (decimal 9). When it increments to 1010 (decimal 10), both inputs of the NAND gate go high. The result is that the NAND output goes low, and resets the counter to zero. D going low can be a CARRY OUT signal, indicating that there has been a count of ten. Ring counter A ring counter is a circular shift register that is initialized such that only one of its flip-flops is in the one state while the others are in their zero states. A ring counter is a shift register (a cascade connection of flip-flops) with the output of the last one connected to the input of the first, that is, in a ring. Typically, a pattern consisting of a single bit is circulated, so the state repeats every n clock cycles if n flip-flops are used. Johnson counter A Johnson counter (or switch-tail ring counter, twisted ring counter, walking ring counter, or Möbius counter) is a modified ring counter, where the output from the last stage is inverted and fed back as input to the first stage. The register cycles through a sequence of bit-patterns, whose length is equal to twice the length of the shift register, continuing indefinitely. These counters find specialist applications similar to those of the decade counter (note: the 74x4017 decade counter is a Johnson counter), digital-to-analog conversion, etc. They can be implemented easily using D- or JK-type flip-flops. Computer science counters In computability theory, a counter is considered a type of memory. A counter stores a single natural number (initially zero) and can be arbitrarily long. A counter is usually considered in conjunction with a finite-state machine (FSM), which can perform the following operations on the counter: Check whether the counter is zero. Increment the counter by one. Decrement the counter by one (if it's already zero, this leaves it unchanged). The following machines are listed in order of power, with each one being strictly more powerful than the one below it: a deterministic or non-deterministic FSM plus two counters; a non-deterministic FSM plus one stack; a non-deterministic FSM plus one counter; a deterministic FSM plus one counter; and a deterministic or non-deterministic FSM. For the first and last, it doesn't matter whether the FSM is a deterministic finite automaton or a nondeterministic finite automaton. They have the same power. The first two and the last one are levels of the Chomsky hierarchy. The first machine, an FSM plus two counters, is equivalent in power to a Turing machine. See the article on counter machines for a proof. Web counter A web counter or hit counter is a computer program that indicates the number of visitors or hits a particular webpage has received. Once set up, these counters will be incremented by one every time the web page is accessed in a web browser. The number is usually displayed as an inline digital image or in plain text or on a physical counter such as a mechanical counter. Images may be presented in a variety of fonts, or styles; the classic example is the wheels of an odometer. 
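A hit counter of the kind just described reduces to a very small program. The following Python sketch is a minimal illustration only: the file name and function are invented, and a real deployment would sit behind a web framework and need locking against concurrent requests.

```python
# Minimal sketch of a web hit counter: each call increments a persistent
# count and returns the new total. File name and function are illustrative.

from pathlib import Path

COUNTER_FILE = Path("hit_count.txt")

def record_hit():
    count = int(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else 0
    count += 1
    COUNTER_FILE.write_text(str(count))
    return count

# Each page view would call record_hit() and render the returned number,
# either as plain text or as an image of odometer-style digit wheels.
print(record_hit())
```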
Web counters were popular in the mid to late 1990s and early 2000s, later replaced by more detailed and complete web traffic measures. Computer-based counters Many automation systems use PCs and laptops to monitor different parameters of machines and production data. Counters may count parameters such as the number of pieces produced, the production batch number, and measurements of the amounts of material used. Mechanical counters Long before electronics became common, mechanical devices were used to count events. These are known as tally counters. They typically consist of a series of disks mounted on an axle, with the digits zero through nine marked on their edge. The right-most disk moves one increment with each event. Each disk except the left-most has a protrusion that moves the next disk to the left one increment after the completion of one revolution. Such counters were used as odometers for bicycles and cars and in tape recorders, fuel dispensers, production machinery, and other machinery. One of the largest manufacturers was the Veeder-Root company, and their name was often used for this type of counter. Handheld tally counters are used mainly for stocktaking and counting people attending events. Electromechanical counters were used to accumulate totals in tabulating machines that pioneered the data processing industry. See also Time to digital converter Geneva drive Pace count beads Prayer beads Asynchronous circuit Synchronous circuit References External links Numeral systems Digital circuits Unary operations
Counter (digital)
[ "Mathematics" ]
2,571
[ "Functions and mappings", "Unary operations", "Mathematical objects", "Numeral systems", "Mathematical relations", "Numbers" ]
7,673
https://en.wikipedia.org/wiki/Costume
Costume is the distinctive style of dress and/or makeup of an individual or group that reflects class, gender, occupation, ethnicity, nationality, activity or epoch—in short, culture. The term also was traditionally used to describe typical appropriate clothing for certain activities, such as riding costume, swimming costume, dance costume, and evening costume. Appropriate and acceptable costume is subject to changes in fashion and local cultural norms. This general usage has gradually been replaced by the terms "dress", "attire", "robes" or "wear" and usage of "costume" has become more limited to unusual or out-of-date clothing and to attire intended to evoke a change in identity, such as theatrical, Halloween, and mascot costumes. Before the advent of ready-to-wear apparel, clothing was made by hand. When made for commercial sale it was made, as late as the beginning of the 20th century, by "costumiers", often women who ran businesses that met the demand for complicated or intimate female costume, including millinery and corsetry. Etymology Derived from the Italian language and passed down through French, the term "costume" shares its origins with the word signifying fashion or custom. In its more specific sense, the term "costume", which has indicated clothing only from the eighteenth century onward, can be traced back to the Latin consuetudo, meaning "custom" or "usage". National costume National costume or regional costume expresses local (or exiled) identity and emphasizes a culture's unique attributes. They are often a source of national pride. Examples include the Scottish kilt, Turkish Zeybek, or Japanese kimono. In Bhutan there is a traditional national dress prescribed for men and women, including the monarchy. These have been in vogue for thousands of years and have developed into a distinctive dress style. The dress worn by men is known as Gho, which is a robe worn up to knee-length and is fastened at the waist by a band called the Kera. The front part of the dress, which is formed like a pouch, in olden days was used to hold baskets of food and a short dagger, but now it is used to keep a cell phone, a purse and the betel nut called Doma. The dress worn by women consists of three pieces known as Kira, Tego and Wonju. The long dress which extends up to the ankle is the Kira. The jacket worn above this is the Tego, which is provided with the Wonju, the inner jacket. However, while visiting the Dzong or monastery a long scarf or stole, called a Kabney, is worn by men across the shoulder, in colours appropriate to their ranks. Women also wear scarves or stoles called Rachus, made of raw silk with embroidery, over their shoulder, but these are not indicative of their rank. Theatrical costume Costume often refers to a particular style of clothing worn to portray the wearer as a character or type of character at a social event in a theatrical performance on the stage or in film or television. In combination with other aspects of stagecraft, theatrical costumes can help actors portray characters and their contexts as well as communicate information about the historical period/era, geographic location and time of day, season or weather of the theatrical performance. Some stylized theatrical costumes, such as Harlequin and Pantaloon in the Commedia dell'arte, exaggerate an aspect of a character. Costume construction "Costume technician" is the term used for a person who constructs and/or alters costumes. 
The costume technician is responsible for taking the two-dimensional sketch and translating it to create a garment that resembles the designer's rendering. It is important for a technician to keep the ideas of the designer in mind when building the garment. Draping and cutting Draping is the art of manipulating fabric directly on a dress form or body form as the first step to create a pattern. A body form can be padded to a person's specific measurements. Flat drafting is the art of drawing patterns onto paper based on measurements to create a pattern. Cutting is the act of tracing a pattern onto fabric and cutting out the pieces. These pieces are put together to create a final costume. In costuming, the person who creates a pattern is called a cutter/draper, and in fashion this person is more commonly called a pattern drafter, though both techniques may be used in both fields. Draping is especially useful with stretchy fabrics or bias-cut garments, as the maker can see how the fabric will be affected by body curves and the pull of gravity. Jobs Costume designer Designs and creates a concept for the costumes for the play or performance. Costume technician Constructs and patterns the costumes for the play or performance. Wardrobe supervisor Oversees the wardrobe crew and run of the show from backstage. They are responsible for maintaining the good condition of the costumes. Milliner Also known as a hatmaker, responsible for the manufacturing of hats and headwear. Religious festivals Wearing costumes is an important part of holidays developed from religious festivals such as Mardi Gras (in the lead-up to Easter), and Halloween (related to All Hallow's Eve). Mardi Gras costumes usually take the form of jesters and other fantasy characters; Halloween costumes traditionally take the form of supernatural creatures such as ghosts, vampires, pop-culture icons and angels. Halloween costumes developed from pre-Christian religious traditions: to avoid being terrorized by evil spirits walking the Earth during the harvest festival Samhain, the Celts donned disguises. In the eighth century, Pope Gregory III designated November 1 as All Saints Day, and the preceding days as All Hallows Eve; Samhain's costuming tradition was incorporated into these Christian holidays. Given the Catholic and pagan roots of the holiday, it has been repudiated by some Protestants. However, in the modern era, Halloween "is widely celebrated in almost every corner of American life," and the wearing of costumes forms part of a secular tradition. In 2022, United States households spent an average of $100 preparing for Halloween, with $34 going to costume-related spending. Christmas costumes typically portray characters such as Santa Claus (developed from Saint Nicholas). In Australia, the United Kingdom and the United States the American version of a Santa suit and beard is popular; in the Netherlands, the costume of Zwarte Piet is customary. Easter costumes are associated with the Easter Bunny or other animal costumes. In Judaism, a common practice is to dress up on Purim. During this holiday, Jews celebrate the change of their destiny. They were delivered from being the victims of an evil decree against them and were instead allowed by the King to destroy their enemies. A quote from the Book of Esther, which says "On the contrary", is the reason that wearing a costume has become customary for this holiday. 
Buddhist religious festivals in Tibet, Bhutan, Mongolia, and in Lhasa and Sikkim in India feature the Cham dance, which is a popular dance form utilising masks and costumes. Parades and processions Parades and processions provide opportunities for people to dress up in historical or imaginative costumes. For example, in 1879 the artist Hans Makart designed costumes and scenery to celebrate the wedding anniversary of the Austro-Hungarian Emperor and Empress and led the people of Vienna in a costume parade that became a regular event until the mid-twentieth century. Uncle Sam costumes are worn on Independence Day in the United States. The Lion Dance, which is part of Chinese New Year celebrations, is performed in costume. Some costumes, such as the ones used in the Dragon Dance, need teams of people to create the required effect. Sporting events and parties Public sporting events such as fun runs also provide opportunities for wearing costumes, as do private masquerade balls and fancy dress parties. Mascots Costumes are popularly employed at sporting events, during which fans dress as their team's representative mascot to show their support. Businesses use mascot costumes to bring in people to their business either by placing their mascot in the street by their business or sending their mascot out to sporting events, festivals, national celebrations, fairs, and parades. Mascots also appear on behalf of organizations wanting to raise awareness of their work. Children's book authors create mascots from the main character to present at their book signings. Animal costumes that are visually very similar to mascot costumes are also popular among the members of the furry fandom, where the costumes are referred to as fursuits and match one's animal persona, or "fursona". Children Costumes also serve as an avenue for children to explore and role-play. For example, children may dress up as characters from history or fiction, such as pirates, princesses, cowboys, or superheroes. They may also dress in uniforms used in common jobs, such as nurses, police officers, or firefighters, or as zoo or farm animals. Young boys tend to prefer costumes that reinforce stereotypical ideas of being male, and young girls tend to prefer costumes that reinforce stereotypical ideas of being female. Cosplay Cosplay, a word of Japanese origin that in English is short for "costume display" or "costume play", is a performance art in which participants wear costumes and accessories to represent a specific character or idea that is usually identified with a unique name (as opposed to a generic word). These costume wearers often interact to create a subculture centered on role play, so they can be seen most often in play groups, or at a gathering or convention. A significant number of these costumes are homemade and unique, and depend on the character, idea, or object the costume wearer is attempting to imitate or represent. The costumes themselves are often artistically judged on how well they represent the subject or object that the costume wearer is attempting to contrive. Design Costume design is the envisioning of clothing and the overall appearance of a character or performer. Costume may refer to the style of dress particular to a nation, a class, or a period. In many cases, it may contribute to the fullness of the artistic, visual world that is unique to a particular theatrical or cinematic production. The most basic designs are produced to denote status, provide protection or modesty, or provide visual interest to a character. 
Costumes may be for, but not limited to, theater, cinema, or musical performances. Costume design should not be confused with costume coordination, which merely involves altering existing clothing, although both processes are used to create stage clothes. Organizations The Costume Designers Guild's international membership includes motion picture, television, and commercial costume designers, assistant costume designers and costume illustrators, and totals over 750 members. The National Costumers Association is an 80-year-old association of professional costumers and costume shops. Publications The Costume Designer is a quarterly magazine devoted to the costume design industry. Notable designers and awards Notable costume designers include recipients of the Academy Award for Best Costume Design, Tony Award for Best Costume Design, and Drama Desk Award for Outstanding Costume Design. Edith Head and Orry-Kelly, both of whom were born late in 1897, were two of Hollywood's most notable costume designers. Industry Professional-grade costumes are typically designed and produced by costume companies who can design and create unique costumes. These companies have often been in business for over 100 years, and continue to work with individual clients to create professional-quality costumes. Professional costume houses rent and sell costumes for the trade. This includes companies that create mascots, costumes for film, TV costumes and theatrical costumes. Larger costume companies have warehouses full of costumes for rental to customers. There is an industry in which costumers work with clients to design costumes from scratch; they will then create original costumes to the client's specifications. See also References External links http://costumesocietyamerica.com/ The Costume Society, UK National Costumers Association Costume design
Costume
[ "Engineering" ]
2,400
[ "Costume design", "Design" ]
7,677
https://en.wikipedia.org/wiki/Computer%20monitor
A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls. The display in modern monitors is typically an LCD with LED backlight, having by the 2010s replaced CCFL backlit LCDs. Before the mid-2000s, most monitors used a cathode-ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors use other proprietary connectors and signals to connect to a computer. Originally computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers (and their monitors) have been used for both data processing and video, while televisions have implemented some computer functionality. Since 2010, the typical display aspect ratio of both televisions and computer monitors changed from 4:3 to 16:9. Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set. History Early electronic computer front panels were fitted with an array of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. Computer monitors were formerly known as visual display units (VDU), particularly in British English. This term mostly fell out of use by the 1990s. Technologies Multiple technologies have been used for computer monitors. Until the 21st century most used cathode-ray tubes, but they have largely been superseded by LCD monitors. Cathode-ray tube The first computer monitors used cathode-ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the workstation in a single large chassis, typically limiting them to emulation of a paper teletypewriter, thus the early epithet of 'glass TTY'. The display was monochromatic and far less sharp and detailed than on a modern monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use; wider commercial use became possible after the release of the slow but affordable Tektronix 4010 terminal in 1972. 
Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as the Apple II computer and Atari 2600 console, both introduced in 1977), and color output was a specialty of the more graphically sophisticated Atari 8-bit computers, introduced in 1979. These computers could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors at a resolution of 320 × 200 pixels, or 640 × 200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter, which was capable of producing 16 colors at a resolution of 640 × 350. By the end of the 1980s color progressive scan CRT monitors were widely available and increasingly affordable, while the sharpest prosumer monitors could clearly display high-definition video, against the backdrop of efforts at HDTV standardization from the 1970s to the 1980s failing continuously, leaving consumer SDTVs to stagnate increasingly far behind the capabilities of computer CRT monitors well into the 2000s. During the following decade, maximum display resolutions gradually increased and prices continued to fall as CRT technology remained dominant in the PC monitor market into the new millennium, partly because it remained cheaper to produce. CRTs still offer color, grayscale, motion, and latency advantages over today's LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry. Liquid-crystal display There are multiple technologies that have been used to implement liquid-crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s selling for high prices. As prices declined they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo FlexScan L66 in the mid-1990s, the SGI 1600SW, Apple Studio Display and the ViewSonic VP140 in 1998. In 2003, LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The physical advantages of LCD over CRT monitors are that LCDs are lighter, smaller, and consume less power. In terms of performance, LCDs produce less or no flicker, reducing eyestrain, sharper image at native resolution, and better checkerboard contrast. 
On the other hand, CRT monitors have superior blacks, viewing angles, and response time, can use arbitrary lower resolutions without aliasing, and flicker can be reduced with higher refresh rates, though this flicker can also be used to reduce motion blur compared to less flickery displays such as most LCDs. Many specialized fields such as vision science remain dependent on CRTs, the best LCD monitors having achieved moderate temporal accuracy, and so can be used only if their poor spatial accuracy is unimportant. High dynamic range (HDR) has been implemented into high-end LCD monitors to improve grayscale accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to widescreen, which makes squarer monitors unsuited to displaying them correctly. Organic light-emitting diode Organic light-emitting diode (OLED) monitors provide most of the benefits of both LCD and CRT monitors with few of their drawbacks, though much like plasma panels or very early CRTs they suffer from burn-in, and remain very expensive. Measurements of performance The performance of a monitor is measured by the following parameters: Display geometry: Viewable image size – is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable size is typically smaller than the tube itself. Aspect ratio – is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio 4:3, 5:4, 16:10 or 16:9. Radius of curvature (for curved monitors) – is the radius that a circle would have if it had the same curvature as the display. This value is typically given in millimeters, but expressed with the letter "R" instead of a unit (for example, a display with "3800R curvature" has a 3800mm radius of curvature). Display resolution is the number of distinct pixels in each dimension that can be displayed natively. For a given display size, maximum resolution is limited by dot pitch or DPI. Dot pitch represents the distance between the primary elements of the display, typically averaged across it in nonuniform displays. A related unit is pixel pitch. In LCDs, pixel pitch is the distance between the center of two adjacent pixels. In CRTs, pixel pitch is defined as the distance between subpixels of the same color. Dot pitch is the reciprocal of pixel density. Pixel density is a measure of how densely packed the pixels on a display are. In LCDs, pixel density is the number of pixels in one linear unit along the display, typically measured in pixels per inch (px/in or ppi). Color characteristics: Luminance – measured in candelas per square meter (cd/m², also called a nit). Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing simultaneously. For example, a ratio of 20,000:1 means that the brightest shade (white) is 20,000 times brighter than its darkest shade (black). Dynamic contrast ratio is measured with the LCD backlight turned off. ANSI contrast is with both black and white simultaneously adjacent onscreen. Color depth – measured in bits per primary color or bits for all colors. 
Those with 10bpc (bits per channel) or more can display more shades of color (approximately 1 billion shades) than traditional 8bpc monitors (approximately 16.8 million shades or colors), and can do so more precisely without having to resort to dithering. Gamut – measured as coordinates in the CIE 1931 color space. The names sRGB or Adobe RGB are shorthand notations. Color accuracy – measured in ΔE (delta-E); the lower the ΔE, the more accurate the color representation. A ΔE of below 1 is imperceptible to the human eye. A ΔE of 2–4 is considered good and requires a sensitive eye to spot the difference. Viewing angle is the maximum angle at which images on the monitor can be viewed, without subjectively excessive degradation to the image. It is measured in degrees horizontally and vertically. Input speed characteristics: Refresh rate is (in CRTs) the number of times in a second that the display is illuminated (the number of times a second a raster scan is completed). In LCDs it is the number of times the image can be changed per second, expressed in hertz (Hz). Determines the maximum number of frames per second (FPS) a monitor is capable of showing. Maximum refresh rate is limited by response time. Response time is the time a pixel in a monitor takes to change between two shades. The particular shades depend on the test procedure, which differs between manufacturers. In general, lower numbers mean faster transitions and therefore fewer visible image artifacts such as ghosting. Grey to grey (GtG), measured in milliseconds (ms). Input latency is the time it takes for a monitor to display an image after receiving it, typically measured in milliseconds (ms). Power consumption is measured in watts. Size On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the bezel or other aspects of the unit's design. The main measurements for display devices are width, height, total area and the diagonal. The size of a display is usually given by manufacturers diagonally, i.e. as the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT televisions, when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode-ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size and was not confusing when the aspect ratio was universally 4:3. With the introduction of flat-panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger viewable area than an eighteen-inch cathode-ray tube. Estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that for example a 16:9 widescreen display has less area than a 4:3 screen of the same diagonal. For the same diagonal, the 4:3 screen is taller and has the larger total area, while the 16:9 widescreen is wider but has less area (a short calculation illustrating this appears below). Aspect ratio Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. 
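The size and density relationships described above reduce to simple geometry. The following Python sketch computes width, height, and area from a diagonal and aspect ratio, and pixel density from a resolution; the 27-inch and 2560 × 1440 figures are arbitrary example values chosen for illustration, not values taken from the text.

```python
# Illustrative calculation of screen dimensions and pixel density.

import math

def screen_dimensions(diagonal_in, aspect_w, aspect_h):
    """Return (width, height, area) in inches / square inches for a given diagonal."""
    unit = diagonal_in / math.hypot(aspect_w, aspect_h)
    width, height = aspect_w * unit, aspect_h * unit
    return width, height, width * height

def pixels_per_inch(h_pixels, v_pixels, diagonal_in):
    """Pixel density along the diagonal, in pixels per inch."""
    return math.hypot(h_pixels, v_pixels) / diagonal_in

for w, h in [(4, 3), (16, 9)]:
    width, height, area = screen_dimensions(27, w, h)
    print(f"{w}:{h} 27-inch screen: {width:.1f} x {height:.1f} in, {area:.0f} sq in")
# The 4:3 screen has the larger area for the same diagonal, as noted above.

print(f"{pixels_per_inch(2560, 1440, 27):.0f} ppi")  # roughly 109 ppi at 27 inches
```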
Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses beyond a wider field of view in video games and movie viewing, such as the word processor display of two standard letter pages side by side, as well as CAD displays of large-size drawings and application menus at the same time. In 2008 16:10 became the most commonly sold aspect ratio for LCD monitors and the same year 16:10 was the mainstream standard for laptops and notebook computers. In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen to be the standard high-definition television display size, and because they were cheaper to manufacture. In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because the "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand." Resolution The resolution for computer monitors has increased over time, rising steadily from the late 1970s through the late 1990s. Since 2009, the most commonly sold resolution for computer monitors has been 1920 × 1080, shared with the 1080p of HDTV. Before 2013, mass-market LCD monitors were limited to lower resolutions, excluding niche professional monitors. By 2015 most major display manufacturers had released 3840 × 2160 (4K UHD) displays, and the first 7680 × 4320 (8K) monitors had begun shipping. Gamut Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible, if the monitor is calibrated. A picture that uses colors that are outside the sRGB color space will display on an sRGB color space monitor with limitations. Still today, many monitors that can display the sRGB color space are neither factory- nor user-calibrated to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print. Additional features Universal features Power saving Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby. Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear. Indicator light Most modern monitors have two different indicator light colors: when a video-input signal is detected, the indicator light is green, and when the monitor is in power-saving mode, the screen is black and the indicator light is orange. 
Some monitors have different indicator light colors and some monitors have a blinking indicator light when in power-saving mode. Integrated accessories Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers. These monitors have advanced microprocessors which contain codec information, Windows interface drivers and other small software that helps these features function properly. Ultrawide screens These monitors feature an aspect ratio greater than 2:1 (for instance, 21:9 or 32:9, as opposed to the more common 16:9, which resolves to approximately 1.78:1). Monitors with an aspect ratio greater than 3:1 are marketed as super ultrawide monitors. These are typically massive curved screens intended to replace a multi-monitor deployment. Touch screen These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints. Sensors Ambient light for automatically adjusting screen brightness and/or color temperature. Infrared camera for biometrics, eye and/or face recognition. Eye tracking as a user input device. A lidar receiver for 3D scanning. Consumer features Glossy screen Some displays, especially newer flat-panel monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness but reflections from lights and windows are more visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only partly mitigates the problem. Curved designs Most often using nominally flat-panel display technology such as LCD or OLED, a concave rather than convex curve is imparted, reducing geometric distortion, especially in extremely large and wide seamless desktop monitors intended for close viewing range. 3D Newer monitors are able to display a different image for each eye, often with the help of special glasses and polarizers, giving the perception of depth. An autostereoscopic screen can generate 3D images without headgear. Professional features Anti-glare and anti-reflection screens Features for medical use or for outdoor placement. Directional screen Narrow viewing angle screens are used in some security-conscious applications. Integrated professional accessories Integrated screen calibration tools, screen hoods, signal transmitters; protective screens. Tablet screens A combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch without the pressure of one or more special tools. Newer models, however, are now able to detect touch from any pressure and often have the ability to detect tool tilt and rotation as well. Touch and tablet sensors are often used on sample and hold displays such as LCDs to substitute for the light pen, which can only work on CRTs. Integrated display LUT and 3D LUT tables The option for using the display as a reference monitor; these calibration features can give advanced color management control to produce a near-perfect image. Local dimming backlight An option for professional LCD monitors (and inherent to OLED and CRT); a professional feature that is becoming mainstream. Backlight brightness/color uniformity compensation A near-mainstream professional feature; an advanced hardware driver for backlight modules with local zones of uniformity correction. 
Mounting Computer monitors are provided with a variety of methods for mounting them depending on the application and environment. Desktop A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a VESA mount. A VESA standard mount allows the monitor to be used with more after-market stands if the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation. VESA mount The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat-panel displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA Mount typically consists of four threaded holes on the rear of the display that will mate with an adapter bracket. Rack mount Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack: Fixed A fixed rack mount monitor is mounted directly to the rack with the flat-panel or CRT visible at all times. The height of the unit is measured in rack units (RU) and 8U or 9U are most common to fit 17-inch or 19-inch screens. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal screen is the largest size that will fit within the rails of a 19-inch rack. Larger flat-panels may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller screens side by side into one rack mount. Stowable A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage as a drawer. The flat display is visible only when pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD but there are systems providing two or three displays in a single rack mount system. Panel mount A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the screen, sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is provided to provide a water-tight seal to the panel and the front of the screen will be sealed to the back of the front panel to prevent water and dirt contamination. Open frame An open frame monitor provides the display and enough supporting structure to hold associated electronics and to minimally support the display. Provision will be made for attaching the unit to some external structure for support and protection. Open frame monitors are intended to be built into some other piece of equipment providing its own case. 
An arcade video game is a good example, with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays with the end-use display simply providing an attractive protective enclosure. Some rack mount monitor manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame display for inclusion into their product. Security vulnerabilities According to a leaked NSA document, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable to allow the NSA to remotely see what is being displayed on the targeted computer monitor. Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking more generally is the process of exploiting telephone networks. See also Composite monitor History of display technology Comparison of CRT, LCD, plasma, and OLED displays Flat-panel display Head-mounted display High frame rate Liquid-crystal display Multi-monitor Vector monitor Virtual desktop Variable refresh rate References External links American inventions Computer peripherals Electronic display devices
Computer monitor
[ "Technology" ]
5,121
[ "Computer peripherals", "Components" ]
7,701
https://en.wikipedia.org/wiki/Cocaine
Cocaine is a tropane alkaloid that acts as a central nervous system stimulant. As an extract, it is mainly used recreationally and often illegally for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and rarely, but more formally, as a local anaesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant, and further processing into cocaine hydrochloride (powdered cocaine), the drug is administered by being either snorted, applied topically to the mouth, or dissolved and injected into a vein. It can also then be turned into free base form (typically crack cocaine), in which it can be heated until sublimated and then the vapours can be inhaled. Cocaine stimulates the mesolimbic pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood vessel constriction properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm. Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure. A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience prolonged craving lasting for many months. Abstaining addicts also experience modest drug withdrawal symptoms lasting up to 24 hours, with sleep disruption, anxiety, irritability, crashing, depression, decreased libido, decreased ability to feel pleasure, and fatigue being common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases risk of stroke, heart attack, cardiac arrhythmia, lung injury (when smoked), and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually. Uses Coca leaves have been used by Andean civilizations since ancient times. 
In ancient Wari culture, Inca culture, and through modern successor indigenous cultures of the Andes mountains, coca leaves are chewed, taken orally in the form of a tea, or alternatively, prepared in a sachet wrapped around alkaline burnt ashes, and held in the mouth against the inner cheek; it has traditionally been used to combat the effects of cold, hunger, and altitude sickness. Cocaine was first isolated from the leaves in 1860. Globally, in 2019, cocaine was used by an estimated 20 million people (0.4% of adults aged 15 to 64 years). The highest prevalence of cocaine use was in Australia and New Zealand (2.1%), followed by North America (2.1%), Western and Central Europe (1.4%), and South and Central America (1.0%). Since 1961, the Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. In the United States, cocaine is regulated as a Schedule II drug under the Controlled Substances Act, meaning that it has a high potential for abuse but has an accepted medical use. While rarely used medically today, its accepted uses are as a topical local anesthetic for the upper respiratory tract as well as to reduce bleeding in the mouth, throat and nasal cavities. Medical Cocaine eye drops are frequently used by neurologists when examining people suspected of having Horner syndrome. In Horner syndrome, sympathetic innervation to the eye is blocked. In a healthy eye, cocaine will stimulate the sympathetic nerves by inhibiting norepinephrine reuptake, and the pupil will dilate; if the patient has Horner syndrome, the sympathetic nerves are blocked, and the affected eye will remain constricted or dilate to a lesser extent than the opposing (unaffected) eye which also receives the eye drop test. If both eyes dilate equally, the patient does not have Horner syndrome. Topical cocaine is sometimes used as a local numbing agent and vasoconstrictor to help control pain and bleeding with surgery of the nose, mouth, throat or lacrimal duct. Although some absorption and systemic effects may occur, the use of cocaine as a topical anesthetic and vasoconstrictor is generally safe, rarely causing cardiovascular toxicity, glaucoma, and pupil dilation. Occasionally, cocaine is mixed with adrenaline and sodium bicarbonate and used topically for surgery, a formulation called Moffett's solution. Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020. The most common adverse reactions in people treated with Goprelto are headache and epistaxis. The most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia. Recreational Cocaine is a central nervous system stimulant. Its effects can last from 15 minutes to an hour. The duration of cocaine's effects depends on the amount taken and the route of administration. Cocaine can be in the form of fine white powder and has a bitter taste. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water. Crack cocaine is referred to as "crack" because of the crackling sounds it makes when heated. 
Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality. Analysis of the correlation between the use of 18 various psychoactive substances shows that cocaine use correlates with other "party drugs" (such as ecstasy or amphetamines), as well as with heroin and benzodiazepines use, and can be considered as a bridge between the use of different groups of drugs. Coca leaves It is legal for people to use coca leaves in some Andean nations, such as Peru and Bolivia, where they are chewed, consumed in the form of tea, or are sometimes incorporated into food products. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the buccal pouch (mouth between gum and cheek, much the same as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Coca tea, an infusion of coca leaves, is also a traditional method of consumption. The tea has often been recommended for travelers in the Andes to prevent altitude sickness. Its actual effectiveness has never been systematically studied. In 1986 an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as "Health Inca Tea". While the packaging claimed it had been "decocainized", no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and the tea was essentially harmless. Insufflation Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. Cocaine's desired euphoric effects are delayed when snorted through the nose by about five minutes. This occurs because cocaine's absorption is slowed by its constricting effect on the blood vessels of the nose. Insufflation of cocaine also leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60% In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is due to cocaine constricting blood vessels — and therefore restricting blood and oxygen/nutrient flow — to that area. Rolled up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. The cocaine typically is poured onto a flat, hard surface (such as a mobile phone screen, mirror, CD case or book) and divided into "bumps", "lines" or "rails", and then insufflated. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C. Injection Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually when over 120 milligrams) lasting 2 to 5 minutes including tinnitus and audio distortion. This is colloquially referred to as a "bell ringer". 
In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also the danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. An injected mixture of cocaine and heroin, known as "speedball", is a particularly dangerous combination, as the converse effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including celebrities such as comedians/actors John Belushi and Chris Farley, Mitch Hedberg, River Phoenix, grunge singer Layne Staley and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction. Inhalation The onset of cocaine's euphoric effects is fastest with inhalation, beginning after 3–5 seconds. This gives the briefest euphoria (5–15 minutes). Cocaine is smoked by inhaling the vapor produced when crack cocaine is heated to the point of sublimation. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 people who used cocaine who participated in the study, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products of cocaine that occur only when heated/smoked have been shown to change the effect profile, i.e. anhydroecgonine methyl ester, when co-administered with cocaine, increases the dopamine in CPu and NAc brain regions, and has M1- and M3-receptor affinity. People often freebase crack with a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad, often called a "brillo" (actual Brillo Pads contain soap, and are not used) or "chore" (named for Chore Boy brand copper scouring pads), serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long, usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt. Effects Acute Acute exposure to cocaine has many effects on humans, including euphoria, increases in heart rate and blood pressure, and increases in cortisol secretion from the adrenal gland. In humans with acute exposure followed by continuous exposure to cocaine at a constant blood concentration, the acute tolerance to the chronotropic cardiac effects of cocaine begins after about 10 minutes, while acute tolerance to the euphoric effects of cocaine begins after about one hour. With excessive or prolonged use, the drug can cause itching, fast heart rate, and paranoid delusions or sensations of insects crawling on the skin. Intranasal cocaine and crack use are both associated with pharmacological violence. Aggressive behavior may be displayed by both addicts and casual users. 
Cocaine can induce psychosis characterized by paranoia, impaired reality testing, hallucinations, irritability, and physical aggression. Cocaine intoxication can cause hyperawareness, hypervigilance, and psychomotor agitation and delirium. Consumption of large doses of cocaine can cause violent outbursts, especially by those with preexisting psychosis. Crack-related violence is also systemic, relating to disputes between crack dealers and users. Acute exposure may induce cardiac arrhythmias, including atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, and ventricular fibrillation. Acute exposure may also lead to angina, heart attack, and congestive heart failure. Cocaine overdose may cause seizures, abnormally high body temperature and a marked elevation of blood pressure, which can be life-threatening, abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Incidental exposure of the eye to sublimated cocaine while smoking crack cocaine can cause serious injury to the cornea and long-term loss of visual acuity. Chronic Although it has been commonly asserted, the available evidence does not show that chronic use of cocaine is associated with broad cognitive deficits. Research is inconclusive on age-related loss of striatal dopamine transporter (DAT) sites, suggesting cocaine has neuroprotective or neurodegenerative properties for dopamine neurons. Exposure to cocaine may lead to the breakdown of the blood–brain barrier. Physical side effects from chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. Cocaine can cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism in maintaining one's oral pH level, people who use cocaine over a long period of time who do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5). Cocaine use also promotes the formation of blood clots. This increase in blood clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor, and an increase in the number, activation, and aggregation of platelets. Chronic intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. Due to the absorption of the cocaine from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid. Illicitly-sold cocaine may be contaminated with levamisole. 
Levamisole may accentuate cocaine's effects. Levamisole-adulterated cocaine has been associated with autoimmune disease. Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes. Cocaine use also increases the risk of having a heart attack. Addiction Relatives of persons with cocaine addiction have an increased risk of cocaine addiction. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB leads to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons involved with the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than control, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward. DNA damage is increased in the brain of rodents by administration of cocaine. During DNA repair of such damages, persistent chromatin alterations may occur such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in cocaine addiction. In humans, cocaine abuse may cause structural changes in brain connectivity, though it is unclear to what extent these changes are permanent. Dependence and withdrawal Cocaine dependence develops after even brief periods of regular cocaine use and produces a withdrawal state with emotional-motivational deficits upon cessation of cocaine use. During pregnancy Crack baby is a term for a child born to a mother who used crack cocaine during her pregnancy. The threat that cocaine use during pregnancy poses to the fetus is now considered exaggerated. Studies show that prenatal cocaine exposure (independent of other effects such as, for example, alcohol, tobacco, or physical environment) has no appreciable effect on childhood growth and development. In 2007, he National Institute on Drug Abuse of the United States warned about health risks while cautioning against stereotyping: There are also warnings about the threat of breastfeeding: The March of Dimes said "it is likely that cocaine will reach the baby through breast milk," and advises the following regarding cocaine use during pregnancy: Mortality Persons with regular or problematic use of cocaine have a significantly higher rate of death, and are specifically at higher risk of traumatic deaths and deaths attributable to infectious disease. Pharmacology Pharmacokinetics The extent of absorption of cocaine into the systemic circulation after nasal insufflation is similar to that after oral ingestion. The rate of absorption after nasal insufflation is limited by cocaine-induced vasoconstriction of capillaries in the nasal mucosa. 
Onset of absorption after oral ingestion is delayed because cocaine is a weak base with a pKa of 8.6 and is therefore largely ionized in the acidic stomach, where it is poorly absorbed; in the alkaline duodenum a greater un-ionized fraction is present and is readily absorbed. The rate and extent of absorption from inhalation of cocaine is similar to or greater than with intravenous injection, as inhalation provides access directly to the pulmonary capillary bed. The delay in absorption after oral ingestion may account for the popular belief that cocaine bioavailability from the stomach is lower than after insufflation. Compared with ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40–60 minutes after the peak effects are attained. Cocaine crosses the blood–brain barrier via both a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. As of September 2022, the gene or genes encoding the human proton-organic cation antiporter had not been identified. Cocaine has a short elimination half-life of 0.7–1.5 hours and is extensively metabolized by plasma esterases and also by liver cholinesterases, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine, and m-hydroxybenzoylecgonine. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested cocaethylene is more euphoric, and has a higher cardiovascular toxicity than cocaine by itself. Depending on liver and kidney functions, cocaine metabolites are detectable in urine between three and eight days. Generally speaking, benzoylecgonine is eliminated from someone's urine between three and five days. In urine from heavy cocaine users, benzoylecgonine can be detected within four hours after intake and in concentrations greater than 150 ng/mL for up to eight days later. Detection of cocaine metabolites in hair is possible in regular users until after the sections of hair grown during the period of cocaine use are cut or fall out. Pharmacodynamics The pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: serotonin:dopamine = 2:3, serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine neurotransmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. 
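As a brief aside on the pharmacokinetic figures above, a back-of-the-envelope Henderson-Hasselbalch sketch illustrates the ionization argument; the gastric pH of about 2 and duodenal pH of about 8 used below are assumed typical values rather than figures stated in this article. For a weak base, the un-ionized (membrane-permeable) fraction is

f = \frac{1}{1 + 10^{\,pK_a - \mathrm{pH}}}

so with pK_a = 8.6 this gives f \approx 1/(1 + 10^{6.6}) \approx 2.5 \times 10^{-7} at pH 2 (stomach) versus f \approx 1/(1 + 10^{0.6}) \approx 0.2 at pH 8 (duodenum), a difference of roughly six orders of magnitude in the absorbable fraction, consistent with the delayed onset of oral absorption described above. Likewise, assuming simple first-order elimination with the stated half-life of 0.7–1.5 hours, the fraction of the parent drug remaining after time t is (1/2)^{t/t_{1/2}}; for t_{1/2} = 1 hour, about (1/2)^{5} \approx 3\% remains after 5 hours, which is consistent with cocaine itself being short-lived in the body while its metabolites remain detectable in urine for days.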
The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes the compulsive use of cocaine. Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, which is a ligand-gated ion channel. An overabundance of 5-HT3 receptors is reported in cocaine-conditioned rats, though 5-HT3's role is unclear. The 5-HT2 receptors (particularly the subtypes 5-HT2A, 5-HT2B and 5-HT2C) are involved in the locomotor-activating effects of cocaine. Cocaine has been demonstrated to bind in such a way as to directly stabilize the DAT transporter in its open, outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT. Cocaine attaches in such a way that this hydrogen bond cannot form, blocked by the tightly locked orientation of the cocaine molecule. Research suggests that habituation to the substance depends less on affinity for the transporter than on the conformation adopted and on where and how on the transporter the molecule binds. Conflicting findings have challenged the widely accepted view that cocaine functions solely as a reuptake inhibitor. To induce euphoria, an intravenous dose of 0.3–0.6 mg/kg of cocaine is required, which blocks 66–70% of dopamine transporters (DAT) in the brain. Re-administering cocaine beyond this threshold does not significantly increase DAT occupancy but still results in an increase in euphoria, which cannot be explained by reuptake inhibition alone. This discrepancy is not shared with other dopamine reuptake inhibitors like bupropion, sibutramine, mazindol or tesofensine, which have similar or higher potencies than cocaine as dopamine reuptake inhibitors. These findings have evoked a hypothesis that cocaine may also function as a so-called "DAT inverse agonist" or "negative allosteric modifier of DAT" resulting in dopamine transporter reversal, and subsequent dopamine release into the synaptic cleft from the axon terminal in a manner similar to but distinct from amphetamines. Sigma receptors are affected by cocaine, as cocaine functions as a sigma ligand agonist. It has also been shown to act on NMDA receptors and the D1 dopamine receptor. Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also acts at binding sites on the sodium-dependent dopamine and serotonin transporters through mechanisms separate from its inhibition of reuptake at those transporters; this property is tied to its local anesthetic action and places it in a class of functionality different from its own derived phenyltropane analogues, which lack that action. In addition to this, cocaine has some target binding to the site of the κ-opioid receptor. Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. Cocaine is known to suppress hunger and appetite by increasing co-localization of sigma σ1R receptors and ghrelin GHS-R1a receptors at the neuronal cell surface, thereby increasing ghrelin-mediated signaling of satiety and possibly via other effects on appetitive hormones. 
Chronic users may lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine effects, further, are shown to be potentiated for the user when used in conjunction with new surroundings and stimuli, and otherwise novel environs. Chemistry Appearance Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with cheaper substances to increase bulk, including talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, levamisole, and amphetamine. Fentanyl has been increasingly found in cocaine samples, although it is unclear if this is primarily due to intentional adulteration or cross contamination. Crack cocaine looks like irregular shaped white rocks. Forms Salts Cocaine — a tropane alkaloid — is a weakly alkaline compound, and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt of cocaine is by far the most commonly encountered, although the sulfate (SO42−) and the nitrate (NO3−) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents — the hydrochloride salt is polar in character and is quite soluble in water. Base As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water whereas hydrochloride salt is water-soluble. Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which will precipitate non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction. Crack cocaine Crack is usually smoked in a glass pipe, and once inhaled, it passes from the lungs directly to the central nervous system, producing an almost immediate "high" that can be very powerful – this initial crescendo of stimulation is known as a "rush". This is followed by an equally intense low, leaving the user craving more of the drug. Addiction to crack usually occurs within four to six weeks - much more rapidly than regular cocaine. Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C), and considerable decomposition/burning occurs at these high temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, and foul-tasting smoke. Cocaine base/crack can be smoked because it vaporizes with little or no decomposition at , which is below the boiling point of water. Crack is a lower purity form of free-base cocaine that is usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO3) and water, producing a very hard/brittle, off-white-to-brown colored, amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The origin of the name "crack" comes from the "crackling" sound (and hence the onomatopoeic moniker "crack") that is produced when the cocaine and its impurities (i.e. 
water, sodium bicarbonate) are heated past the point of vaporization. Coca leaf infusions Coca herbal infusion (also referred to as coca tea) is used in coca-leaf producing countries much as any herbal medicinal infusion would elsewhere in the world. The free and legal commercialization of dried coca leaves under the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use. Visitors to the city of Cuzco in Peru, and La Paz in Bolivia are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves) purportedly to help the newly arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are mild stimulation and mood lift. It has also been promoted as an adjuvant for the treatment of cocaine dependence. One study on coca leaf infusion used with counseling in the treatment of 23 addicted coca-paste smokers in Lima, Peru, found that the relapse rate fell from 4.35 times per month on average before coca tea treatment to one during treatment. The duration of abstinence increased from an average of 32 days before treatment to 217.2 days during treatment. This suggests that coca leaf infusion plus counseling may be effective at preventing relapse during cocaine addiction treatment. There is little information on the pharmacological and toxicological effects of consuming coca tea. A chemical analysis by solid-phase extraction and gas chromatography–mass spectrometry (SPE-GC/MS) of Peruvian and Bolivian tea bags indicated the presence of significant amounts of cocaine, the metabolite benzoylecgonine, ecgonine methyl ester and trans-cinnamoylcocaine in coca tea bags and coca tea. Urine specimens were also analyzed from an individual who consumed one cup of coca tea and it was determined that enough cocaine and cocaine-related metabolites were present to produce a positive drug test. Synthesis Biosynthesis The first synthesis and elucidation of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis. The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the N-methyl-Δ1-pyrrolinium cation. The first addition is a Mannich-like reaction with the enolate anion from acetyl-CoA acting as a nucleophile toward the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with the retention of the thioester from the Claisen condensation. In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate there is no preference for either stereoisomer. In cocaine biosynthesis, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine. The stereoselectivity of this reaction was further investigated through study of prochiral methylene hydrogen discrimination. This is due to the extra chiral center at C-2. This process occurs through an oxidation, which regenerates the pyrrolinium cation and formation of an enolate anion, and an intramolecular Mannich reaction. 
The tropane ring system undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH for the formation of methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid. Benzoyl-CoA then combines the two units to form cocaine. N-methyl-pyrrolinium cation The biosynthesis begins with L-glutamine, from which L-ornithine is derived in plants. The major contribution of L-ornithine and L-arginine as precursors to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal phosphate-dependent decarboxylation to form putrescine. In some animals, the urea cycle derives putrescine from ornithine. L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine. Hydrolysis of the imine derives N-carbamoylputrescine, followed by hydrolysis of the urea to form putrescine. The separate pathways of converting ornithine to putrescine in plants and animals have converged. A SAM-dependent N-methylation of putrescine gives the N-methylputrescine product, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde. Schiff base formation confirms the biosynthesis of the N-methyl-Δ1-pyrrolinium cation. Robert Robinson's acetonedicarboxylate The biosynthesis of the tropane alkaloid is still not understood. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction. Condensation of N-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate. Decarboxylation leads to tropane alkaloid formation. Reduction of tropinone The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I and tropinone reductase II. TRI produces tropine and TRII produces pseudotropine. Due to the differing kinetic and pH/activity characteristics of the enzymes and the 25-fold higher activity of TRI over TRII, the majority of tropinone reduction is carried out by TRI to form tropine. 
Illegal clandestine chemistry In 1991, the United States Department of Justice released a report detailing the typical process in which leaves from coca plants were ultimately converted into cocaine hydrochloride by Latin American drug cartels: the exact species of coca to be planted was determined by the location of its cultivation, with Erythroxylum coca being grown in tropical high altitude climates of the eastern Andes in Peru and Bolivia, while Erythroxylum novogranatense was favoured in drier lowland areas of Colombia the average cocaine alkaloid content of a sample of coca leaf varied between 0.1 and 0.8 percent, with coca from higher altitudes containing the largest percentages of cocaine alkaloids the typical farmer will plant coca on a sloping hill so rainfall will not drown the plants as they reach full maturity over 12 to 24 months after being planted the main harvest of coca leaves takes place after the traditional wet season in March, with additional harvesting also taking place in July and November the leaves are then taken to a flat area and spread out on tarpaulins to dry in the hot sun for approximately 6 hours, and afterwards placed in sacks to be transported to market or to a cocaine processing facility depending on location in the early 1990s, Peru and Bolivia were the main locations for converting coca leaf to coca paste and cocaine base, while Colombia was the primary location for the final conversion for these products into cocaine hydrochloride the conversion of coca leaf into coca paste was typically done very close to the coca fields to minimize the need to transport the coca leaves, with a plastic lined pit in the ground used as a "pozo" the leaves are added to the pozo along with fresh water from a nearby river, along with kerosene and sodium carbonate, then a team of several people will repeatedly stomp on the mixture in their bare feet for several hours to help turn the leaves into paste the cocaine alkaloids and kerosene eventually separate from the water and coca leaves, which are then drained off / scooped out of the mixture the cocaine alkaloids are then extracted from the kerosene and added into a dilute acidic solution, to which more sodium carbonate is added to cause a precipitate to form the acid and water are afterwards drained off and the precipitate is filtered and dried to produce an off-white putty-like substance, which is coca paste ready for transportation to cocaine base processing facility at the processing facility, coca paste is dissolved in a mixture of sulfuric acid and water, to which potassium permanganate is then added and the solution is left to stand for 6 hours to allow to unwanted alkaloids to break down the solution is then filtered and the precipitate is discarded, after which ammonia water is added and another precipitate is formed when the solution has finished reacting the liquid is drained, then the remaining precipitate is dried under heating lamps, and resulting powder is cocaine base ready for transfer to a cocaine hydrochloride laboratory at the laboratory, acetone is added to the cocaine base and after it has dissolved the solution is filtered to remove undesired material hydrochloric acid diluted in ether is added to the solution, which causes the cocaine to precipitate out of the solution as cocaine hydrochloride crystals the cocaine hydrochloride crystals are finally dried under lamps or in microwave ovens, then pressed into blocks and wrapped in plastic ready for export GMO synthesis Research In 2022, a GMO produced 
N. benthamiana were discovered that were able to produce 25% of the amount of cocaine found in a coca plant. Detection in body fluids Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for use, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting the results of a test, it is important to consider the cocaine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and the chronic user often has high baseline values of the metabolites in his system. Cautious interpretation of testing results may allow a distinction between passive or active usage, and between smoking versus other routes of administration. Field analysis Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test. Approximate cocaine purity can be determined using 1 mL 2% cupric sulfate pentahydrate in dilute HCl, 1 mL 2% potassium thiocyanate and 2 mL of chloroform. The shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross sensitive to heroin, methamphetamine, benzocaine, procaine and a number of other drugs but other chemicals could cause false positives. Usage According to a 2016 United Nations report, England and Wales are the countries with the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%) and Ireland (1.5%). Europe Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland. Approximately 17 million Europeans (5.1%) have used cocaine at least once and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018). Usage is particularly prevalent among this demographic: 4% to 7% of males have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on country. In 2014 London had the highest amount of cocaine in its sewage out of 50 European cities. United States Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis) and the U.S. is the world's largest consumer of cocaine. Its users span over different ages, races, and professions. In the 1970s and 1980s, the drug became particularly popular in the disco culture as cocaine usage was very common and popular in many discos such as Studio 54. 
Dependence treatment History Discovery Indigenous peoples of South America have chewed the leaves of Erythroxylon coca—a plant that contains vital nutrients as well as numerous alkaloids, including cocaine—for over a thousand years. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the time period depicts humans with bulged cheeks, indicating the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation. When the Spanish arrived in South America, the conquistadors at first banned coca as an "evil agent of devil". But after discovering that without the coca the locals were barely able to work, the conquistadors legalized and taxed the leaf, taking 10% off the value of each crop. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment": In 1609, Padre Blas Valera wrote: Isolation and naming Although the stimulant and hunger-suppressant properties of coca leaves had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient, and conditions of sea-shipping from South America at the time would often degrade the quality of the cocaine in the plant samples available to European chemists by the time they arrived. However, by 1855, the German chemist Friedrich Gaedcke successfully isolated the cocaine alkaloid for the first time. Gaedcke named the alkaloid "erythroxyline", and published a description in the journal Archiv der Pharmazie. In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process. Niemann described every step he took to isolate cocaine in his dissertation titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860 and earned him his Ph.D. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine" from "coca" (from Quechua "kúka") + suffix "ine". The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in academic chemical literature. The synthesis started from tropinone, a related natural product and took five steps. Because of the former use of cocaine as a local anesthetic, a suffix "-caine" was later extracted and used to form names of synthetic local anesthetics. Medicalization With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant. 
In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution, with the other containing merely saltwater. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently from the leg that had been immersed in saltwater. Karl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. 1898 saw Heinrich Quincke use cocaine for spinal anesthesia. Popularization In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself and upon his return to Milan, he wrote a paper in which he described the effects. In this paper, he declared coca and cocaine (at the time they were assumed to be the same) as being useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth." A chemist named Angelo Mariani who read Mantegazza's paper became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become coca wine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg cocaine per ounce of wine, but Vin Mariani which was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed. In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote that cocaine causes: By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain." By the late Victorian era, cocaine use had appeared as a vice in literature. For example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case. In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. 
Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers. In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole. In the 1931 song "Minnie the Moocher", Cab Calloway heavily references cocaine use. He uses the phrase "kicking the gong around", slang for cocaine use; describes titular character Minnie as "tall and skinny;" and describes Smokey Joe as "cokey". In the 1932 comedy musical film The Big Broadcast, Cab Calloway performs the song with his orchestra and mimes snorting cocaine in between verses. During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient of a future generation of 'pep pills' for the German military, code named D-IX. In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous and powerful, and is said to make users "feel rich and beautiful". In addition the pace of modern society − such as in finance − gives many the incentive to make use of the drug. Modern usage In many countries, cocaine is a popular recreational drug. Cocaine use is prevalent across all socioeconomic strata, including age, demographics, economic, social, political, religious, and livelihood. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. The use of the powder form has stayed relatively constant, experiencing a new height of use across the 1980s and 1990s in the U.S. However, from 2006 to 2010 cocaine use in the US declined by roughly half before again rising once again from 2017 onwards. In the UK, cocaine use increased significantly between the 1990s and late 2000s, with a similar high consumption in some other European countries, including Spain. The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding revenues by corporations such as Starbucks. Cocaine's status as a club drug shows its immense popularity among the "party crowd". In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. An American representative in the World Health Assembly banned the publication of the study, because it seemed to make a case for the positive uses of cocaine. An excerpt of the report strongly conflicted with accepted paradigms, for example, "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the US representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recuperated and published in 2010, including profiles of cocaine use in 20 countries, but are unavailable . In October 2010 it was reported that the use of cocaine in Australia has doubled since monitoring began in 2003. A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users, is the risk of ill effects or damage caused by the compounds used in adulteration. 
Cutting or "stepping on" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine) producing temporary anesthesia, as many users believe a strong numbing effect is the result of strong and/or pure cocaine, ephedrine or similar stimulants that are to produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose, so introducing active adulterants gives the illusion of purity and to 'stretch' or make it so a dealer can sell more product than without the adulterants, however the purity of the cocaine is subsequently lowered. The adulterant of sugars allows the dealer to sell the product for a higher price because of the illusion of purity and allows the sale of more of the product at that higher price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that the purity levels for street purchased cocaine was often under 5% and on average under 50% pure. Society and culture Legal status The production, distribution, and sale of cocaine products is restricted (and illegal in most contexts) in most countries as regulated by the Single Convention on Narcotic Drugs, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act. Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless, prohibit the production, sale, and consumption of cocaine. The provisions as to how much a coca farmer can yield annually is protected by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only. Australia Cocaine is a Schedule 8 controlled drug in Australia under the Poisons Standard. It is the second most popular illicit recreational drug in Australia behind cannabis. In Western Australia under the Misuse of Drugs Act 1981 4.0g of cocaine is the amount of prohibited drugs determining a court of trial, 2.0g is the amount of cocaine required for the presumption of intention to sell or supply and 28.0g is the amount of cocaine required for purposes of drug trafficking. United States The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. The Harrison Act did not recognize addiction as a treatable condition and therefore the therapeutic use of cocaine, heroin, or morphine to such individuals was outlawed leading a 1915 editorial in the journal American Medicine to remark that the addict "is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law." 
The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Despite that cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufactures. Before the early 1900s, the primary problem caused by cocaine use was portrayed by newspapers to be addiction, not violence or crime, and the cocaine user was represented as an upper or middle class White person. In 1914, The New York Times published an article titled "Negro Cocaine 'Fiends' Are a New Southern Menace", portraying Black cocaine users as dangerous and able to withstand wounds that would normally be fatal. The Anti-Drug Abuse Act of 1986 mandated the same prison sentences for distributing 500 grams of powdered cocaine and just 5 grams of crack cocaine. In the National Survey on Drug Use and Health, white respondents reported a higher rate of powdered cocaine use, and Black respondents reported a higher rate of crack cocaine use. Interdiction In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t. Production Colombia is as of 2019 the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia and from locally grown coca. There was a 28% increase in the amount of potentially harvestable coca plants which were grown in Colombia in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, only makes up a small fragment of total coca production, most of which is used for the illegal drug trade. An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times). Several hundred pounds of this mixture were soaked in of gasoline for a day, then the gasoline was removed and the leaves were pressed for the remaining liquid, after which they could be discarded. Then battery acid (weak sulfuric acid) was used, one bucket per of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of "murky-looking smelly liquid". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed pasta and sold by the farmer. The yearly harvest of leaves from a hectare produced of pasta, approximately 40–60% cocaine. Repeated recrystallization from solvents, producing pasta lavada and eventually crystalline cocaine were performed at specialized laboratories after the sale. 
Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca-growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations. The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, the spread of new strains of the coca plant due to persistent worldwide demand. The latest estimate provided by the U.S. authorities on the annual production of cocaine in Colombia refers to 290 metric tons. As of the end of 2011, the seizure operations of Colombian cocaine carried out in different countries have totaled 351.8 metric tons of cocaine, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates. Synthesis Synthesizing cocaine could eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine, but is rarely done. Natural cocaine remains the lowest cost and highest quality supply of cocaine. Formation of inactive stereoisomers (cocaine has four chiral centres – 1R 2R, 3S, and 5S, two of them dependent, hence eight possible stereoisomers) plus synthetic by-products limits the yield and purity. Trafficking and distribution Organized criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, Peru, and smuggled into the United States and Europe, the United States being the world's largest consumer of cocaine, where it is sold at huge markups; usually in the US at $80–120 for 1 gram, and $250–300 for 3.5 grams ( of an ounce, or an "eight ball"). Caribbean and Mexican routes The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. , the Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States. Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops of in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of , and the commercial shipment of tonnes of cocaine through the port of Miami. 
Chilean route Another route of cocaine traffic goes through Chile; it is primarily used for cocaine produced in Bolivia, since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain, where drug-dealing networks exist among South American immigrants. Techniques Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as "mules" (or "mulas"), who cross a border either legally, for example through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body (by swallowing or placement inside an orifice), a practice typically known as "bodypacking". If the mule gets through without being caught, the gangs will receive most of the profits. If the mule is caught, gangs may sever all links and the mule will usually stand trial for trafficking alone. Mules are often forced into the role as a result of coercion, violence, threats or extreme poverty. Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels as the local population, such as go-fast boats. Sophisticated drug submarines were reported on 20 March 2008 to be the latest tool drug runners use to bring cocaine north from Colombia. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them. Sales to consumers Cocaine is readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remains high, with a large concentration among urban youth. In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": for example, $10 might purchase a "dime bag", a very small amount (0.1–0.15 g) of cocaine. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region. In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden, and Turkey reported much higher values. 
Consumption World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t, 50% of the total, Europe about 150 t, 25% of the total, and the rest of the world the remaining 150 t or 25%. It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006. Conversely, cocaine use appears to be increasing in Europe with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland. The 2010 UN World Drug Report concluded that "it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable". See also Black cocaine Coca alkaloids Coca eradication Cocaine and amphetamine regulated transcript Cocaine Anonymous Cocaine paste Crack epidemic Illegal drug trade in Latin America Coca production in Colombia Legal status of cocaine List of cocaine analogues List of countries by prevalence of cocaine use Methylphenidate Modafinil Prenatal cocaine exposure Ypadu References General and cited references Further reading External links 1855 introductions 1855 in science Alkaloids found in Erythroxylum Anorectics Benzoate esters Carboxylate esters Cardiac stimulants CYP2D6 inhibitors Euphoriants German inventions Glycine receptor agonists Local anesthetics Methyl esters Otologicals Powders Secondary metabolites Serotonin–norepinephrine–dopamine reuptake inhibitors Sigma agonists Stimulants Sympathomimetic amines Teratogens Tropane alkaloids found in Erythroxylum coca Vasoconstrictors Wikipedia medicine articles ready to translate Obsolete medications
Cocaine
[ "Physics", "Chemistry" ]
14,956
[ "Chemical ecology", "Secondary metabolites", "Neurochemistry", "Adulteration", "Drug safety", "Teratogens", "Neurotoxins", "Metabolism" ]
7,706
https://en.wikipedia.org/wiki/Cartesian%20coordinate%20system
In geometry, a Cartesian coordinate system (, ) in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, called coordinate lines, coordinate axes or just axes (plural of axis) of the system. The point where the axes meet is called the origin and has as coordinates. The axes directions represent an orthogonal basis. The combination of origin and basis forms a coordinate frame called the Cartesian frame. Similarly, the position of any point in three-dimensional space can be specified by three Cartesian coordinates, which are the signed distances from the point to three mutually perpendicular planes. More generally, Cartesian coordinates specify the point in an -dimensional Euclidean space for any dimension . These coordinates are the signed distances from the point to mutually perpendicular fixed hyperplanes. Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates and satisfy the equation ; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. History The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637 while he was resident in the Netherlands. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat. Both Descartes and Fermat used a single axis in their treatments and have a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes's work. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. 
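As an illustrative sketch (an editorial addition, not drawn from the article itself), the circle example from the introduction can be checked numerically: points on the circle of radius 2 satisfy x² + y² = 4, and implicit differentiation gives the tangent slope −x/y.

```python
import math

def on_circle(x, y, r=2.0, tol=1e-9):
    """True if (x, y) satisfies x^2 + y^2 = r^2 up to rounding error."""
    return abs(x * x + y * y - r * r) < tol

def tangent_slope(x, y):
    """Slope of the tangent to x^2 + y^2 = r^2 at (x, y), valid when y != 0."""
    return -x / y          # from implicit differentiation: 2x + 2y*dy/dx = 0

p = (math.sqrt(2), math.sqrt(2))   # a point on the circle of radius 2
print(on_circle(*p))               # True
print(tangent_slope(*p))           # -1.0, as symmetry suggests
```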
Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space. Description One dimension An affine line with a chosen Cartesian coordinate system is called a number line. Every point on the line has a real-number coordinate, and every real number represents some point on the line. There are two degrees of freedom in the choice of Cartesian coordinate system for a line, which can be specified by choosing two distinct points along the line and assigning them to two distinct real numbers (most commonly zero and one). Other points can then be uniquely assigned to numbers by linear interpolation. Equivalently, one point can be assigned to a specific real number, for instance an origin point corresponding to zero, and an oriented length along the line can be chosen as a unit, with the orientation indicating the correspondence between directions along the line and positive or negative numbers. Each point corresponds to its signed distance from the origin (a number with an absolute value equal to the distance and a or sign chosen based on direction). A geometric transformation of the line can be represented by a function of a real variable, for example translation of the line corresponds to addition, and scaling the line corresponds to multiplication. Any two Cartesian coordinate systems on the line can be related to each-other by a linear function (function of the form taking a specific point's coordinate in one system to its coordinate in the other system. Choosing a coordinate system for each of two different lines establishes an affine map from one line to the other taking each point on one line to the point on the other line with the same coordinate. Two dimensions A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point P, a line is drawn through P perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the Cartesian coordinates of P. The reverse construction allows one to determine the point P given its coordinates. The first and second coordinates are called the abscissa and the ordinate of P, respectively; and the point where the axes meet is called the origin of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma, as in . Thus the origin has coordinates , and the points on the positive half-axes, one unit away from the origin, have coordinates and . In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled O, and the two coordinates are often denoted by the letters X and Y, or x and y. The axes may then be referred to as the X-axis and Y-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values. 
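A minimal sketch of the one-dimensional case described above (the thermometer scales are an editorial example, not taken from the article): any two Cartesian coordinate systems on a line are related by a map of the form x' = ax + b with a ≠ 0.

```python
def change_of_coordinates(a, b):
    """Return the map x -> a*x + b relating two coordinate systems on a line."""
    return lambda x: a * x + b

# Celsius and Fahrenheit readings of the same points on a thermometer scale
celsius_to_fahrenheit = change_of_coordinates(1.8, 32.0)
print(celsius_to_fahrenheit(0.0))    # 32.0
print(celsius_to_fahrenheit(100.0))  # 212.0
```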
A Euclidean plane with a chosen Cartesian coordinate system is called a . In a Cartesian plane, one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at and ), the unit hyperbola, and so on. The two axes divide the plane into four right angles, called quadrants. The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the first quadrant. If the coordinates of a point are , then its distances from the X-axis and from the Y-axis are and , respectively; where denotes the absolute value of a number. Three dimensions A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the axes) that go through a common point (the origin), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. As in the two-dimensional case, each axis becomes a number line. For any point P of space, one considers a plane through P perpendicular to each coordinate axis, and interprets the point where that plane cuts the axis as a number. The Cartesian coordinates of P are those three numbers, in the chosen order. The reverse construction determines the point P given its three coordinates. Alternatively, each coordinate of a point P can be taken as the distance from P to the plane defined by the other two axes, with the sign determined by the orientation of the corresponding axis. Each pair of axes defines a coordinate plane. These planes divide space into eight octants. The octants are: The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in or . Thus, the origin has coordinates , and the unit points on the three axes are , , and . Standard names for the coordinates in the three axes are abscissa, ordinate and applicate. The coordinates are often denoted by the letters x, y, and z. The axes may then be referred to as the x-axis, y-axis, and z-axis, respectively. Then the coordinate planes can be referred to as the xy-plane, yz-plane, and xz-plane. In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called height or altitude. The orientation is usually chosen so that the 90-degree angle from the first axis to the second axis looks counter-clockwise when seen from the point ; a convention that is commonly called the right-hand rule. Higher dimensions Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product , where is the set of all real numbers. In the same way, the points in any Euclidean space of dimension n be identified with the tuples (lists) of n real numbers; that is, with the Cartesian product . Generalizations The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). 
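Returning to the quadrants and octants introduced above, the sign pattern of the coordinates is all that is needed to classify a point; the sketch below (an editorial addition) also computes the distances to the axes mentioned earlier.

```python
def quadrant(x, y):
    """Roman-numeral quadrant, numbered counter-clockwise from the all-positive one."""
    if x > 0 and y > 0: return "I"
    if x < 0 and y > 0: return "II"
    if x < 0 and y < 0: return "III"
    if x > 0 and y < 0: return "IV"
    return "on an axis"

def octant_signs(x, y, z):
    """Sign pattern identifying the octant of a 3-D point (no coordinate zero)."""
    return tuple("+" if c > 0 else "-" for c in (x, y, z))

def distances_to_axes(x, y):
    """Distances from (x, y) to the x-axis and the y-axis, respectively."""
    return abs(y), abs(x)

print(quadrant(-3, 2))           # II
print(octant_signs(1, -2, 3))    # ('+', '-', '+')
print(distances_to_axes(3, -4))  # (4, 3)
```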
In such an oblique coordinate system the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane). Notations and conventions The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in or . The origin is often labelled with the capital letter O. In analytic geometry, unknown or generic coordinates are often denoted by the letters (x, y) in the plane, and (x, y, z) in three-dimensional space. This custom comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities. These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted p and t. Each axis is usually named after the coordinate which is measured along it; so one says the x-axis, the y-axis, the t-axis, etc. Another common convention for coordinate naming is to use subscripts, as (x1, x2, ..., xn) for the n coordinates in an n-dimensional space, especially when n is greater than 3 or unspecified. Some authors prefer the numbering (x0, x1, ..., xn−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates. In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top. Young children learning the Cartesian system, commonly learn the order to read the values before cementing the x-, y-, and z-axis concepts, by starting with 2D mnemonics (for example, 'Walk along the hall then up the stairs' akin to straight across the x-axis then up vertically along the y-axis). Computer graphics and image processing, however, often use a coordinate system with the y-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers. For three-dimensional systems, a convention is to portray the xy-plane horizontally, with the z-axis added to represent height (positive up). Furthermore, there is a convention to orient the x-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the x- and y-axis horizontally and vertically, respectively, then the z-axis should be shown pointing "out of the page" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the z-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency. 
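The remark above about storing coordinates as an array rather than a record can be made concrete; in the sketch below (field and variable names are illustrative) the subscript doubles as the coordinate index, which extends to any number of dimensions.

```python
point_as_record = {"x": 1.0, "y": -2.5, "z": 4.0}   # named fields, fixed dimension
point_as_array  = [1.0, -2.5, 4.0]                  # x0, x1, x2, ... in order

def coordinate(point, i):
    """Return the i-th coordinate of a point stored as an array."""
    return point[i]

print(coordinate(point_as_array, 2))   # 4.0, the z (or x2) coordinate
```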
For 3D diagrams, the names "abscissa" and "ordinate" are rarely used for x and y, respectively. When they are, the z-coordinate is sometimes called the applicate. The words abscissa, ordinate and applicate are sometimes used to refer to coordinate axes rather than the coordinate values. Quadrants and octants The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where the coordinates both have positive signs), II (where the abscissa is negative − and the ordinate is positive +), III (where both the abscissa and the ordinate are −), and IV (abscissa +, ordinate −). When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right ("north-east") quadrant. Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs; for example, or . The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies. Cartesian formulae for the plane Distance between two points The Euclidean distance between two points of the plane with Cartesian coordinates and is This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points and is which can be obtained by two consecutive applications of Pythagoras' theorem. Euclidean transformations The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections. Translation Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are , after the translation they will be Rotation To rotate a figure counterclockwise around the origin by some angle is equivalent to replacing every point with coordinates (x,y) by the point with coordinates (x',y'), where Thus: Reflection If are the Cartesian coordinates of a point, then are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, are the coordinates of its reflection across the first coordinate axis (the x-axis). In more generality, reflection across a line through the origin making an angle with the x-axis, is equivalent to replacing every point with coordinates by the point with coordinates , where Thus: Glide reflection A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection). General matrix form of the transformations All affine transformations of the plane can be described in a uniform way by using matrices. For this purpose, the coordinates of a point are commonly represented as the column matrix The result of applying an affine transformation to a point is given by the formula where is a 2×2 matrix and is a column matrix. 
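The rendered formulas in this section did not survive the plain-text extraction; the standard forms they refer to are reproduced below as an editorial reconstruction and should be checked against the original article.

```latex
% Distance between two points, in the plane and in space
d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}, \qquad
d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}

% Translation by (a, b)
(x, y) \mapsto (x + a,\; y + b)

% Rotation by an angle \theta about the origin
x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta

% Reflections across the y-axis, the x-axis, and a line through the origin at angle \theta
(x, y) \mapsto (-x, y), \qquad (x, y) \mapsto (x, -y), \qquad
x' = x\cos 2\theta + y\sin 2\theta, \quad y' = x\sin 2\theta - y\cos 2\theta

% General affine transformation: a 2x2 matrix A and a translation column b
\begin{pmatrix} x' \\ y' \end{pmatrix}
  = A \begin{pmatrix} x \\ y \end{pmatrix} + b
```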
That is, Among the affine transformations, the Euclidean transformations are characterized by the fact that the matrix is orthogonal; that is, its columns are orthogonal vectors of Euclidean norm one, or, explicitly, and This is equivalent to saying that times its transpose is the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation. The transformation is a translation if and only if is the identity matrix. The transformation is a rotation around some point if and only if is a rotation matrix, meaning that it is orthogonal and A reflection or glide reflection is obtained when, Assuming that translations are not used (that is, ) transformations can be composed by simply multiplying the associated transformation matrices. In the general case, it is useful to use the augmented matrix of the transformation; that is, to rewrite the transformation formula where With this trick, the composition of affine transformations is obtained by multiplying the augmented matrices. Affine transformation Affine transformations of the Euclidean plane are transformations that map lines to lines, but may change distances and angles. As said in the preceding section, they can be represented with augmented matrices: The Euclidean transformations are the affine transformations such that the 2×2 matrix of the is orthogonal. The augmented matrix that represents the composition of two affine transformations is obtained by multiplying their augmented matrices. Some affine transformations that are not Euclidean transformations have received specific names. Scaling An example of an affine transformation which is not Euclidean is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number m. If are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates If m is greater than 1, the figure becomes larger; if m is between 0 and 1, it becomes smaller. Shearing A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing is defined by: Shearing can also be applied vertically: Orientation and handedness In two dimensions Fixing or choosing the x-axis determines the y-axis up to direction. Namely, the y-axis is necessarily the perpendicular to the x-axis through the point marked 0 on the x-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called handedness) of the Cartesian plane. The usual way of orienting the plane, with the positive x-axis pointing right and the positive y-axis pointing up (and the x-axis being the "first" and the y-axis the "second" axis), is considered the positive or standard orientation, also called the right-handed orientation. A commonly used mnemonic for defining the positive orientation is the right-hand rule. Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the x-axis to the y-axis, in a positively oriented coordinate system. The other way of orienting the plane is following the left-hand rule, placing the left hand on the plane with the thumb pointing up. When pointing the thumb away from the origin along an axis towards positive, the curvature of the fingers indicates a positive rotation along that axis. 
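A brief sketch (editorial, with illustrative values) of the augmented-matrix machinery described above: affine maps compose by matrix multiplication, and only the rotation below is a Euclidean transformation, since only its 2×2 block is orthogonal.

```python
import math

def augmented(a11, a12, a21, a22, bx=0.0, by=0.0):
    """Augmented 3x3 matrix [[A, b], [0, 1]] for the affine map p -> A p + b."""
    return [[a11, a12, bx], [a21, a22, by], [0.0, 0.0, 1.0]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

t = math.pi / 2
rotate  = augmented(math.cos(t), -math.sin(t), math.sin(t), math.cos(t))
scale   = augmented(2.0, 0.0, 0.0, 2.0)      # uniform scaling by m = 2
shear_x = augmented(1.0, 0.5, 0.0, 1.0)      # horizontal shearing

combined = matmul(rotate, matmul(scale, shear_x))   # shear, then scale, then rotate
print(apply(combined, 1.0, 0.0))                    # approximately (0.0, 2.0)
```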
Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any one axis will reverse the orientation, but switching both will leave the orientation unchanged. In three dimensions Once the x- and y-axes are specified, they determine the line along which the z-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems, which result are called 'right-handed' and 'left-handed'. The standard orientation, where the xy-plane is horizontal and the z-axis points up (and the x- and the y-axis form a positively oriented two-dimensional coordinate system in the xy-plane if observed from above the xy-plane) is called right-handed or positive. The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the x-, y-, and z-axes in a right-handed system. The thumb indicates the x-axis, the index finger the y-axis and the middle finger the z-axis. Conversely, if the same is done with the left hand, a left-handed system results. Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point towards the observer, whereas the "middle"-axis is meant to point away from the observer. The red circle is parallel to the horizontal xy-plane and indicates rotation from the x-axis to the y-axis (in both cases). Hence the red arrow passes in front of the z-axis. Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as "flipping in and out" between a convex cube and a concave "corner". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the "correct" way to view Figure 8 is to imagine the x-axis as pointing towards the observer and thus seeing a concave corner. Representing a vector in the standard basis A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as . In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as: where and are unit vectors in the direction of the x-axis and y-axis respectively, generally referred to as the standard basis (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates can be written as: where and There is no natural interpretation of multiplying vectors to obtain another vector that works in all dimensions, however there is a way to use complex numbers to provide such a multiplication. In a two-dimensional cartesian plane, identify the point with coordinates with the complex number . Here, i is the imaginary unit and is identified with the point with coordinates , so it is not the unit vector in the direction of the x-axis. 
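A short sketch (editorial addition) of the complex-number identification just described: multiplying the point (x, y), read as x + iy, by a unit complex number rotates the corresponding position vector about the origin.

```python
import cmath

p = complex(1.0, 2.0)        # the point (1, 2)
i = complex(0.0, 1.0)        # the imaginary unit, identified with the point (0, 1)

q = p * i
print(q.real, q.imag)        # -2.0 1.0 : (1, 2) rotated by a quarter turn

r = p * cmath.exp(1j * cmath.pi / 6)   # rotation by 30 degrees
print(abs(r), abs(p))        # equal lengths: the rotation preserves distances
```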
Since the complex numbers can be multiplied giving another complex number, this identification provides a means to "multiply" vectors. In a three-dimensional cartesian space a similar identification can be made with a subset of the quaternions. See also Cartesian coordinate robot Horizontal and vertical Jones diagram, which plots four variables rather than two Orthogonal coordinates Polar coordinate system Regular grid Spherical coordinate system Citations General and cited references Further reading External links Cartesian Coordinate System Coordinate Converter – converts between polar, Cartesian and spherical coordinates Coordinates of a point – interactive tool to explore coordinates of a point open source JavaScript class for 2D/3D Cartesian coordinate system manipulation Analytic geometry Elementary mathematics René Descartes Orthogonal coordinate systems Three-dimensional coordinate systems
Cartesian coordinate system
[ "Mathematics" ]
5,119
[ "Elementary mathematics", "Orthogonal coordinate systems", "Coordinate systems" ]
7,713
https://en.wikipedia.org/wiki/Chinese%20remainder%20theorem
In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1). The theorem is sometimes called Sunzi's theorem. Both names of the theorem refer to its earliest known statement that appeared in Sunzi Suanjing, a Chinese manuscript written during the 3rd to 5th century CE. This first statement was restricted to the following example: If one knows that the remainder of n divided by 3 is 2, the remainder of n divided by 5 is 3, and the remainder of n divided by 7 is 2, then with no other information, one can determine the remainder of n divided by 105 (the product of 3, 5, and 7) without knowing the value of n. In this example, the remainder is 23. Moreover, this remainder is the only possible positive value of n that is less than 105. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals. History The earliest known statement of the problem appears in the 5th-century book Sunzi Suanjing by the Chinese mathematician Sunzi: Sunzi's work would not be considered a theorem by modern standards; it only gives one particular problem, without showing how to solve it, much less any proof about the general case or a general algorithm for solving it. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century) and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called Da-yan-shu () in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections which was translated into English in early 19th century by British missionary Alexander Wylie. The notion of congruences was first introduced and used by Carl Friedrich Gauss in his Disquisitiones Arithmeticae of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times. Statement Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. Let us denote by N the product of the ni. The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x, such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i. This may be restated as follows in terms of congruences: If the are pairwise coprime, and if a1, ..., ak are any integers, then the system has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, . 
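The Sunzi example above is small enough to verify by exhaustive search; the sketch below (an editorial addition) confirms that 23 is the only residue modulo 105 with the stated remainders.

```python
solutions = [x for x in range(3 * 5 * 7)
             if x % 3 == 2 and x % 5 == 3 and x % 7 == 2]
print(solutions)   # [23]: unique, as the theorem guarantees
```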
In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map defines a ring isomorphism between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in one may do the same computation independently in each and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name multi-modular computation, for linear algebra over the integers or the rational numbers. The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family. Proof The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness. Uniqueness Suppose that and are both solutions to all the congruences. As and give the same remainder, when divided by , their difference is a multiple of each . As the are pairwise coprime, their product also divides , and thus and are congruent modulo . If and are supposed to be non-negative and less than (as in the first statement of the theorem), then their difference may be a multiple of only if . Existence (first proof) The map maps congruence classes modulo to sequences of congruence classes modulo . The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution. This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can. Existence (constructive proof) Existence may be established by an explicit construction of . This construction may be split into two steps, first solving the problem in the case of two moduli, and then extending this solution to the general case by induction on the number of moduli. Case of two moduli We want to solve the system: where and are coprime. Bézout's identity asserts the existence of two integers and such that The integers and may be computed by the extended Euclidean algorithm. A solution is given by Indeed, implying that The second congruence is proved similarly, by exchanging the subscripts 1 and 2. General case Consider a sequence of congruence equations: where the are pairwise coprime. The two first equations have a solution provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation As the other are coprime with this reduces solving the initial problem of equations to a similar problem with equations. Iterating the process, one gets eventually the solutions of the initial problem. Existence (direct construction) For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers. Let be the product of all moduli but one. As the are pairwise coprime, and are coprime. 
Thus Bézout's identity applies, and there exist integers and such that A solution of the system of congruences is In fact, as is a multiple of for we have for every Computation Consider a system of congruences: where the are pairwise coprime, and let In this section several methods are described for computing the unique solution for , such that and these methods are applied on the example Several methods of computation are presented. The two first ones are useful for small examples, but become very inefficient when the product is large. The third one uses the existence proof given in . It is the most convenient when the product is large, or for computer computation. Systematic search It is easy to check whether a value of is a solution: it suffices to compute the remainder of the Euclidean division of by each . Thus, to find the solution, it suffices to check successively the integers from to until finding the solution. Although very simple, this method is very inefficient. For the simple example considered here, integers (including ) have to be checked for finding the solution, which is . This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of , and the average number of operations is of the order of . Therefore, this method is rarely used, neither for hand-written computation nor on computers. Search by sieving The search of the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that (if it were not the case, it would suffice to replace each by the remainder of its division by ). This implies that the solution belongs to the arithmetic progression By testing the values of these numbers modulo one eventually finds a solution of the two first congruences. Then the solution belongs to the arithmetic progression Testing the values of these numbers modulo and continuing until every modulus has been tested eventually yields the solution. This method is faster if the moduli have been ordered by decreasing value, that is if For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, , , ... For each of them, compute the remainder by 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding at each step, and computing only the remainders by 3. This gives 4 mod 4 → 0. Continue 4 + 5 = 9 mod 4 →1. Continue 9 + 5 = 14 mod 4 → 2. Continue 14 + 5 = 19 mod 4 → 3. OK, continue by considering remainders modulo 3 and adding 5 × 4 = 20 each time 19 mod 3 → 1. Continue 19 + 20 = 39 mod 3 → 0. OK, this is the result. This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers. Using the existence construction The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo (for getting a result in the interval ). 
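A compact sketch of the direct construction just described, using Python's built-in modular inverse (Python 3.8+) in place of an explicit extended Euclidean algorithm. The moduli and remainders are an editorial reconstruction of the sieving example above, namely x ≡ 0 (mod 3), x ≡ 3 (mod 4), x ≡ 4 (mod 5), whose solution is 39.

```python
from math import prod

def crt(remainders, moduli):
    """Direct construction: x = sum of a_i * M_i * N_i, with N_i = N / n_i."""
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(remainders, moduli):
        N_i = N // n_i
        M_i = pow(N_i, -1, n_i)    # a Bezout coefficient: M_i * N_i = 1 (mod n_i)
        x += a_i * M_i * N_i
    return x % N

print(crt([0, 3, 4], [3, 4, 5]))   # 39
```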
As the Bézout's coefficients may be computed with the extended Euclidean algorithm, the whole computation, at most, has a quadratic time complexity of where denotes the number of digits of For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process provides eventually the solution with a complexity, which is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the two first moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers. Another strategy consists in partitioning the moduli in pairs whose product have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximatively divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time. On the current example (which has only three moduli), both strategies are identical and work as follows. Bézout's identity for 3 and 4 is Putting this in the formula given for proving the existence gives for a solution of the two first congruences, the other solutions being obtained by adding to −9 any multiple of . One may continue with any of these solutions, but the solution is smaller (in absolute value) and thus leads probably to an easier computation Bézout identity for 5 and 3 × 4 = 12 is Applying the same formula again, we get a solution of the problem: The other solutions are obtained by adding any multiple of , and the smallest positive solution is . As a linear Diophantine system The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations: where the unknown integers are and the Therefore, every general method for solving such systems may be used for finding the solution of Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity. Over principal ideal domains In , the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain : it suffices to replace "integer" by "element of the domain" and by . These two versions of the theorem are true in this context, because the proofs (except for the first existence proof), are based on Euclid's lemma and Bézout's identity, which are true over every principal domain. However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity. 
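The first regrouping strategy described above (fold the congruences two at a time) can be sketched as follows; the intermediate value matches the worked example, since combining the moduli 3 and 4 first gives a residue congruent to −9 modulo 12. This is an editorial illustration, not code from the article.

```python
from functools import reduce

def combine(c1, c2):
    """Replace two congruences by one modulo the product of their (coprime) moduli."""
    a1, n1 = c1
    a2, n2 = c2
    x = (a1 + n1 * ((a2 - a1) * pow(n1, -1, n2) % n2)) % (n1 * n2)
    return x, n1 * n2

congruences = [(0, 3), (3, 4), (4, 5)]
print(reduce(combine, congruences))   # (39, 60); the first fold yields (3, 12)
```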
Over univariate polynomial rings and Euclidean domains The statement in terms of remainders given in cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The univariate polynomials over a field is the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring for a field For getting the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain. The Chinese remainder theorem for polynomials is thus: Let (the moduli) be, for , pairwise coprime polynomials in . Let be the degree of , and be the sum of the If are polynomials such that or for every , then, there is one and only one polynomial , such that and the remainder of the Euclidean division of by is for every . The construction of the solution may be done as in or . However, the latter construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm. Thus, we want to find a polynomial , which satisfies the congruences for Consider the polynomials The partial fraction decomposition of gives polynomials with degrees such that and thus Then a solution of the simultaneous congruence system is given by the polynomial In fact, we have for This solution may have a degree larger than The unique solution of degree less than may be deduced by considering the remainder of the Euclidean division of by This solution is Lagrange interpolation A special case of Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider monic polynomials of degree one: They are pairwise coprime if the are all different. The remainder of the division by of a polynomial is , by the polynomial remainder theorem. Now, let be constants (polynomials of degree 0) in Both Lagrange interpolation and Chinese remainder theorem assert the existence of a unique polynomial of degree less than such that for every Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let The partial fraction decomposition of is In fact, reducing the right-hand side to a common denominator one gets and the numerator is equal to one, as being a polynomial of degree less than which takes the value one for different values of Using the above general formula, we get the Lagrange interpolation formula: Hermite interpolation Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one). The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its first derivatives take given values at some fixed points. More precisely, let be elements of the ground field and, for let be the values of the first derivatives of the sought polynomial at (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial such that its j&hairsp;th derivative takes the value at for and Consider the polynomial This is the Taylor polynomial of order at , of the unknown polynomial Therefore, we must have Conversely, any polynomial that satisfies these congruences, in particular verifies, for any therefore is its Taylor polynomial of order at , that is, solves the initial Hermite interpolation problem. 
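Lagrange interpolation, presented above as the degree-one case of the polynomial Chinese remainder theorem, can be sketched numerically; the sample points below are illustrative, and the interpolant is evaluated pointwise rather than expanded symbolically.

```python
def lagrange(points):
    """Evaluate the unique polynomial of degree < len(points) passing through
    the given (x_i, y_i) pairs, with the x_i pairwise distinct."""
    def P(t):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (t - xj) / (xi - xj)
            total += term
        return total
    return P

P = lagrange([(0, 1), (1, 3), (2, 11)])   # the interpolant is 3t^2 - t + 1
print(P(0), P(1), P(2))                   # 1.0 3.0 11.0
print(P(3))                               # 25.0
```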
The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the which satisfies these congruences. There are several ways for computing the solution One may use the method described at the beginning of . One may also use the constructions given in or . Generalization to non-coprime moduli The Chinese remainder theorem can be generalized to non-coprime moduli. Let be any integers, let ; , and consider the system of congruences: If , then this system has a unique solution modulo . Otherwise, it has no solutions. If one uses Bézout's identity to write , then the solution is given by This defines an integer, as divides both and . Otherwise, the proof is very similar to that for coprime moduli. Generalization to arbitrary rings The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals and are coprime if there are elements and such that This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows. Let be two-sided ideals of a ring and let be their intersection. If the ideals are pairwise coprime, we have the isomorphism: between the quotient ring and the direct product of the where "" denotes the image of the element in the quotient ring defined by the ideal Moreover, if is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is if and are coprime for all . Interpretation in terms of idempotents Let be pairwise coprime two-sided ideals with and be the isomorphism defined above. Let be the element of whose components are all except the &hairsp;th which is , and The are central idempotents that are pairwise orthogonal; this means, in particular, that and for every and . Moreover, one has and In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to . Applications Sequence numbering The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems. Fast Fourier transform The prime-factor FFT algorithm (also called Good-Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size to the computation of two fast Fourier transforms of smaller sizes and (providing that and are coprime). Encryption Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption. The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality. Range ambiguity resolution The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem. 
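The non-coprime generalisation stated earlier in this section can be sketched for a single pair of congruences: a solution exists exactly when the remainders agree modulo the gcd of the moduli, and it is then unique modulo their least common multiple. The search below is deliberately naive and is an editorial illustration.

```python
from math import gcd

def crt_general(a1, n1, a2, n2):
    g = gcd(n1, n2)
    if (a1 - a2) % g != 0:
        return None                  # no solution
    l = n1 // g * n2                 # lcm(n1, n2)
    x = a1 % n1
    while x % n2 != a2 % n2:         # walk the first progression; stops within l steps
        x += n1
    return x % l

print(crt_general(2, 6, 8, 10))   # 8, the unique solution modulo lcm(6, 10) = 30
print(crt_general(2, 6, 3, 10))   # None: 2 and 3 differ modulo gcd(6, 10) = 2
```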
Decomposition of surjections of finite abelian groups Given a surjection of finite abelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms where . In addition, for any induced map from the original surjection, we have and since for a pair of primes , the only non-zero surjections can be defined if and . These observations are pivotal for constructing the ring of profinite integers, which is given as an inverse limit of all such maps. Dedekind's theorem Dedekind's theorem on the linear independence of characters. Let be a monoid and an integral domain, viewed as a monoid by considering the multiplication on . Then any finite family of distinct monoid homomorphisms is linearly independent. In other words, every family of elements satisfying must be equal to the family . Proof. First assume that is a field, otherwise, replace the integral domain by its quotient field, and nothing will change. We can linearly extend the monoid homomorphisms to -algebra homomorphisms , where is the monoid ring of over . Then, by linearity, the condition yields Next, for the two -linear maps and are not proportional to each other. Otherwise and would also be proportional, and thus equal since as monoid homomorphisms they satisfy: , which contradicts the assumption that they are distinct. Therefore, the kernels and are distinct. Since is a field, is a maximal ideal of for every in . Because they are distinct and maximal the ideals and are coprime whenever . The Chinese Remainder Theorem (for general rings) yields an isomorphism: where Consequently, the map is surjective. Under the isomorphisms the map corresponds to: Now, yields for every vector in the image of the map . Since is surjective, this means that for every vector Consequently, . QED. See also Covering system Hasse principle Residue number system Notes References . See in particular Section 2.5, "Helly Property", pp. 393–394. Further reading . See Section 31.5: The Chinese remainder theorem, pp. 873–876. . See Section 4.3.2 (pp. 286–291), exercise 4.6.2–3 (page 456). External links Full text of the Sun-tzu Suan-ching (Chinese) Chinese Text Project Articles containing proofs Sun, Master Commutative algebra Modular arithmetic Theorems in number theory
Chinese remainder theorem
[ "Mathematics" ]
4,895
[ "Mathematical theorems", "Fields of abstract algebra", "Theorems in number theory", "Arithmetic", "Mathematical problems", "Articles containing proofs", "Commutative algebra", "Modular arithmetic", "Number theory" ]
7,720
https://en.wikipedia.org/wiki/Coprophagia
Coprophagia or coprophagy is the consumption of feces. The word is derived from the Ancient Greek kopros ("feces") and phagein ("to eat"). Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy). Feces may be already deposited or taken directly from the anus. In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in some sexual acts, such as the practices of anilingus and felching, where sex partners insert their tongue into each other's anus and ingest biologically significant amounts of feces. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions. Coprophagia by humans In cuisine The feces of the rock ptarmigan are used in Urumiit, which is a delicacy in some Inuit cuisine. Several beverages are made using the feces of animals, including but not limited to Kopi luwak, insect tea, and Black Ivory Coffee. Casu martzu is a cheese that uses the digestive processes of live maggots to help ferment and break down the cheese's fats. As a cult practice Members of a religious cult in Thailand routinely ate the feces and dead skin of their leader, whom they considered to be a holy man with healing powers. As a paraphilia According to the DSM-5, coprophilia is a paraphilia where the object of sexual interest is feces. This can involve coprophagia. Coprophagia is sometimes depicted in pornography, typically under the term "scat" (from scatology), such as in the shock video 2 Girls 1 Cup. The 120 Days of Sodom, a 1785 novel by Marquis de Sade, prominently features depictions of erotic sadomasochistic coprophagia. The 1975 film of the same name also contains scenes of coprophilia and coprophagia. As a supposed medical treatment Ayurveda and Siddha medicine use animal excreta in various forms, with the most important being the dung and urine of the Zebu. During the mid-16th century, physicians tasted their patients' feces to better judge their state and condition, according to François Rabelais. Rabelais studied medicine, but was also a writer of satirical and grotesque fiction, so the truth of this statement is unclear. Lewin reported "... consumption of fresh, warm camel feces has been recommended by Bedouins as a remedy for bacterial dysentery; its efficacy (probably attributable to the antibiotic subtilisin from Bacillus subtilis) was anecdotally confirmed by German soldiers in Africa during World War II". However, this story is likely a myth, and independent research has been unable to verify these claims. As a symptom Coprophagia has also been observed in some people with schizophrenia and pica. Coprophagia by nonhuman animals By invertebrates Coprophagous insects consume and redigest the feces of large animals. These feces contain substantial amounts of semidigested food, particularly in the case of herbivores, owing to the inefficiency of the large animals' digestive systems. Thousands of species of coprophagous insects are known, especially among the orders Diptera and Coleoptera. Examples of such flies are Scathophaga stercoraria and Sepsis cynipsea, dung flies commonly found in Europe around cattle droppings. 
Among beetles, dung beetles are a diverse lineage, many of which feed on the microorganism-rich liquid component of mammals' dung, and lay their eggs in balls composed mainly of the remaining fibrous material. Group living and aggregation among common earwigs promotes allo-coprophagy (consuming the feces of other members of one's own species) to promote the growth of helpful gut bacteria and provide a food source when food is scarce. Through proctodeal feeding, termites eat one another's feces as a means of obtaining their hindgut protists. Termites and protists have a symbiotic relationship (e.g. with the protozoan that allows the termites to digest the cellulose in their diet). For example, in one group of termites, a three-way symbiotic relationship exists; termites of the family Rhinotermitidae, cellulolytic protists of the genus Pseudotrichonympha in the guts of these termites, and intracellular bacterial symbionts of the protists. By vertebrates Lagomorphs (rabbits, hares, pikas) and some other mammals ferment fiber in their cecums, which is then expelled as cecotropes and eaten from the anus, a process called "cecotrophy". Then their food is processed through the gastrointestinal tract a second time, which allows them to absorb more nutrition. While cecotropes are expelled from the anus, they are not feces and thus eating them is not called coprophagia. Domesticated and wild mammals are sometimes coprophagic. Some dogs may lack critical digestive enzymes when they are only eating processed dried foods, so they gain these from consuming fecal matter. They only consume fecal matter that is less than two days old which supports this theory. Cattle in the United States are often fed chicken litter. Concerns have arisen that the practice of feeding chicken litter to cattle could lead to bovine spongiform encephalopathy (mad-cow disease) because of the crushed bone meal in chicken feed. The U.S. Food and Drug Administration regulates this practice by attempting to prevent the introduction of any part of cattle brain or spinal cord into livestock feed. Chickens also eat their own feces. Other countries, such as Canada, have banned chicken litter for use as a livestock feed. The young of elephants, giant pandas, koalas, and hippos eat the feces of their mothers or other animals in the herd, to obtain the bacteria required to properly digest vegetation found in their ecosystems. When such animals are born, their intestines are sterile and do not contain these bacteria. Without doing this, they would be unable to obtain any nutritional value from plants. Piglets with access to maternal feces early in life exhibited better performance. Hamsters, guinea pigs, chinchillas, hedgehogs, and pigs eat their own droppings, which are thought to be a source of vitamins B and K, produced by gut bacteria. Sometimes, there is also the aspect of self-anointment while these creatures eat their droppings. On rare occasions gorillas have been observed consuming their feces, possibly out of boredom, a desire for warm food, or to reingest seeds contained in the feces. Coprophagia by plants Some carnivorous plants, such as pitcher plants of the genus Nepenthes, obtain nutrition from the feces of commensal animals. Notable examples include Nepenthes jamban, whose specific name is the Indonesian word for toilet. Manure is organic matter, mostly animal feces, that is used as organic fertilizer for plants in agriculture. 
See also Coprophilous fungi Fecal bacteriotherapy Faecal transplant Fecal–oral route, a route of disease transmission Gomutra Kopi luwak Panchagavya Pig toilet Scathophagidae Scatophagidae References Further reading External links Eating behaviors Ethology Feces Pica (disorder)
Coprophagia
[ "Biology" ]
1,661
[ "Behavior", "Biological interactions", "Eating behaviors", "Excretion", "Animal waste products", "Behavioural sciences", "Feces", "Ethology" ]
7,722
https://en.wikipedia.org/wiki/Compactron
Compactrons are a type of vacuum tube containing multiple electrode structures packed into a single enclosure. They were designed to compete with early transistor electronics and were used in televisions, radios, and similar roles. History The Compactron was a trade name applied to multi-electrode structure tubes specifically constructed on a 12-pin Duodecar base. This vacuum tube family was introduced in 1961 by General Electric in Owensboro, Kentucky, to compete with transistorized electronics during the solid-state transition. Television sets were a primary application. The idea of multi-electrode tubes was itself far from new; the Loewe company of Germany had been producing multi-electrode tubes as far back as 1926, and these even included all of the required passive components. Use was prevalent in televisions because transistors were slow to achieve the high power and frequency capabilities needed, particularly in color television sets. The first portable color television, the General Electric Porta-Color, was designed using 13 tubes, 10 of which were Compactrons. Even before the Compactron design was unveiled, nearly all tube-based electronic equipment used multi-electrode tubes of one type or another. Virtually every AM/FM radio receiver of the 1950s and 1960s used a 6AK8 (EABC80) tube (or equivalent) consisting of three diodes and a triode, which was designed in 1954. The Compactron's integrated valve design helped lower power consumption and heat generation (they were to tubes what integrated circuits were to transistors). Compactrons were also used in a few high-end Hi-Fi stereos. They were also used by Ampeg and Fender in some of their guitar amplifiers. No modern tube-based Hi-Fi systems are known to use this tube type, as simpler and more readily available tubes have again filled this niche. One tube, the 7868, is used in some Hi-Fi systems made today. This tube is a Novar tube. It has the same physical dimensions as the Compactron, but a 9-pin base. The exhaust tip is on the top or bottom of the tube, depending on the manufacturer's preference. It is currently in production by Electro-Harmonix. (One recent power amplifier, Linear Tube Audio's Ultralinear, uses four 17JN6 Compactron tubes as its output tubes; the amp generates 20 watts of power from these inexpensive TV tubes.) Notable features A distinguishing feature of most Compactrons is the placement of the evacuation tip on the bottom end, rather than the top end as was customary with "miniature" tubes, and a characteristic 3/4" diameter circle pin pattern. Most Compactrons ranged in glass envelope diameter from 28 to 70 mm depending upon the internal configuration. Variations of the Compactron design were made by Sylvania and by some Japanese firms. Examples Examples of Compactron types include: 6AG11 double diode similar to 6AL5, double triode high-mu similar to 12AT7. Designed for FM stereo multiplex service. 6BK11 triple triode. Two of the triodes are similar to 12AX7 and one of them is similar to 5751. 6C10 high-mu triple triode, all three being similar to 12AX7, used for audio amplifiers and as color matrix amplifiers in televisions by Sylvania and others; not related to the Edison Swan (later Mazda) 6C10 triode-hexode 6M11 twin triode - pentode. Designed for sync separators and AGC amplifier circuits. 6K11 triple triode. Designed for sync separators and AGC amplifier circuits. 6LF6 beam power pentode with anode cap. Designed for horizontal output service. 8B10 twin triode - twin diode. 
Designed for horizontal phase detector and horizontal oscillator service. 12AE10 twin pentode. Designed for FM discriminator/detector and audio output. 38HK7 pentode diode. Designed for horizontal output service and as a damper diode. 1AD2 high-voltage diode, used in flyback transformer rectification. Due to their specific applications in television circuits, many different Compactron types were produced. Almost all were assigned using standard US tube numbers. Technological obsolescence Integrated circuits (of the analogue and digital type) gradually took over all of the functions that the Compactron was designed for. "Hybrid" television sets produced in the early to mid-1970s made use of a combination of tubes (typically Compactrons), transistors, and integrated circuits in the same set. By the mid-1980s this type of tube was functionally obsolete. Compactrons do not appear in any TV sets designed after 1986. Other specialist uses of the tube declined in parallel with television set manufacture. Manufacture of Compactrons ceased in the early 1990s. New old stock replacements for almost all Compactron types produced are easily found for sale on the Internet. References Notes Vacuum tubes
Compactron
[ "Physics" ]
1,023
[ "Vacuum tubes", "Vacuum", "Matter" ]
7,723
https://en.wikipedia.org/wiki/Carmichael%20number
In number theory, a Carmichael number is a composite number n which in modular arithmetic satisfies the congruence relation b^n ≡ b (mod n) for all integers b. The relation may also be expressed in the form b^(n−1) ≡ 1 (mod n) for all integers b that are relatively prime to n. They are infinite in number. They constitute the comparatively rare instances where the strict converse of Fermat's little theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality. The Carmichael numbers form the subset K1 of the Knödel numbers. The Carmichael numbers were named after the American mathematician Robert Carmichael by Nicolaas Beeger, in 1950. Øystein Ore had referred to them in 1948 as numbers with the "Fermat property", or "F numbers" for short. Overview Fermat's little theorem states that if p is a prime number, then for any integer b, the number b^p − b is an integer multiple of p. Carmichael numbers are composite numbers which have the same property. Carmichael numbers are also called Fermat pseudoprimes or absolute Fermat pseudoprimes. A Carmichael number will pass a Fermat primality test to every base relatively prime to the number, even though it is not actually prime. This makes tests based on Fermat's little theorem less effective than strong probable prime tests such as the Baillie–PSW primality test and the Miller–Rabin primality test. However, no Carmichael number is either an Euler–Jacobi pseudoprime or a strong pseudoprime to every base relatively prime to it so, in theory, either an Euler or a strong probable prime test could prove that a Carmichael number is, in fact, composite. Arnault gives a 397-digit Carmichael number N that is a strong pseudoprime to all prime bases less than 307; one of its factors is the 131-digit prime p = 29674495668685510550154174642905332730771991799853043350995075531276838753171770199594238596428121188033664754218345562493168782883. This p is the smallest prime factor of N, so this Carmichael number is also a (not necessarily strong) pseudoprime to all bases less than p. As numbers become larger, Carmichael numbers become increasingly rare. For example, there are 20,138,200 Carmichael numbers between 1 and 10^21 (approximately one in 50 trillion (5·10^13) numbers). Korselt's criterion An alternative and equivalent definition of Carmichael numbers is given by Korselt's criterion. Theorem (A. Korselt 1899): A positive composite integer n is a Carmichael number if and only if n is square-free, and for all prime divisors p of n, it is true that p − 1 divides n − 1. It follows from this theorem that all Carmichael numbers are odd, since any even composite number that is square-free (and hence has only one prime factor of two) will have at least one odd prime factor, and thus results in an even number dividing an odd number, a contradiction. (The oddness of Carmichael numbers also follows from the fact that −1 is a Fermat witness for any even composite number.) From the criterion it also follows that Carmichael numbers are cyclic. Additionally, it follows that there are no Carmichael numbers with exactly two prime divisors. Discovery The first seven Carmichael numbers, from 561 to 8911, were all found by the Czech mathematician Václav Šimerka in 1885 (thus preceding not just Carmichael but also Korselt, although Šimerka did not find anything like Korselt's criterion). His work, published in the Czech scientific journal Časopis pro pěstování matematiky a fysiky, however, remained unnoticed. Korselt was the first who observed the basic properties of Carmichael numbers, but he did not give any examples. 
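Korselt's criterion is easy to test by machine. The following Python fragment is a minimal sketch (illustrative only, not part of the article): it factors n by trial division, checks that n is squarefree and composite, and verifies that p − 1 divides n − 1 for every prime factor p. Run over a small range it reproduces the first Carmichael numbers, including the 561 example worked out next.

    # Minimal sketch (illustrative): checking Korselt's criterion by trial division.
    def prime_factors(n):
        """Return the prime factorization of n as a dict {prime: exponent}."""
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def is_carmichael(n):
        """n composite, squarefree, and p - 1 divides n - 1 for every prime p | n."""
        f = prime_factors(n)
        if len(f) < 2:                     # excludes primes and prime powers
            return False
        return all(e == 1 and (n - 1) % (p - 1) == 0 for p, e in f.items())

    print([n for n in range(3, 10000) if is_carmichael(n)])
    # -> [561, 1105, 1729, 2465, 2821, 6601, 8911]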
That 561 is a Carmichael number can be seen with Korselt's criterion. Indeed, 561 = 3 · 11 · 17 is square-free, and 2 divides 560, 10 divides 560, and 16 divides 560. The next six Carmichael numbers are 1105, 1729, 2465, 2821, 6601 and 8911. In 1910, Carmichael himself also published the smallest such number, 561, and the numbers were later named after him. Jack Chernick proved a theorem in 1939 which can be used to construct a subset of Carmichael numbers. The number (6k + 1)(12k + 1)(18k + 1) is a Carmichael number if its three factors are all prime. Whether this formula produces an infinite quantity of Carmichael numbers is an open question (though it is implied by Dickson's conjecture). Paul Erdős heuristically argued there should be infinitely many Carmichael numbers. In 1994 W. R. (Red) Alford, Andrew Granville and Carl Pomerance used a bound on Olson's constant to show that there really do exist infinitely many Carmichael numbers. Specifically, they showed that for sufficiently large n, there are at least n^(2/7) Carmichael numbers between 1 and n. Thomas Wright proved that if a and m are relatively prime, then there are infinitely many Carmichael numbers congruent to a modulo m. Löh and Niebuhr in 1992 found some very large Carmichael numbers, including one with 1,101,518 factors and over 16 million digits. This has been improved to 10,333,229,505 prime factors and 295,486,761,787 digits, so the largest known Carmichael number is much greater than the largest known prime. Properties Factorizations Carmichael numbers have at least three positive prime factors. The first Carmichael numbers with three prime factors are 561, 1105, 1729, 2465, 2821, 6601, 8911, ... The first Carmichael numbers with 4 prime factors are 41041, 62745, 63973, 75361, ... The second Carmichael number (1105) can be expressed as the sum of two squares in more ways than any smaller number. The third Carmichael number (1729) is the Hardy–Ramanujan number: the smallest number that can be expressed as the sum of two cubes (of positive numbers) in two different ways. Distribution Let C(X) denote the number of Carmichael numbers less than or equal to X. The distribution of Carmichael numbers by powers of 10 : In 1953, Knödel proved the upper bound C(X) < X exp(−c1 (log X log log X)^(1/2)) for some constant c1. In 1956, Erdős improved the bound to C(X) < X exp(−c2 log X log log log X / log log X) for some constant c2. He further gave a heuristic argument suggesting that this upper bound should be close to the true growth rate of C(X). In the other direction, Alford, Granville and Pomerance proved in 1994 that for sufficiently large X, C(X) > X^(2/7). In 2005, this bound was further improved by Harman to C(X) > X^(0.33); he subsequently improved the exponent further. Regarding the asymptotic distribution of Carmichael numbers, there have been several conjectures. In 1956, Erdős conjectured that there were X^(1−o(1)) Carmichael numbers up to X for X sufficiently large. In 1981, Pomerance sharpened Erdős' heuristic arguments to conjecture a quantitative lower bound on the number of Carmichael numbers up to X. However, inside current computational ranges (such as the counts of Carmichael numbers performed by Pinch up to 10^21), these conjectures are not yet borne out by the data. In 2021, Daniel Larsen proved an analogue of Bertrand's postulate for Carmichael numbers first conjectured by Alford, Granville, and Pomerance in 1994. Using techniques developed by Yitang Zhang and James Maynard to establish results concerning small gaps between primes, his work yielded the much stronger statement that, for any δ > 0 and x sufficiently large in terms of δ, there will always be at least exp(log x / (log log x)^(2+δ)) Carmichael numbers between x and 2x. Generalizations The notion of Carmichael number generalizes to a Carmichael ideal in any number field K. For any nonzero prime ideal P in O_K, we have α^N(P) ≡ α (mod P) for all α in O_K, where N(P) is the norm of the ideal P. 
(This generalizes Fermat's little theorem, that m^p ≡ m (mod p) for all integers m when p is prime.) Call a nonzero ideal A in O_K Carmichael if it is not a prime ideal and α^N(A) ≡ α (mod A) for all α in O_K, where N(A) is the norm of the ideal A. When K is the field Q of rational numbers, the ideal is principal, and if we let n be its positive generator then the ideal (n) is Carmichael exactly when n is a Carmichael number in the usual sense. When K is larger than the rationals it is easy to write down Carmichael ideals in O_K: for any prime number p that splits completely in K, the principal ideal p·O_K is a Carmichael ideal. Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals in O_K. For example, if p is any prime number that is 1 mod 4, the ideal (p) in the Gaussian integers Z[i] is a Carmichael ideal. Both prime and Carmichael numbers satisfy the following equality: Lucas–Carmichael number A positive composite integer n is a Lucas–Carmichael number if and only if n is square-free, and for all prime divisors p of n, it is true that p + 1 divides n + 1. The first Lucas–Carmichael numbers are: 399, 935, 2015, 2915, 4991, 5719, 7055, 8855, 12719, 18095, 20705, 20999, 22847, 29315, 31535, 46079, 51359, 60059, 63503, 67199, 73535, 76751, 80189, 81719, 88559, 90287, ... Quasi–Carmichael number Quasi–Carmichael numbers are squarefree composite numbers n with the property that for every prime factor p of n, p + b divides n + b positively, with b being any integer besides 0. If b = −1, these are Carmichael numbers, and if b = 1, these are Lucas–Carmichael numbers. The first Quasi–Carmichael numbers are: 35, 77, 143, 165, 187, 209, 221, 231, 247, 273, 299, 323, 357, 391, 399, 437, 493, 527, 561, 589, 598, 713, 715, 899, 935, 943, 989, 1015, 1073, 1105, 1147, 1189, 1247, 1271, 1295, 1333, 1517, 1537, 1547, 1591, 1595, 1705, 1729, ... Knödel number An n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i^(m−n) ≡ 1 (mod m). The case n = 1 are the Carmichael numbers. Higher-order Carmichael numbers Carmichael numbers can be generalized using concepts of abstract algebra. The above definition states that a composite integer n is Carmichael precisely when the nth-power-raising function pn from the ring Zn of integers modulo n to itself is the identity function. The identity is the only Zn-algebra endomorphism on Zn so we can restate the definition as asking that pn be an algebra endomorphism of Zn. As above, pn satisfies the same property whenever n is prime. The nth-power-raising function pn is also defined on any Zn-algebra A. A theorem states that n is prime if and only if all such functions pn are algebra endomorphisms. In-between these two conditions lies the definition of Carmichael number of order m for any positive integer m as any composite number n such that pn is an endomorphism on every Zn-algebra that can be generated as Zn-module by m elements. Carmichael numbers of order 1 are just the ordinary Carmichael numbers. An order-2 Carmichael number According to Howe, 17 · 31 · 41 · 43 · 89 · 97 · 167 · 331 is an order 2 Carmichael number. This product is equal to 443,372,888,629,441. Properties Korselt's criterion can be generalized to higher-order Carmichael numbers, as shown by Howe. A heuristic argument, given in the same paper, appears to suggest that there are infinitely many Carmichael numbers of order m, for any m. However, not a single Carmichael number of order 3 or above is known. 
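The Lucas–Carmichael condition above can be checked the same way as Korselt's criterion. The Python fragment below is a minimal sketch (illustrative, not from the article) that mirrors the earlier trial-division test but uses p + 1 and n + 1.

    # Minimal sketch (illustrative): squarefree composite n with p + 1 dividing n + 1
    # for every prime p dividing n (the Lucas-Carmichael condition stated above).
    def prime_factors(n):
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def is_lucas_carmichael(n):
        f = prime_factors(n)
        if len(f) < 2:
            return False
        return all(e == 1 and (n + 1) % (p + 1) == 0 for p, e in f.items())

    print([n for n in range(2, 6000) if is_lucas_carmichael(n)])
    # Expected, per the list above: [399, 935, 2015, 2915, 4991, 5719]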
Notes References External links Encyclopedia of Mathematics Table of Carmichael numbers Tables of Carmichael numbers with many prime factors Tables of Carmichael numbers below Final Answers Modular Arithmetic Eponymous numbers in mathematics Integer sequences Modular arithmetic Pseudoprimes
Carmichael number
[ "Mathematics" ]
2,370
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Arithmetic", "Modular arithmetic", "Numbers", "Number theory" ]
7,739
https://en.wikipedia.org/wiki/Carbide
In chemistry, a carbide usually describes a compound composed of carbon and a metal. In metallurgy, carbiding or carburizing is the process for producing carbide coatings on a metal piece. Interstitial / Metallic carbides The carbides of the group 4, 5 and 6 transition metals (with the exception of chromium) are often described as interstitial compounds. These carbides have metallic properties and are refractory. Some exhibit a range of stoichiometries, being a non-stoichiometric mixture of various carbides arising due to crystal defects. Some of them, including titanium carbide and tungsten carbide, are important industrially and are used to coat metals in cutting tools. The long-held view is that the carbon atoms fit into octahedral interstices in a close-packed metal lattice when the metal atom radius is greater than approximately 135 pm: When the metal atoms are cubic close-packed, (ccp), then filling all of the octahedral interstices with carbon achieves 1:1 stoichiometry with the rock salt structure. When the metal atoms are hexagonal close-packed, (hcp), as the octahedral interstices lie directly opposite each other on either side of the layer of metal atoms, filling only one of these with carbon achieves 2:1 stoichiometry with the CdI2 structure. The following table shows structures of the metals and their carbides. (N.B. the body centered cubic structure adopted by vanadium, niobium, tantalum, chromium, molybdenum and tungsten is not a close-packed lattice.) The notation "h/2" refers to the M2C type structure described above, which is only an approximate description of the actual structures. The simple view that the lattice of the pure metal "absorbs" carbon atoms can be seen to be untrue as the packing of the metal atom lattice in the carbides is different from the packing in the pure metal, although it is technically correct that the carbon atoms fit into the octahedral interstices of a close-packed metal lattice. For a long time the non-stoichiometric phases were believed to be disordered with a random filling of the interstices, however short and longer range ordering has been detected. Iron forms a number of carbides, , and . The best known is cementite, Fe3C, which is present in steels. These carbides are more reactive than the interstitial carbides; for example, the carbides of Cr, Mn, Fe, Co and Ni are all hydrolysed by dilute acids and sometimes by water, to give a mixture of hydrogen and hydrocarbons. These compounds share features with both the inert interstitials and the more reactive salt-like carbides. Some metals, such as lead and tin, are believed not to form carbides under any circumstances. There exists however a mixed titanium-tin carbide, which is a two-dimensional conductor. Chemical classification of carbides Carbides can be generally classified by the chemical bonds type as follows: salt-like (ionic), covalent compounds, interstitial compounds, and "intermediate" transition metal carbides. Examples include calcium carbide (CaC2), silicon carbide (SiC), tungsten carbide (WC; often called, simply, carbide when referring to machine tooling), and cementite (Fe3C), each used in key industrial applications. The naming of ionic carbides is not systematic. Salt-like / saline / ionic carbides Salt-like carbides are composed of highly electropositive elements such as the alkali metals, alkaline earth metals, lanthanides, actinides, and group 3 metals (scandium, yttrium, and lutetium). Aluminium from group 13 forms carbides, but gallium, indium, and thallium do not. 
These materials feature isolated carbon centers, often described as "C4−", in the methanides or methides; two-atom units, "", in the acetylides; and three-atom units, "", in the allylides. The graphite intercalation compound KC8, prepared from vapour of potassium and graphite, and the alkali metal derivatives of C60 are not usually classified as carbides. Methanides Methanides are a subset of carbides distinguished by their tendency to decompose in water producing methane. Three examples are aluminium carbide , magnesium carbide and beryllium carbide . Transition metal carbides are not saline: their reaction with water is very slow and is usually neglected. For example, depending on surface porosity, 5–30 atomic layers of titanium carbide are hydrolyzed, forming methane within 5 minutes at ambient conditions, following by saturation of the reaction. Note that methanide in this context is a trivial historical name. According to the IUPAC systematic naming conventions, a compound such as NaCH3 would be termed a "methanide", although this compound is often called methylsodium. See Methyl group#Methyl anion for more information about the anion. Acetylides/ethynides Several carbides are assumed to be salts of the acetylide anion (also called percarbide, by analogy with peroxide), which has a triple bond between the two carbon atoms. Alkali metals, alkaline earth metals, and lanthanoid metals form acetylides, for example, sodium carbide Na2C2, calcium carbide CaC2, and LaC2. Lanthanides also form carbides (sesquicarbides, see below) with formula M2C3. Metals from group 11 also tend to form acetylides, such as copper(I) acetylide and silver acetylide. Carbides of the actinide elements, which have stoichiometry MC2 and M2C3, are also described as salt-like derivatives of . The C–C triple bond length ranges from 119.2 pm in CaC2 (similar to ethyne), to 130.3 pm in LaC2 and 134 pm in UC2. The bonding in LaC2 has been described in terms of LaIII with the extra electron delocalised into the antibonding orbital on , explaining the metallic conduction. Allylides The polyatomic ion , sometimes called allylide, is found in and . The ion is linear and is isoelectronic with . The C–C distance in Mg2C3 is 133.2 pm. yields methylacetylene, CH3CCH, and propadiene, CH2CCH2, on hydrolysis, which was the first indication that it contains . Covalent carbides The carbides of silicon and boron are described as "covalent carbides", although virtually all compounds of carbon exhibit some covalent character. Silicon carbide has two similar crystalline forms, which are both related to the diamond structure. Boron carbide, B4C, on the other hand, has an unusual structure which includes icosahedral boron units linked by carbon atoms. In this respect boron carbide is similar to the boron rich borides. Both silicon carbide (also known as carborundum) and boron carbide are very hard materials and refractory. Both materials are important industrially. Boron also forms other covalent carbides, such as B25C. Molecular carbides Metal complexes containing C are known as metal carbido complexes. Most common are carbon-centered octahedral clusters, such as (where "Ph" represents a phenyl group) and [Fe6C(CO)6]2−. Similar species are known for the metal carbonyls and the early metal halides. A few terminal carbides have been isolated, such as . Metallocarbohedrynes (or "met-cars") are stable clusters with the general formula where M is a transition metal (Ti, Zr, V, etc.). 
Related materials In addition to the carbides, other groups of related carbon compounds exist: graphite intercalation compounds alkali metal fullerides endohedral fullerenes, where the metal atom is encapsulated within a fullerene molecule metallacarbohedrenes (met-cars) which are cluster compounds containing C2 units. tunable nanoporous carbon, where gas chlorination of metallic carbides removes metal molecules to form a highly porous, near-pure carbon material capable of high-density energy storage. transition metal carbene complexes. two-dimensional transition metal carbides: MXenes See also Kappa-carbides References Anions Salts
Carbide
[ "Physics", "Chemistry" ]
1,901
[ "Salts", "Ions", "Matter", "Anions" ]
7,783
https://en.wikipedia.org/wiki/Coriolis%20force
In physics, the Coriolis force is an inertial (or fictitious) force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system. In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each sidereal day, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. 
Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator ("clockwise") and to the left of this direction south of it ("anticlockwise"). This effect is responsible for the rotation and thus formation of cyclones. History Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778. Gaspard-Gustave de Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force". In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds. The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large-scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood. Formula In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is F = m a, where F is the vector sum of the physical forces acting on the object, m is the mass of the object, and a is the acceleration of the object relative to the inertial reference frame.
Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity ω having variable rotation rate, the equation takes the form m a' = F − m dω/dt × r' − 2m ω × v' − m ω × (ω × r'), where the prime (') variables denote coordinates of the rotating reference frame (not a derivative) and: F is the vector sum of the physical forces acting on the object; ω is the angular velocity of the rotating reference frame relative to the inertial frame; r' is the position vector of the object relative to the rotating reference frame; v' is the velocity of the object relative to the rotating reference frame; a' is the acceleration of the object relative to the rotating reference frame. The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces. The fictitious force terms of the equation are, reading from left to right: the Euler force −m dω/dt × r', the Coriolis force −2m ω × v', and the centrifugal force −m ω × (ω × r'). As seen in these formulas the Euler and centrifugal forces depend on the position vector r' of the object, while the Coriolis force depends on the object's velocity v' as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference the Coriolis force and all other fictitious forces disappear. Direction of Coriolis force for simple cases As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that: if the velocity is parallel to the rotation axis, the Coriolis force is zero. For example, on Earth, this situation occurs for a body at the equator moving north or south relative to the Earth's surface. (At any latitude other than the equator, however, the north–south motion would have a component perpendicular to the rotation axis and a force specified by the inward or outward cases mentioned below). if the velocity is straight inward to the axis, the Coriolis force is in the direction of local rotation. For example, on Earth, this situation occurs for a body at the equator falling downward, as in the Dechales illustration above, where the falling ball travels further to the east than does the tower. Note also that heading north in the northern hemisphere would have a velocity component toward the rotation axis, resulting in a Coriolis force to the east (more pronounced the further north one is). if the velocity is straight outward from the axis, the Coriolis force is against the direction of local rotation. In the tower example, a ball launched upward would move toward the west. if the velocity is in the direction of rotation, the Coriolis force is outward from the axis. For example, on Earth, this situation occurs for a body at the equator moving east relative to Earth's surface. It would move upward as seen by an observer on the surface. This effect (see Eötvös effect below) was discussed by Galileo Galilei in 1632 and by Riccioli in 1651. if the velocity is against the direction of rotation, the Coriolis force is inward to the axis. For example, on Earth, this situation occurs for a body at the equator moving west, which would deflect downward as seen by an observer. Intuitive explanation For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. 
Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion). Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the theory that the effect determines the rotation of draining water in a household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation. Length scales and the Rossby number The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number (Ro), which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, , and the length scale, L, of the motion: Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so in them the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable. An atmospheric system moving at U =  occupying a spatial distance of L = , has a Rossby number of approximately 0.1. A baseball pitcher may throw the ball at U =  for a distance of L = . The Rossby number in this case would be 32,000 (at latitude 31°47'46.382"). Baseball players don't care about which hemisphere they're playing in. However, an unguided missile obeys exactly the same physics as a baseball, but can travel far enough and be in the air long enough to experience the effect of Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) In fact, it was this effect that first drew the attention of Coriolis himself. Simple cases Tossed ball on a rotating carousel The figure illustrates a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counter-clockwise with the carousel. On the right, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed. On the left, two arrows locate the ball relative to the ball-thrower. 
One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted. On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel. The ball travels in the air, and there is no net force upon it. To the stationary observer, the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory. Bounced ball The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight). On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from direction of travel on both inward and return trajectories. The curved path demands this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection. The ball's path through the air is straight when viewed by observers standing on the ground (right panel). 
In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied. Applied to the Earth The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term This component is orthogonal to the velocity over the Earth surface and is given by the expression where is the spin rate of the Earth is the latitude, positive in the northern hemisphere and negative in the southern hemisphere In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere. Rotating sphere Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system [listing components in the order east (e), north (n) and upward (u)] are:     When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration () is small compared with the acceleration due to gravity (g, approximately near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting vu = 0):     where is called the Coriolis parameter. By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation. In the case of equatorial motion, setting φ = 0° yields:         Ω in this case is parallel to the north-south axis. Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west. Meteorology and oceanography Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible. Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones as well. 
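Before the discussion of gyres and cyclones continues, the quantities defined in the "Applied to the Earth" and "Rotating sphere" subsections above can be sketched numerically. The Python fragment below is a minimal sketch with assumed sample values (the 45° latitude, the 10 m/s wind and the 1000 km length scale are illustrative choices, not figures from the article): it evaluates the Coriolis parameter, the horizontal deflecting acceleration for a given speed, and a Rossby number.

    # Minimal sketch (assumed sample values): Coriolis parameter f = 2*omega*sin(lat),
    # horizontal Coriolis acceleration 2*omega*v*sin(lat), and Rossby number U/(f*L).
    import math

    OMEGA = 7.2921e-5                      # Earth's rotation rate, rad/s

    def coriolis_parameter(lat_deg):
        return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

    def horizontal_coriolis_accel(v, lat_deg):
        """Magnitude of the horizontal deflecting acceleration for speed v (m/s)."""
        return coriolis_parameter(lat_deg) * v

    def rossby_number(U, L, lat_deg):
        return U / (coriolis_parameter(lat_deg) * L)

    print(coriolis_parameter(45.0))              # ~1.0e-4 s^-1, typical mid-latitude value
    print(horizontal_coriolis_accel(10.0, 45.0)) # ~1.0e-3 m/s^2 for a 10 m/s wind
    print(rossby_number(10.0, 1.0e6, 45.0))      # ~0.1 for a 1000 km weather system

The small acceleration for a 10 m/s wind illustrates why the deflection only matters for motions sustained over long times and large distances, consistent with the low Rossby number of synoptic-scale weather systems.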
Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps a hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane. Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around a low-pressure area rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient. Flow around a low-pressure area If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure. Instead of flowing down the gradient, large-scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be. This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region. Inertial circles An air or water mass moving with speed v subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius is given by R = v/f, where f is the Coriolis parameter 2ω sin φ, introduced above (where ω is the rotation rate of the Earth and φ is the latitude). The time taken for the mass to complete a full circle is therefore 2π/f. The Coriolis parameter typically has a mid-latitude value of about 10^-4 s^-1; hence for a typical atmospheric speed of 10 m/s, the radius is 100 km, with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s, the radius of an inertial circle is 1 km. These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere. If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. (A short numerical check of the radii quoted above is sketched below.) 
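A minimal numerical check of the inertial-circle radius and period quoted above, assuming the mid-latitude value f ≈ 10^-4 s^-1 used in the text (the helper name is an illustrative assumption):

    # Minimal sketch (assumed mid-latitude f): inertial-circle radius R = v/f
    # and period T = 2*pi/f.
    import math

    f = 1.0e-4                             # Coriolis parameter, s^-1

    def inertial_circle(v, f=f):
        """Return (radius in km, period in hours) for a speed v in m/s."""
        return v / f / 1000.0, 2.0 * math.pi / f / 3600.0

    print(inertial_circle(10.0))           # atmosphere, 10 m/s -> (100.0 km, ~17.5 h)
    print(inertial_circle(0.1))            # ocean, 10 cm/s     -> (1.0 km, ~17.5 h)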
Since the parameter varies as the sine of the latitude, the radius of the oscillations associated with a given speed are smallest at the poles (latitude of ±90°), and increase toward the equator. Other terrestrial effects The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance. Eötvös effect The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion. There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable. In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading. Intuitive example Imagine a train that travels through a frictionless railway line along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others The train travels toward the west: In that case, it moves against the direction of rotation. Therefore, on the Earth's rotating frame the Coriolis term is pointed inwards towards the axis of rotation (down). 
This additional force downwards should cause the train to be heavier while moving in that direction. If one looks at this train from the fixed non-rotating frame on top of the center of the Earth, at that speed it remains stationary as the Earth spins beneath it. Hence, the only force acting on it is gravity and the reaction from the track. This force is greater (by 0.34%) than the force that the passengers and the train experience when at rest (rotating along with Earth). This difference is what the Coriolis effect accounts for in the rotating frame of reference. The train comes to a stop: From the point of view on the Earth's rotating frame, the velocity of the train is zero, thus the Coriolis force is also zero and the train and its passengers recuperate their usual weight. From the fixed inertial frame of reference above Earth, the train now rotates along with the rest of the Earth. 0.34% of the force of gravity provides the centripetal force needed to achieve the circular motion on that frame of reference. The remaining force, as measured by a scale, makes the train and passengers "lighter" than in the previous case. The train travels east: In this case, because it moves in the direction of Earth's rotation, the Coriolis term is directed outward from the axis of rotation (up). This upward force makes the train seem lighter still than when at rest. From the fixed inertial frame of reference above Earth, the train traveling east now rotates at twice the rate as when it was at rest, so the amount of centripetal force needed to cause that circular path increases, leaving less force from gravity to act on the track. This is what the Coriolis term accounts for in the previous paragraph. As a final check, one can imagine a frame of reference rotating along with the train. Such a frame would be rotating at twice the angular velocity of Earth's rotating frame. The resulting centrifugal force component for that imaginary frame would be greater. Since the train and its passengers are at rest within it, that would be the only component in that frame, explaining again why the train and the passengers are lighter than in the previous two cases. This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect. The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion on the inertial frame. Once the train doubles its westward speed, to 930 m/s, that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in opposite directions. Thus, the force is the same, cancelling completely the Eötvös effect. Any object that moves westward at a speed above 930 m/s experiences an upward force instead. In the figure, the Eötvös effect is illustrated for an object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. On the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. 
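The apparent-weight parabola described here is easy to reproduce numerically. The sketch below is not from the source; it assumes standard values for g and the Earth's radius and evaluates, from the inertial frame, how much of gravity is left to press the train against the track at various east-west speeds:

```python
G = 9.81           # gravitational acceleration, m/s^2 (assumed standard value)
R_EARTH = 6.371e6  # Earth's radius, m (assumed standard value)
V_SURFACE = 465.0  # eastward speed of Earth's surface at the equator, m/s

def apparent_weight_fraction(v_east):
    """Fraction of mg pressing the train onto the track at the equator when it
    moves east at v_east m/s relative to the ground (negative = westward).
    Seen from the inertial frame, part of gravity supplies the centripetal
    acceleration (V_SURFACE + v_east)**2 / R_EARTH; the rest is felt as weight."""
    centripetal = (V_SURFACE + v_east) ** 2 / R_EARTH
    return (G - centripetal) / G

for v in (-930.0, -465.0, 0.0, 465.0):
    print(f"{v:+6.0f} m/s  ->  {apparent_weight_fraction(v):.5f}")
# -465 m/s (westward at Earth's own rate): full weight, the heaviest case
#    0 m/s (at rest on the rotating Earth): about 0.34% lighter
# +465 m/s (eastward): about 1.4% lighter, showing the asymmetry of the parabola
# -930 m/s gives the same value as 0 m/s, as described above
```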
The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed. Draining in bathtubs and toilets Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl. Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, can the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction, such as any residual rotation of the water and the geometry of the container. Laboratory testing of draining water under atypical conditions In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to the Coriolis effect is only a tiny fraction of that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The basin takes 20 minutes to drain, and the cross starts turning only after around 15 minutes. At the end it is turning at one rotation every 3 to 4 seconds. He reported that the rotation was consistently counterclockwise. Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more. Ballistic trajectories The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km. The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a northward shot would be deflected to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low, and eastward shots to hit high. The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. 
The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily result in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame. The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range. In terms of these components, the Coriolis acceleration can be written A_X = −2Ω(V_Y cos L sin Az + V_Z sin L), A_Y = 2Ω(V_X cos L sin Az + V_Z cos L cos Az), and A_Z = 2Ω(V_X sin L − V_Y cos L cos Az), where A_X is the down-range acceleration, A_Y the vertical acceleration (positive indicating acceleration upward), A_Z the cross-range acceleration (positive indicating acceleration to the right), V_X the down-range velocity, V_Y the vertical velocity (positive indicating upward), V_Z the cross-range velocity (positive indicating velocity to the right), Ω = 0.00007292 rad/s the angular velocity of the Earth (based on a sidereal day), L the latitude (positive indicating the Northern Hemisphere), and Az the azimuth measured clockwise from due north. Visualization To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation. Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame. Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth. In a manner of speaking, the Earth is analogous to such a turntable. 
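The inertial-circle motion seen in the rotating frame can also be reproduced numerically. The following Python sketch is not from the source; it assumes an idealized parabolic turntable (the rotation rate and initial puck velocity are arbitrary illustrative values) on which the centrifugal term is cancelled by the tilted surface, leaving only the Coriolis acceleration:

```python
import math

OMEGA = 0.5   # turntable rotation rate, rad/s (arbitrary illustrative value)
DT = 0.001    # time step, s
x, y = 0.0, 0.0
vx, vy = 0.2, 0.0   # initial puck velocity in the rotating frame, m/s

# On the parabolic turntable the centrifugal term is cancelled by the tilt of
# the surface, so in the rotating frame only the Coriolis acceleration
# a = -2 * Omega x v remains (Omega points out of the table).
for _ in range(int(2.0 * math.pi / OMEGA / DT)):
    ax, ay = 2.0 * OMEGA * vy, -2.0 * OMEGA * vx
    vx, vy = vx + ax * DT, vy + ay * DT
    x, y = x + vx * DT, y + vy * DT

print(round(math.hypot(vx, vy), 3))    # speed is (approximately) conserved: ~0.2 m/s
print(round(0.2 / (2.0 * OMEGA), 3))   # inertial-circle radius v / (2*Omega) = 0.2 m
# For OMEGA > 0 (a counterclockwise table) the puck curves to the right and
# traces a clockwise circle, the analogue of the Northern Hemisphere.
```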
The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.) The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum. In other areas Coriolis flow meter A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid. Molecular physics In polyatomic molecules, the molecule motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined. Gyroscopic precession When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies in their rotational frame. Insect flight Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell shaped organs located just behind their wings called "halteres". The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion. In moths, their antennae are known to be responsible for the sensing of Coriolis forces in the similar manner as with the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane. Lagrangian point stability In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. 
The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits. See also Analytical mechanics Applied mechanics Classical mechanics Earth's rotation Equatorial Rossby wave Frenet–Serret formulas Gyroscope Kinetics (physics) Reactive centrifugal force Secondary flow Statics Uniform circular motion Whirlpool Physics and meteorology Riccioli, G. B., 1651: Almagestum Novum, Bologna, pp. 425–427 (Original book [in Latin], scanned images of complete pages.) Coriolis, G. G., 1832: "Mémoire sur le principe des forces vives dans les mouvements relatifs des machines." Journal de l'école Polytechnique, Vol 13, pp. 268–302. (Original article [in French], PDF file, 1.6 MB, scanned images of complete pages.) Coriolis, G. G., 1835: "Mémoire sur les équations du mouvement relatif des systèmes de corps." Journal de l'école Polytechnique, Vol 15, pp. 142–154 (Original article [in French] PDF file, 400 KB, scanned images of complete pages.) Gill, A. E. Atmosphere-Ocean dynamics, Academic Press, 1982. Durran, D. R., 1993: Is the Coriolis force really responsible for the inertial oscillation?, Bull. Amer. Meteor. Soc., 74, pp. 2179–2184; Corrigenda. Bulletin of the American Meteorological Society, 75, p. 261 Durran, D. R., and S. K. Domonkos, 1996: An apparatus for demonstrating the inertial oscillation, Bulletin of the American Meteorological Society, 77, pp. 557–559. Marion, Jerry B. 1970, Classical Dynamics of Particles and Systems, Academic Press. Persson, A., 1998 How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, pp. 1373–1385. Symon, Keith. 1971, Mechanics, Addison–Wesley Akira Kageyama & Mamoru Hyodo: Eulerian derivation of the Coriolis force James F. Price: A Coriolis tutorial Woods Hole Oceanographic Institute (2003) . Elementary, non-mathematical; but well written. Historical Grattan-Guinness, I., Ed., 1994: Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vols. I and II. Routledge, 1840 pp. 1997: The Fontana History of the Mathematical Sciences. Fontana, 817 pp. 710 pp. Khrgian, A., 1970: Meteorology: A Historical Survey. Vol. 1. Keter Press, 387 pp. Kuhn, T. S., 1977: Energy conservation as an example of simultaneous discovery. The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, 66–104. Kutzbach, G., 1979: The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Amer. Meteor. Soc., 254 pp. References External links The definition of the Coriolis effect from the Glossary of Meteorology The Coriolis Effect — a conflict between common sense and mathematics PDF-file. 20 pages. A general discussion by Anders Persson of various aspects of the coriolis effect, including Foucault's Pendulum and Taylor columns. The coriolis effect in meteorology PDF-file. 5 pages. A detailed explanation by Mats Rosengren of how the gravitational force and the rotation of the Earth affect the atmospheric motion over the Earth surface. 
2 figures 10 Coriolis Effect Videos and Games- from the About.com Weather Page Coriolis Force – from ScienceWorld Coriolis Effect and Drains An article from the NEWTON web site hosted by the Argonne National Laboratory. Catalog of Coriolis videos Coriolis Effect: A graphical animation, a visual Earth animation with precise explanation An introduction to fluid dynamics SPINLab Educational Film explains the Coriolis effect with the aid of lab experiments Do bathtubs drain counterclockwise in the Northern Hemisphere? by Cecil Adams. Bad Coriolis. An article uncovering misinformation about the Coriolis effect. By Alistair B. Fraser, emeritus professor of meteorology at Pennsylvania State University The Coriolis Effect: A (Fairly) Simple Explanation, an explanation for the layperson Observe an animation of the Coriolis effect over Earth's surface Animation clip showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces. Vincent Mallette The Coriolis Force @ INWIT NASA notes Interactive Coriolis Fountain lets you control rotation speed, droplet speed and frame of reference to explore the Coriolis effect. Rotating Co-ordinating Systems , transformation from inertial systems Classical mechanics Force Atmospheric dynamics Physical phenomena Fictitious forces Rotation
Coriolis force
[ "Physics", "Chemistry", "Mathematics" ]
9,731
[ "Physical phenomena", "Force", "Physical quantities", "Atmospheric dynamics", "Quantity", "Mass", "Classical mechanics", "Fictitious forces", "Rotation", "Motion (physics)", "Mechanics", "Wikipedia categories named after physical quantities", "Matter", "Fluid dynamics" ]
7,794
https://en.wikipedia.org/wiki/Crystallography
Crystallography is the branch of science devoted to the study of molecular and crystalline structure and properties. The word crystallography is derived from the Ancient Greek words κρύσταλλος (krústallos; "clear ice, rock-crystal") and γράφειν (gráphein; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming 2014 the International Year of Crystallography. Crystallography is a broad topic, and many of its subareas, such as X-ray crystallography, are themselves important scientific topics. Crystallography ranges from the fundamentals of crystal structure to the mathematics of crystal geometry, including structures that are not periodic, such as quasicrystals. At the atomic scale it can involve the use of X-ray diffraction to produce experimental data that the tools of X-ray crystallography can convert into detailed positions of atoms, and sometimes electron density. At larger scales it includes experimental tools such as orientational imaging to examine the relative orientations at grain boundaries in materials. Crystallography plays a key role in many areas of biology, chemistry, and physics, as well as in new developments in these fields. History and timeline Before the 20th century, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. The discovery of X-rays and electrons in the last decade of the 19th century enabled the determination of crystal structures on the atomic scale, which brought about the modern era of crystallography. The first X-ray diffraction experiment was conducted in 1912 by Max von Laue, while electron diffraction was first realized in 1927 in the Davisson–Germer experiment and parallel work by George Paget Thomson and Alexander Reid. These developed into the two main branches of crystallography, X-ray crystallography and electron diffraction. The quality and throughput of solving crystal structures greatly improved in the second half of the 20th century, with the developments of customized instruments and phasing algorithms. Nowadays, crystallography is an interdisciplinary field, supporting theoretical and experimental discoveries in various domains. Modern-day scientific instruments for crystallography vary from laboratory-sized equipment, such as diffractometers and electron microscopes, to dedicated large facilities, such as photoinjectors, synchrotron light sources and free-electron lasers. Methodology Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray diffraction, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample. 
Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition the magnetic moment of neutrons is non-zero, so they are also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels, which can sometimes be resolved by substituting deuterium for hydrogen. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample. It is hard to focus x-rays or neutrons, but since electrons are charged they can be focused and are used in electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques such as scanning transmission electron microscopy, high-resolution electron microscopy can be used to obtain images with in many cases atomic resolution from which crystallographic information can be obtained. There are also other methods such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction which can be used to obtain crystallographic information about surfaces. Applications in various areas Materials science Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms is often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of samples with a large number of crystals, play an important role in structural determination. Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements. Crystallographic studies help elucidate the relationship between a material's structure and its properties, aiding in developing new materials with tailored characteristics. This understanding is crucial in various fields, including metallurgy, geology, and materials science. Advancements in crystallographic techniques, such as electron diffraction and X-ray crystallography, continue to expand our understanding of material behavior at the atomic level. In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs. Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. 
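Each such characteristic arrangement produces a characteristic diffraction fingerprint. As a minimal illustration (not from the text above, which does not state Bragg's law; the X-ray wavelength and lattice parameter below are assumed values, roughly Cu Kα radiation and an aluminium-like face-centred cubic metal), the powder-diffraction peak positions of a cubic phase can be computed as follows:

```python
import math
from itertools import product

WAVELENGTH = 1.5406   # X-ray wavelength in angstroms, roughly Cu K-alpha (assumed)
A_LATTICE = 4.05      # cubic lattice parameter in angstroms, aluminium-like (assumed)

def fcc_allowed(h, k, l):
    """FCC structure-factor rule: h, k, l must be all even or all odd."""
    parities = {h % 2, k % 2, l % 2}
    return len(parities) == 1 and (h, k, l) != (0, 0, 0)

peaks = set()
for h, k, l in product(range(5), repeat=3):
    if not fcc_allowed(h, k, l):
        continue
    d = A_LATTICE / math.sqrt(h*h + k*k + l*l)   # interplanar spacing of (hkl)
    s = WAVELENGTH / (2.0 * d)                   # Bragg's law: lambda = 2 d sin(theta)
    if s <= 1.0:
        peaks.add(round(2.0 * math.degrees(math.asin(s)), 2))

print(sorted(peaks))   # characteristic 2-theta positions, e.g. ~38.5, 44.7, 65.1 degrees
```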
X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory. Biology X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly protein and nucleic acids such as DNA and RNA. The double-helical structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological macromolecules. Computer programs such as RasMol, Pymol or VMD can be used to visualize biological molecular structures. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron diffraction has been used to determine some protein structures, most notably membrane proteins and viral capsids. Notation Coordinates in square brackets such as [100] denote a direction vector (in real space). Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001] or the negative of any of those directions. Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl]. Indices in curly brackets or braces such as {100} denote a family of planes and their normals. In cubic materials the symmetry makes them equivalent, just as the way angle brackets denote a family of directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}. Reference literature The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that covers analysis methods and the mathematical procedures for determining organic structure through x-ray crystallography, electron diffraction, and neutron diffraction. The International tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are: Vol A - Space Group Symmetry, Vol A1 - Symmetry Relations Between Space Groups, Vol B - Reciprocal Space, Vol C - Mathematical, Physical, and Chemical Tables, Vol D - Physical Properties of Crystals, Vol E - Subperiodic Groups, Vol F - Crystallography of Biological Macromolecules, and Vol G - Definition and Exchange of Crystallographic Data. 
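The cubic-system relationships described in the Notation section above can be checked with a short calculation. This is only a sketch, not from the source; it relies on the fact, stated above, that in a cubic crystal the normal to the (hkl) plane is the [hkl] direction:

```python
import math
from itertools import permutations

def angle_between(u, v):
    """Angle in degrees between two lattice directions [u] and [v] of a
    cubic crystal, where the ordinary Cartesian dot product applies."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# In a cubic crystal the normal to the (111) plane is the [111] direction,
# so the angle between the (100) and (111) planes equals the angle
# between [100] and [111]:
print(round(angle_between((1, 0, 0), (1, 1, 1)), 2))   # about 54.74 degrees

# The <100> family: all symmetry-equivalent directions of [100]
family_100 = set(permutations((1, 0, 0))) | set(permutations((-1, 0, 0)))
print(sorted(family_100))   # [100], [010], [001] and their negatives
```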
Notable scientists See also Atomic packing factor Crystal structure Crystallographic database Crystallographic point group Crystallographic group Dana classification system Electron crystallography Electron diffraction Fractional coordinates Low-energy electron diffraction Neutron crystallography Neutron diffraction at OPAL Neutron diffraction at the ILL NMR crystallography Point group Precession electron diffraction Quasicrystal Reflection high-energy electron diffraction Space group Symmetric group Timeline of crystallography Transmission electron microscopy X-ray crystallography References External links Free book, Geometry of Crystals, Polycrystals and Phase Transformations American Crystallographic Association Learning Crystallography Web Course on Crystallography Crystallographic Space Groups Chemistry Condensed matter physics Instrumental analysis Materials science Neutron-related techniques Synchrotron-related techniques
Crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,178
[ "Applied and interdisciplinary physics", "Instrumental analysis", "Phases of matter", "Materials science", "Crystallography", "Condensed matter physics", "nan", "Matter" ]
7,807
https://en.wikipedia.org/wiki/Cavitation
Cavitation in fluid mechanics and engineering normally is the phenomenon in which the static pressure of a liquid reduces to below the liquid's vapor pressure, leading to the formation of small vapor-filled cavities in the liquid. When subjected to higher pressure, these cavities, called "bubbles" or "voids", collapse and can generate shock waves that may damage machinery. These shock waves are strong when they are very close to the imploded bubble, but rapidly weaken as they propagate away from the implosion. Cavitation is a significant cause of wear in some engineering contexts. Collapsing voids that implode near to a metal surface cause cyclic stress through repeated implosion. This results in surface fatigue of the metal, causing a type of wear also called "cavitation". The most common examples of this kind of wear are to pump impellers, and bends where a sudden change in the direction of liquid occurs. Cavitation is usually divided into two classes of behavior. Inertial (or transient) cavitation is the process in which a void or bubble in a liquid rapidly collapses, producing a shock wave. It occurs in nature in the strikes of mantis shrimp and pistol shrimp, as well as in the vascular tissues of plants. In manufactured objects, it can occur in control valves, pumps, propellers and impellers. Non-inertial cavitation is the process in which a bubble in a fluid is forced to oscillate in size or shape due to some form of energy input, such as an acoustic field. The gas in the bubble may contain a portion of a different gas than the vapor phase of the liquid. Such cavitation is often employed in ultrasonic cleaning baths and can also be observed in pumps, propellers, etc. Since the shock waves formed by collapse of the voids are strong enough to cause significant damage to parts, cavitation is typically an undesirable phenomenon in machinery. It may be desirable if intentionally used, for example, to sterilize contaminated surgical instruments, break down pollutants in water purification systems, emulsify tissue for cataract surgery or kidney stone lithotripsy, or homogenize fluids. It is very often specifically prevented in the design of machines such as turbines or propellers, and eliminating cavitation is a major field in the study of fluid dynamics. However, it is sometimes useful and does not cause damage when the bubbles collapse away from machinery, such as in supercavitation. Physics Inertial cavitation Inertial cavitation was first observed in the late 19th century, considering the collapse of a spherical void within a liquid. When a volume of liquid is subjected to a sufficiently low pressure, it may rupture and form a cavity. This phenomenon is coined cavitation inception and may occur behind the blade of a rapidly rotating propeller or on any surface vibrating in the liquid with sufficient amplitude and acceleration. A fast-flowing river can cause cavitation on rock surfaces, particularly when there is a drop-off, such as on a waterfall. Vapor gases evaporate into the cavity from the surrounding medium; thus, the cavity is not a vacuum at all, but rather a low-pressure vapor (gas) bubble. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due its higher pressure, building up momentum as it moves inward. As the bubble finally collapses, the inward momentum of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. 
The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible light. At the point of total collapse, the temperature of the vapor within the bubble may be several thousand Kelvin, and the pressure several hundred atmospheres. The physical process of cavitation inception is similar to boiling. The major difference between the two is the thermodynamic paths that precede the formation of the vapor. Boiling occurs when the local temperature of the liquid reaches the saturation temperature, and further heat is supplied to allow the liquid to sufficiently phase change into a gas. Cavitation inception occurs when the local pressure falls sufficiently far below the saturated vapor pressure, a value given by the tensile strength of the liquid at a certain temperature. In order for cavitation inception to occur, the cavitation "bubbles" generally need a surface on which they can nucleate. This surface can be provided by the sides of a container, by impurities in the liquid, or by small undissolved microbubbles within the liquid. It is generally accepted that hydrophobic surfaces stabilize small bubbles. These pre-existing bubbles start to grow unbounded when they are exposed to a pressure below the threshold pressure, termed Blake's threshold. The presence of an incompressible core inside a cavitation nucleus substantially lowers the cavitation threshold below the Blake threshold. The vapor pressure here differs from the meteorological definition of vapor pressure, which describes the partial pressure of water in the atmosphere at some value less than 100% saturation. Vapor pressure as relating to cavitation refers to the vapor pressure in equilibrium conditions and can therefore be more accurately defined as the equilibrium (or saturated) vapor pressure. Non-inertial cavitation is the process in which small bubbles in a liquid are forced to oscillate in the presence of an acoustic field, when the intensity of the acoustic field is insufficient to cause total bubble collapse. This form of cavitation causes significantly less erosion than inertial cavitation, and is often used for the cleaning of delicate materials, such as silicon wafers. Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a spark. These techniques have been used to study the evolution of the bubble that is actually created by locally boiling the liquid with a local increment of temperature. Hydrodynamic cavitation Hydrodynamic cavitation is the process of vaporisation, bubble generation and bubble implosion which occurs in a flowing liquid as a result of a decrease and subsequent increase in local pressure. Cavitation will only occur if the local pressure declines to some point below the saturated vapor pressure of the liquid and subsequent recovery above the vapor pressure. If the recovery pressure is not above the vapor pressure then flashing is said to have occurred. In pipe systems, cavitation typically occurs either as the result of an increase in the kinetic energy (through an area constriction) or an increase in the pipe elevation. 
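How an area constriction alone can drive the local static pressure below the vapor pressure follows from continuity and Bernoulli's principle, which is mentioned in the text. The following sketch is not from the source; the upstream pressure, velocity, area ratios and the vapor pressure of water at about 20 °C are assumed illustrative values:

```python
RHO = 998.0          # density of water at about 20 C, kg/m^3 (assumed)
P_VAPOR = 2.34e3     # saturated vapor pressure of water at about 20 C, Pa (assumed)

def throat_pressure(p_up, v_up, area_ratio):
    """Static pressure and velocity at a constriction, from continuity and
    Bernoulli (incompressible, loss-free): v_throat = v_up / area_ratio and
    p_throat = p_up + 0.5 * rho * (v_up**2 - v_throat**2)."""
    v_throat = v_up / area_ratio
    return p_up + 0.5 * RHO * (v_up**2 - v_throat**2), v_throat

# Hypothetical example: 2 bar absolute upstream, 3 m/s, throat area 1/5 of the pipe.
p_t, v_t = throat_pressure(p_up=2.0e5, v_up=3.0, area_ratio=0.2)
print(v_t, p_t, p_t < P_VAPOR)    # 15 m/s at the throat, ~92 kPa: no cavitation yet

# Narrow the throat further (1/8 of the pipe area) and the predicted static
# pressure falls below the vapor pressure, so cavitation inception is expected.
p_t, v_t = throat_pressure(p_up=2.0e5, v_up=3.0, area_ratio=0.125)
print(v_t, p_t, p_t < P_VAPOR)
```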
Hydrodynamic cavitation can be produced by passing a liquid through a constricted channel at a specific flow velocity or by mechanical rotation of an object through a liquid. In the case of the constricted channel and based on the specific (or unique) geometry of the system, the combination of pressure and kinetic energy can create the hydrodynamic cavitation cavern downstream of the local constriction generating high energy cavitation bubbles. Based on the thermodynamic phase change diagram, an increase in temperature could initiate a known phase change mechanism known as boiling. However, a decrease in static pressure could also help one pass the multi-phase diagram and initiate another phase change mechanism known as cavitation. On the other hand, a local increase in flow velocity could lead to a static pressure drop to the critical point at which cavitation could be initiated (based on Bernoulli's principle). The critical pressure point is vapor saturated pressure. In a closed fluidic system where no flow leakage is detected, a decrease in cross-sectional area would lead to velocity increment and hence static pressure drop. This is the working principle of many hydrodynamic cavitation based reactors for different applications such as water treatment, energy harvesting, heat transfer enhancement, food processing, etc. There are different flow patterns detected as a cavitation flow progresses: inception, developed flow, supercavitation, and choked flow. Inception is the first moment that the second phase (gas phase) appears in the system. This is the weakest cavitating flow captured in a system corresponding to the highest cavitation number. When the cavities grow and becomes larger in size in the orifice or venturi structures, developed flow is recorded. The most intense cavitating flow is known as supercavitation where theoretically all the nozzle area of an orifice is filled with gas bubbles. This flow regime corresponds to the lowest cavitation number in a system. After supercavitation, the system is not capable of passing more flow. Hence, velocity does not change while the upstream pressure increase. This would lead to an increase in cavitation number which shows that choked flow occurred. The process of bubble generation, and the subsequent growth and collapse of the cavitation bubbles, results in very high energy densities and in very high local temperatures and local pressures at the surface of the bubbles for a very short time. The overall liquid medium environment, therefore, remains at ambient conditions. When uncontrolled, cavitation is damaging; by controlling the flow of the cavitation, however, the power can be harnessed and non-destructive. Controlled cavitation can be used to enhance chemical reactions or propagate certain unexpected reactions because free radicals are generated in the process due to disassociation of vapors trapped in the cavitating bubbles. Orifices and venturi are reported to be widely used for generating cavitation. A venturi has an inherent advantage over an orifice because of its smooth converging and diverging sections, such that it can generate a higher flow velocity at the throat for a given pressure drop across it. On the other hand, an orifice has an advantage that it can accommodate a greater number of holes (larger perimeter of holes) in a given cross sectional area of the pipe. 
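These regimes are commonly ranked by the cavitation number referred to above. The following is a minimal sketch, assuming the standard definition σ = (p_ref − p_vapor)/(½ρv²), which is not spelled out in the text, together with illustrative water properties:

```python
RHO = 998.0         # water density, kg/m^3 (assumed)
P_VAPOR = 2.34e3    # vapor pressure of water at about 20 C, Pa (assumed)

def cavitation_number(p_ref, v_ref):
    """Cavitation number sigma = (p_ref - p_vapor) / (0.5 * rho * v_ref**2).
    Lower sigma corresponds to a more strongly cavitating flow."""
    return (p_ref - P_VAPOR) / (0.5 * RHO * v_ref**2)

# Hypothetical orifice flow at 1 atm reference pressure and increasing velocity:
for v in (5.0, 10.0, 20.0, 40.0):
    print(v, round(cavitation_number(101325.0, v), 3))
# sigma drops from about 7.9 at 5 m/s to about 0.12 at 40 m/s, moving from
# non-cavitating flow toward the developed and supercavitating regimes.
```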
The cavitation phenomenon can be controlled to enhance the performance of high-speed marine vessels and projectiles, as well as in material processing technologies, in medicine, etc. Controlling the cavitating flows in liquids can be achieved only by advancing the mathematical foundation of the cavitation processes. These processes are manifested in different ways, the most common ones and promising for control being bubble cavitation and supercavitation. The first exact classical solution should perhaps be credited to the well-known solution by Hermann von Helmholtz in 1868. The earliest distinguished studies of academic type on the theory of a cavitating flow with free boundaries and supercavitation were published in the book Jets, wakes and cavities followed by Theory of jets of ideal fluid. Widely used in these books was the well-developed theory of conformal mappings of functions of a complex variable, allowing one to derive a large number of exact solutions of plane problems. Another venue combining the existing exact solutions with approximated and heuristic models was explored in the work Hydrodynamics of Flows with Free Boundaries that refined the applied calculation techniques based on the principle of cavity expansion independence, theory of pulsations and stability of elongated axisymmetric cavities, etc. and in Dimensionality and similarity methods in the problems of the hydromechanics of vessels. A natural continuation of these studies was recently presented in The Hydrodynamics of Cavitating Flows – an encyclopedic work encompassing all the best advances in this domain for the last three decades, and blending the classical methods of mathematical research with the modern capabilities of computer technologies. These include elaboration of nonlinear numerical methods of solving 3D cavitation problems, refinement of the known plane linear theories, development of asymptotic theories of axisymmetric and nearly axisymmetric flows, etc. As compared to the classical approaches, the new trend is characterized by expansion of the theory into the 3D flows. It also reflects a certain correlation with current works of an applied character on the hydrodynamics of supercavitating bodies. Hydrodynamic cavitation can also improve some industrial processes. For instance, cavitated corn slurry shows higher yields in ethanol production compared to uncavitated corn slurry in dry milling facilities. This is also used in the mineralization of bio-refractory compounds which otherwise would need extremely high temperature and pressure conditions since free radicals are generated in the process due to the dissociation of vapors trapped in the cavitating bubbles, which results in either the intensification of the chemical reaction or may even result in the propagation of certain reactions not possible under otherwise ambient conditions. Acoustic cavitation and ultrasonic cavitation Inertial cavitation can also occur in the presence of an acoustic field. Microscopic gas bubbles that are generally present in a liquid will be forced to oscillate due to an applied acoustic field. If the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapse. Hence, inertial cavitation can occur even if the rarefaction in the liquid is insufficient for a Rayleigh-like void to occur. Ultrasonic cavitation inception will occur when the acceleration of the ultrasound source is enough to produce the needed pressure drop. 
This pressure drop depends on the value of the acceleration and the size of the affected volume by the pressure wave. The dimensionless number that predicts ultrasonic cavitation is the Garcia-Atance number. High power ultrasonic horns produce accelerations high enough to create a cavitating region that can be used for homogenization, dispersion, deagglomeration, erosion, cleaning, milling, emulsification, extraction, disintegration, and sonochemistry. Aerodyamic cavitation Although predominant in liquids, cavitation exists to an extent in gas as it has fluid dynamics at high speeds. For example, a bullet with a flat tip moves faster underwater as it creates cavitation compared to a bullet with a sharp tip. An ideal shape for aerodynamic cavitation is a dune. It has such a form that provides minimal resistance to the wind. A surface with small dunes installed on aircraft and various high speed vehicles, the total friction against the air will decrease several times. The dune surface pushes the air upwards, underneath and behind the air pressure drops reducing friction. The dune may increase frontal resistance, but it will be compensated by a decrease in the total friction area, as it happens in an underwater bullet. As a result, the speed of the aircraft or vehicle will increase significantly. Applications Chemical engineering In industry, cavitation is often used to homogenize, or mix and break down, suspended particles in a colloidal liquid compound such as paint mixtures or milk. Many industrial mixing machines are based upon this design principle. It is usually achieved through impeller design or by forcing the mixture through an annular opening that has a narrow entrance orifice with a much larger exit orifice. In the latter case, the drastic decrease in pressure as the liquid accelerates into a larger volume induces cavitation. This method can be controlled with hydraulic devices that control inlet orifice size, allowing for dynamic adjustment during the process, or modification for different substances. The surface of this type of mixing valve, against which surface the cavitation bubbles are driven causing their implosion, undergoes tremendous mechanical and thermal localized stress; they are therefore often constructed of extremely strong and hard materials such as stainless steel, Stellite, or even polycrystalline diamond (PCD). Cavitating water purification devices have also been designed, in which the extreme conditions of cavitation can break down pollutants and organic molecules. Spectral analysis of light emitted in sonochemical reactions reveal chemical and plasma-based mechanisms of energy transfer. The light emitted from cavitation bubbles is termed sonoluminescence. Use of this technology has been tried successfully in alkali refining of vegetable oils. Hydrophobic chemicals are attracted underwater by cavitation as the pressure difference between the bubbles and the liquid water forces them to join. This effect may assist in protein folding. Biomedical Cavitation plays an important role for the destruction of kidney stones in shock wave lithotripsy. Currently, tests are being conducted as to whether cavitation can be used to transfer large molecules into biological cells (sonoporation). Nitrogen cavitation is a method used in research to lyse cell membranes while leaving organelles intact. 
Cavitation plays a key role in non-thermal, non-invasive fractionation of tissue for treatment of a variety of diseases and can be used to open the blood-brain barrier to increase uptake of neurological drugs in the brain. Cavitation also plays a role in HIFU, a thermal non-invasive treatment methodology for cancer. In wounds caused by high velocity impacts (like for example bullet wounds) there are also effects due to cavitation. The exact wounding mechanisms are not completely understood yet as there is temporary cavitation, and permanent cavitation together with crushing, tearing and stretching. Also the high variance in density within the body makes it hard to determine its effects. Ultrasound sometimes is used to increase bone formation, for instance in post-surgical applications. It has been suggested that the sound of "cracking" knuckles derives from the collapse of cavitation in the synovial fluid within the joint. Cavitation can also form Ozone micro-nanobubbles which shows promise in dental applications. Cleaning In industrial cleaning applications, cavitation has sufficient power to overcome the particle-to-substrate adhesion forces, loosening contaminants. The threshold pressure required to initiate cavitation is a strong function of the pulse width and the power input. This method works by generating acoustic cavitation in the cleaning fluid, picking up and carrying contaminant particles away in the hope that they do not reattach to the material being cleaned (which is a possibility when the object is immersed, for example in an ultrasonic cleaning bath). The same physical forces that remove contaminants also have the potential to damage the target being cleaned. Food and beverage Eggs Cavitation has been applied to egg pasteurization. A hole-filled rotor produces cavitation bubbles, heating the liquid from within. Equipment surfaces stay cooler than the passing liquid, so eggs do not harden as they did on the hot surfaces of older equipment. The intensity of cavitation can be adjusted, making it possible to tune the process for minimum protein damage. Vegetable oil production Cavitation has been applied to vegetable oil degumming and refining since 2011 and is considered a proven and standard technology in this application. The implementation of hydrodynamic cavitation in the degumming and refining process allows for a significant reduction in process aid, such as chemicals, water and bleaching clay, use. Biofuels Biodiesel Cavitation has been applied to Biodiesel production since 2011 and is considered a proven and standard technology in this application. The implementation of hydrodynamic cavitation in the transesterification process allows for a significant reduction in catalyst use, quality improvement and production capacity increase. Cavitation damage Cavitation is usually an undesirable occurrence. In devices such as propellers and pumps, cavitation causes a great deal of noise, damage to components, vibrations, and a loss of efficiency. Noise caused by cavitation can be particularly undesirable in naval vessels where such noise may render them more easily detectable by passive sonar. Cavitation has also become a concern in the renewable energy sector as it may occur on the blade surface of tidal stream turbines. When the cavitation bubbles collapse, they force energetic liquid into very small volumes, thereby creating spots of high temperature and emitting shock waves, the latter of which are a source of noise. 
The noise created by cavitation is a particular problem for military submarines, as it increases the chances of being detected by passive sonar. Although the collapse of a small cavity is a relatively low-energy event, highly localized collapses can erode metals, such as steel, over time. The pitting caused by the collapse of cavities produces great wear on components and can dramatically shorten a propeller's or pump's lifetime. After a surface is initially affected by cavitation, it tends to erode at an accelerating pace. The cavitation pits increase the turbulence of the fluid flow and create crevices that act as nucleation sites for additional cavitation bubbles. The pits also increase the components' surface area and leave behind residual stresses. This makes the surface more prone to stress corrosion. Pumps and propellers Major places where cavitation occurs are in pumps, on propellers, or at restrictions in a flowing liquid. As an impeller's (in a pump) or propeller's (as in the case of a ship or submarine) blades move through a fluid, low-pressure areas are formed as the fluid accelerates around and moves past the blades. The faster the blade moves, the lower the pressure can become around it. As it reaches vapor pressure, the fluid vaporizes and forms small bubbles of gas. This is cavitation. When the bubbles collapse later, they typically cause very strong local shock waves in the fluid, which may be audible and may even damage the blades. Cavitation in pumps may occur in two different forms: Suction cavitation Suction cavitation occurs when the pump suction is under a low-pressure/high-vacuum condition where the liquid turns into a vapor at the eye of the pump impeller. This vapor is carried over to the discharge side of the pump, where it no longer sees vacuum and is compressed back into a liquid by the discharge pressure. This imploding action occurs violently and attacks the face of the impeller. An impeller that has been operating under a suction cavitation condition can have large chunks of material removed from its face or very small bits of material removed, causing the impeller to look spongelike. Both cases will cause premature failure of the pump, often due to bearing failure. Suction cavitation is often identified by a sound like gravel or marbles in the pump casing. Common causes of suction cavitation can include clogged filters, pipe blockage on the suction side, poor piping design, pump running too far right on the pump curve, or conditions not meeting NPSH (net positive suction head) requirements. In automotive applications, a clogged filter in a hydraulic system (power steering, power brakes) can cause suction cavitation making a noise that rises and falls in synch with engine RPM. It is fairly often a high pitched whine, like set of nylon gears not quite meshing correctly. Discharge cavitation Discharge cavitation occurs when the pump discharge pressure is extremely high, normally occurring in a pump that is running at less than 10% of its best efficiency point. The high discharge pressure causes the majority of the fluid to circulate inside the pump instead of being allowed to flow out the discharge. As the liquid flows around the impeller, it must pass through the small clearance between the impeller and the pump housing at extremely high flow velocity. This flow velocity causes a vacuum to develop at the housing wall (similar to what occurs in a venturi), which turns the liquid into a vapor. 
A pump that has been operating under these conditions shows premature wear of the impeller vane tips and the pump housing. In addition, due to the high pressure conditions, premature failure of the pump's mechanical seal and bearings can be expected. Under extreme conditions, this can break the impeller shaft. Discharge cavitation in joint fluid is thought to cause the popping sound produced by bone joint cracking, for example by deliberately cracking one's knuckles. Cavitation solutions Since all pumps require well-developed inlet flow to meet their potential, a pump may not perform or be as reliable as expected due to a faulty suction piping layout such as a close-coupled elbow on the inlet flange. When poorly developed flow enters the pump impeller, it strikes the vanes and is unable to follow the impeller passage. The liquid then separates from the vanes causing mechanical problems due to cavitation, vibration and performance problems due to turbulence and poor filling of the impeller. This results in premature seal, bearing and impeller failure, high maintenance costs, high power consumption, and less-than-specified head and/or flow. To have a well-developed flow pattern, pump manufacturer's manuals recommend about (10 diameters?) of straight pipe run upstream of the pump inlet flange. Unfortunately, piping designers and plant personnel must contend with space and equipment layout constraints and usually cannot comply with this recommendation. Instead, it is common to use an elbow close-coupled to the pump suction which creates a poorly developed flow pattern at the pump suction. With a double-suction pump tied to a close-coupled elbow, flow distribution to the impeller is poor and causes reliability and performance shortfalls. The elbow divides the flow unevenly with more channeled to the outside of the elbow. Consequently, one side of the double-suction impeller receives more flow at a higher flow velocity and pressure while the starved side receives a highly turbulent and potentially damaging flow. This degrades overall pump performance (delivered head, flow and power consumption) and causes axial imbalance which shortens seal, bearing and impeller life. To overcome cavitation: Increase suction pressure if possible. Decrease liquid temperature if possible. Throttle back on the discharge valve to decrease flow-rate. Vent gases off the pump casing. Control valves Cavitation can occur in control valves. If the actual pressure drop across the valve as defined by the upstream and downstream pressures in the system is greater than the sizing calculations allow, pressure drop flashing or cavitation may occur. The change from a liquid state to a vapor state results from the increase in flow velocity at or just downstream of the greatest flow restriction which is normally the valve port. To maintain a steady flow of liquid through a valve the flow velocity must be greatest at the vena contracta or the point where the cross sectional area is the smallest. This increase in flow velocity is accompanied by a substantial decrease in the fluid pressure which is partially recovered downstream as the area increases and flow velocity decreases. This pressure recovery is never completely to the level of the upstream pressure. If the pressure at the vena contracta drops below the vapor pressure of the fluid bubbles will form in the flow stream. 
If the pressure recovers after the valve to a pressure that is once again above the vapor pressure, then the vapor bubbles will collapse and cavitation will occur. Spillways When water flows over a dam spillway, the irregularities on the spillway surface will cause small areas of flow separation in a high-speed flow, and, in these regions, the pressure will be lowered. If the flow velocities are high enough the pressure may fall to below the local vapor pressure of the water and vapor bubbles will form. When these are carried downstream into a high pressure region the bubbles collapse giving rise to high pressures and possible cavitation damage. Experimental investigations show that the damage on concrete chute and tunnel spillways can start at clear water flow velocities of between , and, up to flow velocities of , it may be possible to protect the surface by streamlining the boundaries, improving the surface finishes or using resistant materials. When some air is present in the water the resulting mixture is compressible and this damps the high pressure caused by the bubble collapses. If the flow velocities near the spillway invert are sufficiently high, aerators (or aeration devices) must be introduced to prevent cavitation. Although these have been installed for some years, the mechanisms of air entrainment at the aerators and the slow movement of the air away from the spillway surface are still challenging. The spillway aeration device design is based upon a small deflection of the spillway bed (or sidewall) such as a ramp and offset to deflect the high flow velocity flow away from the spillway surface. In the cavity formed below the nappe, a local subpressure beneath the nappe is produced by which air is sucked into the flow. The complete design includes the deflection device (ramp, offset) and the air supply system. Engines Some larger diesel engines suffer from cavitation due to high compression and undersized cylinder walls. Vibrations of the cylinder wall induce alternating low and high pressure in the coolant against the cylinder wall. The result is pitting of the cylinder wall, which will eventually let cooling fluid leak into the cylinder and combustion gases to leak into the coolant. It is possible to prevent this from happening with the use of chemical additives in the cooling fluid that form a protective layer on the cylinder wall. This layer will be exposed to the same cavitation, but rebuilds itself. Additionally a regulated overpressure in the cooling system (regulated and maintained by the coolant filler cap spring pressure) prevents the forming of cavitation. From about the 1980s, new designs of smaller gasoline engines also displayed cavitation phenomena. One answer to the need for smaller and lighter engines was a smaller coolant volume and a correspondingly higher coolant flow velocity. This gave rise to rapid changes in flow velocity and therefore rapid changes of static pressure in areas of high heat transfer. Where resulting vapor bubbles collapsed against a surface, they had the effect of first disrupting protective oxide layers (of cast aluminium materials) and then repeatedly damaging the newly formed surface, preventing the action of some types of corrosion inhibitor (such as silicate based inhibitors). A final problem was the effect that increased material temperature had on the relative electrochemical reactivity of the base metal and its alloying constituents. 
The result was deep pits that could form and penetrate the engine head in a matter of hours when the engine was running at high load and high speed. These effects could largely be avoided by the use of organic corrosion inhibitors or (preferably) by designing the engine head in such a way as to avoid certain cavitation inducing conditions. In nature Geology Some hypotheses relating to diamond formation posit a possible role for cavitation—namely cavitation in the kimberlite pipes providing the extreme pressure needed to change pure carbon into the rare allotrope that is diamond. The loudest three sounds ever recorded, during the 1883 eruption of Krakatoa, are now understood as the bursts of three huge cavitation bubbles, each larger than the last, formed in the volcano's throat. Rising magma, filled with dissolved gasses and under immense pressure, encountered a different magma that compressed easily, allowing bubbles to grow and combine. Vascular plants Cavitation can occur in the xylem of vascular plants. The sap vaporizes locally so that either the vessel elements or tracheids are filled with water vapor. Plants are able to repair cavitated xylem in a number of ways. For plants less than 50 cm tall, root pressure can be sufficient to redissolve the vapor. Larger plants direct solutes into the xylem via ray cells, or in tracheids, via osmosis through bordered pits. Solutes attract water, the pressure rises and vapor can redissolve. In some trees, the sound of the cavitation is audible, particularly in summer, when the rate of evapotranspiration is highest. Some deciduous trees have to shed leaves in the autumn partly because cavitation increases as temperatures decrease. Spore dispersal in plants Cavitation plays a role in the spore dispersal mechanisms of certain plants. In ferns, for example, the fern sporangium acts as a catapult that launches spores into the air. The charging phase of the catapult is driven by water evaporation from the annulus cells, which triggers a pressure decrease. When the compressive pressure reaches approximately 9MPa, cavitation occurs. This rapid event triggers spore dispersal due to the elastic energy released by the annulus structure. The initial spore acceleration is extremely large – up to 10 times the gravitational acceleration. Marine life Just as cavitation bubbles form on a fast-spinning boat propeller, they may also form on the tails and fins of aquatic animals. This primarily occurs near the surface of the ocean, where the ambient water pressure is low. Cavitation may limit the maximum swimming speed of powerful swimming animals like dolphins and tuna. Dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are painful. Tuna have bony fins without nerve endings and do not feel pain from cavitation. They are slowed down when cavitation bubbles create a vapor film around their fins. Lesions have been found on tuna that are consistent with cavitation damage. Some sea animals have found ways to use cavitation to their advantage when hunting prey. The pistol shrimp snaps a specialized claw to create cavitation, which can kill small fish. The mantis shrimp (of the smasher variety) uses cavitation as well in order to stun, smash open, or kill the shellfish that it feasts upon. Thresher sharks use 'tail slaps' to debilitate their small fish prey and cavitation bubbles have been seen rising from the apex of the tail arc. 
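Whether a given flow situation, a propeller tip, a valve, or a fast-swimming animal's fin, is likely to cavitate is commonly judged with the dimensionless cavitation number σ = (p - p_v) / (0.5 ρ U^2); cavitation becomes likely when σ falls below a threshold that depends on the geometry. The following sketch is an illustration added here (not part of the original article), with assumed but physically plausible values for sea water near the surface.

```python
# Cavitation number sigma = (p_ambient - p_vapor) / (0.5 * rho * U^2).
# Low sigma (roughly below ~1, geometry dependent) means cavitation is likely.
# All values are illustrative assumptions for sea water near the surface.

RHO_SEAWATER = 1025.0   # kg/m^3
P_ATM = 101_325.0       # Pa, ambient pressure near the surface
P_VAPOR = 2_300.0       # Pa, approximate vapor pressure of water at ~20 C

def cavitation_number(p_ambient, p_vapor, rho, speed):
    return (p_ambient - p_vapor) / (0.5 * rho * speed ** 2)

for speed in (5.0, 10.0, 15.0, 20.0):   # m/s, typical of fast fish or propeller blade speeds
    sigma = cavitation_number(P_ATM, P_VAPOR, RHO_SEAWATER, speed)
    print(f"U = {speed:4.1f} m/s  ->  sigma = {sigma:5.2f}")

# Around 10-20 m/s sigma drops toward order 1, consistent with the article's
# statement that cavitation can limit swimming speeds near the ocean surface,
# where the ambient pressure (and hence sigma) is lowest.
```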
Coastal erosion In the last half-decade, coastal erosion in the form of inertial cavitation has been generally accepted. Bubbles in an incoming wave are forced into cracks in the cliff being eroded. Varying pressure decompresses some vapor pockets which subsequently implode. The resulting pressure peaks can blast apart fractions of the rock. History As early as 1754, the Swiss mathematician Leonhard Euler (1707–1783) speculated about the possibility of cavitation. In 1859, the English mathematician William Henry Besant (1828–1917) published a solution to the problem of the dynamics of the collapse of a spherical cavity in a fluid, which had been presented by the Anglo-Irish mathematician George Stokes (1819–1903) as one of the Cambridge [University] Senate-house problems and riders for the year 1847. In 1894, Irish fluid dynamicist Osborne Reynolds (1842–1912) studied the formation and collapse of vapor bubbles in boiling liquids and in constricted tubes. The term cavitation first appeared in 1895 in a paper by John Isaac Thornycroft (1843–1928) and Sydney Walker Barnaby (1855–1925)—son of Sir Nathaniel Barnaby (1829 – 1915), who had been Chief Constructor of the Royal Navy—to whom it had been suggested by the British engineer Robert Edmund Froude (1846–1924), third son of the English hydrodynamicist William Froude (1810–1879). Early experimental studies of cavitation were conducted in 1894–5 by Thornycroft and Barnaby and by the Anglo-Irish engineer Charles Algernon Parsons (1854–1931), who constructed a stroboscopic apparatus to study the phenomenon. Thornycroft and Barnaby were the first researchers to observe cavitation on the back sides of propeller blades. In 1917, the British physicist Lord Rayleigh (1842–1919) extended Besant's work, publishing a mathematical model of cavitation in an incompressible fluid (ignoring surface tension and viscosity), in which he also determined the pressure in the fluid. The mathematical models of cavitation which were developed by British engineer Stanley Smith Cook (1875–1952) and by Lord Rayleigh revealed that collapsing bubbles of vapor could generate very high pressures, which were capable of causing the damage that had been observed on ships' propellers. Experimental evidence of cavitation causing such high pressures was initially collected in 1952 by Mark Harrison (a fluid dynamicist and acoustician at the U.S. Navy's David Taylor Model Basin at Carderock, Maryland, USA) who used acoustic methods and in 1956 by Wernfried Güth (a physicist and acoustician of Göttigen University, Germany) who used optical Schlieren photography. In 1944, Soviet scientists Mark Iosifovich Kornfeld (1908–1993) and L. Suvorov of the Leningrad Physico-Technical Institute (now: the Ioffe Physical-Technical Institute of the Russian Academy of Sciences, St. Petersburg, Russia) proposed that during cavitation, bubbles in the vicinity of a solid surface do not collapse symmetrically; instead, a dimple forms on the bubble at a point opposite the solid surface and this dimple evolves into a jet of liquid. This jet of liquid damages the solid surface. This hypothesis was supported in 1951 by theoretical studies by Maurice Rattray Jr., a doctoral student at the California Institute of Technology. Kornfeld and Suvorov's hypothesis was confirmed experimentally in 1961 by Charles F. Naudé and Albert T. Ellis, fluid dynamicists at the California Institute of Technology. 
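Rayleigh's incompressible-collapse model mentioned above also yields a simple estimate of how quickly an empty spherical cavity collapses, often called the Rayleigh collapse time, t ≈ 0.915 R0 sqrt(ρ / Δp). The short sketch below is an illustration added here (not part of the original article), using assumed values for water at atmospheric pressure.

```python
# Rayleigh collapse time for an empty spherical cavity in an incompressible
# liquid: t_c ~= 0.915 * R0 * sqrt(rho / delta_p), where delta_p is the
# difference between the ambient pressure and the (negligible) cavity pressure.
# The values below are illustrative assumptions.
from math import sqrt

def rayleigh_collapse_time(radius_m, rho=998.0, delta_p=101_325.0):
    return 0.915 * radius_m * sqrt(rho / delta_p)

for r_mm in (1.0, 0.1, 0.01):   # cavity radii of 1 mm, 0.1 mm and 10 micrometres
    t = rayleigh_collapse_time(r_mm * 1e-3)
    print(f"R0 = {r_mm:5.2f} mm  ->  collapse time ~ {t * 1e6:6.1f} microseconds")

# A 1 mm cavity in water at atmospheric pressure collapses in roughly 0.1 ms,
# which is why cavity collapses register acoustically as sharp, impulsive clicks.
```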
A series of experimental investigations of the propagation of strong shock waves (SW) in a liquid containing gas bubbles was begun with the pioneering work of the Soviet scientist Prof. V. F. Minin at the Institute of Hydrodynamics (Novosibirsk, Russia) in 1957–1960. These studies established the basic laws governing the process, the mechanism by which the energy of the SW is transformed, the attenuation of the SW, and the formation of its structure, and included experiments on the attenuation of waves in bubble screens with different acoustic properties. Minin also examined the first convenient model of such a screen: a sequence of alternating flat, one-dimensional liquid and gas layers. In experimental investigations of the dynamics of pulsating gaseous cavities and of the interaction of SW with bubble clouds in 1957–1960, Minin discovered that under the action of a SW a bubble collapses asymmetrically, forming a cumulative jet during the collapse that fragments the bubble. 
See also References Further reading For cavitation in plants, see Plant Physiology by Taiz and Zeiger. For cavitation in the engineering field, visit Cavitation corrosion For hydrodynamic cavitation in the ethanol field, visit Arisdyne and Ethanol Producer Magazine: "Tiny Bubbles to Make You Happy" For Cavitation on tidal stream turbines, see External links Cavitation and Bubbly Flows, Saint Anthony Falls Laboratory, University of Minnesota Cavitation and Bubble Dynamics by Christopher E. Brennen Fundamentals of Multiphase Flow by Christopher E. Brennen van der Waals-type CFD Modeling of Cavitation Cavitation bubble in varying gravitational fields, jet-formation Cavitation limits the speed of dolphins Tiny Bubbles to Make You Happy Pump Cavitation Fluid dynamics Physical phenomena Articles containing video clips Bubbles (physics) Pressure
Cavitation
[ "Physics", "Chemistry", "Engineering" ]
8,068
[ "Scalar physical quantities", "Physical phenomena", "Mechanical quantities", "Physical quantities", "Bubbles (physics)", "Foams", "Chemical engineering", "Pressure", "Piping", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
7,832
https://en.wikipedia.org/wiki/Complete%20metric%20space
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. √2 is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. Definition Cauchy sequence A sequence x_1, x_2, x_3, ... of elements of a metric space (X, d) is called Cauchy if for every positive real number r > 0 there is a positive integer N such that for all positive integers m, n > N, d(x_m, x_n) < r. Complete space A metric space (X, d) is complete if any of the following equivalent conditions are satisfied: Every Cauchy sequence of points in X has a limit that is also in X. Every Cauchy sequence in X converges in X (that is, to some point of X). Every decreasing sequence of non-empty closed subsets of X, with diameters tending to 0, has a non-empty intersection: if F_n is closed and non-empty, F_(n+1) is contained in F_n for every n, and diam(F_n) → 0, then there is a unique point x in X common to all sets F_n. Examples The space of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by x_1 = 1 and x_(n+1) = x_n/2 + 1/x_n. This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: If the sequence did have a limit x, then by solving x = x/2 + 1/x necessarily x^2 = 2, yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number √2. The open interval (0, 1), again with the absolute difference metric, is not complete either. The sequence defined by x_n = 1/n is Cauchy, but does not have a limit in the given space. However the closed interval [0, 1] is complete; for example the given sequence does have a limit in this interval, namely zero. The space of real numbers and the space of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space R^n, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space Qp of p-adic numbers is complete for any prime number p. This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric. If S is an arbitrary set, then the set of all sequences in S becomes a complete metric space if we define the distance between the sequences (x_n) and (y_n) to be 1/N, where N is the smallest index for which x_N is distinct from y_N, or 0 if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S. Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. Some theorems Every compact metric space is complete, though complete spaces need not be compact. 
In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace of is compact and therefore complete. Let be a complete metric space. If is a closed set, then is also complete. Let be a metric space. If is a complete subspace, then is also closed. If is a set and is a complete metric space, then the set of all bounded functions from to is a complete metric space. Here we define the distance in in terms of the distance in with the supremum norm If is a topological space and is a complete metric space, then the set consisting of all continuous bounded functions is a closed subspace of and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. Completion For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as ), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M''' is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M. The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences and in M, we may define their distance as (This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M' with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). 
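As a concrete illustration of this construction (a sketch added here, not part of the original article), the rational sequence from the Examples section above, x_1 = 1 and x_(n+1) = x_n/2 + 1/x_n, is one representative of the equivalence class of Cauchy sequences that the completion identifies with √2. The snippet below uses Python's standard fractions module so that every term stays an exact rational number.

```python
# One Cauchy sequence of rationals in the equivalence class that the
# completion construction identifies with sqrt(2).  Illustrative sketch;
# exact rational arithmetic via fractions.Fraction (standard library).
from fractions import Fraction

x = Fraction(1)                 # x_1 = 1
terms = [x]
for _ in range(6):
    x = x / 2 + 1 / x           # x_(n+1) = x_n/2 + 1/x_n  (Babylonian step)
    terms.append(x)

for n, t in enumerate(terms, start=1):
    print(f"x_{n} = {t} ~= {float(t):.12f}")

# Consecutive distances d(x_n, x_(n+1)) shrink rapidly, so the sequence is
# Cauchy in Q; its limit 1.41421356... is not rational, which is exactly why
# Q is not complete and why the completion adds this "missing" point back.
for n in range(len(terms) - 1):
    print(f"|x_{n+2} - x_{n+1}| = {float(abs(terms[n+1] - terms[n])):.3e}")
```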
One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime the -adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. Topologically complete spaces Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval , which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces. A topological space homeomorphic to a separable complete metric space is called a Polish space. Alternatives and generalizations Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points and is gauged not by a real number via the metric in the comparison but by an open neighbourhood of via subtraction in the comparison A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in then is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces. See also Notes References Kreyszig, Erwin, Introductory functional analysis with applications'' (Wiley, New York, 1978). Lang, Serge, "Real and Functional Analysis" Metric geometry Topology Uniform spaces
Complete metric space
[ "Physics", "Mathematics" ]
2,131
[ "Uniform spaces", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime" ]
7,834
https://en.wikipedia.org/wiki/Chain%20reaction
A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events. Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a state of higher entropy. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released. A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results ("snowball effect"). This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or (in a bomb) a nuclear explosion. Another metaphor for a chain reaction is the domino effect, named after the act of domino toppling, where the simple action of toppling one domino leads to all dominoes eventually toppling, even if they are significantly larger. Numerous chain reactions can be represented by a mathematical model based on Markov chains. Chemical chain reactions History In 1913, the German chemist Max Bodenstein first put forth the idea of chemical chain reactions. If two molecules react, not only molecules of the final reaction products are formed, but also some unstable molecules which can further react with the parent molecules with a far larger probability than the initial reactants. (In the new reaction, further unstable molecules are formed besides the stable products, and so on.) In 1918, Walther Nernst proposed that the photochemical reaction between hydrogen and chlorine is a chain reaction in order to explain what is known as the quantum yield phenomena. This means that one photon of light is responsible for the formation of as many as 106 molecules of the product HCl. Nernst suggested that the photon dissociates a Cl2 molecule into two Cl atoms which each initiate a long chain of reaction steps forming HCl. In 1923, Danish and Dutch scientists J. A. Christiansen and Hendrik Anthony Kramers, in an analysis of the formation of polymers, pointed out that such a chain reaction need not start with a molecule excited by light, but could also start with two molecules colliding violently due to thermal energy as previously proposed for initiation of chemical reactions by van' t Hoff. Christiansen and Kramers also noted that if, in one link of the reaction chain, two or more unstable molecules are produced, the reaction chain would branch and grow. The result is in fact an exponential growth, thus giving rise to explosive increases in reaction rates, and indeed to chemical explosions themselves. This was the first proposal for the mechanism of chemical explosions. A quantitative chain chemical reaction theory was created later on by Soviet physicist Nikolay Semyonov in 1934. 
Semyonov shared the Nobel Prize in 1956 with Sir Cyril Norman Hinshelwood, who independently developed many of the same quantitative concepts. Typical steps The main types of steps in chain reaction are of the following types. Initiation (formation of active particles or chain carriers, often free radicals, in either a thermal or a photochemical step) Propagation (may comprise several elementary steps in a cycle, where the active particle through reaction forms another active particle which continues the reaction chain by entering the next elementary step). In effect the active particle serves as a catalyst for the overall reaction of the propagation cycle. Particular cases are: chain branching (a propagation step where one active particle enters the step and two or more are formed); chain transfer (a propagation step in which the active particle is a growing polymer chain which reacts to form an inactive polymer whose growth is terminated and an active small particle (such as a radical), which may then react to form a new polymer chain). Termination (elementary step in which the active particle loses its activity; e. g. by recombination of two free radicals). The chain length is defined as the average number of times the propagation cycle is repeated, and equals the overall reaction rate divided by the initiation rate. Some chain reactions have complex rate equations with fractional order or mixed order kinetics. Detailed example: the hydrogen-bromine reaction The reaction H2 + Br2 → 2 HBr proceeds by the following mechanism: Initiation Br2 → 2 Br• (thermal) or Br2 + hν → 2 Br• (photochemical) each Br atom is a free radical, indicated by the symbol "•" representing an unpaired electron. Propagation (here a cycle of two steps) Br• + H2 → HBr + H• H• + Br2 → HBr + Br• the sum of these two steps corresponds to the overall reaction H2 + Br2 → 2 HBr, with catalysis by Br• which participates in the first step and is regenerated in the second step. Retardation (inhibition) H• + HBr → H2 + Br• this step is specific to this example, and corresponds to the first propagation step in reverse. Termination 2 Br• → Br2 recombination of two radicals, corresponding in this example to initiation in reverse. As can be explained using the steady-state approximation, the thermal reaction has an initial rate of fractional order (3/2), and a complete rate equation with a two-term denominator (mixed-order kinetics). Further chemical examples The reaction 2 H2 + O2 → 2 H2O provides an example of chain branching. The propagation is a sequence of two steps whose net effect is to replace an H atom by another H atom plus two OH radicals. This leads to an explosion under certain conditions of temperature and pressure. H• + O2 → •OH + •O• •O• + H2 → •OH + H• In chain-growth polymerization, the propagation step corresponds to the elongation of the growing polymer chain. Chain transfer corresponds to transfer of the activity from this growing chain, whose growth is terminated, to another molecule which may be a second growing polymer chain. For polymerization, the kinetic chain length defined above may differ from the degree of polymerization of the product macromolecule. Polymerase chain reaction, a technique used in molecular biology to amplify (make many copies of) a piece of DNA by in vitro enzymatic replication using a DNA polymerase. 
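For the hydrogen-bromine mechanism above, the steady-state treatment referred to in the text leads to the classic rate law with a two-term denominator. The compact sketch below is added here for illustration and follows the standard textbook derivation; the rate-constant labels are an assumption made for this sketch (k1 initiation, k2 and k3 propagation, k4 retardation, k5 termination), since the article itself does not name them.

```latex
% Steady-state result for the H2 + Br2 -> 2 HBr chain reaction.
% Assumed labels: k_1 initiation, k_2 and k_3 propagation, k_4 retardation, k_5 termination.
\[
  [\mathrm{Br}] = \left(\frac{k_1[\mathrm{Br_2}]}{k_5}\right)^{1/2},
  \qquad
  [\mathrm{H}] = \frac{k_2[\mathrm{Br}][\mathrm{H_2}]}{k_3[\mathrm{Br_2}] + k_4[\mathrm{HBr}]},
\]
\[
  \frac{d[\mathrm{HBr}]}{dt}
  = \frac{2 k_2 (k_1/k_5)^{1/2}\, [\mathrm{H_2}]\, [\mathrm{Br_2}]^{1/2}}
         {1 + \frac{k_4[\mathrm{HBr}]}{k_3[\mathrm{Br_2}]}}.
\]
```

At the start of the reaction, when little HBr has accumulated, the denominator is close to 1 and the rate is proportional to [H2][Br2]^(1/2), which is the initial 3/2-order behaviour mentioned in the text; as HBr builds up, the retardation term in the denominator slows the reaction.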
Acetaldehyde pyrolysis and rate equation The pyrolysis (thermal decomposition) of acetaldehyde, CH3CHO (g) → CH4 (g) + CO (g), proceeds via the Rice-Herzfeld mechanism: Initiation (formation of free radicals): CH3CHO (g) → •CH3 (g) + •CHO (g) k1 The methyl and CHO groups are free radicals. Propagation (two steps): •CH3 (g) + CH3CHO (g) → CH4 (g) + •CH3CO (g) k2 This reaction step provides methane, which is one of the two main products. •CH3CO (g) → CO (g) + •CH3 (g) k3 The product •CH3CO (g) of the previous step gives rise to carbon monoxide (CO), which is the second main product. The sum of the two propagation steps corresponds to the overall reaction CH3CHO (g) → CH4 (g) + CO (g), catalyzed by a methyl radical •CH3. Termination: •CH3 (g) + •CH3 (g) → C2H6 (g) k4 This reaction is the only source of ethane (minor product) and it is concluded to be the main chain-ending step. Although this mechanism explains the principal products, there are others that are formed in a minor degree, such as acetone (CH3COCH3) and propanal (CH3CH2CHO). Applying the steady-state approximation for the intermediate species •CH3 (g) and •CH3CO (g), the rate law for the formation of methane and the order of reaction are found. The rate of formation of the product methane is
d[CH4]/dt = k2 [•CH3] [CH3CHO]   (1)
For the intermediates,
d[•CH3]/dt = k1 [CH3CHO] - k2 [•CH3] [CH3CHO] + k3 [•CH3CO] - 2 k4 [•CH3]^2 = 0   (2)
and
d[•CH3CO]/dt = k2 [•CH3] [CH3CHO] - k3 [•CH3CO] = 0   (3)
Adding (2) and (3), we obtain
k1 [CH3CHO] - 2 k4 [•CH3]^2 = 0,
so that
[•CH3] = (k1/(2 k4))^(1/2) [CH3CHO]^(1/2)   (4)
Using (4) in (1) gives the rate law
d[CH4]/dt = k2 (k1/(2 k4))^(1/2) [CH3CHO]^(3/2),
which is order 3/2 in the reactant CH3CHO. 
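The 3/2-order result above can also be checked mechanically. The sketch below (an illustration added here, not part of the original article) feeds the two steady-state conditions to a computer-algebra system and recovers the same rate law; it assumes the SymPy library is available.

```python
# Verify the steady-state rate law for acetaldehyde pyrolysis with SymPy.
# A stands for [CH3CHO]; CH3 and CH3CO are the radical intermediate concentrations.
import sympy as sp

k1, k2, k3, k4, A = sp.symbols("k1 k2 k3 k4 A", positive=True)
CH3, CH3CO = sp.symbols("CH3 CH3CO", positive=True)

# d[CH3CO]/dt = 0  =>  [CH3CO] = k2*[CH3]*A / k3
ch3co = sp.solve(sp.Eq(k2 * CH3 * A - k3 * CH3CO, 0), CH3CO)[0]

# d[CH3]/dt = 0 with [CH3CO] substituted; keep the physical (positive) root
roots = sp.solve(sp.Eq(k1 * A - k2 * CH3 * A + k3 * ch3co - 2 * k4 * CH3**2, 0), CH3)
ch3 = next(r for r in roots if r.is_positive)

rate_ch4 = sp.simplify(k2 * ch3 * A)
print(rate_ch4)
# Expected output (up to rearrangement): k2 * sqrt(k1/(2*k4)) * A**(3/2),
# i.e. the formation of methane is 3/2 order in CH3CHO, as stated in the text.
```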
Electron avalanche in gases An electron avalanche happens between two unconnected electrodes in a gas when an electric field exceeds a certain threshold. Random thermal collisions of gas atoms may result in a few free electrons and positively charged gas ions, in a process called impact ionization. Acceleration of these free electrons in a strong electric field causes them to gain energy, and when they impact other atoms, the energy causes release of new free electrons and ions (ionization), which fuels the same process. If this process happens faster than it is naturally quenched by ions recombining, the new ions multiply in successive cycles until the gas breaks down into a plasma and current flows freely in a discharge. Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous electric arc that completely bridges the gap. The process may extend huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region. The extremely high temperature of the resulting plasma cracks the surrounding gas molecules and the free ions recombine to create new chemical compounds. The process can also be used to detect radiation that initiates the process, as the passage of a single particles can be amplified to large discharges. This is the mechanism of a Geiger counter and also the visualization possible with a spark chamber and other wire chambers. Avalanche breakdown in semiconductors An avalanche breakdown process can happen in semiconductors, which in some ways conduct electricity analogously to a mildly ionized gas. Semiconductors rely on free electrons knocked out of the crystal by thermal vibration for conduction. Thus, unlike metals, semiconductors become better conductors the higher the temperature. This sets up conditions for the same type of positive feedback—heat from current flow causes temperature to rise, which increases charge carriers, lowering resistance, and causing more current to flow. This can continue to the point of complete breakdown of normal resistance at a semiconductor junction, and failure of the device (this may be temporary or permanent depending on whether there is physical damage to the crystal). Certain devices, such as avalanche diodes, deliberately make use of the effect. Living organisms Examples of chain reactions in living organisms include excitation of neurons in epilepsy and lipid peroxidation. In peroxidation, a lipid radical reacts with oxygen to form a peroxyl radical (L• + O2 → LOO•). The peroxyl radical then oxidises another lipid, thus forming another lipid radical (LOO• + L–H → LOOH + L•). A chain reaction in glutamatergic synapses is the cause of synchronous discharge in some epileptic seizures. See also Cascading failure Multiple-vehicle collision Rube Goldberg machine References External links IUPAC Gold Book - Chain reaction Chemical kinetics Metaphors referring to objects Causality
Chain reaction
[ "Physics", "Chemistry" ]
2,864
[ "Chemical reaction engineering", "Chemical kinetics" ]
7,839
https://en.wikipedia.org/wiki/Stellar%20corona
A corona (plural: coronas or coronae) is the outermost layer of a star's atmosphere. It is a hot but relatively dim region of plasma populated by intermittent coronal structures known as solar prominences or filaments. The Sun's corona lies above the chromosphere and extends millions of kilometres into outer space. Coronal light is typically obscured by diffuse sky radiation and glare from the solar disk, but can be easily seen by the naked eye during a total solar eclipse or with a specialized coronagraph. Spectroscopic measurements indicate strong ionization in the corona and a plasma temperature in excess of , much hotter than the surface of the Sun, known as the photosphere. The word corona is Latin for 'crown', in turn derived from the Ancient Greek κορώνη (korōnē), 'garland' or 'wreath'. History In 1724, French-Italian astronomer Giacomo F. Maraldi recognized that the aura visible during a solar eclipse belongs to the Sun, not to the Moon. In 1809, Spanish astronomer José Joaquín de Ferrer coined the term 'corona'. Based on his own observations of the 1806 solar eclipse at Kinderhook (New York), de Ferrer also proposed that the corona was part of the Sun and not of the Moon. English astronomer Norman Lockyer identified in the Sun's chromosphere the first element unknown on Earth, which was called helium (from Greek 'sun'). French astronomer Jules Janssen noted, after comparing his readings between the 1871 and 1878 eclipses, that the size and shape of the corona changes with the sunspot cycle. In 1930, Bernard Lyot invented the "coronograph" (now "coronagraph"), which allows viewing the corona without a total eclipse. In 1988, American astronomer Eugene Parker proposed that the solar corona might be heated by myriad tiny 'nanoflares', miniature brightenings resembling solar flares that would occur all over the surface of the Sun. Historical theories The high temperature of the Sun's corona gives it unusual spectral features, which led some in the 19th century to suggest that it contained a previously unknown element, "coronium". Instead, these spectral features have since been explained by highly ionized iron (Fe-XIV, or Fe13+). Bengt Edlén, following the work of Walter Grotrian in 1939, first identified the coronal spectral lines in 1940 (observed since 1869) as transitions from low-lying metastable levels of the ground configuration of highly ionised metals (the green Fe-XIV line from Fe13+ at , but also the red Fe-X line from Fe9+ at ). Observable components The solar corona has three recognized, and distinct, sources of light that occupy the same volume: the "F-corona" (for "Fraunhofer"), the "K-corona" (for "Kontinuierlich"), and the "E-corona" (for "emission"). The "F-corona" is named for the Fraunhofer spectrum of absorption lines in ordinary sunlight, which are preserved by reflection off small material objects. The F-corona is faint near the Sun itself, but drops in brightness only gradually far from the Sun, extending far across the sky and becoming the zodiacal light. The F-corona is recognized to arise from small dust grains orbiting the Sun; these form a tenuous cloud that extends through much of the solar system. The "K-corona" is named for the fact that its spectrum is a continuum, with no major spectral features. It is sunlight that is Thomson-scattered by free electrons in the hot plasma of the Sun's outer atmosphere. The continuum nature of the spectrum arises from Doppler broadening of the Sun's Fraunhofer absorption lines in the reference frame of the (hot and therefore fast-moving) electrons. 
Although the K-corona is a phenomenon of the electrons in the plasma, the term is frequently used to describe the plasma itself (as distinct from the dust that gives rise to the F-corona). The "E-corona" is the component of the corona with an emission-line spectrum, either inside or outside the wavelength band of visible light. It is a phenomenon of the ion component of the plasma, as individual ions are excited by collision with other ions or electrons, or by absorption of ultraviolet light from the Sun. Physical features The Sun's corona is much hotter (by a factor from 150 to 450) than the visible surface of the Sun: the corona's temperature is 1 to 3 million kelvin compared to the photosphere's average temperature – around . The corona is far less dense than the photosphere, and produces about one-millionth as much visible light. The corona is separated from the photosphere by the relatively shallow chromosphere. The exact mechanism by which the corona is heated is still the subject of some debate, but likely possibilities include episodic energy releases from the pervasive magnetic field and magnetohydrodynamic waves from below. The outer edges of the Sun's corona are constantly being transported away, creating the "open" magnetic flux entrained in the solar wind. The corona is not always evenly distributed across the surface of the Sun. During periods of quiet, the corona is more or less confined to the equatorial regions, with coronal holes covering the polar regions. However, during the Sun's active periods, the corona is evenly distributed over the equatorial and polar regions, though it is most prominent in areas with sunspot activity. The solar cycle spans approximately 11 years, from one solar minimum to the following minimum. Since the solar magnetic field is continually wound up due to the faster rotation of mass at the Sun's equator (differential rotation), sunspot activity is more pronounced at solar maximum where the magnetic field is more twisted. Associated with sunspots are coronal loops, loops of magnetic flux, upwelling from the solar interior. The magnetic flux pushes the hotter photosphere aside, exposing the cooler plasma below, thus creating the relatively dark sun spots. High-resolution X-ray images of the Sun's corona photographed by Skylab in 1973, by Yohkoh in 1991–2001, and by subsequent space-based instruments revealed the structure of the corona to be quite varied and complex, leading astronomers to classify various zones on the coronal disc. Astronomers usually distinguish several regions, as described below. Active regions Active regions are ensembles of loop structures connecting points of opposite magnetic polarity in the photosphere, the so-called coronal loops. They generally distribute in two zones of activity, which are parallel to the solar equator. The average temperature is between two and four million kelvin, while the density goes from 109 to 1010 particles per cubic centimetre. Active regions involve all the phenomena directly linked to the magnetic field, which occur at different heights above the Sun's surface: sunspots and faculae occur in the photosphere; spicules, Hα filaments and plages in the chromosphere; prominences in the chromosphere and transition region; and flares and coronal mass ejections (CME) happen in the corona and chromosphere. If flares are very violent, they can also perturb the photosphere and generate a Moreton wave. 
On the contrary, quiescent prominences are large, cool, dense structures which are observed as dark, "snake-like" Hα ribbons (appearing like filaments) on the solar disc. Their temperature is about –, and so they are usually considered as chromospheric features. In 2013, images from the High Resolution Coronal Imager revealed never-before-seen "magnetic braids" of plasma within the outer layers of these active regions. Coronal loops Coronal loops are the basic structures of the magnetic solar corona. These loops are the closed-magnetic flux cousins of the open-magnetic flux that can be found in coronal holes and the solar wind. Loops of magnetic flux well up from the solar body and fill with hot solar plasma. Due to the heightened magnetic activity in these coronal loop regions, coronal loops can often be the precursor to solar flares and CMEs. The solar plasma that feeds these structures is heated from under to well over 106 K from the photosphere, through the transition region, and into the corona. Often, the solar plasma will fill these loops from one point and drain to another, called foot points (siphon flow due to a pressure difference, or asymmetric flow due to some other driver). When the plasma rises from the foot points towards the loop top, as always occurs during the initial phase of a compact flare, it is defined as chromospheric evaporation. When the plasma rapidly cools and falls toward the photosphere, it is called chromospheric condensation. There may also be symmetric flow from both loop foot points, causing a build-up of mass in the loop structure. The plasma may cool rapidly in this region (for a thermal instability), its dark filaments obvious against the solar disk or prominences off the Sun's limb. Coronal loops may have lifetimes in the order of seconds (in the case of flare events), minutes, hours or days. Where there is a balance in loop energy sources and sinks, coronal loops can last for long periods of time and are known as steady state or quiescent coronal loops (example). Coronal loops are very important to our understanding of the current coronal heating problem. Coronal loops are highly radiating sources of plasma and are therefore easy to observe by instruments such as TRACE. An explanation of the coronal heating problem remains as these structures are being observed remotely, where many ambiguities are present (i.e., radiation contributions along the line-of-sight propagation). In-situ measurements are required before a definitive answer can be determined, but due to the high plasma temperatures in the corona, in-situ measurements are, at present, impossible. The next mission of the NASA Parker Solar Probe will approach the Sun very closely, allowing more direct observations. Large-scale structures Large-scale structures are very long arcs which can cover over a quarter of the solar disk but contain plasma less dense than in the coronal loops of the active regions. They were first detected in the June 8, 1968, flare observation during a rocket flight. The large-scale structure of the corona changes over the 11-year solar cycle and becomes particularly simple during the minimum period, when the magnetic field of the Sun is almost similar to a dipolar configuration (plus a quadrupolar component). Interconnections of active regions The interconnections of active regions are arcs connecting zones of opposite magnetic field, of different active regions. Significant variations of these structures are often seen after a flare. 
Some other features of this kind are helmet streamers – large, cap-like coronal structures with long, pointed peaks that usually overlie sunspots and active regions. Coronal streamers are considered to be sources of the slow solar wind. Filament cavities Filament cavities are zones which look dark in the X-rays and are above the regions where Hα filaments are observed in the chromosphere. They were first observed in the two 1970 rocket flights which also detected coronal holes. Filament cavities are cooler clouds of plasma suspended above the Sun's surface by magnetic forces. The regions of intense magnetic field look dark in images because they are empty of hot plasma. In fact, the sum of the magnetic pressure and plasma pressure must be constant everywhere on the heliosphere in order to have an equilibrium configuration: where the magnetic field is higher, the plasma must be cooler or less dense. The plasma pressure can be calculated by the state equation of a perfect gas: , where is the particle number density, the Boltzmann constant and the plasma temperature. It is evident from the equation that the plasma pressure lowers when the plasma temperature decreases with respect to the surrounding regions or when the zone of intense magnetic field empties. The same physical effect renders sunspots apparently dark in the photosphere. Bright points Bright points are small active regions found on the solar disk. X-ray bright points were first detected on April 8, 1969, during a rocket flight. The fraction of the solar surface covered by bright points varies with the solar cycle. They are associated with small bipolar regions of the magnetic field. Their average temperature ranges from 1.1 MK to 3.4 MK. The variations in temperature are often correlated with changes in the X-ray emission. Coronal holes Coronal holes are unipolar regions which look dark in the X-rays since they do not emit much radiation. These are wide zones of the Sun where the magnetic field is unipolar and opens towards the interplanetary space. The high speed solar wind arises mainly from these regions. In the UV images of the coronal holes, some small structures, similar to elongated bubbles, are often seen as they were suspended in the solar wind. These are the coronal plumes. More precisely, they are long thin streamers that project outward from the Sun's north and south poles. The quiet Sun The solar regions which are not part of active regions and coronal holes are commonly identified as the quiet Sun. The equatorial region has a faster rotation speed than the polar zones. The result of the Sun's differential rotation is that the active regions always arise in two bands parallel to the equator and their extension increases during the periods of maximum of the solar cycle, while they almost disappear during each minimum. Therefore, the quiet Sun always coincides with the equatorial zone and its surface is less active during the maximum of the solar cycle. Approaching the minimum of the solar cycle (also named butterfly cycle), the extension of the quiet Sun increases until it covers the whole disk surface excluding some bright points on the hemisphere and the poles, where there are coronal holes. Alfvén surface The Alfvén surface is the boundary separating the corona from the solar wind defined as where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal. Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. 
Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface. Variability of the corona A portrait, as diversified as the one already pointed out for the coronal features, is emphasized by the analysis of the dynamics of the main structures of the corona, which evolve at differential times. Studying coronal variability in its complexity is not easy because the times of evolution of the different structures can vary considerably: from seconds to several months. The typical sizes of the regions where coronal events take place vary in the same way, as it is shown in the following table. Flares Flares take place in active regions and are characterized by a sudden increase of the radiative flux emitted from small regions of the corona. They are very complex phenomena, visible at different wavelengths; they involve several zones of the solar atmosphere and many physical effects, thermal and not thermal, and sometimes wide reconnections of the magnetic field lines with material expulsion. Flares are impulsive phenomena, of average duration of 15 minutes, and the most energetic events can last several hours. Flares produce a high and rapid increase of the density and temperature. An emission in white light is only seldom observed: usually, flares are only seen at extreme UV wavelengths and into the X-rays, typical of the chromospheric and coronal emission. In the corona, the morphology of flares is described by observations in the UV, soft and hard X-rays, and in Hα wavelengths, and is very complex. However, two kinds of basic structures can be distinguished: Compact flares, when each of the two arches where the event is happening maintains its morphology: only an increase of the emission is observed without significant structural variations. The emitted energy is of the order of 1022 – 1023 J. Flares of long duration, associated with eruptions of prominences, transients in white light and two-ribbon flares: in this case the magnetic loops change their configuration during the event. The energies emitted during these flares are of such great proportion they can reach 1025 J. As for temporal dynamics, three different phases are generally distinguished, whose duration are not comparable. The durations of those periods depend on the range of wavelengths used to observe the event: An initial impulsive phase, whose duration is on the order of minutes, strong emissions of energy are often observed even in the microwaves, EUV wavelengths and in the hard X-ray frequencies. A maximum phase A decay phase, which can last several hours. Sometimes also a phase preceding the flare can be observed, usually called as "pre-flare" phase. Coronal mass ejections Often accompanying large solar flares and prominences are coronal mass ejections (CME). These are enormous emissions of coronal material and magnetic field that travel outward from the Sun at up to 3000 km/s, containing roughly 10 times the energy of the solar flare or prominence that accompanies them. Some larger CMEs can propel hundreds of millions of tons of material into interplanetary space at roughly 1.5 million kilometers an hour. Stellar coronae Coronal stars are ubiquitous among the stars in the cool half of the Hertzsprung–Russell diagram. 
These coronae can be detected using X-ray telescopes. Some stellar coronae, particularly in young stars, are much more luminous than the Sun's. For example, FK Comae Berenices is the prototype for the FK Com class of variable star. These are giants of spectral types G and K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (Lx ≥ 1032 erg·s−1 or 1025W) and the hottest known with dominant temperatures up to 40 MK. The astronomical observations planned with the Einstein Observatory by Giuseppe Vaiana and his group showed that F-, G-, K- and M-stars have chromospheres and often coronae much like the Sun. The O-B stars, which do not have surface convection zones, have a strong X-ray emission. However these stars do not have coronae, but the outer stellar envelopes emit this radiation during shocks due to thermal instabilities in rapidly moving gas blobs. Also A-stars do not have convection zones but they do not emit at the UV and X-ray wavelengths. Thus they appear to have neither chromospheres nor coronae. Physics of the corona The matter in the external part of the solar atmosphere is in the state of plasma, at very high temperature (a few million kelvin) and at very low density (of the order of 1015 particles/m3). According to the definition of plasma, it is a quasi-neutral ensemble of particles which exhibits a collective behaviour. The composition is similar to that in the Sun's interior, mainly hydrogen, but with much greater ionization of its heavier elements than that found in the photosphere. Heavier metals, such as iron, are partially ionized and have lost most of the external electrons. The ionization state of a chemical element depends strictly on the temperature and is regulated by the Saha equation in the lowest atmosphere, but by collisional equilibrium in the optically thin corona. Historically, the presence of the spectral lines emitted from highly ionized states of iron allowed determination of the high temperature of the coronal plasma, revealing that the corona is much hotter than the internal layers of the chromosphere. The corona behaves like a gas which is very hot but very light at the same time: the pressure in the corona is usually only 0.1 to 0.6 Pa in active regions, while on the Earth the atmospheric pressure is about 100 kPa, approximately a million times higher than on the solar surface. However it is not properly a gas, because it is made of charged particles, basically protons and electrons, moving at different velocities. Supposing that they have the same kinetic energy on average (for the equipartition theorem), electrons have a mass roughly times smaller than protons, therefore they acquire more velocity. Metal ions are always slower. This fact has relevant physical consequences either on radiative processes (that are very different from the photospheric radiative processes), or on thermal conduction. Furthermore, the presence of electric charges induces the generation of electric currents and high magnetic fields. Magnetohydrodynamic waves (MHD waves) can also propagate in this plasma, even though it is still not clear how they can be transmitted or generated in the corona. Radiation Coronal plasma is optically thin and therefore transparent to the electromagnetic radiation that it emits and to that coming from lower layers. The plasma is very rarefied and the photon mean free path overcomes by far all the other length-scales, including the typical sizes of common coronal features. 
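The order-of-magnitude plasma parameters quoted above can be turned into numbers directly. The sketch below is an illustration added here (not part of the original article); it takes the article's quoted densities and temperatures as assumed inputs, evaluates the ideal-gas pressure p = n k_B T, and shows the factor by which electrons outpace protons at equal average kinetic energy.

```python
# Order-of-magnitude coronal plasma estimates using the values quoted in the text.
# Assumed inputs: n ~ 1e15 particles/m^3 (quiet corona) to ~1e16 (active regions),
# T ~ 1-3 MK.  Physical constants are CODATA values.
K_B = 1.380649e-23      # J/K, Boltzmann constant
M_E = 9.1093837e-31     # kg, electron mass
M_P = 1.67262192e-27    # kg, proton mass

def plasma_pressure(n_per_m3, temperature_k):
    """Ideal-gas pressure p = n * k_B * T, in pascals."""
    return n_per_m3 * K_B * temperature_k

print(f"quiet corona : p ~ {plasma_pressure(1e15, 1e6):.3f} Pa")
print(f"active region: p ~ {plasma_pressure(1e16, 3e6):.3f} Pa")   # within the 0.1-0.6 Pa range quoted

# At equal average kinetic energy (equipartition), speed scales as 1/sqrt(mass),
# so the much lighter electrons move far faster than the protons:
speed_ratio = (M_P / M_E) ** 0.5
print(f"electron/proton speed ratio ~ {speed_ratio:.0f}")          # roughly 43
```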
Electromagnetic radiation from the corona has been identified as coming from three main sources, located in the same volume of space: The K-corona (K for kontinuierlich, "continuous" in German) is created by sunlight Thomson scattering off free electrons; Doppler broadening of the reflected photospheric absorption lines spreads them so greatly as to completely obscure them, giving the spectral appearance of a continuum with no absorption lines. The F-corona (F for Fraunhofer) is created by sunlight bouncing off dust particles, and is observable because its light contains the Fraunhofer absorption lines that are seen in raw sunlight; the F-corona extends to very high elongation angles from the Sun, where it is called the zodiacal light. The E-corona (E for emission) is due to spectral emission lines produced by ions that are present in the coronal plasma; it may be observed in broad or forbidden or hot spectral emission lines and is the main source of information about the corona's composition. Thermal conduction In the corona, thermal conduction occurs from the hotter external atmosphere towards the cooler inner layers. Responsible for the diffusion process of the heat are the electrons, which are much lighter than ions and move faster, as explained above. When there is a magnetic field, the thermal conductivity of the plasma becomes higher in the direction parallel to the field lines than in the perpendicular direction. A charged particle moving in the direction perpendicular to the magnetic field line is subject to the Lorentz force, which is normal to the plane defined by the velocity and the magnetic field. This force bends the path of the particle. In general, since particles also have a velocity component along the magnetic field line, the Lorentz force constrains them to bend and move along spirals around the field lines at the cyclotron frequency. If collisions between the particles are very frequent, they are scattered in every direction. This happens in the photosphere, where the plasma carries the magnetic field in its motion. In the corona, on the contrary, the mean free path of the electrons is of the order of kilometres and even more, so each electron can perform a helical motion over a long distance before being scattered by a collision. Therefore, the heat transfer is enhanced along the magnetic field lines and inhibited in the perpendicular direction. In the direction longitudinal to the magnetic field, the thermal conductivity of the corona is $k = 20 \left(\frac{2}{\pi}\right)^{3/2} \frac{(k_B T)^{5/2} k_B}{m_e^{1/2} e^4 \ln\Lambda} \approx 1.8 \times 10^{-10} \, \frac{T^{5/2}}{\ln\Lambda}\ \mathrm{W\,m^{-1}\,K^{-1}}$, where $k_B$ is the Boltzmann constant, $T$ is the temperature in kelvin, $m_e$ is the electron mass, $e$ is the electric charge of the electron, $\ln\Lambda = \ln\!\left(12\pi n \lambda_D^3\right)$ is the Coulomb logarithm, and $\lambda_D$ is the Debye length of the plasma with particle density $n$. The Coulomb logarithm is roughly 20 in the corona, with a mean temperature of 1 MK and a density of 10¹⁵ particles/m³, and about 10 in the chromosphere, where the temperature is approximately 10 kK and the particle density is of the order of 10¹⁸ particles/m³, and in practice it can be assumed constant. Hence, if we indicate with $Q$ the heat per unit volume, expressed in J m⁻³, the Fourier equation of heat transfer, to be computed only along the direction of the field line, becomes $\frac{\partial Q}{\partial t} = 0.9 \times 10^{-11}\, \frac{\partial}{\partial s}\!\left(T^{5/2} \frac{\partial T}{\partial s}\right)$, where $s$ is the coordinate along the field line. Numerical calculations have shown that the thermal conductivity of the corona is comparable to that of copper. Coronal seismology Coronal seismology is a method of studying the plasma of the solar corona with the use of magnetohydrodynamic (MHD) waves. MHD studies the dynamics of electrically conducting fluids – in this case, the fluid is the coronal plasma.
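To get a feel for the wave speeds coronal seismology works with, the sketch below evaluates the Alfvén speed v_A = B/√(μ₀ρ) for a hydrogen coronal plasma. The density reuses the 10¹⁵ particles/m³ figure quoted earlier, while the field strength of 10 gauss is an assumed, purely illustrative value not taken from the text:

```python
import math

mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
m_p  = 1.6726219e-27        # proton mass, kg

n = 1.0e15                  # particle density from the text, m^-3
B = 1.0e-3                  # assumed coronal field strength: 10 gauss = 1e-3 T

rho = n * m_p               # mass density of a hydrogen plasma, kg/m^3
v_A = B / math.sqrt(mu_0 * rho)

print(f"mass density rho ~ {rho:.2e} kg/m^3")
print(f"Alfven speed v_A ~ {v_A / 1e3:.0f} km/s")   # ~700 km/s for these illustrative values
```

Because observed oscillation periods and loop lengths give the wave speed directly, inverting a relation of this kind is how coronal seismology arrives at estimates of the otherwise hard-to-measure coronal magnetic field.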
Philosophically, coronal seismology is similar to the Earth's seismology, the Sun's helioseismology, and MHD spectroscopy of laboratory plasma devices. In all these approaches, waves of various kinds are used to probe a medium. The potential of coronal seismology in the estimation of the coronal magnetic field, density scale height, fine structure and heating has been demonstrated by different research groups. Coronal heating problem The coronal heating problem in solar physics relates to the question of why the temperature of the Sun's corona is millions of kelvins, while that of the surface is only thousands of kelvins. Several theories have been proposed to explain this phenomenon, but it is still challenging to determine which is correct. The problem first emerged after the identification of unknown spectral lines in the solar spectrum with highly ionized iron and calcium atoms. Comparing the coronal temperature with the photospheric temperature of about 6,000 K leads to the question of how the 200-times-hotter coronal temperature can be maintained. The problem is primarily concerned with how the energy is transported up into the corona and then converted into heat within a few solar radii. The high temperatures require energy to be carried from the solar interior to the corona by non-thermal processes, because the second law of thermodynamics prevents heat from flowing directly from the solar photosphere (surface), which is at about 5,800 K, to the much hotter corona at about 1 to 3 MK (parts of the corona can even reach 10 MK). Between the photosphere and the corona, the thin region through which the temperature increases is known as the transition region. It is only tens to hundreds of kilometers thick. Energy cannot be transferred from the cooler photosphere to the corona by conventional heat transfer, as this would violate the second law of thermodynamics. An analogy of this would be a light bulb raising the temperature of the air surrounding it to something greater than that of its glass surface. Hence, some other manner of energy transfer must be involved in the heating of the corona. The amount of power required to heat the solar corona can easily be calculated as the difference between coronal radiative losses and heating by thermal conduction toward the chromosphere through the transition region. It is about 1 kilowatt for every square meter of surface area on the Sun's chromosphere, only a tiny fraction of the amount of light energy that escapes the Sun. Many coronal heating theories have been proposed, but two theories have remained as the most likely candidates: wave heating and magnetic reconnection (or nanoflares). Through most of the past 50 years, neither theory has been able to account for the extreme coronal temperatures. In 2012, high resolution (<0.2″) soft X-ray imaging with the High Resolution Coronal Imager aboard a sounding rocket revealed tightly wound braids in the corona. It is hypothesized that the reconnection and unravelling of braids can act as primary sources of heating of the active solar corona to temperatures of up to 4 million kelvin. The main heat source in the quiescent corona (about 1.5 million kelvin) is assumed to originate from MHD waves. NASA's Parker Solar Probe is intended to approach the Sun to a distance of approximately 9.5 solar radii to investigate coronal heating and the origin of the solar wind. It was successfully launched on August 12, 2018 and by late 2022 had completed the first 13 of more than 20 planned close approaches to the Sun.
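The 1 kilowatt per square metre figure quoted above can be turned into a global number with a line of arithmetic. A minimal sketch, assuming the standard solar radius and luminosity (neither value is given in the text), is:

```python
import math

R_sun = 6.96e8        # solar radius, m (assumed standard value)
L_sun = 3.828e26      # solar luminosity, W (assumed standard value)

heating_flux = 1.0e3  # required heating power per unit chromospheric area, W/m^2 (from the text)

area = 4 * math.pi * R_sun ** 2       # surface area of the chromosphere, ~ that of the photosphere
P_corona = heating_flux * area        # total power needed to keep the corona hot

print(f"total coronal heating power  ~ {P_corona:.1e} W")        # ~ 6e21 W
print(f"fraction of solar luminosity ~ {P_corona / L_sun:.1e}")  # ~ 2e-5
```

So although the corona is spectacularly hot, keeping it hot costs only a few hundred-thousandths of the Sun's radiative output, which is why comparatively subtle mechanisms such as waves and small-scale reconnection remain viable candidates.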
Wave heating theory The wave heating theory, proposed in 1949 by Évry Schatzman, holds that waves carry energy from the solar interior to the solar chromosphere and corona. The Sun is made of plasma rather than ordinary gas, so it supports several types of waves analogous to sound waves in air. The most important types of wave are magneto-acoustic waves and Alfvén waves. Magneto-acoustic waves are sound waves that have been modified by the presence of a magnetic field, and Alfvén waves are similar to ultra-low-frequency radio waves that have been modified by interaction with matter in the plasma. Both types of waves can be launched by the turbulence of granulation and supergranulation at the solar photosphere, and both types of waves can carry energy for some distance through the solar atmosphere before turning into shock waves that dissipate their energy as heat. One problem with wave heating is delivery of the heat to the appropriate place. Magneto-acoustic waves cannot carry sufficient energy upward through the chromosphere to the corona, both because of the low pressure present in the chromosphere and because they tend to be reflected back to the photosphere. Alfvén waves can carry enough energy, but do not dissipate that energy rapidly enough once they enter the corona. Waves in plasmas are notoriously difficult to understand and describe analytically, but computer simulations, carried out by Thomas Bogdan and colleagues in 2003, seem to show that Alfvén waves can transmute into other wave modes at the base of the corona, providing a pathway that can carry large amounts of energy from the photosphere through the chromosphere and transition region and finally into the corona, where it is dissipated as heat. Another problem with wave heating has been the complete absence, until the late 1990s, of any direct evidence of waves propagating through the solar corona. The first direct observation of waves propagating into and through the solar corona was made in 1997 with the Solar and Heliospheric Observatory space-borne solar observatory, the first platform capable of observing the Sun in the extreme ultraviolet (EUV) for long periods of time with stable photometry. Those were magneto-acoustic waves with a frequency of about 1 millihertz (mHz, corresponding to a wave period of about 1,000 seconds), which carry only about 10% of the energy required to heat the corona. Many observations exist of localized wave phenomena, such as Alfvén waves launched by solar flares, but those events are transient and cannot explain the uniform coronal heat. It is not yet known exactly how much wave energy is available to heat the corona. Results published in 2004 using data from the TRACE spacecraft seem to indicate that there are waves in the solar atmosphere at frequencies as high as 100 mHz (10-second period). Measurements of the temperature of different ions in the solar wind with the UVCS instrument aboard SOHO give strong indirect evidence that there are waves at frequencies well into the range of human hearing. These waves are very difficult to detect under normal circumstances, but evidence collected during solar eclipses by teams from Williams College suggests the presence of such waves in the 1–10 Hz range. Recently, Alfvénic motions have been found in the lower solar atmosphere and also in the quiet Sun, in coronal holes and in active regions using observations with AIA on board the Solar Dynamics Observatory.
These Alfvénic oscillations have significant power, and seem to be connected to the chromospheric Alfvénic oscillations previously reported with the Hinode spacecraft. Solar wind observations with the Wind spacecraft have recently shown evidence to support theories of Alfvén-cyclotron dissipation, leading to local ion heating. Magnetic reconnection theory The magnetic reconnection theory relies on the solar magnetic field to induce electric currents in the solar corona. The currents then collapse suddenly, releasing energy as heat and wave energy in the corona. This process is called "reconnection" because of the peculiar way that magnetic fields behave in plasma (or any electrically conductive fluid such as mercury or seawater). In a plasma, magnetic field lines are normally tied to individual pieces of matter, so that the topology of the magnetic field remains the same: if a particular north and south magnetic pole are connected by a single field line, then even if the plasma is stirred or if the magnets are moved around, that field line will continue to connect those particular poles. The connection is maintained by electric currents that are induced in the plasma. Under certain conditions, the electric currents can collapse, allowing the magnetic field to "reconnect" to other magnetic poles and release heat and wave energy in the process. Magnetic reconnection is hypothesized to be the mechanism behind solar flares, the largest explosions in the Solar System. Furthermore, the surface of the Sun is covered with millions of small magnetized regions 50–1,000 km across. These small magnetic poles are buffeted and churned by the constant granulation. The magnetic field in the solar corona must undergo nearly constant reconnection to match the motion of this "magnetic carpet", so the energy released by the reconnection is a natural candidate for the coronal heat, perhaps as a series of "microflares" that individually provide very little energy but together account for the required energy. The idea that nanoflares might heat the corona was proposed by Eugene Parker in the 1980s but is still controversial. In particular, ultraviolet telescopes such as TRACE and SOHO/EIT can observe individual micro-flares as small brightenings in extreme ultraviolet light, but there seem to be too few of these small events to account for the energy released into the corona. The additional energy not accounted for could be made up by wave energy, or by gradual magnetic reconnection that releases energy more smoothly than micro-flares and therefore does not appear well in the TRACE data. Variations on the micro-flare hypothesis use other mechanisms to stress the magnetic field or to release the energy, and were a subject of active research as of 2005. Spicules (type II) For decades, researchers believed spicules could send heat into the corona. However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so the theory was discounted. According to studies performed in 2010 at the National Center for Atmospheric Research in Colorado, in collaboration with Lockheed Martin's Solar and Astrophysics Laboratory (LMSAL) and the Institute of Theoretical Astrophysics of the University of Oslo, a new class of spicules (type II) discovered in 2007, which travel faster (up to 100 km/s) and have shorter lifespans, may account for the problem. These jets insert heated plasma into the Sun's outer atmosphere.
Thus, a much better understanding of the corona, and of the Sun's subtle influence on the Earth's upper atmosphere, can be expected. The Atmospheric Imaging Assembly on NASA's recently launched Solar Dynamics Observatory and NASA's Focal Plane Package for the Solar Optical Telescope on the Japanese Hinode satellite were used to test this hypothesis. The high spatial and temporal resolutions of the newer instruments reveal this coronal mass supply. These observations reveal a one-to-one connection between plasma that is heated to millions of degrees and the spicules that insert this plasma into the corona. See also Advanced Composition Explorer Geocorona Supernova Supra-arcade downflows X-ray astronomy References External links NASA description of the solar corona Coronal heating problem at Innovation Reports NASA/GSFC description of the coronal heating problem FAQ about coronal heating Solar and Heliospheric Observatory, including near-real-time images of the solar corona Coronal x-ray images from the Hinode XRT nasa.gov Astronomy Picture of the Day July 26, 2009 – a combination of thirty-three photographs of the Sun's corona that were digitally processed to highlight faint features of a total eclipse that occurred in March 2006 Animated explanation of the core of the Sun (University of South Wales) Alfvén waves may heat the Sun's corona Solar Interface Region – Bart de Pontieu (SETI Talks) Video Sun Space plasmas Light sources Unsolved problems in astronomy Articles containing video clips
Stellar corona
[ "Physics", "Astronomy" ]
7,530
[ "Space plasmas", "Unsolved problems in astronomy", "Concepts in astronomy", "Astrophysics", "Astronomical controversies" ]
7,849
https://en.wikipedia.org/wiki/Crystallographic%20defect
A crystallographic defect is an interruption of the regular patterns of arrangement of atoms or molecules in crystalline solids. The positions and orientations of particles, which are repeating at fixed distances determined by the unit cell parameters in crystals, exhibit a periodic crystal structure, but this is usually imperfect. Several types of defects are often characterized: point defects, line defects, planar defects, bulk defects. Topological homotopy establishes a mathematical method of characterization. Point defects Point defects are defects that occur only at or around a single lattice point. They are not extended in space in any dimension. Strict limits for how small a point defect is are generally not defined explicitly. However, these defects typically involve at most a few extra or missing atoms. Larger defects in an ordered structure are usually considered dislocation loops. For historical reasons, many point defects, especially in ionic crystals, are called centers: for example a vacancy in many ionic solids is called a luminescence center, a color center, or F-center. These dislocations permit ionic transport through crystals leading to electrochemical reactions. These are frequently specified using Kröger–Vink notation. Vacancy defects are lattice sites which would be occupied in a perfect crystal, but are vacant. If a neighboring atom moves to occupy the vacant site, the vacancy moves in the opposite direction to the site which used to be occupied by the moving atom. The stability of the surrounding crystal structure guarantees that the neighboring atoms will not simply collapse around the vacancy. In some materials, neighboring atoms actually move away from a vacancy, because they experience attraction from atoms in the surroundings. A vacancy (or pair of vacancies in an ionic solid) is sometimes called a Schottky defect. Interstitial defects are atoms that occupy a site in the crystal structure at which there is usually not an atom. They are generally high energy configurations. Small atoms (mostly impurities) in some crystals can occupy interstices without high energy, such as hydrogen in palladium. A nearby pair of a vacancy and an interstitial is often called a Frenkel defect or Frenkel pair. This is caused when an ion moves into an interstitial site and creates a vacancy. Due to fundamental limitations of material purification methods, materials are never 100% pure, which by definition induces defects in crystal structure. In the case of an impurity, the atom is often incorporated at a regular atomic site in the crystal structure. This is neither a vacant site nor is the atom on an interstitial site and it is called a substitutional defect. The atom is not supposed to be anywhere in the crystal, and is thus an impurity. In some cases where the radius of the substitutional atom (ion) is substantially smaller than that of the atom (ion) it is replacing, its equilibrium position can be shifted away from the lattice site. These types of substitutional defects are often referred to as off-center ions. There are two different types of substitutional defects: Isovalent substitution and aliovalent substitution. Isovalent substitution is where the ion that is substituting the original ion is of the same oxidation state as the ion it is replacing. Aliovalent substitution is where the ion that is substituting the original ion is of a different oxidation state than the ion it is replacing. 
Aliovalent substitutions change the overall charge within the ionic compound, but the ionic compound must be neutral. Therefore, a charge compensation mechanism is required. Hence either one of the metals is partially or fully oxidised or reduced, or ion vacancies are created. Antisite defects occur in an ordered alloy or compound when atoms of different type exchange positions. For example, some alloys have a regular structure in which every other atom is a different species; for illustration assume that type A atoms sit on the corners of a cubic lattice, and type B atoms sit in the center of the cubes. If one cube has an A atom at its center, the atom is on a site usually occupied by a B atom, and is thus an antisite defect. This is neither a vacancy nor an interstitial, nor an impurity. Topological defects are regions in a crystal where the normal chemical bonding environment is topologically different from the surroundings. For instance, in a perfect sheet of graphite (graphene) all atoms are in rings containing six atoms. If the sheet contains regions where the number of atoms in a ring is different from six, while the total number of atoms remains the same, a topological defect has formed. An example is the Stone Wales defect in nanotubes, which consists of two adjacent 5-membered and two 7-membered atom rings. Amorphous solids may contain defects. These are naturally somewhat hard to define, but sometimes their nature can be quite easily understood. For instance, in ideally bonded amorphous silica all Si atoms have 4 bonds to O atoms and all O atoms have 2 bonds to Si atom. Thus e.g. an O atom with only one Si bond (a dangling bond) can be considered a defect in silica. Moreover, defects can also be defined in amorphous solids based on empty or densely packed local atomic neighbourhoods, and the properties of such 'defects' can be shown to be similar to normal vacancies and interstitials in crystals. Complexes can form between different kinds of point defects. For example, if a vacancy encounters an impurity, the two may bind together if the impurity is too large for the lattice. Interstitials can form 'split interstitial' or 'dumbbell' structures where two atoms effectively share an atomic site, resulting in neither atom actually occupying the site. Line defects Line defects can be described by gauge theories. Dislocations are linear defects, around which the atoms of the crystal lattice are misaligned. There are two basic types of dislocations, the edge dislocation and the screw dislocation. "Mixed" dislocations, combining aspects of both types, are also common. Edge dislocations are caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the adjacent planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. The analogy with a stack of paper is apt: if a half a piece of paper is inserted in a stack of paper, the defect in the stack is only noticeable at the edge of the half sheet. The screw dislocation is more difficult to visualise, but basically comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes of atoms in the crystal lattice. The presence of dislocation results in lattice strain (distortion). The direction and magnitude of such distortion is expressed in terms of a Burgers vector (b). 
For an edge type, b is perpendicular to the dislocation line, whereas in the cases of the screw type it is parallel. In metallic materials, b is aligned with close-packed crystallographic directions and its magnitude is equivalent to one interatomic spacing. Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge. It is the presence of dislocations and their ability to readily move (and interact) under the influence of stresses induced by external loads that leads to the characteristic malleability of metallic materials. Dislocations can be observed using transmission electron microscopy, field ion microscopy and atom probe techniques. Deep-level transient spectroscopy has been used for studying the electrical activity of dislocations in semiconductors, mainly silicon. Disclinations are line defects corresponding to "adding" or "subtracting" an angle around a line. Basically, this means that if you track the crystal orientation around the line defect, you get a rotation. Usually, they were thought to play a role only in liquid crystals, but recent developments suggest that they might have a role also in solid materials, e.g. leading to the self-healing of cracks. Planar defects Grain boundaries occur where the crystallographic direction of the lattice abruptly changes. This usually occurs when two crystals begin growing separately and then meet. Antiphase boundaries occur in ordered alloys: in this case, the crystallographic direction remains the same, but each side of the boundary has an opposite phase: For example, if the ordering is usually ABABABAB (hexagonal close-packed crystal), an antiphase boundary takes the form of ABABBABA. Stacking faults occur in a number of crystal structures, but the common example is in close-packed structures. They are formed by a local deviation of the stacking sequence of layers in a crystal. An example would be the ABABCABAB stacking sequence. A twin boundary is a defect that introduces a plane of mirror symmetry in the ordering of a crystal. For example, in cubic close-packed crystals, the stacking sequence of a twin boundary would be ABCABCBACBA. On planes of single crystals, steps between atomically flat terraces can also be regarded as planar defects. It has been shown that such defects and their geometry have significant influence on the adsorption of organic molecules Bulk defects Three-dimensional macroscopic or bulk defects, such as pores, cracks, or inclusions Voids — small regions where there are no atoms, and which can be thought of as clusters of vacancies Impurities can cluster together to form small regions of a different phase. These are often called precipitates. Mathematical classification methods A successful mathematical classification method for physical lattice defects, which works not only with the theory of dislocations and other defects in crystals but also, e.g., for disclinations in liquid crystals and for excitations in superfluid 3He, is the topological homotopy theory. Computer simulation methods Density functional theory, classical molecular dynamics and kinetic Monte Carlo simulations are widely used to study the properties of defects in solids with computer simulations. Simulating jamming of hard spheres of different sizes and/or in containers with non-commeasurable sizes using the Lubachevsky–Stillinger algorithm can be an effective technique for demonstrating some types of crystallographic defects. 
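As a toy illustration of the simulation methods just mentioned, the sketch below runs a kinetic-Monte-Carlo-style random walk of a single vacancy on a square lattice. The migration barrier, attempt frequency and temperature are arbitrary illustrative values, not taken from the text, and a real kinetic Monte Carlo code would track many interacting defects and event rates:

```python
import math
import random

# Arbitrary illustrative parameters (not from the text)
k_B = 8.617e-5        # Boltzmann constant, eV/K
E_m = 0.8             # assumed vacancy migration barrier, eV
nu0 = 1.0e13          # assumed attempt frequency, 1/s
T = 900.0             # temperature, K
n_hops = 10_000       # number of vacancy hops to simulate

rate = nu0 * math.exp(-E_m / (k_B * T))   # Arrhenius hop rate of the vacancy
x, y = 0, 0                               # vacancy position on the square lattice
t = 0.0                                   # simulated time, s

for _ in range(n_hops):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])  # hop to a random neighbour site
    x, y = x + dx, y + dy
    t += random.expovariate(rate)         # exponentially distributed waiting time between hops

print(f"hop rate ~ {rate:.2e} 1/s")
print(f"after {n_hops} hops: squared displacement = {x*x + y*y} lattice units, elapsed time ~ {t:.2e} s")
```

Dividing the mean squared displacement by the elapsed time gives a vacancy diffusion coefficient, which is the kind of defect property that density functional theory, molecular dynamics and kinetic Monte Carlo are used to compute with far greater fidelity.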
See also Bjerrum defect Crystallographic defects in diamond Kröger–Vink notation F-center References Further reading Hagen Kleinert, Gauge Fields in Condensed Matter, Vol. II, "Stresses and defects", pp. 743–1456, World Scientific (Singapore, 1989); Paperback Hermann Schmalzried: Solid State Reactions. Verlag Chemie, Weinheim 1981, .
Crystallographic defect
[ "Chemistry", "Materials_science", "Engineering" ]
2,196
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
7,850
https://en.wikipedia.org/wiki/Chomsky%20normal%20form
In formal language theory, a context-free grammar, G, is said to be in Chomsky normal form (first described by Noam Chomsky) if all of its production rules are of the form: A → BC,   or A → a,   or S → ε, where A, B, and C are nonterminal symbols, the letter a is a terminal symbol (a symbol that represents a constant value), S is the start symbol, and ε denotes the empty string. Also, neither B nor C may be the start symbol, and the third production rule can only appear if ε is in L(G), the language produced by the context-free grammar G. Every grammar in Chomsky normal form is context-free, and conversely, every context-free grammar can be transformed into an equivalent one which is in Chomsky normal form and has a size no larger than the square of the original grammar's size. Converting a grammar to Chomsky normal form To convert a grammar to Chomsky normal form, a sequence of simple transformations is applied in a certain order; this is described in most textbooks on automata theory. The presentation here follows Hopcroft, Ullman (1979), but is adapted to use the transformation names from Lange, Leiß (2009). Each of the following transformations establishes one of the properties required for Chomsky normal form. START: Eliminate the start symbol from right-hand sides Introduce a new start symbol S0, and a new rule S0 → S, where S is the previous start symbol. This does not change the grammar's produced language, and S0 will not occur on any rule's right-hand side. TERM: Eliminate rules with nonsolitary terminals To eliminate each rule A → X1 ... a ... Xn with a terminal symbol a that is not the only symbol on the right-hand side, introduce, for every such terminal, a new nonterminal symbol Na, and a new rule Na → a. Change every rule A → X1 ... a ... Xn to A → X1 ... Na ... Xn. If several terminal symbols occur on the right-hand side, simultaneously replace each of them by its associated nonterminal symbol. This does not change the grammar's produced language. BIN: Eliminate right-hand sides with more than 2 nonterminals Replace each rule A → X1 X2 ... Xn with more than 2 nonterminals X1,...,Xn by rules A → X1 A1, A1 → X2 A2, ... , An-2 → Xn-1 Xn, where Ai are new nonterminal symbols. Again, this does not change the grammar's produced language. DEL: Eliminate ε-rules An ε-rule is a rule of the form A → ε, where A is not S0, the grammar's start symbol. To eliminate all rules of this form, first determine the set of all nonterminals that derive ε. Hopcroft and Ullman (1979) call such nonterminals nullable, and compute them as follows: If a rule A → ε exists, then A is nullable. If a rule A → X1 ... Xn exists, and every single Xi is nullable, then A is nullable, too. Obtain an intermediate grammar by replacing each rule A → X1 ... Xn by all versions with some nullable Xi omitted. By deleting in this grammar each ε-rule, unless its left-hand side is the start symbol, the transformed grammar is obtained. For example, in the following grammar, with start symbol S0, S0 → AbB | C B → AA | AC C → b | c A → a | ε the nonterminal A, and hence also B, is nullable, while neither C nor S0 is. Hence the following intermediate grammar is obtained, in which each rule is replaced by all versions with some of its nullable symbols omitted: S0 → AbB | Ab | bB | b | C B → AA | A | ε | AC | C C → b | c A → a | ε In this grammar, all ε-rules have been "inlined at the call site". In the next step, they can hence be deleted, yielding the grammar: S0 → AbB | Ab | bB | b | C B → AA | A | AC | C C → b | c A → a This grammar produces the same language as the original example grammar, viz.
{ab,aba,abaa,abab,abac,abb,abc,b,ba,baa,bab,bac,bb,bc,c}, but has no ε-rules. UNIT: Eliminate unit rules A unit rule is a rule of the form A → B, where A, B are nonterminal symbols. To remove it, for each rule B → X1 ... Xn, where X1 ... Xn is a string of nonterminals and terminals, add rule A → X1 ... Xn unless this is a unit rule which has already been (or is being) removed. The skipping of nonterminal symbol B in the resulting grammar is possible due to B being a member of the unit closure of nonterminal symbol A. Order of transformations When choosing the order in which the above transformations are to be applied, it has to be considered that some transformations may destroy the result achieved by other ones. For example, START will re-introduce a unit rule if it is applied after UNIT, so not every ordering of the transformations is admissible. Moreover, the worst-case bloat in grammar size depends on the transformation order. Using |G| to denote the size of the original grammar G, the size blow-up in the worst case may range from |G|² to 2^(2|G|), depending on the transformation algorithm used. The blow-up in grammar size depends on the order between DEL and BIN. It may be exponential when DEL is done first, but is linear otherwise. UNIT can incur a quadratic blow-up in the size of the grammar. The orderings START, TERM, BIN, DEL, UNIT and START, BIN, DEL, UNIT, TERM lead to the least (i.e. quadratic) blow-up. Example The following grammar, with start symbol Expr, describes a simplified version of the set of all syntactically valid arithmetic expressions in programming languages like C or Algol60. Both number and variable are considered terminal symbols here for simplicity, since in a compiler front end their internal structure is usually not considered by the parser. The terminal symbol "^" denoted exponentiation in Algol60. Expr → Term | Expr AddOp Term | AddOp Term Term → Factor | Term MulOp Factor Factor → Primary | Factor ^ Primary Primary → number | variable | ( Expr ) AddOp → + | − MulOp → * | / In step "START" of the above conversion algorithm, just a rule S0 → Expr is added to the grammar. After step "TERM", the grammar looks like this: S0 → Expr Expr → Term | Expr AddOp Term | AddOp Term Term → Factor | Term MulOp Factor Factor → Primary | Factor PowOp Primary Primary → number | variable | Open Expr Close AddOp → + | − MulOp → * | / PowOp → ^ Open → ( Close → ) After step "BIN", the following grammar is obtained: S0 → Expr Expr → Term | Expr AddOp_Term | AddOp Term Term → Factor | Term MulOp_Factor Factor → Primary | Factor PowOp_Primary Primary → number | variable | Open Expr_Close AddOp → + | − MulOp → * | / PowOp → ^ Open → ( Close → ) AddOp_Term → AddOp Term MulOp_Factor → MulOp Factor PowOp_Primary → PowOp Primary Expr_Close → Expr Close Since there are no ε-rules, step "DEL" does not change the grammar.
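The nullable-set computation that underlies the DEL step is simple enough to write down directly. A minimal Python sketch follows; the encoding of the grammar as a dictionary from a nonterminal to a list of right-hand-side tuples is an illustrative choice, not a standard API, and the example reuses the small grammar from the DEL section above:

```python
def nullable_nonterminals(rules):
    """Fixed-point computation of the nonterminals that can derive the empty string."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhss in rules.items():
            if lhs in nullable:
                continue
            for rhs in rhss:
                # rhs is a tuple of symbols; the empty tuple () encodes an epsilon-rule
                if all(symbol in nullable for symbol in rhs):
                    nullable.add(lhs)
                    changed = True
                    break
    return nullable

# Grammar from the DEL section: S0 -> AbB | C, B -> AA | AC, C -> b | c, A -> a | epsilon
rules = {
    "S0": [("A", "b", "B"), ("C",)],
    "B":  [("A", "A"), ("A", "C")],
    "C":  [("b",), ("c",)],
    "A":  [("a",), ()],
}

print(nullable_nonterminals(rules))   # {'A', 'B'}, matching the worked example
```

The same fixed-point style extends naturally to the other transformations; TERM and BIN in particular are single passes over the rules.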
After step "UNIT", the following grammar is obtained, which is in Chomsky normal form: S0 → number | variable | Open Expr_Close | Factor PowOp_Primary | Term MulOp_Factor | Expr AddOp_Term | AddOp Term Expr → number | variable | Open Expr_Close | Factor PowOp_Primary | Term MulOp_Factor | Expr AddOp_Term | AddOp Term Term → number | variable | Open Expr_Close | Factor PowOp_Primary | Term MulOp_Factor Factor → number | variable | Open Expr_Close | Factor PowOp_Primary Primary → number | variable | Open Expr_Close AddOp → + | − MulOp → * | / PowOp → ^ Open → ( Close → ) AddOp_Term → AddOp Term MulOp_Factor → MulOp Factor PowOp_Primary → PowOp Primary Expr_Close → Expr Close The Na introduced in step "TERM" are PowOp, Open, and Close. The Ai introduced in step "BIN" are AddOp_Term, MulOp_Factor, PowOp_Primary, and Expr_Close. Alternative definition Chomsky reduced form Another way to define the Chomsky normal form is: A formal grammar is in Chomsky reduced form if all of its production rules are of the form: A → BC or A → a, where A, B and C are nonterminal symbols, and a is a terminal symbol. When using this definition, B or C may be the start symbol. Only those context-free grammars which do not generate the empty string can be transformed into Chomsky reduced form. Floyd normal form In a letter where he proposed the term Backus–Naur form (BNF), Donald E. Knuth implied that a BNF "syntax in which all definitions have such a form may be said to be in 'Floyd Normal Form'", namely ⟨A⟩ ::= ⟨B⟩ | ⟨C⟩, or ⟨A⟩ ::= ⟨B⟩⟨C⟩, or ⟨A⟩ ::= a, where A, B and C are nonterminal symbols, and a is a terminal symbol, because Robert W. Floyd found any BNF syntax can be converted to the above one in 1961. But he withdrew this term, "since doubtless many people have independently used this simple fact in their own work, and the point is only incidental to the main considerations of Floyd's note." While Floyd's note cites Chomsky's original 1959 article, Knuth's letter does not. Application Besides its theoretical significance, CNF conversion is used in some algorithms as a preprocessing step, e.g., the CYK algorithm, a bottom-up parsing algorithm for context-free grammars, and its variant probabilistic CKY. See also Backus–Naur form CYK algorithm Greibach normal form Kuroda normal form Pumping lemma for context-free languages — its proof relies on the Chomsky normal form Notes References Further reading Cole, Richard. Converting CFGs to CNF (Chomsky Normal Form), October 17, 2007. (pdf) — uses the order TERM, BIN, START, DEL, UNIT. (Pages 237–240 of section 6.6: simplified forms and normal forms.) (Pages 98–101 of section 2.1: context-free grammars. Page 156.) (pages 171-183 of section 7.1: Chomsky Normal Form) Sipser, Michael. Introduction to the Theory of Computation, 2nd edition. Formal languages Noam Chomsky
Chomsky normal form
[ "Mathematics" ]
2,752
[ "Formal languages", "Mathematical logic" ]
7,851
https://en.wikipedia.org/wiki/Comprehensive%20Nuclear-Test-Ban%20Treaty
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is a multilateral treaty to ban nuclear weapons test explosions and any other nuclear explosions, for both civilian and military purposes, in all environments. It was adopted by the United Nations General Assembly on 10 September 1996, but has not entered into force, as nine specific nations have not ratified the treaty. History The movement for international control of nuclear weapons began in 1945, with a call from Canada and the United Kingdom for a conference on the subject. In June 1946, Bernard Baruch, an emissary of President Harry S. Truman, proposed the Baruch Plan before the United Nations Atomic Energy Commission, which called for an international system of controls on the production of atomic energy. The plan, which would serve as the basis for U.S. nuclear policy into the 1950s, was rejected by the Soviet Union as a US ploy to cement its nuclear dominance. Between the Trinity nuclear test of 16 July 1945 and the signing of the Partial Test Ban Treaty (PTBT) on 5 August 1963, 499 nuclear tests were conducted. Much of the impetus for the PTBT, the precursor to the CTBT, was rising public concern surrounding the size and resulting nuclear fallout from underwater and atmospheric nuclear tests, particularly tests of powerful thermonuclear weapons (hydrogen bombs). The Castle Bravo test of 1 March 1954, in particular, attracted significant attention as the detonation resulted in fallout that spread over inhabited areas and sickened a group of Japanese fishermen. Between 1945 and 1963, the US conducted 215 atmospheric tests, the Soviet Union conducted 219, the UK conducted 21, and France conducted 4. In 1954, following the Castle Bravo test, Prime Minister Jawaharlal Nehru of India issued the first appeal for a "standstill agreement" on testing, which was soon echoed by the British Labour Party. Negotiations on a comprehensive test ban, primarily involving the US, UK, and the Soviet Union, began in 1955 following a proposal by Soviet leader Nikita Khrushchev. Of primary concern throughout the negotiations, which would stretch—with some interruptions—to July 1963, was the system of verifying compliance with the test ban and detecting illicit tests. On the Western side, there were concerns that the Soviet Union would be able to circumvent any test ban and secretly leap ahead in the nuclear arms race. These fears were amplified following the US Rainier shot of 19 September 1957, which was the first contained underground test of a nuclear weapon. Though the US held a significant advantage in underground testing capabilities, there was worry that the Soviet Union would be able to covertly conduct underground tests during a test ban, as underground detonations were more challenging to detect than above-ground tests. On the Soviet side, conversely, the on-site compliance inspections demanded by the US and UK were seen as amounting to espionage. Disagreement over verification would lead to the Anglo-American and Soviet negotiators abandoning a comprehensive test ban (i.e., a ban on all tests, including those underground) in favor of a partial ban, which would be finalized on 25 July 1963. The PTBT, joined by 123 states following the original three parties, banned detonations for military and civilian purposes underwater, in the atmosphere, and outer space. The PTBT had mixed results. On the one hand, enactment of the treaty was followed by a substantial drop in the atmospheric concentration of radioactive particles. 
On the other hand, nuclear proliferation was not halted entirely (though it may have been slowed) and nuclear testing continued at a rapid clip. Compared to the 499 tests from 1945 to the signing of the PTBT, 436 tests were conducted over the ten years following the PTBT. Furthermore, US and Soviet underground testing continued "venting" radioactive gas into the atmosphere. Additionally, though underground testing was generally safer than above-ground testing, underground tests continued to risk the leaking of radionuclides, including plutonium, into the ground. From 1964 through 1996, the year of the CTBT's adoption, an estimated 1,377 underground nuclear tests were conducted. The final non-underground (atmospheric or underwater) test was conducted by China in 1980. The PTBT has been seen as a step towards the Nuclear Non-proliferation Treaty (NPT) of 1968, which directly referenced the PTBT. Under the NPT, non-nuclear weapon states were prohibited from possessing, manufacturing, and acquiring nuclear weapons or other nuclear explosive devices. All signatories, including nuclear weapon states, were committed to the goal of total nuclear disarmament. However, India, Pakistan, and Israel have declined to sign the NPT on the grounds that such a treaty is fundamentally discriminatory as it places limitations on states that do not have nuclear weapons while making no efforts to curb weapons development by declared nuclear weapons states. In 1974, a step towards a comprehensive test ban was made with the Threshold Test Ban Treaty (TTBT), ratified by the US and Soviet Union, which banned underground tests with yields above 150 kilotons. In April 1976, the two states reached agreement on the Peaceful Nuclear Explosions Treaty (PNET), which concerns nuclear detonations outside the weapons sites discussed in the TTBT. As in the TTBT, the US and Soviet Union agreed to bar peaceful nuclear explosions (PNEs) at these other locations with yields above 150 kilotons, as well as group explosions with total yields over 1,500 kilotons. To verify compliance, the PNET requires that states rely on national technical means of verification, share information on explosions, and grant on-site access to counterparties. The TTBT and PNET entered into force on 11 December 1990. In October 1977, the US, UK, and Soviet Union returned to negotiations over a test ban. These three nuclear powers made notable progress in the late 1970s, agreeing to terms on a ban on all testing, including a temporary prohibition on PNEs, but continued disagreements over the compliance mechanisms led to an end to negotiations ahead of Ronald Reagan's inauguration as president in 1981. In 1985, Soviet leader Mikhail Gorbachev announced a unilateral testing moratorium, and in December 1986, Reagan reaffirmed US commitment to pursue the long-term goal of a comprehensive test ban. In November 1987, negotiations on a test ban restarted, followed by a joint US-Soviet program to research underground-test detection in December 1987. In October 2023, Russian president Vladimir Putin stated that since the United States had not ratified the CTBT, consideration could be given to withdrawing Russia's ratification of the treaty. Later in the month, a law revoking ratification of the CTBT was passed by the Russian parliament. On 2 November, Putin officially signed into law the withdrawal of ratification of the treaty. 
Negotiations Given the political situation prevailing in the subsequent decades, little progress was made in nuclear disarmament until the end of the Cold War in 1991. Parties to the PTBT held an amendment conference that year to discuss a proposal to convert the Treaty into an instrument banning all nuclear-weapon tests. With strong support from the UN General Assembly, negotiations for a comprehensive test-ban treaty began in 1993. Adoption Extensive efforts were made over the next three years to draft the Treaty text and its two annexes. However, the Conference on Disarmament, in which negotiations were being held, did not succeed in reaching consensus on the adoption of the text. Under the direction of Prime Minister John Howard and Foreign Minister Alexander Downer, Australia then sent the text to the United Nations General Assembly in New York, where it was submitted as a draft resolution. On 10 September 1996, the Comprehensive Test-Ban Treaty (CTBT) was adopted by a large majority, exceeding two-thirds of the General Assembly's Membership. Obligations (Article I): Each State Party undertakes not to carry out any nuclear weapon test explosion or any other nuclear explosion, and to prohibit and prevent any such nuclear explosion at any place under its jurisdiction or control. Each State Party undertakes, furthermore, to refrain from causing, encouraging, or in any way participating in the carrying out of any nuclear weapon test explosion or any other nuclear explosion. Status The Treaty was adopted by the United Nations General Assembly on 10 September 1996. It opened for signature in New York on 24 September 1996, when it was signed by 71 states, including five of the eight then nuclear-capable states. , 178 states have ratified the CTBT and another nine states have signed but not ratified it. The treaty will enter into force 180 days after the 44 states listed in Annex 2 of the treaty have ratified it. These "Annex 2 states" are states that participated in the CTBT's negotiations between 1994 and 1996 and possessed nuclear power reactors or research reactors at that time. , nine Annex 2 states have not ratified the treaty: China, Egypt, Iran, Israel and the United States have signed but not ratified the Treaty; India, North Korea and Pakistan have not signed it; while Russia signed and ratified the treaty but subsequently withdrew its ratification prior to its entry into force. Monitoring Geophysical and other technologies are used to monitor for compliance with the Treaty: forensic seismology, hydroacoustics, infrasound, and radionuclide monitoring. The first three forms of monitoring are known as wave-form measurements. Seismic monitoring is performed with a system of 50 primary stations located throughout the world, with 120 auxiliary stations in signatory states. Hydroacoustic monitoring is performed with a system of 11 stations that consist of hydrophone triads to monitor for underwater explosions. Hydroacoustic stations can use seismometers to measure T-waves from possible underwater explosions instead of hydrophones. The best measurement of hydroacoustic waves has been found to be at a depth of 1000 m. Infrasound monitoring relies on changes in atmospheric pressure caused by a possible nuclear explosion, with 41 stations certified as of August 2019. One of the biggest concerns with infrasound measurements is noise due to exposure from wind, which can affect the sensor's ability to measure if an event occurred. 
Together, these technologies are used to monitor the ground, water, and atmosphere for any sign of a nuclear explosion. Radionuclide monitoring takes the form of either monitoring for radioactive particulates or noble gases as a product of a nuclear explosion. Radioactive particles emit radiation that can be measured by any of the 80 stations located throughout the world. They are created from nuclear explosions that can collect onto the dust that is moved from the explosion. If a nuclear explosion took place underground, noble gas monitoring can be used to verify whether or not a possible nuclear explosion took place. Noble gas monitoring relies on measuring increases in radioactive xenon gas. Different isotopes of xenon include 131mXe, 133Xe, 133mXe, and 135Xe. All four monitoring methods make up the International Monitoring System (IMS). Statistical theories and methods are integral to CTBT monitoring providing confidence in verification analysis. Once the Treaty enters into force, on-site inspections will be conducted where concerns about compliance arise. The Preparatory Commission for the Comprehensive Test Ban Treaty Organization (CTBTO), an international organization headquartered in Vienna, Austria, was created to build the verification framework, including establishment and provisional operation of the network of monitoring stations, the creation of an international data centre (IDC), and development of the on-site Inspection capability. The CTBTO is responsible for collecting information from the IMS and distribute the analyzed and raw data to member states to judge whether or not a nuclear explosion occurred through the IDC. Parameters such as determining the location where a nuclear explosion or test took place is one of the things that the IDC can accomplish. If a member state chooses to assert that another state had violated the CTBT, they can request an on-site inspection to take place to verify. The monitoring network consists of 337 facilities located all over the globe. As of May 2012, more than 260 facilities have been certified. The monitoring stations register data that is transmitted to the international data centre in Vienna for processing and analysis. The data are sent to states that have signed the Treaty. Subsequent nuclear testing Three countries have tested nuclear weapons since the CTBT opened for signature in 1996. India and Pakistan both carried out two sets of tests in 1998. North Korea carried out six announced tests, one each in 2006, 2009, 2013, two in 2016 and one in 2017. All six North Korean tests were picked up by the International Monitoring System set up by the Comprehensive Nuclear-Test-Ban Treaty Organization Preparatory Commission. A North Korean test is believed to have taken place in January 2016, evidenced by an "artificial earthquake" measured as a magnitude 5.1 by the U.S. Geological Survey. The first successful North Korean hydrogen bomb test supposedly took place in September 2017. It was estimated to have an explosive yield of 120 kilotons. 
See also International Day for the Total Elimination of Nuclear Weapons List of weapons of mass destruction treaties Comprehensive Nuclear-Test-Ban Treaty Organization Comprehensive Nuclear-Test-Ban Treaty Organization Preparatory Commission National technical means of verification Nuclear disarmament Nuclear-free zone Treaty on the Prohibition of Nuclear Weapons References Sources External links Full text of the treaty CTBTO Preparatory Commission — official news and information The Test Ban Test: U.S. Rejection has Scuttled the CTBT US conducts subcritical nuclear test ABC News, 24 February 2006 International Physicians for the Prevention of Nuclear War, 1991 Daryl Kimball and Christine Kucia, Arms Control Association, 2002 General John M. Shalikashvili, Special Advisor to the President and the Secretary of State for the Comprehensive Test Ban Treaty Christopher Paine, Senior Researcher with NRDC's Nuclear Program, 1999 Obama or McCain Can Finish Journey to Nuclear Test Ban Treaty Introductory note by Thomas Graham, Jr., procedural history note and audiovisual material on the Comprehensive Nuclear Test Ban Treaty in the United Nations Audiovisual Library of International Law Lecture by Masahiko Asada titled Nuclear Weapons and International Law in the Lecture Series of the United Nations Audiovisual Library of International Law Comprehensive Nuclear-Test-Ban Treaty: Background and Current Developments Congressional Research Service The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources. Arms control treaties Non-proliferation treaties Nuclear weapons policy Foreign relations of India Foreign relations of Pakistan 106th United States Congress Treaties concluded in 1996 Treaties not entered into force Nuclear weapons testing Treaties of the Afghan Transitional Administration Treaties of Albania Treaties of Algeria Treaties of Andorra Treaties of Angola Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Armenia Treaties of Australia Treaties of Austria Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Barbados Treaties of Belarus Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of Brazil Treaties of Brunei Treaties of Bulgaria Treaties of Burkina Faso Treaties of Burundi Treaties of Cambodia Treaties of Cameroon Treaties of Canada Treaties of Cape Verde Treaties of the Central African Republic Treaties of Chad Treaties of Chile Treaties of Colombia Treaties of the Republic of the Congo Treaties of the Cook Islands Treaties of Costa Rica Treaties of Cuba Treaties of Croatia Treaties of Cyprus Treaties of the Czech Republic Treaties of the Democratic Republic of the Congo Treaties of Denmark Treaties of Djibouti Treaties of the Dominican Republic Treaties of Ecuador Treaties of El Salvador Treaties of Eritrea Treaties of Estonia Treaties of Eswatini Treaties of Ethiopia Treaties of Fiji Treaties of Finland Treaties of France Treaties of Gabon Treaties of Georgia (country) Treaties of Germany Treaties of Ghana Treaties of Greece Treaties of Grenada Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Guyana Treaties of Haiti Treaties of the Holy See Treaties of Honduras Treaties of Hungary 
Treaties of Iceland Treaties of Indonesia Treaties of Iraq Treaties of Ireland Treaties of Italy Treaties of Ivory Coast Treaties of Jamaica Treaties of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of Kenya Treaties of Kiribati Treaties of Kuwait Treaties of Kyrgyzstan Treaties of Laos Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of Liberia Treaties of the Libyan Arab Jamahiriya Treaties of Liechtenstein Treaties of Lithuania Treaties of Luxembourg Treaties of Madagascar Treaties of Malawi Treaties of Malaysia Treaties of the Maldives Treaties of Mali Treaties of Malta Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mexico Treaties of the Federated States of Micronesia Treaties of Monaco Treaties of Mongolia Treaties of Montenegro Treaties of Morocco Treaties of Mozambique Treaties of Myanmar Treaties of Namibia Treaties of Nauru Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Niger Treaties of Nigeria Treaties of Niue Treaties of North Macedonia Treaties of Norway Treaties of Oman Treaties of Palau Treaties of Panama Treaties of Papua New Guinea Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of Poland Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of Moldova Treaties of Romania Treaties of Russia Treaties of Rwanda Treaties of Samoa Treaties of San Marino Treaties of Senegal Treaties of Serbia and Montenegro Treaties of Seychelles Treaties of Sierra Leone Treaties of Singapore Treaties of Slovakia Treaties of Slovenia Treaties of South Africa Treaties of Spain Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of the Republic of the Sudan (1985–2011) Treaties of Suriname Treaties of Sweden Treaties of Switzerland Treaties of Tajikistan Treaties of Togo Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of Turkmenistan Treaties of Uganda Treaties of Ukraine Treaties of the United Arab Emirates Treaties of the United Kingdom Treaties of Tanzania Treaties of Uruguay Treaties of Uzbekistan Treaties of Vanuatu Treaties of Venezuela Treaties of Vietnam Treaties of Zambia Treaties establishing intergovernmental organizations Treaties adopted by United Nations General Assembly resolutions Treaties extended to Aruba Treaties extended to the Netherlands Antilles
Comprehensive Nuclear-Test-Ban Treaty
[ "Technology" ]
3,609
[ "Environmental impact of nuclear power", "Nuclear weapons testing" ]
7,921
https://en.wikipedia.org/wiki/Derivative
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation. There are multiple different notations for differentiation, two of the most commonly used being Leibniz notation and prime notation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. Derivatives can be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. Definition As a limit A function of a real variable is differentiable at a point of its domain, if its domain contains an open interval containing , and the limit exists. This means that, for every positive real number , there exists a positive real number such that, for every such that and then is defined, and where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit. If the function is differentiable at , that is if the limit exists, then this limit is called the derivative of at . Multiple notations for the derivative exist. The derivative of at can be denoted , read as " prime of "; or it can be denoted , read as "the derivative of with respect to at " or " by (or over) at ". See below. If is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point to the value of the derivative of at . This function is written and is called the derivative function or the derivative of . The function sometimes has a derivative at most, but not all, points of its domain. The function whose value at equals whenever is defined and elsewhere is undefined is also called the derivative of . It is still a function, but its domain may be smaller than the domain of . For example, let be the squaring function: . Then the quotient in the definition of the derivative is The division in the last step is valid as long as . 
The closer h is to 0, the closer this expression becomes to the value 2a. The limit exists, and for every input a the limit is 2a. So, the derivative of the squaring function is the doubling function: f'(x) = 2x. The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function f, specifically the points (a, f(a)) and (a + h, f(a + h)). As h is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of f at a. In other words, the derivative is the slope of the tangent. Using infinitesimals One way to think of the derivative is as the ratio of an infinitesimal change in the output of the function to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form 1 + 1 + ⋯ + 1 for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the d in the Leibniz notation. Thus, the derivative of f(x) becomes f'(x) = st((f(x + dx) − f(x))/dx) for an arbitrary infinitesimal dx, where st denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function f(x) = x² as an example again, f'(x) = st((x² + 2x·dx + (dx)² − x²)/dx) = st(2x + dx) = 2x. Continuity and differentiability If f is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function that returns the value 1 for all x less than a, and returns a different value 10 for all x greater than or equal to a. The function f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep; as h tends to zero, the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by f(x) = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one; if h is negative, then the slope of the secant line from 0 to h is −1. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by f(x) = x^(1/3) is not differentiable at x = 0. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. 
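As a rough numerical illustration (an editorial sketch, not part of the original article), the following Python snippet evaluates a truncated Weierstrass-type series and compares its difference quotients with those of a smooth function. The parameters a = 0.5 and b = 13 are an illustrative choice satisfying the classical condition ab > 1 + 3π/2, and the truncation at 25 terms is an arbitrary assumption made only for this sketch.

import math

def weierstrass(x, a=0.5, b=13, terms=25):
    # Partial sum of W(x) = sum over n of a**n * cos(b**n * pi * x).
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

def quotient(f, x, h):
    # Slope of the secant line between x and x + h.
    return (f(x + h) - f(x)) / h

def square(x):
    return x ** 2

for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    wq = quotient(weierstrass, 0.3, h)
    sq = quotient(square, 0.3, h)
    print(f"h = {h:g}: Weierstrass quotient = {wq:12.2f}, x**2 quotient = {sq:.4f}")

The quotients for x**2 settle toward 0.6, while those for the truncated Weierstrass sum typically vary erratically as h shrinks rather than approaching a single limiting value, which is the numerical face of nowhere-differentiability.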
In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point. Notation One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as dy and dx. It is still commonly used when the equation y = f(x) is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by dy/dx, read as "the derivative of y with respect to x". This derivative can alternately be treated as the application of a differential operator to a function, dy/dx = d/dx f(x). Higher derivatives are expressed using the notation d^n y/dx^n for the n-th derivative of y = f(x). These are abbreviations for multiple applications of the derivative operator; for example, d²y/dx² = d/dx (dy/dx). Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if y = f(u) and u = g(x) then dy/dx = (dy/du)·(du/dx). Another common notation for differentiation is by using the prime mark in the symbol of a function f(x). This is known as prime notation, due to Joseph-Louis Lagrange. The first derivative is written as f'(x), read as "f prime of x", or y', read as "y prime". Similarly, the second and the third derivatives can be written as f'' and f''', respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as f^iv or f^(4). The latter notation generalizes to yield the notation f^(n) for the nth derivative of f. In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If x is a function of t, then the first and second derivatives can be written as ẋ and ẍ, respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is D-notation, which represents the differential operator by the symbol D. The first derivative is written Df and higher derivatives are written with a superscript, so the n-th derivative is D^n f. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript, for example given the function u = f(x, y), its partial derivative with respect to x can be written D_x u or D_x f(x, y). Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. D_xy f(x, y) and D_x² f(x, y). Rules of computation In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. 
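To make the limit definition concrete before turning to the rules, here is a small Python sketch (an editorial illustration, not part of the article); the choice of the squaring function, the evaluation point a = 3, and the sample step sizes are assumptions made only for demonstration.

def difference_quotient(f, a, h):
    # Slope of the secant line through (a, f(a)) and (a + h, f(a + h)).
    return (f(a + h) - f(a)) / h

def square(x):
    return x ** 2

for h in (1.0, 0.1, 0.01, 0.001, 0.0001):
    print(f"h = {h:g}: quotient = {difference_quotient(square, 3.0, h):.6f}")

Since the quotient simplifies algebraically to 6 + h, the printed values (7.0, 6.1, 6.01, ...) approach the derivative f'(3) = 2·3 = 6 as h shrinks, exactly as the limit definition predicts.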
Rules for basic functions The following are the rules for the derivatives of the most common basic functions. Here, a is a real number, and e is the base of the natural logarithm, approximately 2.71828. Derivatives of powers: d/dx x^a = a·x^(a−1). Functions of exponential, natural logarithm, and logarithm with general base: d/dx e^x = e^x; d/dx a^x = a^x ln a, for a > 0; d/dx ln x = 1/x, for x > 0; d/dx log_a x = 1/(x ln a), for x, a > 0. Trigonometric functions: d/dx sin x = cos x; d/dx cos x = −sin x; d/dx tan x = sec² x = 1/cos² x = 1 + tan² x. Inverse trigonometric functions: d/dx arcsin x = 1/√(1 − x²), for −1 < x < 1; d/dx arccos x = −1/√(1 − x²), for −1 < x < 1; d/dx arctan x = 1/(1 + x²). Rules for combined functions Given that f and g are functions, the following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions. Constant rule: if f is constant, then for all x, f'(x) = 0. Sum rule: (αf + βg)' = αf' + βg' for all functions f and g and all real numbers α and β. Product rule: (fg)' = f'g + fg' for all functions f and g. As a special case, this rule includes the fact (αf)' = αf' whenever α is a constant, because α'f = 0·f = 0 by the constant rule. Quotient rule: (f/g)' = (f'g − fg')/g² for all functions f and g at all inputs where g ≠ 0. Chain rule for composite functions: If f(x) = h(g(x)), then f'(x) = h'(g(x))·g'(x). Computation example The derivative of the function given by f(x) = x⁴ + sin(x²) − ln(x)e^x + 7 is f'(x) = 4x³ + 2x cos(x²) − (1/x)e^x − ln(x)e^x. Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions x², x⁴, sin(x), ln(x), and e^x, as well as the constant 7, were also used. Higher-order derivatives Higher order derivatives are the result of differentiating a function repeatedly. Given that f is a differentiable function, the derivative of f is the first derivative, denoted as f'. The derivative of f' is the second derivative, denoted as f'', and the derivative of f'' is the third derivative, denoted as f'''. By continuing this process, if it exists, the nth derivative is the derivative of the (n − 1)th derivative or the derivative of order n. As has been discussed above, the generalization of derivative of a function f may be denoted as f^(n). A function that has n successive derivatives is called n times differentiable. If the nth derivative is continuous, then the function is said to be of differentiability class C^n. A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero. One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at a given time. The first derivative of that function is the velocity of an object with respect to time, the second derivative of the function is the acceleration of an object with respect to time, and the third derivative is the jerk. In other dimensions Vector-valued functions A vector-valued function y of a real variable sends real numbers to vectors in some vector space R^n. A vector-valued function can be split up into its coordinate functions y_1(t), y_2(t), …, y_n(t), meaning that y(t) = (y_1(t), …, y_n(t)). This includes, for example, parametric curves in R² or R³. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of y(t) is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, y'(t) = lim_{h→0} (y(t + h) − y(t))/h, if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y' is another vector-valued function. Partial derivatives Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. 
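As a brief illustrative aside (not from the article), a partial derivative in this sense can be approximated numerically by nudging one variable while freezing the other. The sample function f(x, y) = x²·y + 3y, the evaluation point, and the step size below are assumptions chosen only for this sketch; its exact partials are ∂f/∂x = 2xy and ∂f/∂y = x² + 3.

def partial_x(f, x, y, h=1e-6):
    # Central-difference estimate of the partial derivative with respect to x,
    # holding y constant.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central-difference estimate of the partial derivative with respect to y,
    # holding x constant.
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def f(x, y):
    return x ** 2 * y + 3 * y

print(partial_x(f, 2.0, 5.0))  # close to 2 * 2 * 5 = 20
print(partial_y(f, 2.0, 5.0))  # close to 2 ** 2 + 3 = 7

Collecting the two estimates into the pair (∂f/∂x, ∂f/∂y) gives a finite-difference approximation of the gradient discussed in the following paragraphs.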
Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function f(x, y, …) with respect to the variable x is variously denoted by f_x, ∂f/∂x, or ∂_x f, among other possibilities. It can be thought of as the rate of change of the function in the x-direction. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let f(x, y) = x² + xy + y², then the partial derivatives of the function f with respect to the variables x and y are, respectively: ∂f/∂x = 2x + y and ∂f/∂y = x + 2y. In general, the partial derivative of a function f(x_1, …, x_n) in the direction x_i at the point (a_1, …, a_n) is defined to be: ∂f/∂x_i(a_1, …, a_n) = lim_{h→0} (f(a_1, …, a_i + h, …, a_n) − f(a_1, …, a_i, …, a_n))/h. This is fundamental for the study of the functions of several real variables. Let f be such a real-valued function. If all partial derivatives ∂f/∂x_j are defined at the point (a_1, …, a_n), these partial derivatives define the vector ∇f(a_1, …, a_n) = (∂f/∂x_1(a_1, …, a_n), …, ∂f/∂x_n(a_1, …, a_n)), which is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f that maps the point (a_1, …, a_n) to the vector ∇f(a_1, …, a_n). Consequently, the gradient determines a vector field. Directional derivatives If f is a real-valued function on R^n, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x and y direction. However, they do not directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Given a vector v = (v_1, …, v_n), then the directional derivative of f in the direction of v at the point x is: D_v f(x) = lim_{h→0} (f(x + hv) − f(x))/h. If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula: D_v f(x) = Σ_j v_j ∂f/∂x_j(x). Total derivative, total differential and Jacobian matrix When f is a function from an open subset of R^n to R^m, then the directional derivative of f in a chosen direction is the best linear approximation to f at that point and in that direction. However, when n > 1, no single directional derivative can give a complete picture of the behavior of f. The total derivative gives a complete picture by considering all directions at once. That is, for any vector v starting at a, the linear approximation formula holds: f(a + v) ≈ f(a) + f'(a)v. Similarly with the single-variable derivative, f'(a) is chosen so that the error in this approximation is as small as possible. The total derivative of f at a is the unique linear transformation f'(a) : R^n → R^m such that lim_{h→0} ‖f(a + h) − (f(a) + f'(a)h)‖/‖h‖ = 0. Here h is a vector in R^n, so the norm in the denominator is the standard length on R^n. However, f'(a)h is a vector in R^m, and the norm in the numerator is the standard length on R^m. If v is a vector starting at a, then f'(a)v is called the pushforward of v by f. If the total derivative exists at a, then all the partial derivatives and directional derivatives of f exist at a, and for all v, f'(a)v is the directional derivative of f in the direction v. If f is written using coordinate functions, so that f = (f_1, f_2, …, f_m), then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f at a: f'(a) = Jac_a = (∂f_i/∂x_j)_ij. Generalizations The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. 
The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C is identified with R² by writing a complex number z as x + iy, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions. Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold is a space that can be approximated near each point by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is used in differential geometry. Differentiation can also be defined for maps between vector spaces, such as Banach spaces, in which those generalizations are the Gateaux derivative and the Fréchet derivative. One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average". Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra. Here, it consists of the derivation of some topics in abstract algebra, such as rings, ideals, fields, and so on. The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. The arithmetic derivative involves the function that is defined for the integers by the prime factorization. This is an analogy with the product rule. See also Covariant derivative Derivation Exterior derivative Functional derivative Integral Lie derivative Notes References External links Khan Academy: "Newton, Leibniz, and Usain Bolt" Online Derivative Calculator from Wolfram Alpha. Mathematical analysis Differential calculus Functions and mappings Linear operators in calculus Rates Change
Derivative
[ "Mathematics" ]
3,952
[ "Mathematical analysis", "Functions and mappings", "Calculus", "Mathematical objects", "Mathematical relations", "Differential calculus" ]
7,925
https://en.wikipedia.org/wiki/David%20Hume
David Hume (born David Home; 26 April 1711 – 25 August 1776) was a Scottish philosopher, historian, economist, and essayist who was best known for his highly influential system of empiricism, philosophical scepticism and metaphysical naturalism. Beginning with A Treatise of Human Nature (1739–40), Hume strove to create a naturalistic science of man that examined the psychological basis of human nature. Hume followed John Locke in rejecting the existence of innate ideas, concluding that all human knowledge derives solely from experience. This places him with Francis Bacon, Thomas Hobbes, John Locke, and George Berkeley as an empiricist. Hume argued that inductive reasoning and belief in causality cannot be justified rationally; instead, they result from custom and mental habit. We never actually perceive that one event causes another but only experience the "constant conjunction" of events. This problem of induction means that to draw any causal inferences from past experience, it is necessary to presuppose that the future will resemble the past; this metaphysical presupposition cannot itself be grounded in prior experience. An opponent of philosophical rationalists, Hume held that passions rather than reason govern human behaviour, famously proclaiming that "Reason is, and ought only to be the slave of the passions." Hume was also a sentimentalist who held that ethics are based on emotion or sentiment rather than abstract moral principle. He maintained an early commitment to naturalistic explanations of moral phenomena and is usually accepted by historians of European philosophy to have first clearly expounded the is–ought problem, or the idea that a statement of fact alone can never give rise to a normative conclusion of what ought to be done. Hume denied that humans have an actual conception of the self, positing that we experience only a bundle of sensations, and that the self is nothing more than this bundle of perceptions connected by an association of ideas. Hume's compatibilist theory of free will takes causal determinism as fully compatible with human freedom. His philosophy of religion, including his rejection of miracles, and of the argument from design for God's existence, were especially controversial for their time. Hume left a legacy that affected utilitarianism, logical positivism, the philosophy of science, early analytic philosophy, cognitive science, theology, and many other fields and thinkers. Immanuel Kant credited Hume as the inspiration that had awakened him from his "dogmatic slumbers." Early life Hume was born on 26 April 1711, as David Home, in a tenement on the north side of Edinburgh's Lawnmarket. He was the second of two sons born to Catherine Home (née Falconer), daughter of Sir David Falconer of Newton, Midlothian and his wife Mary Falconer (née Norvell), and Joseph Home of Chirnside in the County of Berwick, an advocate of Ninewells. Joseph died just after David's second birthday. Catherine, who never remarried, raised the two brothers and their sister on her own. Hume changed his family name's spelling in 1734, as the surname 'Home' (pronounced as 'Hume') was not well-known in England. Hume never married and lived partly at his Chirnside family home in Berwickshire, which had belonged to the family since the 16th century. His finances as a young man were very "slender", as his family was not rich; as a younger son he had little patrimony to live on. 
Hume attended the University of Edinburgh at an unusually early age – either 12 or possibly as young as 10 – at a time when 14 was the typical age. Initially, Hume considered a career in law, because of his family. However, in his words, he came to have: ...an insurmountable aversion to everything but the pursuits of Philosophy and general Learning; and while [my family] fanceyed I was poring over Voet and Vinnius, Cicero and Virgil were the Authors which I was secretly devouring. He had little respect for the professors of his time, telling a friend in 1735 that "there is nothing to be learnt from a Professor, which is not to be met with in Books". He did not graduate. "Disease of the learned" At around age 18, Hume made a philosophical discovery that opened up to him "a new Scene of Thought", inspiring him "to throw up every other Pleasure or Business to apply entirely to it". As he did not recount what this scene exactly was, commentators have offered a variety of speculations. One prominent interpretation among contemporary Humean scholarship is that this new "scene of thought" was Hume's realisation that Francis Hutcheson's theory of moral sense could be applied to the understanding of morality as well. From this inspiration, Hume set out to spend a minimum of 10 years reading and writing. He soon came to the verge of a mental breakdown, first starting with a coldness – which he attributed to a "Laziness of Temper" – that lasted about nine months. Scurvy spots later broke out on his fingers, persuading Hume's physician to diagnose him with the "Disease of the Learned". Hume wrote that he "went under a Course of Bitters and Anti-Hysteric Pills", taken along with a pint of claret every day. He also decided to have a more active life to better continue his learning. His health improved somewhat, but in 1731, he was afflicted with a ravenous appetite and palpitations. After eating well for a time, he went from being "tall, lean and raw-bon'd" to being "sturdy, robust [and] healthful-like." Indeed, Hume would become well known for being obese and having a fondness for good port and cheese, often using them as philosophical metaphors for his conjectures. Career Despite having noble ancestry, Hume had no source of income and no learned profession by age 25. As was common at his time, he became a merchant's assistant, despite having to leave his native Scotland. He travelled via Bristol to La Flèche in Anjou, France. There he had frequent discourse with the Jesuits of the College of La Flèche. Hume was derailed in his attempts to start a university career by protests over his alleged "atheism", also lamenting that his literary debut, A Treatise of Human Nature, "fell dead-born from the press." However, he found literary success in his lifetime as an essayist, and a career as a librarian at the University of Edinburgh. These successes provided him much needed income at the time. His tenure there, and the access to research materials it provided, resulted in Hume's writing the massive six-volume The History of England, which became a bestseller and the standard history of England in its day. For over 60 years, Hume was the dominant interpreter of English history. He described his "love for literary fame" as his "ruling passion" and judged his two late works, the so-called "first" and "second" enquiries, An Enquiry Concerning Human Understanding and An Enquiry Concerning the Principles of Morals, as his greatest literary and philosophical achievements. 
He would ask his contemporaries to judge him on the merits of the later texts alone, rather than on the more radical formulations of his early, youthful work, dismissing his philosophical debut as juvenilia: "A work which the Author had projected before he left College." Despite Hume's protestations, a consensus exists today that his most important arguments and philosophically distinctive doctrines are found in the original form they take in the Treatise. Though he was only 23 years old when starting this work, it is now regarded as one of the most important in the history of Western philosophy. 1730s Hume worked for four years on his first major work, A Treatise of Human Nature, subtitled "Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects", completing it in 1738 at age 28. Although many scholars today consider the Treatise to be Hume's most important work and one of the most important books in Western philosophy, critics in Great Britain at the time described it as "abstract and unintelligible". As Hume had spent most of his savings during those four years, he resolved "to make a very rigid frugality supply [his] deficiency of fortune, to maintain unimpaired [his] independency, and to regard every object as contemptible except the improvements of [his] talents in literature". Despite the disappointment, Hume later wrote: "Being naturally of a cheerful and sanguine temper, I soon recovered from the blow and prosecuted with great ardour my studies in the country." There, in an attempt to make his larger work better known and more intelligible, he published An Abstract of a Book lately Published as a summary of the main doctrines of the Treatise, without revealing its authorship. This work contained the same ideas, but with a shorter and clearer explanation. Although there has been some academic speculation as to the pamphlet's true author, it is generally regarded as Hume's creation. 1740s After the publication of Essays Moral and Political in 1741 – included in the later edition as Essays, Moral, Political, and Literary – Hume applied for the Chair of Pneumatics and Moral Philosophy at the University of Edinburgh. However, the position was given to William Cleghorn after Edinburgh ministers petitioned the town council not to appoint Hume because he was seen as an atheist. In 1745, during the Jacobite risings, Hume tutored the Marquess of Annandale, an engagement that ended in disarray after about a year. The Marquess could not follow Hume's lectures, his father saw little need for philosophy, and on a personal level, the Marquess found Hume's dietary tendencies to be bizarre. Hume then started his great historical work, The History of England, which took fifteen years and ran to over a million words. During this time, he was also involved with the Canongate Theatre through his friend John Home, a preacher. In this context, he associated with Lord Monboddo and other thinkers of the Scottish Enlightenment in Edinburgh. From 1746, Hume served for three years as secretary to General James St Clair, who was envoy to the courts of Turin and Vienna. At that time Hume wrote Philosophical Essays Concerning Human Understanding, later published as An Enquiry Concerning Human Understanding. Often called the First Enquiry, it proved little more successful than the Treatise, perhaps because of the publication of his short autobiography My Own Life, which "made friends difficult for the first Enquiry". 
By the end of this period Hume had attained his well-known corpulent stature; "the good table of the General and the prolonged inactive life had done their work", leaving him "a man of tremendous bulk". In 1749 he went to live with his brother in the countryside, although he continued to associate with the aforementioned Scottish Enlightenment figures. 1750s–1760s Hume's religious views were often suspect and, in the 1750s, it was necessary for his friends to avert a trial against him on the charge of heresy, specifically in an ecclesiastical court. However, he "would not have come and could not be forced to attend if he said he was not a member of the Established Church". Hume failed to gain the chair of philosophy at the University of Glasgow due to his religious views. By this time, he had published the Philosophical Essays, which were decidedly anti-religious. This represented a turning point in his career and the various opportunities made available to him. Even Adam Smith, his personal friend who had vacated the Glasgow philosophy chair, was against his appointment out of concern that public opinion would be against it. In 1761, all his works were banned on the Index Librorum Prohibitorum. Hume returned to Edinburgh in 1751. In the following year, the Faculty of Advocates hired him to be their Librarian, a job in which he would receive little to no pay, but which nonetheless gave him "the command of a large library". This resource enabled him to continue historical research for The History of England. Hume's volume of Political Discourses, written in 1749 and published by Kincaid & Donaldson in 1752, was the only work he considered successful on first publication. In 1753, Hume moved from his house on Riddles Court on the Lawnmarket to a house on the Canongate at the other end of the Royal Mile. Here he lived in a tenement known as Jack's Land, immediately west of the still surviving Shoemakers Land. Eventually, with the publication of his six-volume The History of England between 1754 and 1762, Hume achieved the fame that he coveted. The volumes traced events from the Invasion of Julius Caesar to the Revolution of 1688 and was a bestseller in its day. Hume was also a longtime friend of bookseller Andrew Millar, who sold Hume's History (after acquiring the rights from Scottish bookseller Gavin Hamilton), although the relationship was sometimes complicated. Letters between them illuminate both men's interest in the success of the History. In 1762 Hume moved from Jack's Land on the Canongate to James Court on the Lawnmarket. He sold the house to James Boswell in 1766. Later life From 1763 to 1765, Hume was invited to attend Lord Hertford in Paris, where he became secretary to the British embassy in France. Hume was well received among Parisian society, and while there he met with Isaac de Pinto. In 1765, Hume served as a chargé d'affaires in Paris, writing "despatches to the British Secretary of State". He wrote of his Paris life, "I really wish often for the plain roughness of The Poker Club of Edinburgh... to correct and qualify so much lusciousness." Upon returning to Britain in 1766, Hume wrote a letter to Lord Hertford after being asked to by George Colebrooke; the letter informed Lord Hertford that he had an opportunity to invest in one of Colebrooke's slave plantations in the West Indies, though Hertford ultimately decided not to do so. 
In June of that year, Hume facilitated the purchase of a slave plantation in Martinique on behalf of his friend, the wine merchant John Stewart, by writing to the colony's governor Victor-Thérèse Charpentier. According to Felix Waldmann, a former Hume Fellow at the University of Edinburgh, Hume's "puckish scepticism about the existence of religious miracles played a significant part in defining the critical outlook which underpins the practice of modern science." Waldmann also argued that Hume's views "served to reinforce the institution of racialised slavery in the later 18th century." In 1766, Hume left Paris to accompany Jean-Jacques Rousseau to England. Once there, he and Rousseau fell out, leaving Hume sufficiently worried about the damage to his reputation from the quarrel with Rousseau that he would author an account of the dispute, titling it "A concise and genuine account of the dispute between Mr. Hume and Mr. Rousseau". In 1767, Hume was appointed Under Secretary of State for the Northern Department. Here, he wrote that he was given "all the secrets of the Kingdom". In 1769 he returned to James' Court in Edinburgh, where he would live from 1771 until his death in 1776. Hume's nephew and namesake, David Hume of Ninewells (1757–1838), was a co-founder of the Royal Society of Edinburgh in 1783. He was a Professor of Scots Law at Edinburgh University and rose to be Principal Clerk of Session in the Scottish High Court and Baron of the Exchequer. He is buried with his uncle in Old Calton Cemetery. Autobiography In the last year of his life, Hume wrote an extremely brief autobiographical essay titled "My Own Life", summing up his entire life in "fewer than 5 pages"; it contains many interesting judgments that have been of enduring interest to subsequent readers of Hume. Donald Seibert (1984), a scholar of 18th-century literature, judged it a "remarkable autobiography, even though it may lack the usual attractions of that genre. Anyone hankering for startling revelations or amusing anecdotes had better look elsewhere." Despite condemning vanity as a dangerous passion, in his autobiography Hume confesses his belief that the "love of literary fame" had served as his "ruling passion" in life, and claims that this desire "never soured my temper, notwithstanding my frequent disappointments". One such disappointment Hume discusses in this account is in the initial literary reception of the Treatise, which he claims to have overcome by means of the success of the Essays: "the work was favourably received, and soon made me entirely forget my former disappointment". Hume, in his own retrospective judgment, argues that his philosophical debut's apparent failure "had proceeded more from the manner than the matter". He thus suggests that "I had been guilty of a very usual indiscretion, in going to the press too early." Hume also provides an unambiguous self-assessment of the relative value of his works: that "my Enquiry concerning the Principles of Morals; which, in my own opinion (who ought not to judge on that subject) is of all my writings, historical, philosophical, or literary, incomparably the best." He also wrote of his social relations: "My company was not unacceptable to the young and careless, as well as to the studious and literary", noting of his complex relation to religion, as well as to the state, that "though I wantonly exposed myself to the rage of both civil and religious factions, they seemed to be disarmed in my behalf of their wonted fury". 
He goes on to profess of his character: "My friends never had occasion to vindicate any one circumstance of my character and conduct." Hume concludes the essay with a frank admission: I cannot say there is no vanity in making this funeral oration of myself, but I hope it is not a misplaced one; and this is a matter of fact which is easily cleared and ascertained. Death Diarist and biographer James Boswell saw Hume a few weeks before his death from a form of abdominal cancer. Hume told him that he sincerely believed it a "most unreasonable fancy" that there might be life after death. Hume asked that his body be interred in a "simple Roman tomb", requesting in his will that it be inscribed only with his name and the year of his birth and death, "leaving it to Posterity to add the Rest". David Hume died at the southwest corner of St. Andrew's Square in Edinburgh's New Town, at what is now 21 Saint David Street. A popular story, consistent with some historical evidence and with the help of coincidence, suggests that the street was named after Hume. His tomb stands, as he wished it, on the southwestern slope of Calton Hill, in the Old Calton Cemetery. Adam Smith later recounted Hume's amusing speculation that he might ask Charon, Hades' ferryman, to allow him a few more years of life in order to see "the downfall of some of the prevailing systems of superstition". The ferryman replied, "You loitering rogue, that will not happen these many hundred years.… Get into the boat this instant." Writings A Treatise of Human Nature begins with the introduction: "'Tis evident, that all the sciences have a relation, more or less, to human nature.… Even Mathematics, Natural Philosophy, and Natural Religion, are in some measure dependent on the science of Man." The science of man, as Hume explains, is the "only solid foundation for the other sciences" and that the method for this science requires both experience and observation as the foundations of a logical argument. In regards to this, philosophical historian Frederick Copleston (1999) suggests that it was Hume's aim to apply to the science of man the method of experimental philosophy (the term that was current at the time to imply natural philosophy), and that "Hume's plan is to extend to philosophy in general the methodological limitations of Newtonian physics." Until recently, Hume was seen as a forerunner of logical positivism, a form of anti-metaphysical empiricism. According to the logical positivists (in summary of their verification principle), unless a statement could be verified by experience, or else was true or false by definition (i.e., either tautological or contradictory), then it was meaningless. Hume, on this view, was a protopositivist, who, in his philosophical writings, attempted to demonstrate the ways in which ordinary propositions about objects, causal relations, the self, and so on, are semantically equivalent to propositions about one's experiences. Many commentators have since rejected this understanding of Humean empiricism, stressing an epistemological (rather than a semantic) reading of his project. According to this opposing view, Hume's empiricism consisted in the idea that it is our knowledge, and not our ability to conceive, that is restricted to what can be experienced. Hume thought that we can form beliefs about that which extends beyond any possible experience, through the operation of faculties such as custom and the imagination, but he was sceptical about claims to knowledge on this basis. 
Impressions and ideas A central doctrine of Hume's philosophy, stated in the very first lines of the Treatise of Human Nature, is that the mind consists of perceptions, or the mental objects which are present to it, and which divide into two categories: "All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call impressions and ideas." Hume believed that it would "not be very necessary to employ many words in explaining this distinction", which commentators have generally taken to mean the distinction between feeling and thinking. Controversially, Hume, in some sense, may regard the distinction as a matter of degree, as he takes impressions to be distinguished from ideas on the basis of their force, liveliness, and vivacity – what Henry E. Allison (2008) calls the "FLV criterion." Ideas are therefore "faint" impressions. For example, experiencing the painful sensation of touching a hot pan's handle is more forceful than simply thinking about touching a hot pan. According to Hume, impressions are meant to be the original form of all our ideas. From this, Don Garrett (2002) has coined the term copy principle, referring to Hume's doctrine that all ideas are ultimately copied from some original impression, whether it be a passion or sensation, from which they derive. Simple and complex After establishing the forcefulness of impressions and ideas, these two categories are further broken down into simple and complex: "simple perceptions or impressions and ideas are such as admit of no distinction nor separation", whereas "the complex are the contrary to these, and may be distinguished into parts". When looking at an apple, a person experiences a variety of colour-sensations – what Hume notes as a complex impression. Similarly, a person experiences a variety of taste-sensations, tactile-sensations, and smell-sensations when biting into an apple, with the overall sensation – again, a complex impression. Thinking about an apple allows a person to form complex ideas, which are made of similar parts as the complex impressions they were developed from, but which are also less forceful. Hume believes that complex perceptions can be broken down into smaller and smaller parts until perceptions are reached that have no parts of their own, and these perceptions are thus referred to as simple. Principles of association Regardless of how boundless it may seem, a person's imagination is confined to the mind's ability to recombine the information it has already acquired from the body's sensory experience (the ideas that have been derived from impressions). In addition, "as our imagination takes our most basic ideas and leads us to form new ones, it is directed by three principles of association, namely, resemblance, contiguity, and cause and effect": The principle of resemblance refers to the tendency of ideas to become associated if the objects they represent resemble one another. For example, someone looking at an illustration of a flower can conceive an idea of the physical flower because the idea of the illustrated object is associated with the physical object's idea. The principle of contiguity describes the tendency of ideas to become associated if the objects they represent are near to each other in time or space, such as when the thought of a crayon in a box leads one to think of the crayon contiguous to it. 
The principle of cause and effect refers to the tendency of ideas to become associated if the objects they represent are causally related, which explains how remembering a broken window can make someone think of a ball that had caused the window to shatter. Hume elaborates more on the last principle, explaining that, when somebody observes that one object or event consistently produces the same object or event, that results in "an expectation that a particular event (a 'cause') will be followed by another event (an 'effect') previously and constantly associated with it". Hume calls this principle custom, or habit, saying that "custom...renders our experience useful to us, and makes us expect, for the future, a similar train of events with those which have appeared in the past". However, even though custom can serve as a guide in life, it still only represents an expectation. In other words: Experience cannot establish a necessary connection between cause and effect, because we can imagine without contradiction a case where the cause does not produce its usual effect…the reason why we mistakenly infer that there is something in the cause that necessarily produces its effect is because our past experiences have habituated us to think in this way. Continuing this idea, Hume argues that "only in the pure realm of ideas, logic, and mathematics, not contingent on the direct sense awareness of reality, [can] causation safely…be applied—all other sciences are reduced to probability". He uses this scepticism to reject metaphysics and many theological views on the basis that they are not grounded in fact and observations, and are therefore beyond the reach of human understanding. Induction and causation The cornerstone of Hume's epistemology is the problem of induction. This may be the area of Hume's thought where his scepticism about human powers of reason is most pronounced. The problem revolves around the plausibility of inductive reasoning, that is, reasoning from the observed behaviour of objects to their behaviour when unobserved. As Hume wrote, induction concerns how things behave when they go "beyond the present testimony of the senses, or the records of our memory". Hume argues that we tend to believe that things behave in a regular manner, meaning that patterns in the behaviour of objects seem to persist into the future, and throughout the unobserved present. Hume's argument is that we cannot rationally justify the claim that nature will continue to be uniform, as justification comes in only two varieties—demonstrative reasoning and probable reasoning—and both of these are inadequate. With regard to demonstrative reasoning, Hume argues that the uniformity principle cannot be demonstrated, as it is "consistent and conceivable" that nature might stop being regular. Turning to probable reasoning, Hume argues that we cannot hold that nature will continue to be uniform because it has been in the past. As this is using the very sort of reasoning (induction) that is under question, it would be circular reasoning. Thus, no form of justification will rationally warrant our inductive inferences. Hume's solution to this problem is to argue that, rather than reason, natural instinct explains the human practice of making inductive inferences. He asserts that "Nature, by an absolute and uncontroulable necessity has determin'd us to judge as well as to breathe and feel." In 1985, and in agreement with Hume, John D. 
Kenyon writes: Reason might manage to raise a doubt about the truth of a conclusion of natural inductive inference just for a moment ... but the sheer agreeableness of animal faith will protect us from excessive caution and sterile suspension of belief. Others, such as Charles Sanders Peirce, have demurred from Hume's solution, while some, such as Kant and Karl Popper, have thought that Hume's analysis has "posed a most fundamental challenge to all human knowledge claims". The notion of causation is closely linked to the problem of induction. According to Hume, we reason inductively by associating constantly conjoined events. It is the mental act of association that is the basis of our concept of causation. At least three interpretations of Hume's theory of causation are represented in the literature: the logical positivist; the sceptical realist; and the quasi-realist. Hume acknowledged that there are events constantly unfolding, and humanity cannot guarantee that these events are caused by prior events or are independent instances. He opposed the widely accepted theory of causation that 'all events have a specific course or reason'. Therefore, Hume crafted his own theory of causation, formed through his empiricist and sceptic beliefs. He split causation into two realms: "All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of Fact." Relations of Ideas are a priori and represent universal bonds between ideas that mark the cornerstones of human thought. Matters of Fact are dependent on the observer and experience. They are often not universally held to be true among multiple persons. Hume was an Empiricist, meaning he believed "causes and effects are discoverable not by reason, but by experience". He goes on to say that, even with the perspective of the past, humanity cannot dictate future events because thoughts of the past are limited, compared to the possibilities for the future. Hume's separation between Matters of Fact and Relations of Ideas is often referred to as "Hume's fork." Hume explains his theory of causation and causal inference by division into three different parts. In these three branches he explains his ideas and compares and contrasts his views to his predecessors. These branches are the Critical Phase, the Constructive Phase, and Belief. In the Critical Phase, Hume denies his predecessors' theories of causation. Next, he uses the Constructive Phase to resolve any doubts the reader may have had while observing the Critical Phase. "Habit or Custom" mends the gaps in reasoning that occur without the human mind even realising it. Associating ideas has become second nature to the human mind. It "makes us expect for the future, a similar train of events with those which have appeared in the past". However, Hume says that this association cannot be trusted because the span of the human mind to comprehend the past is not necessarily applicable to the wide and distant future. This leads him to the third branch of causal inference, Belief. Belief is what drives the human mind to hold that expectancy of the future is based on past experience. Throughout his explanation of causal inference, Hume is arguing that the future is not certain to be repetition of the past and that the only way to justify induction is through uniformity. 
The logical positivist interpretation is that Hume analyses causal propositions, such as "A causes B", in terms of regularities in perception: "A causes B" is equivalent to "Whenever A-type events happen, B-type ones follow", where "whenever" refers to all possible perceptions. In his Treatise of Human Nature, Hume wrote: Power and necessity…are…qualities of perceptions, not of objects…felt by the soul and not perceiv'd externally in bodies. This view is rejected by sceptical realists, who argue that Hume thought that causation amounts to more than just the regular succession of events. Hume said that, when two events are causally conjoined, a necessary connection underpins the conjunction: Shall we rest contented with these two relations of contiguity and succession, as affording a complete idea of causation? By no means…there is a necessary connexion to be taken into consideration. Angela Coventry writes that, for Hume, "there is nothing in any particular instance of cause and effect involving external objects which suggests the idea of power or necessary connection" and "we are ignorant of the powers that operate between objects". However, while denying the possibility of knowing the powers between objects, Hume accepted the causal principle, writing: "I never asserted so absurd a proposition as that something could arise without a cause." It has been argued that, while Hume did not think that causation is reducible to pure regularity, he was not a fully-fledged realist either. Simon Blackburn calls this a quasi-realist reading, saying that "Someone talking of cause is voicing a distinct mental set: he is by no means in the same state as someone merely describing regular sequences." In Hume's words, "nothing is more usual than to apply to external bodies every internal sensation, which they occasion". 'Self' Empiricist philosophers, such as Hume and Berkeley, favoured the bundle theory of personal identity. In this theory, "the mind itself, far from being an independent power, is simply 'a bundle of perceptions' without unity or cohesive quality". The self is nothing but a bundle of experiences linked by the relations of causation and resemblance; or, more accurately, the empirically warranted idea of the self is just the idea of such a bundle. According to Hume: This view is supported by, for example, positivist interpreters, who have seen Hume as suggesting that terms such as "self", "person", or "mind" refer to collections of "sense-contents". A modern-day version of the bundle theory of the mind has been advanced by Derek Parfit in his Reasons and Persons. However, some philosophers have criticised Hume's bundle-theory interpretation of personal identity. They argue that distinct selves can have perceptions that stand in relation to similarity and causality. Thus, perceptions must already come parcelled into distinct "bundles" before they can be associated according to the relations of similarity and causality. In other words, the mind must already possess a unity that cannot be generated, or constituted, by these relations alone. Since the bundle-theory interpretation portrays Hume as answering an ontological question, philosophers like Galen Strawson see Hume as not very concerned with such questions and have queried whether this view is really Hume's. Instead, Strawson suggests that Hume might have been answering an epistemological question about the causal origin of our concept of the self. 
In the Appendix to the Treatise, Hume declares himself dissatisfied with his earlier account of personal identity in Book 1. Corliss Swain notes that "Commentators agree that if Hume did find some new problem" when he reviewed the section on personal identity, "he wasn't forthcoming about its nature in the Appendix." One interpretation of Hume's view of the self, argued for by philosopher and psychologist James Giles, is that Hume is not arguing for a bundle theory, which is a form of reductionism, but rather for an eliminative view of the self. Rather than reducing the self to a bundle of perceptions, Hume rejects the idea of the self altogether. On this interpretation, Hume is proposing a "no-self theory" and thus has much in common with Buddhist thought (see anattā). Psychologist Alison Gopnik has argued that Hume was in a position to learn about Buddhist thought during his time in France in the 1730s. Practical reason Practical reason relates to whether standards or principles exist that are also authoritative for all rational beings, dictating people's intentions and actions. Hume is mainly considered an anti-rationalist, denying the possibility for practical reason, although other philosophers such as Christine Korsgaard, Jean Hampton, and Elijah Millgram claim that Hume is not so much of an anti-rationalist as he is just a sceptic of practical reason. Hume denied the existence of practical reason as a principle because he claimed reason does not have any effect on morality, since morality is capable of producing effects in people that reason alone cannot create. As Hume explains in A Treatise of Human Nature (1740): Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason." Since practical reason is supposed to regulate our actions (in theory), Hume denied practical reason on the grounds that reason cannot directly oppose passions. As Hume puts it, "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." Reason is less significant than any passion because reason has no original influence, while "A passion is an original existence, or, if you will, modification of existence." Practical reason is also concerned with the value of actions rather than the truth of propositions, so Hume believed that reason's shortcoming of affecting morality proved that practical reason could not be authoritative for all rational beings, since morality was essential for dictating people's intentions and actions. Ethics Hume's writings on ethics began in the 1740 Treatise and were refined in his An Enquiry Concerning the Principles of Morals (1751). He understood feeling, rather than knowing, as that which governs ethical actions, stating that "moral decisions are grounded in moral sentiment." Arguing that reason cannot be behind morality, he wrote: Morals excite passions, and produce or prevent actions. Reason itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason. Hume's moral sentimentalism was shared by his close friend Adam Smith, and the two were mutually influenced by the moral reflections of their older contemporary, Francis Hutcheson. Peter Singer claims that Hume's argument that morals cannot have a rational basis alone "would have been enough to earn him a place in the history of ethics." 
Hume also put forward the is–ought problem, later known as Hume's Law, denying the possibility of logically deriving what ought to be from what is. According to the Treatise (1740), in every system of morality that Hume has read, the author begins by stating facts about the world as it is but always ends up suddenly referring to what ought to be the case. Hume demands that a reason should be given for inferring what ought to be the case, from what is the case. This is because it "seems altogether inconceivable, how this new relation can be a deduction from others." Hume's theory of ethics has been influential in modern-day meta-ethical theory, helping to inspire emotivism, and ethical expressivism and non-cognitivism, as well as Allan Gibbard's general theory of moral judgment and judgments of rationality. Aesthetics Hume's ideas about aesthetics and the theory of art are spread throughout his works, but are particularly connected with his ethical writings, and also the essays "Of the Standard of Taste" and "Of Tragedy" (1757). His views are rooted in the work of Joseph Addison and Francis Hutcheson. In the Treatise (1740), he touches on the connection between beauty and deformity and vice and virtue. His later writings on the subject continue to draw parallels of beauty and deformity in art with conduct and character. In "Standard of Taste", Hume argues that no rules can be drawn up about what is a tasteful object. However, a reliable critic of taste can be recognised as objective, sensible and unprejudiced, and as having extensive experience. "Of Tragedy" addresses the question of why humans enjoy tragic drama. Hume was concerned with the way spectators find pleasure in the sorrow and anxiety depicted in a tragedy. He argued that this was because the spectator is aware that he is witnessing a dramatic performance. There is pleasure in realising that the terrible events that are being shown are actually fiction. Furthermore, Hume laid down rules for educating people in taste and correct conduct, and his writings in this area have been very influential on English and Anglo-Saxon aesthetics. Free will, determinism, and responsibility Hume, along with Thomas Hobbes, is cited as a classical compatibilist about the notions of freedom and determinism. Compatibilism seeks to reconcile human freedom with the mechanist view that human beings are part of a deterministic universe, which is completely governed by physical laws. Hume, on this point, was influenced greatly by the scientific revolution, particularly by Sir Isaac Newton. Hume argued that the dispute between freedom and determinism continued over 2000 years due to ambiguous terminology. He wrote: "From this circumstance alone, that a controversy has been long kept on foot…we may presume that there is some ambiguity in the expression," and that different disputants use different meanings for the same terms. Hume defines the concept of necessity as "the uniformity, observable in the operations of nature; where similar objects are constantly conjoined together," and liberty as "a power of acting or not acting, according to the determinations of the will." He then argues that, according to these definitions, not only are the two compatible, but liberty requires necessity. For if our actions were not necessitated in the above sense, they would "have so little in connexion with motives, inclinations and circumstances, that one does not follow with a certain degree of uniformity from the other." 
But if our actions are not thus connected to the will, then our actions can never be free: they would be matters of "chance; which is universally allowed to have no existence." Australian philosopher John Passmore writes that confusion has arisen because "necessity" has been taken to mean "necessary connexion." Once this has been abandoned, Hume argues that "liberty and necessity will be found not to be in conflict one with another." Moreover, Hume goes on to argue that in order to be held morally responsible, it is required that our behaviour be caused or necessitated, for, as he wrote: Actions are, by their very nature, temporary and perishing; and where they proceed not from some cause in the character and disposition of the person who performed them, they can neither redound to his honour, if good; nor infamy, if evil. Hume describes the link between causality and our capacity to make a rational decision as an inference of the mind. Human beings assess a situation based upon certain predetermined events and from that form a choice. Hume believes that this choice is made spontaneously. Hume calls this form of decision making the liberty of spontaneity. Education writer Richard Wright considers that Hume's position rejects a famous moral puzzle attributed to French philosopher Jean Buridan. The Buridan's ass puzzle describes a donkey that is hungry. This donkey has separate bales of hay on both sides, which are at equal distances from him. The problem concerns which bale the donkey chooses. Buridan was said to believe that the donkey would die, because he has no autonomy. The donkey is incapable of forming a rational decision as there is no motive to choose one bale of hay over the other. However, human beings are different, because a human who is placed in a position where he is forced to choose one loaf of bread over another will make a decision to take one in lieu of the other. For Buridan, humans have the capacity for autonomy, and he recognises that the choice that is ultimately made will be based on chance, as both loaves of bread are exactly the same. However, Wright says that Hume completely rejects this notion, arguing that a human will spontaneously act in such a situation because he is faced with impending death if he fails to do so. Such a decision is not made on the basis of chance, but rather on necessity and spontaneity, given the prior predetermined events leading up to the predicament. Hume's argument is supported by modern-day compatibilists such as R. E. Hobart, a pseudonym of philosopher Dickinson S. Miller. However, P. F. Strawson argued that the issue of whether we hold one another morally responsible does not ultimately depend on the truth or falsity of a metaphysical thesis such as determinism. This is because our so holding one another is a non-rational human sentiment that is not predicated on such theses. Religion Philosopher Paul Russell (2005) contends that Hume wrote "on almost every central question in the philosophy of religion", and that these writings "are among the most important and influential contributions on this topic." Touching on the philosophy, psychology, history, and anthropology of religious thought, Hume's 1757 dissertation "The Natural History of Religion" argues that the monotheistic religions of Judaism, Christianity, and Islam all derive from earlier polytheistic religions. He went on to suggest that all religious belief "traces, in the end, to dread of the unknown". 
Hume had also written on religious subjects in the first Enquiry, as well as later in the Dialogues Concerning Natural Religion. Religious views Although he wrote a great deal about religion, Hume's personal views have been the subject of much debate. Some modern critics have described Hume's religious views as agnostic or have described him as a "Pyrrhonian skeptic". Contemporaries considered him to be an atheist, or at least un-Christian, enough so that the Church of Scotland seriously considered bringing charges of infidelity against him. Evidence of his un-Christian beliefs can especially be found in his writings on miracles, in which he attempts to separate historical method from the narrative accounts of miracles. Nevertheless, modern scholars have tended to dismiss the claims of Hume's contemporaries describing him as an atheist as coming from religiously intolerant people who did not understand Hume’s philosophy. The fact that contemporaries suspected him of atheism is exemplified by a story Hume liked to tell: The best theologian he ever met, he used to say, was the old Edinburgh fishwife who, having recognized him as Hume the atheist, refused to pull him out of the bog into which he had fallen until he declared he was a Christian and repeated the Lord's prayer. However, in works such as "Of Superstition and Enthusiasm", Hume specifically seems to support the standard religious views of his time and place. This still meant that he could be very critical of the Catholic Church, dismissing it with the standard Protestant accusations of superstition and idolatry, as well as dismissing as idolatry what his compatriots saw as uncivilised beliefs. He also considered extreme Protestant sects, the members of which he called "enthusiasts", to be corrupters of religion. By contrast, in "The Natural History of Religion", Hume presents arguments suggesting that polytheism had much to commend it over monotheism. Additionally, when mentioning religion as a factor in his History of England, Hume uses it to show the deleterious effect it has on human progress. In his Treatise of Human Nature, Hume wrote: "Generally speaking, the errors in religions are dangerous; those in philosophy only ridiculous." Lou Reich (1998) argues that Hume was a religious naturalist and rejects interpretations of Hume as an atheist. Paul Russell (2008) writes that Hume was plainly sceptical about religious belief, although perhaps not to the extent of complete atheism. He suggests that Hume's position is best characterised by the term "irreligion," while philosopher David O'Connor (2013) argues that Hume's final position was "weakly deistic". For O'Connor, Hume's "position is deeply ironic. This is because, while inclining towards a weak form of deism, he seriously doubts that we can ever find a sufficiently favourable balance of evidence to justify accepting any religious position." He adds that Hume "did not believe in the God of standard theism ... but he did not rule out all concepts of deity", and that "ambiguity suited his purposes, and this creates difficulty in definitively pinning down his final position on religion". Design argument One of the traditional topics of natural theology is that of the existence of God, and one of the a posteriori arguments for this is the argument from design or the teleological argument. 
The argument is that the existence of God can be proved by the design that is obvious in the complexity of the world, which Encyclopædia Britannica states is "the most popular", because it is: ...the most accessible of the theistic arguments ... which identifies evidences of design in nature, inferring from them a divine designer ... The fact that the universe as a whole is a coherent and efficiently functioning system likewise, in this view, indicates a divine intelligence behind it. In An Enquiry Concerning Human Understanding, Hume wrote that the design argument seems to depend upon our experience, and its proponents "always suppose the universe, an effect quite singular and unparalleled, to be the proof of a Deity, a cause no less singular and unparalleled". Philosopher Louise E. Loeb (2010) notes that Hume is saying that only experience and observation can be our guide to making inferences about the conjunction between events. However, according to Hume: We observe neither God nor other universes, and hence no conjunction involving them. There is no observed conjunction to ground an inference either to extended objects or to God, as unobserved causes. Hume also criticised the argument in his Dialogues Concerning Natural Religion (1779). Hume proposes a finite universe with a finite number of particles. Given infinite time, these particles could randomly fall into any arrangement, including our seemingly designed world. A century later, the idea of order without design was rendered more plausible by Charles Darwin's discovery that the adaptations of the forms of life result from the natural selection of inherited characteristics. For philosopher James D. Madden, it is "Hume, rivaled only by Darwin, [who] has done the most to undermine in principle our confidence in arguments from design among all figures in the Western intellectual tradition". Finally, Hume discussed a version of the anthropic principle, which is the idea that theories of the universe are constrained by the need to allow for man's existence in it as an observer. Hume has his sceptical mouthpiece Philo suggest that there may have been many worlds, produced by an incompetent designer, whom he called a "stupid mechanic". In his Dialogues Concerning Natural Religion, Hume wrote: Many worlds might have been botched and bungled throughout an eternity, ere this system was struck out: much labour lost: many fruitless trials made: and a slow, but continued improvement carried on during infinite ages in the art of world-making. American philosopher Daniel Dennett has suggested that this mechanical explanation of teleology, although "obviously ... an amusing philosophical fantasy", anticipated the notion of natural selection, the 'continued improvement' being like "any Darwinian selection algorithm". Problem of miracles In his discussion of miracles, Hume argues that we should not believe miracles have occurred and that they do not therefore provide us with any reason to think God exists. In An Enquiry Concerning Human Understanding (Section 10), Hume defines a miracle as "a transgression of a law of nature by a particular volition of the Deity, or by the interposition of some invisible agent". Hume says we believe an event that has frequently occurred is likely to occur again, but we also take into account those instances where the event did not occur: A wise man ... considers which side is supported by the greater number of experiments. ... 
A hundred instances or experiments on one side, and fifty on another, afford a doubtful expectation of any event; though a hundred uniform experiments, with only one that is contradictory, reasonably beget a pretty strong degree of assurance. In all cases, we must balance the opposite experiments ... and deduct the smaller number from the greater, in order to know the exact force of the superior evidence. Hume discusses the testimony of those who report miracles. He wrote that testimony might be doubted even from some great authority in case the facts themselves are not credible: "[T]he evidence, resulting from the testimony, admits of a diminution, greater or less, in proportion as the fact is more or less unusual." Although Hume leaves open the possibility for miracles to occur and be reported, he offers various arguments against this ever having happened in history. He points out that people often lie, and they have good reasons to lie about miracles occurring either because they believe they are doing so for the benefit of their religion or because of the fame that results. Furthermore, people by nature enjoy relating miracles they have heard without caring for their veracity and thus miracles are easily transmitted even when false. Also, Hume notes that miracles seem to occur mostly in "ignorant and barbarous nations" and times, and the reason they do not occur in the civilised societies is such societies are not awed by what they know to be natural events. Hume recognizes that over a long period of time, various coincidences can provide the appearance of intention. Finally, the miracles of each religion argue against all other religions and their miracles, and so even if a proportion of all reported miracles across the world fit Hume's requirement for belief, the miracles of each religion make the other less likely. Hume was extremely pleased with his argument against miracles in his Enquiry. He states, "I flatter myself, that I have discovered an argument of a like nature, which, if just, will, with the wise and learned, be an everlasting check to all kinds of superstitious delusion, and consequently, will be useful as long as the world endures." Thus, Hume's argument against miracles had a more abstract basis founded upon the scrutiny, not just primarily of miracles, but of all forms of belief systems. It is a commonsense notion of veracity based upon epistemological evidence, and founded on a principle of rationality, proportionality and reasonability. The criterion for assessing Hume's belief system is based on the balance of probability whether something is more likely than not to have occurred. Since the weight of empirical experience contradicts the notion for the existence of miracles, such accounts should be treated with scepticism. Further, the myriad of accounts of miracles contradict one another, as some people who receive miracles will aim to prove the authority of Jesus, whereas others will aim to prove the authority of Muhammad or some other religious prophet or deity. These various differing accounts weaken the overall evidential power of miracles. Despite all this, Hume observes that belief in miracles is popular, and that "the gazing populace… receive greedily, without examination, whatever soothes superstition, and promotes wonder." Critics have argued that Hume's position assumes the character of miracles and natural laws prior to any specific examination of miracle claims, thus it amounts to a subtle form of begging the question. 
To assume that testimony is a homogeneous reference group seems unwise- to compare private miracles with public miracles, unintellectual observers with intellectual observers and those who have little to gain and much to lose with those with much to gain and little to lose is not convincing to many. Indeed, many have argued that miracles not only do not contradict the laws of nature but require the laws of nature to be intelligible as miraculous, and thus subverting the law of nature. For example, William Adams remarks that "there must be an ordinary course of nature before anything can be extraordinary. There must be a stream before anything can be interrupted." They have also noted that it requires an appeal to inductive inference, as none have observed every part of nature nor examined every possible miracle claim, for instance those in the future. This, in Hume's philosophy, was especially problematic. Little appreciated is the voluminous literature either foreshadowing Hume, in the likes of Thomas Sherlock or directly responding to and engaging with Hume—from William Paley, William Adams, John Douglas, John Leland, and George Campbell, among others. Regarding the latter, it is rumoured that, having read Campbell's Dissertation, Hume remarked that "the Scotch theologue had beaten him." Hume's main argument concerning miracles is that miracles by definition are singular events that differ from the established laws of nature. Such natural laws are codified as a result of past experiences. Therefore, a miracle is a violation of all prior experience and thus incapable on this basis of reasonable belief. However, the probability that something has occurred in contradiction of all past experience should always be judged to be less than the probability that either one's senses have deceived one, or the person recounting the miraculous occurrence is lying or mistaken, Hume would say, all of which he had past experience of. For Hume, this refusal to grant credence does not guarantee correctness. He offers the example of an Indian Prince, who, having grown up in a hot country, refuses to believe that water has frozen. By Hume's lights, this refusal is not wrong and the prince "reasoned justly;" it is presumably only when he has had extensive experience of the freezing of water that he has warrant to believe that the event could occur. So, for Hume, either the miraculous event will become a recurrent event or else it will never be rational to believe it occurred. The connection to religious belief is left unexplained throughout, except for the close of his discussion where Hume notes the reliance of Christianity upon testimony of miraculous occurrences. He makes an ironic remark that anyone who "is moved by faith to assent" to revealed testimony "is conscious of a continued miracle in his own person, which subverts all principles of his understanding, and gives him a determination to believe what is most contrary to custom and experience." Hume writes that "All the testimony whichever was really given for any miracle, or ever will be given, is a subject of derision." As a historian of England From 1754 to 1762 Hume published The History of England, a six-volume work, that extends (according to its subtitle) "From the Invasion of Julius Caesar to the Revolution in 1688." Inspired by Voltaire's sense of the breadth of history, Hume widened the focus of the field away from merely kings, parliaments, and armies, to literature and science as well. 
He argued that the quest for liberty was the highest standard for judging the past, and concluded that after considerable fluctuation, England at the time of his writing had achieved "the most entire system of liberty that was ever known amongst mankind". It "must be regarded as an event of cultural importance. In its own day, moreover, it was an innovation, soaring high above its very few predecessors." Hume's History of England made him famous as a historian before he was ever considered a serious philosopher. In this work, Hume uses history to tell the story of the rise of England and what led to its greatness and the disastrous effects that religion has had on its progress. For Hume, the history of England's rise may give a template for others who would also like to rise to its current greatness. Hume's The History of England was profoundly impacted by his Scottish background. The science of sociology, which is rooted in Scottish thinking of the eighteenth century, had never before been applied to British philosophical history. Because of his Scottish background, Hume was able to bring an outsider's lens to English history that the insulated English whigs lacked. Hume's coverage of the political upheavals of the 17th century relied in large part on the Earl of Clarendon's History of the Rebellion and Civil Wars in England (1646–69). Generally, Hume took a moderate royalist position and considered revolution unnecessary to achieve necessary reform. Hume was considered a Tory historian and emphasised religious differences more than constitutional issues. Laird Okie explains that "Hume preached the virtues of political moderation, but ... it was moderation with an anti-Whig, pro-royalist coloring." For "Hume shared the ... Tory belief that the Stuarts were no more high-handed than their Tudor predecessors". "Even though Hume wrote with an anti-Whig animus, it is, paradoxically, correct to regard the History as an establishment work, one which implicitly endorsed the ruling oligarchy". Historians have debated whether Hume posited a universal unchanging human nature, or allowed for evolution and development. The debate between Tory and the Whig historians can be seen in the initial reception to Hume's History of England. The whig-dominated world of 1754 overwhelmingly disapproved of Hume's take on English history. In later editions of the book, Hume worked to "soften or expunge many villainous whig strokes which had crept into it." Hume did not consider himself a pure Tory. Before 1745, he was more akin to an "independent whig." In 1748, he described himself as "a whig, though a very skeptical one." This description of himself as in between whiggism and toryism, helps one understand that his History of England should be read as his attempt to work out his own philosophy of history. Robert Roth argues that Hume's histories display his biases against Presbyterians and Puritans. Roth says his anti-Whig pro-monarchy position diminished the influence of his work, and that his emphasis on politics and religion led to a neglect of social and economic history. Hume was an early cultural historian of science. His short biographies of leading scientists explored the process of scientific change. He developed new ways of seeing scientists in the context of their times by looking at how they interacted with society and each other. He covers over forty scientists, with special attention paid to Francis Bacon, Robert Boyle, and Isaac Newton. 
Hume particularly praised William Harvey, writing about his treatise of the circulation of the blood: "Harvey is entitled to the glory of having made, by reasoning alone, without any mixture of accident, a capital discovery in one of the most important branches of science." The History became a best-seller and made Hume a wealthy man who no longer had to take up salaried work for others. It was influential for nearly a century, despite competition from imitations by Smollett (1757), Goldsmith (1771) and others. By 1894, there were at least 50 editions as well as abridgements for students, and illustrated pocket editions, probably produced specifically for women. Political theory Many of Hume's political ideas, such as limited government, private property when there is scarcity, and constitutionalism, are first principles of liberalism. Thomas Jefferson banned the History from University of Virginia, feeling that it had "spread universal toryism over the land." By comparison, Samuel Johnson thought Hume to be "a Tory by chance [...] for he has no principle. If he is anything, he is a Hobbist." A major concern of Hume's political philosophy is the importance of the rule of law. He also stresses throughout his political essays the importance of moderation in politics, public spirit, and regard to the community. Throughout the period of the American Revolution, Hume had varying views. For instance, in 1768 he encouraged total revolt on the part of the Americans. In 1775, he became certain that a revolution would take place and said that he believed in the American principle and wished the British government would let them be. Hume's influence on some of the Founders can be seen in Benjamin Franklin's suggestion at the Philadelphia Convention of 1787 that no high office in any branch of government should receive a salary, which is a suggestion Hume had made in his emendation of James Harrington's Oceana. The legacy of religious civil war in 18th-century Scotland, combined with the relatively recent memory of the 1715 and 1745 Jacobite risings, had fostered in Hume a distaste for enthusiasm and factionalism. These appeared to him to threaten the fragile and nascent political and social stability of a country that was deeply politically and religiously divided. Hume thought that society is best governed by a general and impartial system of laws; he is less concerned about the form of government that administers these laws, so long as it does so fairly. However, he also clarified that a republic must produce laws, while "monarchy, when absolute, contains even something repugnant to law." Hume expressed suspicion of attempts to reform society in ways that departed from long-established custom, and he counselled peoples not to resist their governments except in cases of the most egregious tyranny. However, he resisted aligning himself with either of Britain's two political parties, the Whigs and the Tories, explaining that "my views of things are more conformable to Whig principles; my representations of persons to Tory prejudices". The scholar Jerry Z. Muller argues that Hume's political thoughts have characteristics that later became typical for American and British conservatism, which contain more positive views of capitalism than conservatism does elsewhere. Canadian philosopher Neil McArthur writes that Hume believed that we should try to balance our demands for liberty with the need for strong authority, without sacrificing either. 
McArthur characterises Hume as a "precautionary conservative," whose actions would have been "determined by prudential concerns about the consequences of change, which often demand we ignore our own principles about what is ideal or even legitimate." Hume supported the liberty of the press, and was sympathetic to democracy, when suitably constrained. American historian Douglass Adair has argued that Hume was a major inspiration for James Madison's writings, and the essay "Federalist No. 10" in particular. Hume offered his view on the best type of society in an essay titled "Idea of a Perfect Commonwealth", which lays out what he thought was the best form of government. He hoped that "in some future age, an opportunity might be afforded of reducing the theory to practice, either by a dissolution of some old government, or by the combination of men to form a new one, in some distant part of the world". He defended a strict separation of powers, decentralisation, extending the franchise to anyone who held property of value and limiting the power of the clergy. The system of the Swiss militia was proposed as the best form of protection. Elections were to take place on an annual basis and representatives were to be unpaid. Political philosophers Leo Strauss and Joseph Cropsey, writing of Hume's thoughts about "the wise statesman", note that he "will bear a reverence to what carries the marks of age." Also, if he wishes to improve a constitution, his innovations will take account of the "ancient fabric", in order not to disturb society. In the political analysis of philosopher George Holland Sabine, the scepticism of Hume extended to the doctrine of government by consent. He notes that "allegiance is a habit enforced by education and consequently as much a part of human nature as any other motive." In the 1770s, Hume was critical of British policies toward the American colonies and advocated for American independence. He wrote in 1771 that "our union with America…in the nature of things, cannot long subsist." Contributions to economic thought Hume expressed his economic views in his Political Discourses, which were incorporated in Essays and Treatises as Part II of Essays, Moral and Political. To what extent he was influenced by Adam Smith is difficult to assess; however, both of them had similar principles supported from historical events. At the same time Hume did not demonstrate concrete system of economic theory which could be observed in Smith's Wealth of Nations. However, he introduced several new ideas around which the "classical economics" of the 18th century was built. Through his discussions on politics, Hume developed many ideas that are prevalent in the field of economics. This includes ideas on private property, inflation, and foreign trade. Referring to his essay "Of the Balance of Trade", economist Paul Krugman (2012) has remarked that "David Hume created what I consider the first true economic model." In contrast to Locke, Hume believes that private property is not a natural right. Hume argues it is justified, because resources are limited. Private property would be an unjustified, "idle ceremonial," if all goods were unlimited and available freely. Hume also believed in an unequal distribution of property, because perfect equality would destroy the ideas of thrift and industry. Perfect equality would thus lead to impoverishment. David Hume anticipated modern monetarism. First, Hume contributed to the theory of quantity and of interest rate. 
Hume has been credited with being the first to prove that, on an abstract level, there is no quantifiable amount of nominal money that a country needs to thrive. He understood that there was a difference between nominal and real money. Second, Hume has a theory of causation which fits in with the Chicago-school "black box" approach. According to Hume, cause and effect are related only through correlation. Hume shared the belief with modern monetarists that changes in the supply of money can affect consumption and investment. Lastly, Hume was a vocal advocate of a stable private sector, though also having some non-monetarist aspects to his economic philosophy. Having a stated preference for rising prices, for instance, Hume considered government debt to be a sort of substitute for actual money, referring to such debt as "a kind of paper credit." He also believed in heavy taxation, believing that it increases effort. Hume's economic approach evidently resembles his other philosophies, in that he does not choose one side indefinitely, but sees gray in the situation Legacy Due to Hume's vast influence on contemporary philosophy, a large number of approaches in contemporary philosophy and cognitive science are today called "Humean." The writings of Thomas Reid, a Scottish philosopher and contemporary of Hume, were often critical of Hume's scepticism. Reid formulated his common sense philosophy, in part, as a reaction against Hume's views. Hume influenced, and was influenced by, the Christian philosopher Joseph Butler. Hume was impressed by Butler's way of thinking about religion, and Butler may well have been influenced by Hume's writings. Attention to Hume's philosophical works grew after the German philosopher Immanuel Kant, in his Prolegomena to Any Future Metaphysics (1783), credited Hume with awakening him from his "dogmatic slumber." According to Arthur Schopenhauer, "there is more to be learned from each page of David Hume than from the collected philosophical works of Hegel, Herbart and Schleiermacher taken together." A. J. Ayer, while introducing his classic exposition of logical positivism in 1936, claimed that his views were "the logical outcome of the empiricism of Berkeley and David Hume". Albert Einstein, in 1915, wrote that he was inspired by Hume's positivism when formulating his theory of special relativity. Hume's problem of induction was also of fundamental importance to the philosophy of Karl Popper. In his autobiography, Unended Quest, he wrote: "Knowledge ... is objective; and it is hypothetical or conjectural. This way of looking at the problem made it possible for me to reformulate Hume's problem of induction." This insight resulted in Popper's major work The Logic of Scientific Discovery. In his Conjectures and Refutations, he wrote that he "approached the problem of induction through Hume", since Hume was "perfectly right in pointing out that induction cannot be logically justified". Hume's rationalism in religious subjects influenced, via German-Scottish theologian Johann Joachim Spalding, the German neology school and rational theology, and contributed to the transformation of German theology in the Age of Enlightenment. Hume pioneered a comparative history of religion, tried to explain various rites and traditions as being based on deception and challenged various aspects of rational and natural theology, such as the argument from design. 
Danish theologian and philosopher Søren Kierkegaard adopted "Hume's suggestion that the role of reason is not to make us wise but to reveal our ignorance," though taking it as a reason for the necessity of religious faith, or fideism. The "fact that Christianity is contrary to reason…is the necessary precondition for true faith." Political theorist Isaiah Berlin, who has also pointed out the similarities between the arguments of Hume and Kierkegaard against rational theology, has written about Hume's influence on what Berlin calls the counter-Enlightenment and on German anti-rationalism. Berlin has also once said of Hume that "no man has influenced the history of philosophy to a deeper or more disturbing degree." In 2003, philosopher Jerry Fodor described Hume's Treatise as "the founding document of cognitive science." Hume engaged with contemporary intellectuals including Jean-Jacques Rousseau, James Boswell, and Adam Smith (who acknowledged Hume's influence on his economics and political philosophy). Morris and Brown (2019) write that Hume is "generally regarded as one of the most important philosophers to write in English." In September 2020, the David Hume Tower, a University of Edinburgh building, was renamed to 40 George Square; this was following a campaign led by students of the university to rename it, in objection to Hume's writings related to race. Works 1734. A Kind of History of My Life. – MSS 23159 National Library of Scotland. A letter to an unnamed physician, asking for advice about "the Disease of the Learned" that then afflicted him. Here he reports that at the age of eighteen "there seem'd to be open'd up to me a new Scene of Thought" that made him "throw up every other Pleasure or Business" and turned him to scholarship. 1739–1740. A Treatise of Human Nature: Being an Attempt to introduce the experimental Method of Reasoning into Moral Subjects. Hume intended to see whether the Treatise of Human Nature met with success, and if so, to complete it with books devoted to Politics and Criticism. However, as Hume explained, "It fell dead-born from the press, without reaching such distinction as even to excite a murmur among the zealots" and so his further project was not completed. 1740. An Abstract of a Book lately Published: Entitled A Treatise of Human Nature etc. Anonymously published, but almost certainly written by Hume in an attempt to popularise his Treatise. This work is of considerable philosophical interest as it spells out what Hume considered "The Chief Argument" of the Treatise, in a way that seems to anticipate the structure of the Enquiry concerning Human Understanding. 1741. Essays, Moral, Political, and Literary (2nd ed.) A collection of pieces written and published over many years, though most were collected together in 1753–54. Many of the essays are on politics and economics; other topics include aesthetic judgement, love, marriage and polygamy, and the demographics of ancient Greece and Rome. The Essays show some influence from Addison's Tatler and The Spectator, which Hume read avidly in his youth. 1745. A Letter from a Gentleman to His Friend in Edinburgh: Containing Some Observations on a Specimen of the Principles concerning Religion and Morality, said to be maintain'd in a Book lately publish'd, intituled A Treatise of Human Nature etc. Contains a letter written by Hume to defend himself against charges of atheism and scepticism, while applying for a chair at Edinburgh University. 1742. "Of Essay Writing." 1748. 
An Enquiry Concerning Human Understanding. Contains reworking of the main points of the Treatise, Book 1, with the addition of material on free will (adapted from Book 2), miracles, the Design Argument, and mitigated scepticism. Of Miracles, section X of the Enquiry, was often published separately. 1751. An Enquiry Concerning the Principles of Morals. A reworking of material on morality from Book 3 of the Treatise, but with a significantly different emphasis. It "was thought by Hume to be the best of his writings." 1752. Political Discourses (part II of Essays, Moral, Political, and Literary within the larger Essays and Treatises on Several Subjects, vol. 1). Included in Essays and Treatises on Several Subjects (1753–56) reprinted 1758–77. 1752–1758. Political Discourses/Discours politiques 1757. Four Dissertations – includes 4 essays: "The Natural History of Religion" "Of the Passions" "Of Tragedy" "Of the Standard of Taste" 1754–1762. The History of England – sometimes referred to as The History of Great Britain. More a category of books than a single work, Hume's history spanned "from the invasion of Julius Caesar to the Revolution of 1688" and went through over 100 editions. Many considered it the standard history of England in its day. 1760. "Sister Peg" Hume claimed to have authored an anonymous political pamphlet satirizing the failure of the British Parliament to create a Scottish militia in 1760. Although the authorship of the work is disputed, Hume wrote Alexander Carlyle in early 1761 claiming authorship. The readership of the time attributed the work to Adam Ferguson, a friend and associate of Hume's who has been sometimes called "the founder of modern sociology." Some contemporary scholars concur in the judgment that Ferguson, not Hume, was the author of this work. 1776. "My Own Life." Penned in April, shortly before his death, this autobiography was intended for inclusion in a new edition of Essays and Treatises on Several Subjects. It was first published by Adam Smith, who claimed that by doing so he had incurred "ten times more abuse than the very violent attack I had made upon the whole commercial system of Great Britain." 1777. "Essays on Suicide and the Immortality of the Soul." 1779. Dialogues Concerning Natural Religion. Published posthumously by his nephew, David Hume the Younger. Being a discussion among three fictional characters concerning the nature of God, and is an important portrayal of the argument from design. Despite some controversy, most scholars agree that the view of Philo, the most sceptical of the three, comes closest to Hume's own. See also Age of Enlightenment George Anderson Human science Hume Studies Hume's principle Humeanism Mencius Scientific scepticism The Missing Shade of Blue References Notes Citations Bibliography Anderson, R. F. (1966). Hume's First Principles, University of Nebraska Press, Lincoln. Bongie, L. L. (1998). David Hume – Prophet of the Counter-Revolution. Liberty Fund, Indianapolis Broackes, Justin (1995). Hume, David, in Ted Honderich (ed.) The Oxford Companion to Philosophy, New York, Oxford University Press Daiches D., Jones P., Jones J. (eds). The Scottish Enlightenment: 1730–1790 A Hotbed of Genius The University of Edinburgh, 1986. In paperback, The Saltire Society, 1996 Einstein, A. (1915) Letter to Moritz Schlick, Schwarzschild, B. (trans. & ed.) in The Collected Papers of Albert Einstein, vol. 8A, R. Schulmann, A. J. Fox, J. Illy, (eds.) Princeton University Press, Princeton, NJ (1998), p. 220. Flew, A. (1986). 
David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford. Fogelin, R. J. (1993). Hume's scepticism. In Norton, D. F. (ed.) (1993). The Cambridge Companion to Hume, Cambridge University Press, pp. 90–116. Graham, R. (2004). The Great Infidel – A Life of David Hume. John Donald, Edinburgh. Harwood, Sterling (1996). "Moral Sensibility Theories", in The Encyclopedia of Philosophy (Supplement) (New York: Macmillan Publishing Co.). Hume, D. (1751). An Enquiry Concerning the Principles of Morals. David Hume, Essays Moral, Political, and Literary edited with preliminary dissertations and notes by T.H. Green and T.H. Grose, 1:1–8. London: Longmans, Green 1907. Hume, D. (1752–1758). Political Discourses:Bilingual English-French (translated by Fabien Grandjean). Mauvezin, France, Trans-Europ-Repress, 1993, 22 cm, V-260 p. Bibliographic notes, index. Husserl, E. (1970). The Crisis of European Sciences and Transcendental Phenomenology, Carr, D. (trans.), Northwestern University Press, Evanston. Klibansky, Raymond and Mossner, Ernest C. (eds.) (1954). New Letters of David Hume. Oxford: Oxford University Press. Kolakowski, L. (1968). The Alienation of Reason: A History of Positivist Thought. Doubleday: Garden City. Penelhum, T. (1993). Hume's moral philosophy. In Norton, D. F. (ed.), (1993). The Cambridge Companion to Hume, Cambridge University Press, pp. 117–147. Phillipson, N. (1989). Hume, Weidenfeld & Nicolson, London. Popkin, Richard H. (1993) "Sources of Knowledge of Sextus Empiricus in Hume's Time" Journal of the History of Ideas, Vol. 54, No. 1. (Jan. 1993), pp. 137–141. Popkin, R. & Stroll, A. (1993) Philosophy. Reed Educational and Professional Publishing Ltd, Oxford. Popper. K. (1960). Knowledge without authority. In Miller D. (ed.), (1983). Popper, Oxford, Fontana, pp. 46–57. Robbins, Lionel (1998). A History of Economic Thought: The LSE Lectures. Edited by Steven G. Medema and Warren J. Samuels. Princeton University Press, Princeton, NJ. Robinson, Dave & Groves, Judy (2003). Introducing Political Philosophy. Icon Books. . Russell, B. (1946). A History of Western Philosophy. London, Allen and Unwin. Russell, Paul, "Hume on Free Will", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), online. Sgarbi, M. (2012). "Hume's Source of the 'Impression-Idea' Distinction", Anales del Seminario de Historia de la Filosofía, 2: 561–576 Spencer, Mark G., ed. David Hume: Historical Thinker, Historical Writer (Penn State University Press; 2013) 282 pages; Interdisciplinary essays that consider his intertwined work as historian and philosopher Spiegel, Henry William, (1991). The Growth of Economic Thought, 3rd Ed., Durham: Duke University Press. Stroud, B. (1977). Hume, Routledge: London & New York. Taylor, A. E. (1927). David Hume and the Miraculous, Leslie Stephen Lecture. Cambridge, pp. 53–54. reprinted in his Philosophical Studies (1934) Further reading Ardal, Pall (1966). Passion and Value in Hume's Treatise, Edinburgh, Edinburgh University Press. Bailey, Alan & O'Brien, Dan (eds.) (2012). The Continuum Companion to Hume, New York: Continuum. Bailey, Alan & O'Brien, Dan. (2014). Hume's Critique of Religion: Sick Men's Dreams, Dordrecht: Springer. Beauchamp, Tom & Rosenberg, Alexander (1981). Hume and the Problem of Causation, New York, Oxford University Press. Beveridge, Craig (1982), review of The Life of David Hume by Ernest Campbell Mossner, in Murray, Glen (ed.), Cencrastus No. 8, Spring 1982, p. 46, Campbell Mossner, Ernest (1980). 
The Life of David Hume, Oxford University Press. Gilles Deleuze (1953). Empirisme et subjectivité. Essai sur la Nature Humaine selon Hume, Paris: Presses Universitaires de France; trans. Empiricism and Subjectivity, New York: Columbia University Press, 1991. Demeter, Tamás (2014). "Natural Theology as Superstition: Hume and the Changing Ideology of Moral Inquiry." In Demeter, T. et al. (eds.), Conflicting Values of Inquiry, Leiden: Brill. Garrett, Don (1996). Cognition and Commitment in Hume's Philosophy. New York & Oxford: Oxford University Press. Gaskin, J.C.A. (1978). Hume's Philosophy of Religion. Humanities Press International. Harris, James A. (2015). Hume: An Intellectual Biography. Cambridge: Cambridge University Press. Hesselberg, A. Kenneth (1961). Hume, Natural Law and Justice. Duquesne Review, Spring 1961, pp. 46–47. Kail, P. J. E. (2007) Projection and Realism in Hume's Philosophy, Oxford: Oxford University Press. Kemp Smith, Norman (1941). The Philosophy of David Hume. London: Macmillan. Norton, David Fate (1982). David Hume: Common-Sense Moralist, Sceptical Metaphysician. Princeton: Princeton University Press. Norton, David Fate & Taylor, Jacqueline (eds.) (2009). The Cambridge Companion to Hume, Cambridge: Cambridge University Press. Radcliffe, Elizabeth S. (ed.) (2008). A Companion to Hume, Malden: Blackwell. Rosen, Frederick (2003). Classical Utilitarianism from Hume to Mill (Routledge Studies in Ethics & Moral Theory). Russell, Paul (1995). Freedom and Moral Sentiment: Hume's Way of Naturalizing Responsibility. New York & Oxford: Oxford University Press. Russell, Paul (2008). The Riddle of Hume's Treatise: Skepticism, Naturalism and Irreligion. New York & Oxford: Oxford University Press. Stroud, Barry (1977). Hume, London & New York: Routledge. (Complete study of Hume's work parting from the interpretation of Hume's naturalistic philosophical programme). Wei, Jua (2017). Commerce and Politics in Hume’s History of England, Woodbridge: Boydell and Brewer online review Willis, Andre C (2015). Toward a Humean True Religion: Genuine Theism, Moderate Hope, and Practical Morality, University Park: Penn State University Press. Wilson, Fred (2008). The External World and Our Knowledge of It : Hume's critical realism, an exposition and a defence, Toronto: University of Toronto Press. External links The David Hume Collection at McGill University Library Books by David Hume at the Online Books Page Hume Texts Online searchable texts, with related resources Peter Millican. Papers and Talks on Hume Peter Millican. Research Translations of philosophical classics into contemporary English, from English, Latin, French and German. 
David Hume: My Own Life and Adam Smith: obituary of Hume Bibliography of Hume's influence on Utilitarianism The Hume Society, publishes Hume Studies and holds conferences 1711 births 1776 deaths 18th-century Scottish male writers 18th-century British philosophers 18th-century British diplomats 18th-century British economists 18th-century British essayists 18th-century Scottish educators 18th-century Scottish historians Action theorists Alumni of the University of Edinburgh British diplomats British male essayists British male non-fiction writers British sceptics Burials at Old Calton Burial Ground Civil servants from Edinburgh British consciousness researchers and theorists Conservatism Criticism of rationalism British critics of religions Critics of the Catholic Church Deist philosophers Diplomats from Edinburgh Empiricists Enlightenment philosophers Epistemologists Freethought writers Historians of England History of economic thought Members of the Philosophical Society of Edinburgh Metaphilosophers Ontologists People of the Scottish Enlightenment Philosophers from Edinburgh Philosophers of art Philosophers of economics British philosophers of education Philosophers of history Philosophers of identity Philosophers of logic Philosophers of mathematics Philosophers of mind Philosophers of psychology Philosophers of religion Philosophers of science Philosophers of social science Philosophy writers Preclassical economists Scottish economists Scottish educational theorists Scottish ethicists Scottish deists Scottish diplomats Scottish essayists Scottish humanists Scottish libertarians Scottish librarians Scottish logicians Scottish monarchists Scottish philosophers Scottish political philosophers Secular humanists Skeptic philosophers Social philosophers Theorists on Western civilization Virtue ethicists Writers about activism and social change Writers about religion and science Writers from Edinburgh
David Hume
[ "Mathematics" ]
18,368
[]
7,938
https://en.wikipedia.org/wiki/Diatomic%20molecule
Diatomic molecules are molecules composed of only two atoms, of the same or different chemical elements. If a diatomic molecule consists of two atoms of the same element, such as hydrogen (H2) or oxygen (O2), then it is said to be homonuclear. Otherwise, if a diatomic molecule consists of two different atoms, such as carbon monoxide (CO) or nitric oxide (NO), the molecule is said to be heteronuclear. The bond in a homonuclear diatomic molecule is non-polar. The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) (or at typical laboratory conditions of 1 bar and 25 °C) are the gases hydrogen (H2), nitrogen (N2), oxygen (O2), fluorine (F2), and chlorine (Cl2), and the liquid bromine (Br2). The noble gases (helium, neon, argon, krypton, xenon, and radon) are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called "elemental gases" or "molecular gases", to distinguish them from other gases that are chemical compounds. At slightly elevated temperatures, the halogens bromine (Br2) and iodine (I2) also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine and tennessine, which are uncertain. Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled. Heating ("cracking") elemental phosphorus gives diphosphorus (P2). Sulfur vapor is mostly disulfur (S2). Dilithium (Li2) and disodium (Na2) are known in the gas phase. Ditungsten (W2) and dimolybdenum (Mo2) form with sextuple bonds in the gas phase. Dirubidium (Rb2) is diatomic. Heteronuclear molecules All other diatomic molecules are chemical compounds of two different elements. Many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure. Examples are gases carbon monoxide (CO), nitric oxide (NO), and hydrogen chloride (HCl). Many 1:1 binary compounds are not normally considered diatomic because they are polymeric at room temperature, but they form diatomic molecules when evaporated, for example gaseous MgO, SiO, and many others. Occurrence Hundreds of diatomic molecules have been identified in the environment of the Earth, in the laboratory, and in interstellar space. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules: nitrogen (78%) and oxygen (21%). The natural abundance of hydrogen (H2) in the Earth's atmosphere is only of the order of parts per million, but H2 is the most abundant diatomic molecule in the universe. The interstellar medium is dominated by hydrogen atoms. Molecular geometry All diatomic molecules are linear and characterized by a single parameter which is the bond length or distance between the two atoms. Diatomic nitrogen has a triple bond, diatomic oxygen has a double bond, and diatomic hydrogen, fluorine, chlorine, iodine, and bromine all have single bonds. Historical significance Diatomic elements played an important role in the elucidation of the concepts of element, atom, and molecule in the 19th century, because some of the most common elements, such as hydrogen, oxygen, and nitrogen, occur as diatomic molecules. John Dalton's original atomic hypothesis assumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. 
As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century. As early as 1805, Gay-Lussac and von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen, and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules. However, these results were mostly ignored until 1860, partly due to the belief that atoms of one element would have no chemical affinity toward atoms of the same element, and also partly due to apparent exceptions to Avogadro's law that were not explained until later in terms of dissociating molecules. At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev and Lothar Meyer. Excited electronic states Diatomic molecules are normally in their lowest or ground state, which conventionally is also known as the X state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states, as occurs, for example, in the natural aurora; high-altitude nuclear explosions; and rocket-borne electron gun experiments. Such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state. Over various short time scales after the excitation (typically a fraction of a second, or sometimes longer than a second if the excited state is metastable), transitions occur from higher to lower electronic states and ultimately to the ground state, and in each transition a photon is emitted. This emission is known as fluorescence. Successively higher electronic states are conventionally named A, B, C, etc. (but this convention is not always followed, and sometimes lower case letters and alphabetically out-of-sequence letters are used, as in the example given below). The excitation energy must be greater than or equal to the energy of the electronic state in order for the excitation to occur. In quantum theory, an electronic state of a diatomic molecule is represented by a molecular term symbol of the form 2S+1Λ(v), where S is the total electronic spin quantum number, Λ is the total electronic angular momentum quantum number along the internuclear axis, and v is the vibrational quantum number; the spin multiplicity 2S+1 is written as a superscript preceding Λ. Λ takes on values 0, 1, 2, ..., which are represented by the electronic state symbols Σ, Π, Δ, ... For example, the common electronic states of diatomic nitrogen (N2), the most abundant gas in the Earth's atmosphere, are conventionally listed in this way (without vibrational quantum numbers) together with the energy of the lowest vibrational level (v = 0). The subscripts and superscripts attached to the state symbol give additional quantum mechanical details about the electronic state. The superscript + or − indicates whether reflection in a plane containing the internuclear axis introduces a sign change in the wavefunction. The subscript g or u applies to molecules of identical atoms: states whose wavefunction does not change sign under inversion through the centre of the molecule are labelled g (gerade), and states that change sign are labelled u (ungerade). 
The aforementioned fluorescence occurs in distinct regions of the electromagnetic spectrum, called "emission bands": each band corresponds to a particular transition from a higher electronic state and vibrational level to a lower electronic state and vibrational level (typically, many vibrational levels are involved in an excited gas of diatomic molecules). For example, N2 A-X emission bands (a.k.a. Vegard-Kaplan bands) are present in the spectral range from 0.14 to 1.45 μm (micrometres). A given band can be spread out over several nanometers in electromagnetic wavelength space, owing to the various transitions that occur in the molecule's rotational quantum number, J. These are classified into distinct sub-band branches, depending on the change in J: the R branch corresponds to ΔJ = +1, the P branch to ΔJ = −1, and the Q branch to ΔJ = 0. Bands are spread out even further by the limited spectral resolution of the spectrometer that is used to measure the spectrum. The spectral resolution depends on the instrument's point spread function. Energy levels The molecular term symbol is a shorthand expression of the angular momenta that characterize the electronic quantum states of a diatomic molecule, which are also eigenstates of the electronic molecular Hamiltonian. It is also convenient, and common, to represent a diatomic molecule as two point masses connected by a massless spring. The energies involved in the various motions of the molecule can then be broken down into three categories: the translational, rotational, and vibrational energies. The rotational energy levels of a diatomic molecule can be described using the rigid-rotor treatment below, while its vibrational energy levels can be described using the harmonic oscillator approximation or using quantum vibrational interaction potentials. These potentials give more accurate energy levels because they take multiple vibrational effects into account. Concerning history, the first treatment of diatomic molecules with quantum mechanics was made by Lucy Mensing in 1926. Translational energies The translational energy of the molecule is given by the kinetic energy expression: E_trans = (1/2)mv², where m is the mass of the molecule and v is its velocity. Rotational energies Classically, the kinetic energy of rotation is E_rot = L²/(2I), where L is the angular momentum and I is the moment of inertia of the molecule. For microscopic, atomic-level systems like a molecule, angular momentum can only have specific discrete values given by L² = J(J+1)ħ², where J is a non-negative integer and ħ is the reduced Planck constant. Also, for a diatomic molecule the moment of inertia is I = μr₀², where μ is the reduced mass of the molecule and r₀ is the average distance between the centers of the two atoms in the molecule. So, substituting the angular momentum and moment of inertia into E_rot, the rotational energy levels of a diatomic molecule are given by: E_rot = J(J+1)ħ²/(2μr₀²), with J = 0, 1, 2, ... Vibrational energies Another type of motion of a diatomic molecule is for each atom to oscillate—or vibrate—along the line connecting the two atoms. The vibrational energy is approximately that of a quantum harmonic oscillator: E_vib = (n + 1/2)ħω, where n is an integer (n = 0, 1, 2, ...), ħ is the reduced Planck constant, and ω is the angular frequency of the vibration. Comparison between rotational and vibrational energy spacings The spacing, and the energy of a typical spectroscopic transition, between vibrational energy levels is about 100 times greater than that of a typical transition between rotational energy levels. 
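The rigid-rotor and harmonic-oscillator formulas above are straightforward to evaluate numerically. The following Python sketch (not part of the original article) computes the J = 0 → 1 rotational spacing and the n = 0 → 1 vibrational spacing for carbon monoxide; the bond length (~1.128 Å) and vibrational wavenumber (~2143 cm⁻¹) are approximate literature values assumed for illustration, not data taken from the text. The output shows the vibrational spacing is a few hundred times larger than the rotational one, in line with the comparison above.

```python
# Minimal sketch: rigid-rotor and harmonic-oscillator level spacings for CO.
# Molecular parameters below are approximate assumed values, not from the article.

import math

# Physical constants (SI)
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
H = 6.626_070_15e-34       # Planck constant, J*s
C_CM = 2.997_924_58e10     # speed of light, cm/s
AMU = 1.660_539_066e-27    # atomic mass unit, kg

# Assumed parameters for carbon monoxide (CO)
m_C, m_O = 12.000 * AMU, 15.995 * AMU
mu = m_C * m_O / (m_C + m_O)          # reduced mass, kg
r0 = 1.128e-10                        # average bond length, m
omega = 2 * math.pi * C_CM * 2143.0   # angular frequency from ~2143 cm^-1

def rotational_energy(J: int) -> float:
    """Rigid-rotor level E_rot = J(J+1) * hbar^2 / (2 * mu * r0^2), in joules."""
    return J * (J + 1) * HBAR**2 / (2 * mu * r0**2)

def vibrational_energy(n: int) -> float:
    """Harmonic-oscillator level E_vib = (n + 1/2) * hbar * omega, in joules."""
    return (n + 0.5) * HBAR * omega

def to_wavenumber(energy_joules: float) -> float:
    """Convert an energy in joules to spectroscopic wavenumbers (cm^-1)."""
    return energy_joules / (H * C_CM)

rot_spacing = rotational_energy(1) - rotational_energy(0)    # J = 0 -> 1
vib_spacing = vibrational_energy(1) - vibrational_energy(0)  # n = 0 -> 1

print(f"Rotational spacing (J=0->1):  {to_wavenumber(rot_spacing):8.2f} cm^-1")
print(f"Vibrational spacing (n=0->1): {to_wavenumber(vib_spacing):8.2f} cm^-1")
print(f"Ratio: {vib_spacing / rot_spacing:.0f}")
```

For CO this prints a rotational spacing of roughly 3.9 cm⁻¹ against a vibrational spacing of roughly 2143 cm⁻¹; the exact ratio varies from molecule to molecule, but it is always several orders of magnitude in favour of the vibrational transition.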
Hund's cases The good quantum numbers for a diatomic molecule, as well as good approximations of rotational energy levels, can be obtained by modeling the molecule using Hund's cases. Mnemonics The mnemonics BrINClHOF, pronounced "Brinklehof", HONClBrIF, pronounced "Honkelbrif", “HOBrFINCl”, pronounced “Hoberfinkel”, and HOFBrINCl, pronounced "Hofbrinkle", have been coined to aid recall of the list of diatomic elements. Another method, for English-speakers, is the sentence: "Never Have Fear of Ice Cold Beer" as a representation of Nitrogen, Hydrogen, Fluorine, Oxygen, Iodine, Chlorine, Bromine. See also Symmetry of diatomic molecules AXE method Octatomic element Covalent bond Industrial gas References Further reading External links Hyperphysics – Rotational Spectra of Rigid Rotor Molecules Hyperphysics – Quantum Harmonic Oscillator 3D Chem – Chemistry, Structures, and 3D Molecules IUMSC – Indiana University Molecular Structure Center General chemistry Molecular geometry Stereochemistry
Diatomic molecule
[ "Physics", "Chemistry" ]
2,450
[ "Molecules", "Molecular geometry", "Stereochemistry", "Space", "nan", "Spacetime", "Diatomic molecules", "Matter" ]
7,955
https://en.wikipedia.org/wiki/DNA
Deoxyribonucleic acid (DNA) is a polymer composed of two polynucleotide chains that coil around each other to form a double helix. The polymer carries genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as the phosphodiester linkage) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, the single-ringed pyrimidines and the double-ringed purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine. Both strands of double-stranded DNA store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation. Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. Properties DNA is a long polymer made from repeating units called nucleotides. The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis, and have the same pitch of 34 ångströms (3.4 nm). The pair of chains have a radius of 10 ångströms (1.0 nm).
According to another study, when measured in a different solution, the DNA chain measured 22–26 ångströms (2.2–2.6 nm) wide, and one nucleotide unit measured 3.3 Å (0.33 nm) long. The buoyant density of most DNA is 1.7 g/cm³. DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together. These two long strands coil around each other, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide. A biopolymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide. The backbone of the DNA strand is made from alternating phosphate and sugar groups. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These are known as the 3′-end (three prime end), and 5′-end (five prime end) carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond. Therefore, any DNA strand normally has one end at which there is a phosphate group attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a nucleic acid double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime end (5′), and three prime end (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the related pentose sugar ribose in RNA. The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, forming A-T and G-C base pairs. Nucleobase classification The nucleobases are classified into two types: the purines, A and G, which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology. Non-canonical bases Modified bases occur in DNA. The first of these recognized was 5-methylcytosine, which was found in the genome of Mycobacterium tuberculosis in 1925. The reason for the presence of these noncanonical bases in bacterial viruses (bacteriophages) is to avoid the restriction enzymes present in bacteria.
This enzyme system acts at least in part as a molecular immune system protecting bacteria from infection by viruses. Modifications of the bases cytosine and adenine, the more commonly modified DNA bases, play vital roles in the epigenetic control of gene expression in plants and animals. A number of noncanonical bases are known to occur in DNA. Most of these are modifications of the canonical bases plus uracil. Modified adenine: N6-carbamoyl-methyladenine, N6-methyladenine. Modified guanine: 7-Deazaguanine, 7-Methylguanine. Modified cytosine: N4-Methylcytosine, 5-Carboxylcytosine, 5-Formylcytosine, 5-Glycosylhydroxymethylcytosine, 5-Hydroxycytosine, 5-Methylcytosine. Modified thymidine: α-Glutamylthymidine, α-Putrescinylthymine. Uracil and modifications: Base J, Uracil, 5-Dihydroxypentauracil, 5-Hydroxymethyldeoxyuracil. Others: Deoxyarchaeosine, 2,6-Diaminopurine (2-Aminoadenine). Grooves Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. The major groove is 22 ångströms (2.2 nm) wide, while the minor groove is 12 Å (1.2 nm) in width. Due to the larger width of the major groove, the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in width that would be seen if the DNA was twisted back into the ordinary B form. Base pairing Top, a G-C base pair with three hydrogen bonds. Bottom, an A-T base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the pairs are shown as dashed lines. In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix (from six-carbon ring to six-carbon ring) is called a Watson-Crick base pair. DNA with high GC-content is more stable than DNA with low GC-content. A Hoogsteen base pair (hydrogen bonding the 6-carbon ring to the 5-carbon ring) is a rare variation of base-pairing. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in organisms. ssDNA vs. dsDNA Most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks.
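The complementary pairing rules just described lend themselves to a compact illustration. The sketch below is an illustrative example only, not drawn from the article: it builds the antiparallel complementary strand of a short hypothetical sequence and tallies its A-T and G-C pairs and hydrogen bonds. All function names and the example sequence are invented for this purpose.

```python
# Minimal sketch (not from the source) of Watson-Crick complementary pairing:
# A pairs with T (two hydrogen bonds) and G pairs with C (three hydrogen bonds).
# Function names and the example sequence are made up for illustration.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
H_BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel complementary strand, read 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def pairing_summary(strand: str) -> dict:
    """Count base pairs and hydrogen bonds the strand would form with its complement."""
    gc = sum(strand.count(b) for b in "GC")
    at = len(strand) - gc
    return {
        "G-C pairs": gc,
        "A-T pairs": at,
        "hydrogen bonds": sum(H_BONDS[b] for b in strand),
        "GC fraction": gc / len(strand),
    }

seq = "ATGCGGCTA"                       # hypothetical 5'->3' sequence
print(reverse_complement(seq))          # TAGCCGCAT
print(pairing_summary(seq))
```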
The two strands can come apart—a process known as melting—to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperatures, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used). The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the melting temperature (also called Tm value), which is the temperature at which 50% of the double-strand molecules are converted to single-strand molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of G,C base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have more strongly interacting strands, while short helices with high AT content have more weakly interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart. In the laboratory, the strength of this interaction can be measured by finding the melting temperature Tm necessary to break half of the hydrogen bonds. When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others. Amount In humans, the total female diploid nuclear genome per cell extends for 6.37 Gigabase pairs (Gbp), is 208.23 cm long and weighs 6.51 picograms (pg). Male values are 6.27 Gbp, 205.00 cm, 6.41 pg. Each DNA polymer can contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome with approximately 220 million base pairs, and would be approximately 85 mm long if straightened. In eukaryotes, in addition to nuclear DNA, there is also mitochondrial DNA (mtDNA) which encodes certain proteins used by the mitochondria. The mtDNA is usually relatively small in comparison to the nuclear DNA. For example, the human mitochondrial DNA forms closed circular molecules, each of which contains 16,569 DNA base pairs, with each such molecule normally containing a full set of the mitochondrial genes. Each human mitochondrion contains, on average, approximately 5 such mtDNA molecules. Each human cell contains approximately 100 mitochondria, giving a total number of mtDNA molecules per human cell of approximately 500. However, the amount of mitochondria per cell also varies by cell type, and an egg cell can contain 100,000 mitochondria, corresponding to up to 1,500,000 copies of the mitochondrial genome (constituting up to 90% of the DNA of the cell). Sense and antisense A DNA sequence is called a "sense" sequence if it is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear.
One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing. A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome. Supercoiling DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication. Alternative DNA structures DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution. The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson functions that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix. Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder. Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. 
These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription. Alternative DNA chemistry For many years, exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A report in 2010 of the possibility in the bacterium GFAJ-1 was announced, though the research was disputed, and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules. Quadruplex structures At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence. These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases, known as a guanine tetrad, form a flat plate. These flat four-base units then stack on top of each other to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure. In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop. Branched DNA Branched DNA can form networks containing multiple branches. In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below. Artificial bases Several artificial nucleobases have been synthesized, and successfully incorporated in the eight-base DNA analogue named Hachimoji DNA. Dubbed S, B, P, and Z, these artificial bases are capable of bonding with each other in a predictable way (S–B and P–Z), maintain the double helix structure of DNA, and be transcribed to RNA. 
Their existence could be seen as an indication that there is nothing special about the four natural nucleobases that evolved on Earth. On the other hand, DNA is tightly related to RNA, which not only acts as a transcript of DNA but also, as a molecular machine, performs many tasks in cells. For this purpose it has to fold into a structure. It has been shown that at least four bases are required for the corresponding RNA to be able to form all possible structures; a higher number is also possible, but this would go against the natural principle of least effort. Acidity The phosphate groups of DNA give it similar acidic properties to phosphoric acid and it can be considered as a strong acid. It will be fully ionized at a normal cellular pH, releasing protons which leave behind negative charges on the phosphate groups. These negative charges protect DNA from breakdown by hydrolysis by repelling nucleophiles which could hydrolyze it. Macroscopic appearance Pure DNA extracted from cells forms white, stringy clumps. Chemical modifications and altered DNA packaging Base modifications and DNA packaging Structure of cytosine with and without the 5-methyl group. Deamination converts 5-methylcytosine into thymine. The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression. For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms—the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids. Damage DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations. These mutations can cause cancer.
Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging. Many mutagens fit into the space between two adjacent base pairs, this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells. Biological functions DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In an alternative fashion, a cell may copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome. Genes and genomes Genomic DNA is tightly and orderly packed in the process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame. In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. 
The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression. Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans is pseudogenes, which are copies of genes that have been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence. Transcription and translation A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT). In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAG, TAA, and TGA codons (UAG, UAA, and UGA on the mRNA). Replication Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA. Extracellular nucleic acids Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species.
It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress. Cell-free fetal DNA is found in the blood of the mother, and can be sequenced to determine a great deal of information about the developing fetus. Under the name of environmental DNA eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity. Neutrophil extracellular traps Neutrophil extracellular traps (NETs) are networks of extracellular fibers, primarily composed of DNA, which allow neutrophils, a type of white blood cell, to kill extracellular pathogens while minimizing damage to the host cells. Interactions with proteins All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important. DNA-binding proteins Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes. A distinct group of DNA-binding proteins is the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases. In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. 
Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase. As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA come from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible. DNA-modifying enzymes Nucleases and ligases Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme shown to the left recognizes the 6-base sequence 5′-GATATC-3′ and makes a cut at the horizontal line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting. Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination. Topoisomerases and helicases Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription. Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases. Polymerases Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains—which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. 
In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use. In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy are precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases. RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. For example, HIV reverse transcriptase is an enzyme for AIDS virus replication. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes. Telomeres prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage. Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits. Genetic recombination Structure of the Holliday junction intermediate in genetic recombination. The four separate DNA strands are coloured red, blue, green and yellow. A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin. Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks. The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. 
Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east-west cleavage and north–south cleavage. The north–south cleavage nicks both strands of DNA, while the east–west cleavage has one strand of DNA intact. The formation of a Holliday junction during recombination makes it possible for genetic diversity, genes to exchange on chromosomes, and expression of wild-type viral genomes. Evolution DNA contains the genetic information that allows all forms of life to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world where nucleic acid would have been used for both catalysis and genetics may have influenced the evolution of the current genetic code based on four nucleotide bases. This would occur, since the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial. Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds. Ancient DNA has been recovered from ancient organisms at a timescale where genome evolution can be directly observed, including from extinct organisms up to millions of years old, such as the woolly mammoth. 
Uses in technology Genetic engineering Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture. DNA profiling Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify a matching DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, also called DNA fingerprinting. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a matching DNA. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case. The development of forensic science and the ability to now obtain genetic matching on minute samples of blood, skin, saliva, or hair has led to re-examining many cases. Evidence can now be uncovered that was scientifically impossible at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulous, strict handling procedures with new cases of serious crime. DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members. DNA profiling is also used in DNA paternity testing to determine if someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Normal DNA sequencing methods happen after birth, but there are new methods to test paternity while a mother is still pregnant. DNA enzymes or catalytic DNA Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions including RNA-DNA cleavage, RNA-DNA ligation, amino acid phosphorylation-dephosphorylation, carbon-carbon bond formation, etc. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction.
The most extensively studied class of DNAzymes is RNA-cleaving types which have been used to detect different metal ions and designing therapeutic agents. Several metal-specific DNAzymes have been reported including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in cells. Bioinformatics Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of particular organism and permit the examination of complex evolutionary events. DNA nanotechnology DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins. DNA and other nucleic acids are the basis of aptamers, synthetic oligonucleotide ligands for specific target molecules used in a range of biotechnology and biomedical applications. History and anthropology Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology. 
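As a toy illustration of the sequence searching and comparison ideas discussed in this section, the following sketch (not from the article, and far simpler than the algorithms used in practice) finds occurrences of a short motif by naive string searching and counts mismatches between two pre-aligned sequences of equal length, a crude distance of the kind that phylogenetic comparisons build on. The function names and example sequences are made up.

```python
# Minimal sketch (not from the source) of two ideas mentioned above:
# (1) naive string searching for a motif within a longer sequence, and
# (2) counting differences between two aligned sequences of equal length.
# Real tools use far more sophisticated algorithms; names and sequences
# here are invented for illustration.

def find_motif(sequence: str, motif: str) -> list[int]:
    """Return 0-based start positions of every occurrence of motif."""
    hits = []
    for i in range(len(sequence) - len(motif) + 1):
        if sequence[i:i + len(motif)] == motif:
            hits.append(i)
    return hits

def pairwise_differences(seq_a: str, seq_b: str) -> int:
    """Count mismatched positions between two pre-aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

seq_one = "ATGGTGCACCTGACTCCTGAG"      # invented example sequences
seq_two = "ATGGTGCACCTGACTGCTGAG"
print(find_motif(seq_one, "CTG"))      # positions of the CTG motif
print(pairwise_differences(seq_one, seq_two))  # 1 mismatch
```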
Information storage DNA as a storage device for information has enormous potential since it has much higher storage density compared to electronic devices. However, high costs, slow read and write times (memory latency), and insufficient reliability has prevented its practical use. History DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases. In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups ("tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that inherited traits would be inherited via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith in his experiment discovered that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information. In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure. In 1943, Oswald Avery, along with co-workers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (Avery–MacLeod–McCarty experiment). Erwin Chargaff developed and published observations now known as Chargaff's rules, stating that in DNA from any species of any organism, the amount of guanine should be equal to cytosine and the amount of adenine should be equal to thymine. Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the enterobacteria phage T2. In May 1952, Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, took an X-ray diffraction image, labeled as "Photo 51", at high hydration levels of DNA. This photo was given to Watson and Crick by Maurice Wilkins and was critical to their obtaining the correct structure of DNA. Franklin told Crick and Watson that the backbones had to be on the outside. Before then, Linus Pauling, and Watson and Crick, had erroneous models with the chains inside and the bases pointing outwards. Franklin's identification of the space group for DNA crystals revealed to Crick that the two DNA strands were antiparallel. 
In February 1953, Linus Pauling and Robert Corey proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Watson and Crick completed their model, which is now accepted as the first correct model of the double helix of DNA. On 28 February 1953, Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge, England, to announce that he and Watson had "discovered the secret of life". The 25 April 1953 issue of the journal Nature published a series of five articles giving the Watson and Crick double-helix structure of DNA and evidence supporting it. The structure was reported in a letter titled "MOLECULAR STRUCTURE OF NUCLEIC ACIDS: A Structure for Deoxyribose Nucleic Acid", in which they said, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure. In April 2023, scientists, based on new evidence, concluded that Rosalind Franklin was a contributor and "equal player" in the discovery of DNA's structure, rather than a lesser figure, as accounts after the time of the discovery had sometimes presented her. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine; Nobel Prizes are awarded only to living recipients, so Franklin could not be included. A debate continues about who should receive credit for the discovery. In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and co-workers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology. In 1986, DNA analysis was first used for criminal investigative purposes when police in the UK requested Alec Jeffreys of the University of Leicester to verify or disprove a suspect's rape-murder "confession". In this particular case, the suspect had confessed to two rape-murders, but had later retracted his confession. DNA testing at the university labs soon disproved the veracity of the suspect's original "confession", and the suspect was exonerated of the murder-rape charges. See also References Further reading 
External links DNA binding site prediction on protein DNA the Double Helix Game From the official Nobel Prize web site DNA under electron microscope Dolan DNA Learning Center Double Helix: 50 years of DNA, Nature ENCODE threads explorer ENCODE home page at Nature Double Helix 1953–2003 National Centre for Biotechnology Education Genetic Education Modules for Teachers – DNA from the Beginning Study Guide "Clue to chemistry of heredity found". The New York Times, June 1953. First American newspaper coverage of the discovery of the DNA structure DNA from the Beginning Another DNA Learning Center site on DNA, genes, and heredity from Mendel to the human genome project. The Register of Francis Crick Personal Papers 1938 – 2007 at Mandeville Special Collections Library, University of California, San Diego Seven-page, handwritten letter that Crick sent to his 12-year-old son Michael in 1953 describing the structure of DNA. See Crick's medal goes under the hammer, Nature, 5 April 2013. Biotechnology Helices Nucleic acids
DNA
[ "Chemistry", "Biology" ]
12,647
[ "Biomolecules by chemical classification", "nan", "Biotechnology", "Nucleic acids" ]
7,963
https://en.wikipedia.org/wiki/Disjunctive%20syllogism
In classical logic, disjunctive syllogism (historically known as modus tollendo ponens (MTP), Latin for "mode that affirms by denying") is a valid argument form which is a syllogism having a disjunctive statement for one of its premises. An example in English: I will choose soup or I will choose salad. I will not choose soup. Therefore, I will choose salad. Propositional logic In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E) is a valid rule of inference. If it is known that at least one of two statements is true, and that it is not the former that is true, we can infer that it has to be the latter that is true. Equivalently, if P is true or Q is true and P is false, then Q is true. The name "disjunctive syllogism" derives from its being a syllogism, a three-step argument, and the use of a logical disjunction (any "or" statement). For example, "P or Q" is a disjunction, where P and Q are called the statement's disjuncts. The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that whenever instances of "P ∨ Q" and "¬P" appear on lines of a proof, "Q" can be placed on a subsequent line. Disjunctive syllogism is closely related and similar to hypothetical syllogism, which is another rule of inference involving a syllogism. It is also related to the law of noncontradiction, one of the three traditional laws of thought. Formal notation For a logical system that validates it, the disjunctive syllogism may be written in sequent notation as P ∨ Q, ¬P ⊢ Q, where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P ∨ Q and ¬P. It may be expressed as a truth-functional tautology or theorem in the object language of propositional logic as ((P ∨ Q) ∧ ¬P) → Q, where P and Q are propositions expressed in some formal system. Natural language examples Here is an example: It is red or it is blue. It is not blue. Therefore, it is red. Here is another example: The breach is a safety violation, or it is not subject to fines. The breach is not a safety violation. Therefore, it is not subject to fines. Strong form Modus tollendo ponens can be made stronger by using exclusive disjunction instead of inclusive disjunction as a premise: P ⊻ Q, ¬P ⊢ Q. Related argument forms Unlike modus ponens and modus ponendo tollens, with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a combination of reductio ad absurdum and disjunction elimination. Other forms of syllogism include: hypothetical syllogism categorical syllogism Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics. See also Stoic logic Type of syllogism (disjunctive, hypothetical, legal, poly-, prosleptic, quasi-, statistical) References Rules of inference Theorems in propositional logic Classical logic Paraconsistent logic
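As a quick mechanical check of the rule's validity (not part of the original article; the helper name is invented for illustration), one can enumerate every truth assignment in Python and confirm that no assignment makes both premises true while making the conclusion false:

from itertools import product

def disjunctive_syllogism_is_valid() -> bool:
    """Check that ((P or Q) and not P) -> Q holds for all truth assignments."""
    for p, q in product([True, False], repeat=2):
        premises_hold = (p or q) and (not p)
        if premises_hold and not q:   # a counterexample would falsify the rule
            return False
    return True

print(disjunctive_syllogism_is_valid())  # True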
Disjunctive syllogism
[ "Mathematics" ]
735
[ "Theorems in propositional logic", "Rules of inference", "Theorems in the foundations of mathematics", "Proof theory" ]
7,964
https://en.wikipedia.org/wiki/Definition
A definition is a statement of the meaning of a term (a word, phrase, or other set of symbols). Definitions can be classified into two large categories: intensional definitions (which try to give the sense of a term), and extensional definitions (which try to list the objects that a term describes). Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term, by describing a condition which unambiguously qualifies what the mathematical term is and is not. Definitions and axioms form the basis on which all of modern mathematics is to be constructed. Basic terminology In modern usage, a definition is something, typically expressed in words, that attaches a meaning to a word or group of words. The word or group of words that is to be defined is called the definiendum, and the word, group of words, or action that defines it is called the definiens. For example, in the definition "An elephant is a large gray animal native to Asia and Africa", the word "elephant" is the definiendum, and everything after the word "is" is the definiens. The definiens is not the meaning of the word defined, but is instead something that conveys the same meaning as that word. There are many sub-types of definitions, often specific to a given field of knowledge or study. These include, lexical definitions, or the common dictionary definitions of words already in a language; demonstrative definitions, which define something by pointing to an example of it ("This," [said while pointing to a large grey animal], "is an Asian elephant."); and precising definitions, which reduce the vagueness of a word, typically in some special sense ("'Large', among female Asian elephants, is any individual weighing over 5,500 pounds."). Intensional definitions vs extensional definitions An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for a thing to be a member of a specific set. Any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a concept or term specifies its extension. It is a list naming every object that is a member of a specific set. Thus, the "seven deadly sins" can be defined intensionally as those singled out by Pope Gregory I as particularly destructive of the life of grace and charity within a person, thus creating the threat of eternal damnation. An extensional definition, on the other hand, would be the list of wrath, greed, sloth, pride, lust, envy, and gluttony. In contrast, while an intensional definition of "prime minister" might be "the most senior minister of a cabinet in the executive branch of parliamentary government", an extensional definition is not possible since it is not known who the future prime ministers will be (even though all prime ministers from the past and present can be listed). Classes of intensional definitions A genus–differentia definition is a type of intensional definition that takes a large category (the genus) and narrows it down to a smaller category by a distinguishing characteristic (i.e. the differentia). 
More formally, a genus–differentia definition consists of: a genus (or family): An existing definition that serves as a portion of the new definition; all definitions with the same genus are considered members of that genus. the differentia: The portion of the new definition that is not provided by the genus. For example, consider the following genus–differentia definitions: a triangle: A plane figure that has three straight bounding sides. a quadrilateral: A plane figure that has four straight bounding sides. Those definitions can be expressed as a genus ("a plane figure") and two differentiae ("that has three straight bounding sides" and "that has four straight bounding sides", respectively). It is also possible to have two different genus–differentia definitions that describe the same term, especially when the term describes the overlap of two large categories. For instance, both of these genus–differentia definitions of "square" are equally acceptable: a square: a rectangle that is a rhombus. a square: a rhombus that is a rectangle. Thus, a "square" is a member of both genera (the plural of genus): the genus "rectangle" and the genus "rhombus". Classes of extensional definitions One important form of the extensional definition is ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or in the case of a class, to examples of the right kind. For example, one can explain who Alice (an individual) is, by pointing her out to another; or what a rabbit (a class) is, by pointing at several and expecting another to understand. The process of ostensive definition itself was critically appraised by Ludwig Wittgenstein. An enumerative definition of a concept or a term is an extensional definition that gives an explicit and exhaustive listing of all the objects that fall under the concept or term in question. Enumerative definitions are only possible for finite sets (and only practical for small sets). Divisio and partitio Divisio and partitio are classical terms for definitions. A partitio is simply an intensional definition. A divisio is not an extensional definition, but an exhaustive list of subsets of a set, in the sense that every member of the "divided" set is a member of one of the subsets. An extreme form of divisio lists all sets whose only member is a member of the "divided" set. The difference between this and an extensional definition is that extensional definitions list members, and not subsets. Nominal definitions vs real definitions In classical thought, a definition was taken to be a statement of the essence of a thing. Aristotle had it that an object's essential attributes form its "essential nature", and that a definition of the object must include these essential attributes. The idea that a definition should state the essence of a thing led to the distinction between nominal and real essence—a distinction originating with Aristotle. In the Posterior Analytics, he says that the meaning of a made-up name can be known (he gives the example "goat stag") without knowing what he calls the "essential nature" of the thing that the name would denote (if there were such a thing). This led medieval logicians to distinguish between what they called the quid nominis, or the "whatness of the name", and the underlying nature common to all the things it names, which they called the quid rei, or the "whatness of the thing". The name "hobbit", for example, is perfectly meaningful. 
It has a quid nominis, but one could not know the real nature of hobbits, and so the quid rei of hobbits cannot be known. By contrast, the name "man" denotes real things (men) that have a certain quid rei. The meaning of a name is distinct from the nature that a thing must have in order that the name apply to it. This leads to a corresponding distinction between nominal and real definitions. A nominal definition is the definition explaining what a word means (i.e., which says what the "nominal essence" is), and is definition in the classical sense as given above. A real definition, by contrast, is one expressing the real nature or quid rei of the thing. This preoccupation with essence dissipated in much of modern philosophy. Analytic philosophy, in particular, is critical of attempts to elucidate the essence of a thing. Russell described essence as "a hopelessly muddle-headed notion". More recently Kripke's formalisation of possible world semantics in modal logic led to a new approach to essentialism. Insofar as the essential properties of a thing are necessary to it, they are those things that it possesses in all possible worlds. Kripke refers to names used in this way as rigid designators. Operational vs. theoretical definitions A definition may also be classified as an operational definition or theoretical definition. Terms with multiple definitions Homonyms A homonym is, in the strict sense, one of a group of words that share the same spelling and pronunciation but have different meanings. Thus homonyms are simultaneously homographs (words that share the same spelling, regardless of their pronunciation) and homophones (words that share the same pronunciation, regardless of their spelling). The state of being a homonym is called homonymy. Examples of homonyms are the pair stalk (part of a plant) and stalk (follow/harass a person) and the pair left (past tense of leave) and left (opposite of right). A distinction is sometimes made between "true" homonyms, which are unrelated in origin, such as skate (glide on ice) and skate (the fish), and polysemous homonyms, or polysemes, which have a shared origin, such as mouth (of a river) and mouth (of an animal). Polysemes Polysemy is the capacity for a sign (such as a word, phrase, or symbol) to have multiple meanings (that is, multiple semes or sememes and thus multiple senses), usually related by contiguity of meaning within a semantic field. It is thus usually regarded as distinct from homonymy, in which the multiple meanings of a word may be unconnected or unrelated. In logic, mathematics and computing In mathematics, definitions are generally not used to describe existing terms, but to describe or characterize a concept. For naming the object of a definition mathematicians can use either a neologism (this was mainly the case in the past) or words or phrases of the common language (this is generally the case in modern mathematics). The precise meaning of a term given by a mathematical definition is often different from the English definition of the word used, which can lead to confusion, particularly when the meanings are close. For example, a set is not exactly the same thing in mathematics and in common language. In some case, the word used can be misleading; for example, a real number has nothing more (or less) real than an imaginary number. Frequently, a definition uses a phrase built with common English words, which has no meaning outside mathematics, such as primitive group or irreducible variety. 
In first-order logic definitions are usually introduced using extension by definition (so using a metalogic). On the other hand, lambda-calculi are a kind of logic where the definitions are included as the feature of the formal system itself. Classification Authors have used different terms to classify definitions used in formal languages like mathematics. Norman Swartz classifies a definition as "stipulative" if it is intended to guide a specific discussion. A stipulative definition might be considered a temporary, working definition, and can only be disproved by showing a logical contradiction. In contrast, a "descriptive" definition can be shown to be "right" or "wrong" with reference to general usage. Swartz defines a precising definition as one that extends the descriptive dictionary definition (lexical definition) for a specific purpose by including additional criteria. A precising definition narrows the set of things that meet the definition. C.L. Stevenson has identified persuasive definition as a form of stipulative definition which purports to state the "true" or "commonly accepted" meaning of a term, while in reality stipulating an altered use (perhaps as an argument for some specific belief). Stevenson has also noted that some definitions are "legal" or "coercive" – their object is to create or alter rights, duties, or crimes. Recursive definitions A recursive definition, sometimes also called an inductive definition, is one that defines a word in terms of itself, so to speak, albeit in a useful way. Normally this consists of three steps: At least one thing is stated to be a member of the set being defined; this is sometimes called a "base set". All things bearing a certain relation to other members of the set are also to count as members of the set. It is this step that makes the definition recursive. All other things are excluded from the set For instance, we could define a natural number as follows (after Peano): "0" is a natural number. Each natural number has a unique successor, such that: the successor of a natural number is also a natural number; distinct natural numbers have distinct successors; no natural number is succeeded by "0". Nothing else is a natural number. So "0" will have exactly one successor, which for convenience can be called "1". In turn, "1" will have exactly one successor, which could be called "2", and so on. The second condition in the definition itself refers to natural numbers, and hence involves self-reference. Although this sort of definition involves a form of circularity, it is not vicious, and the definition has been quite successful. In the same way, we can define ancestor as follows: A parent is an ancestor. A parent of an ancestor is an ancestor. Nothing else is an ancestor. Or simply: an ancestor is a parent or a parent of an ancestor. In medicine In medical dictionaries, guidelines and other consensus statements and classifications, definitions should as far as possible be: simple and easy to understand, preferably even by the general public; useful clinically or in related areas where the definition will be used; specific (that is, by reading the definition only, it should ideally not be possible to refer to any other entity than that being defined); measurable; a reflection of current scientific knowledge. Problems Certain rules have traditionally been given for definitions (in particular, genus-differentia definitions). A definition must set out the essential attributes of the thing defined. Definitions should avoid circularity. 
To define a horse as "a member of the species equus" would convey no information whatsoever. For this reason, Locke adds that a definition of a term must not consist of terms which are synonymous with it. This would be a circular definition, a circulus in definiendo. Note, however, that it is acceptable to define two relative terms in respect of each other. Clearly, we cannot define "antecedent" without using the term "consequent", nor conversely. The definition must not be too wide or too narrow. It must be applicable to everything to which the defined term applies (i.e. not miss anything out), and to nothing else (i.e. not include any things to which the defined term would not truly apply). The definition must not be obscure. The purpose of a definition is to explain the meaning of a term which may be obscure or difficult, by the use of terms that are commonly understood and whose meaning is clear. The violation of this rule is known by the Latin term obscurum per obscurius. However, sometimes scientific and philosophical terms are difficult to define without obscurity. A definition should not be negative where it can be positive. We should not define "wisdom" as the absence of folly, or a healthy thing as whatever is not sick. Sometimes this is unavoidable, however. For example, it appears difficult to define blindness in positive terms rather than as "the absence of sight in a creature that is normally sighted". Fallacies of definition Limitations of definition Given that a natural language such as English contains, at any given time, a finite number of words, any comprehensive list of definitions must either be circular or rely upon primitive notions. If every term of every definiens must itself be defined, "where at last should we stop?" A dictionary, for instance, insofar as it is a comprehensive list of lexical definitions, must resort to circularity. Many philosophers have chosen instead to leave some terms undefined. The scholastic philosophers claimed that the highest genera (called the ten generalissima) cannot be defined, since a higher genus cannot be assigned under which they may fall. Thus being, unity and similar concepts cannot be defined. Locke supposes in An Essay Concerning Human Understanding that the names of simple concepts do not admit of any definition. More recently Bertrand Russell sought to develop a formal language based on logical atoms. Other philosophers, notably Wittgenstein, rejected the need for any undefined simples. Wittgenstein pointed out in his Philosophical Investigations that what counts as a "simple" in one circumstance might not do so in another. He rejected the very idea that every explanation of the meaning of a term needed itself to be explained: "As though an explanation hung in the air unless supported by another one", claiming instead that explanation of a term is only needed to avoid misunderstanding. Locke and Mill also argued that individuals cannot be defined. Names are learned by connecting an idea with a sound, so that speaker and hearer have the same idea when the same word is used. This is not possible when no one else is acquainted with the particular thing that has "fallen under our notice". Russell offered his theory of descriptions in part as a way of defining a proper name, the definition being given by a definite description that "picks out" exactly one individual. Saul Kripke pointed to difficulties with this approach, especially in relation to modality, in his book Naming and Necessity. 
There is a presumption in the classic example of a definition that the definiens can be stated. Wittgenstein argued that for some terms this is not the case. The examples he used include game, number and family. In such cases, he argued, there is no fixed boundary that can be used to provide a definition. Rather, the items are grouped together because of a family resemblance. For terms such as these it is not possible and indeed not necessary to state a definition; rather, one simply comes to understand the use of the term. See also Analytic proposition Circular definition Definable set Definitionism Denotation Extensional definition Fallacies of definition Indeterminacy Intensional definition Lexical definition Logic programming Operational definition Ostensive definition Ramsey–Lewis method Semantics Synthetic proposition Theoretical definition Notes References (full text of 1st ed. (1906)) (worldcat) (full text of 2nd ed. (1916)) (full text: vol 1, vol 2) External links Definitions, Stanford Encyclopedia of Philosophy Gupta, Anil (2008) Definitions, Dictionaries, and Meanings, Norman Swartz 1997 Guy Longworth (ca. 2008) "Definitions: Uses and Varieties of" in: K. Brown (ed.): Elsevier Encyclopedia of Language and Linguistics, Elsevier. Definition and Meaning, a very short introduction by Garth Kemerling (2001). Philosophical logic Philosophy of language Semantics Linguistics terminology Mathematical terminology Concepts in logic Lexicography Meaning (philosophy of language)
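To make the recursive (inductive) style of definition discussed earlier in this article concrete — the "ancestor" example in particular — here is a small Python sketch; the family data and function name are invented for illustration and are not part of the article:

# Base case: a parent is an ancestor. Recursive case: a parent of an
# ancestor is an ancestor. Nothing else counts as an ancestor.
parents = {                      # hypothetical data: person -> set of parents
    "child": {"mother", "father"},
    "mother": {"grandmother"},
    "grandmother": {"great-grandmother"},
}

def is_ancestor(candidate: str, person: str) -> bool:
    direct = parents.get(person, set())
    if candidate in direct:                                  # base case
        return True
    return any(is_ancestor(candidate, p) for p in direct)   # recursive case

print(is_ancestor("great-grandmother", "child"))  # True
print(is_ancestor("father", "mother"))            # False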
Definition
[ "Mathematics" ]
4,025
[ "nan" ]
7,983
https://en.wikipedia.org/wiki/Double-hulled%20tanker
A double-hulled tanker is an oil tanker with a double hull. Double hulls reduce the likelihood of leaks occurring compared to single-hulled tankers, and their ability to prevent or reduce oil spills led to double hulls being standardized for oil tankers and other types of ships, including through the International Convention for the Prevention of Pollution from Ships, or MARPOL Convention. After the Exxon Valdez oil spill disaster in Alaska in 1989, the US government required all new oil tankers built for use between US ports to be equipped with a full double hull. Reasons for use A number of manufacturers have embraced oil tankers with a double hull because it strengthens the hull of ships, reducing the likelihood of oil disasters in low-impact collisions and groundings compared with single-hull ships. They reduce the likelihood of leaks occurring at low-speed impacts in port areas when the ship is under pilotage. Research on impact damage to ships has revealed that double-hulled tankers are unlikely to perforate both hulls in a collision, preventing oil from seeping out. However, for smaller tankers, U-shaped tanks might be susceptible to "free flooding" across the double bottom and up to the outside water level on each side of the cargo tank. Salvors prefer to salvage double-hulled tankers because they permit the use of air pressure to vacuum out the flood water. In the 1960s, collision-proof double hulls for nuclear ships were extensively investigated, due to escalating concerns over nuclear accidents. The ability of double-hulled tankers to prevent or reduce oil spills led to double hulls being standardized for other types of ships, including oil tankers, by the International Convention for the Prevention of Pollution from Ships, or MARPOL Convention. In 1992, MARPOL was amended, making it "mandatory for tankers of 5,000 dwt and more ordered after 6 July 1993 to be fitted with double hulls, or an alternative design approved by IMO". However, in the aftermath of the Erika incident off the coast of France in December 1999, members of IMO adopted a revised schedule for the phase-out of single-hull tankers, which came into effect on 1 September 2003, with further amendments validated on 5 April 2005. After the Exxon Valdez oil spill disaster, when that ship grounded on Bligh Reef outside the port of Valdez, Alaska in 1989, the US government required all new oil tankers built for use between US ports to be equipped with a full double hull. However, the damage to the Exxon Valdez penetrated sections of the hull (the slops oil tanks, or slop tanks) that were protected only by a double bottom, or partial double hull. Maintenance issues Although double-hulled tankers reduce the likelihood of ships grazing rocks and creating holes in the hull, a double hull does not protect against major, high-energy collisions or groundings which cause the majority of oil pollution, despite this being the reason that the double hull was mandated by United States legislation. Double-hulled tankers, if poorly designed, constructed, maintained and operated, can be as problematic, if not more problematic, than their single-hulled counterparts. Double-hulled tankers have a more complex design and structure than their single-hulled counterparts, which means that they require more maintenance and care in operation, which, if not subject to responsible monitoring and policing, may cause problems. 
Double hulls often result in the weight of the hull increasing by at least 20%, and because the steel weight of double-hulled tankers should not be greater than that of single-hulled ships, the individual hull walls are typically thinner and theoretically less resistant to wear. Double hulls by no means eliminate the possibility of the hulls breaking apart. Due to the air space between the hulls, there is also a potential problem with volatile gases seeping out through worn areas of the internal hull, increasing the risk of an explosion. Although several international conventions against pollution are in place, as of 2003 there was still no formal body setting international mandatory standards, although the International Safety Guide for Oil Tankers and Terminals (ISGOTT) does provide guidelines giving advice on optimum use and safety, such as recommending that ballast tanks are not entered while loaded with cargo, and that weekly samples of the atmosphere inside are taken to check for hydrocarbon gas. Because of the difficulties of maintenance, shipbuilders have competed to produce double-hulled ships that are easier to inspect, for example with ballast and cargo tanks that are easily accessible and in which corrosion of the hull is easier to spot. The Tanker Structure Cooperative Forum (TSCF) published the Guide to Inspection and Maintenance of Double-Hull Tanker Structures in 1995, giving advice based on experience of operating double-hulled tankers. See also Marine salvage Lloyd's Open Form References External links Ship types Ship design Shipbuilding
Double-hulled tanker
[ "Engineering" ]
989
[ "Shipbuilding", "Marine engineering" ]
7,988
https://en.wikipedia.org/wiki/Dual%20space
In mathematics, any vector space has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on together with the vector space structure of pointwise addition and scalar multiplication by constants. The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the . When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space. Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis. Early terms for dual include polarer Raum [Hahn 1927], espace conjugué, adjoint space [Alaoglu 1940], and transponierter Raum [Schauder 1930] and [Banach 1932]. The term dual is due to Bourbaki 1938. Algebraic dual space Given any vector space over a field , the (algebraic) dual space (alternatively denoted by or ) is defined as the set of all linear maps (linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted . The dual space itself becomes a vector space over when equipped with an addition and scalar multiplication satisfying: for all , , and . Elements of the algebraic dual space are sometimes called covectors, one-forms, or linear forms. The pairing of a functional in the dual space and an element of is sometimes denoted by a bracket: or . This pairing defines a nondegenerate bilinear mapping called the natural pairing. Finite-dimensional case If is finite-dimensional, then has the same dimension as . Given a basis in , it is possible to construct a specific basis in , called the dual basis. This dual basis is a set of linear functionals on , defined by the relation for any choice of coefficients . In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero, gives the system of equations where is the Kronecker delta symbol. This property is referred to as the bi-orthogonality property. Consider the basis of V. Let be defined as the following: . These are a basis of because: The are linear functionals, which map such as and to scalars and . Then also, and . Therefore, for . Suppose . Applying this functional on the basis vectors of successively, lead us to (The functional applied in results in ). Therefore, is linearly independent on . Lastly, consider . Then and generates . Hence, it is a basis of . For example, if is , let its basis be chosen as . The basis vectors are not orthogonal to each other. Then, and are one-forms (functions that map a vector to a scalar) such that , , , and . (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as Solving for the unknown values in the first matrix shows the dual basis to be . Because and are functionals, they can be rewritten as and . In general, when is , if is a matrix whose columns are the basis vectors and is a matrix whose columns are the dual basis vectors, then where is the identity matrix of order . The biorthogonality property of these two basis sets allows any point to be represented as even when the basis vectors are not orthogonal to each other. 
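Several of the displayed formulas in the finite-dimensional discussion above were lost in transcription, so the following hedged NumPy sketch restates the recipe in code rather than attempting to reconstruct the article's own example: if the columns of a matrix E are the basis vectors, the rows of its inverse represent the dual basis functionals, the two satisfy the biorthogonality relation, and any vector can be recovered from its pairings with the dual basis. The basis below is an arbitrary choice, not the one used in the original text.

import numpy as np

E = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # columns e_1 = (2, 1), e_2 = (1, 1): a non-orthogonal basis of R^2

dual = np.linalg.inv(E)           # row i of E^-1 represents the dual functional e^i (acting by the dot product)

print(np.allclose(dual @ E, np.eye(2)))   # biorthogonality: e^i(e_j) = delta_ij, i.e. the identity matrix

v = np.array([3.0, -1.0])
coords = dual @ v                 # coords[i] = e^i(v)
print(np.allclose(E @ coords, v)) # v = sum_i e^i(v) e_i, even though the basis is not orthogonal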
Strictly speaking, the above statement only makes sense once the inner product and the corresponding duality pairing are introduced, as described below in . In particular, can be interpreted as the space of columns of real numbers, its dual space is typically written as the space of rows of real numbers. Such a row acts on as a linear functional by ordinary matrix multiplication. This is because a functional maps every -vector into a real number . Then, seeing this functional as a matrix , and as an matrix, and a matrix (trivially, a real number) respectively, if then, by dimension reasons, must be a matrix; that is, must be a row vector. If consists of the space of geometrical vectors in the plane, then the level curves of an element of form a family of parallel lines in , because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses. More generally, if is a vector space of any dimension, then the level sets of a linear functional in are parallel hyperplanes in , and the action of a linear functional on a vector can be visualized in terms of these hyperplanes. Infinite-dimensional case If is not finite-dimensional but has a basis indexed by an infinite set , then the same construction as in the finite-dimensional case yields linearly independent elements () of the dual space, but they will not form a basis. For instance, consider the space , whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers . For , is the sequence consisting of all zeroes except in the -th position, which is 1. The dual space of is (isomorphic to) , the space of all sequences of real numbers: each real sequence defines a function where the element of is sent to the number which is a finite sum because there are only finitely many nonzero . The dimension of is countably infinite, whereas does not have a countable basis. This observation generalizes to any infinite-dimensional vector space over any field : a choice of basis identifies with the space of functions such that is nonzero for only finitely many , where such a function is identified with the vector in (the sum is finite by the assumption on , and any may be written uniquely in this way by the definition of the basis). The dual space of may then be identified with the space of all functions from to : a linear functional on is uniquely determined by the values it takes on the basis of , and any function (with ) defines a linear functional on by Again, the sum is finite because is nonzero for only finitely many . The set may be identified (essentially by definition) with the direct sum of infinitely many copies of (viewed as a 1-dimensional vector space over itself) indexed by , i.e. there are linear isomorphisms On the other hand, is (again by definition), the direct product of infinitely many copies of indexed by , and so the identification is a special case of a general result relating direct sums (of modules) to direct products. If a vector space is not finite-dimensional, then its (algebraic) dual space is always of larger dimension (as a cardinal number) than the original vector space. 
This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional. The proof of this inequality between dimensions results from the following. If is an infinite-dimensional -vector space, the arithmetical properties of cardinal numbers implies that where cardinalities are denoted as absolute values. For proving that it suffices to prove that which can be done with an argument similar to Cantor's diagonal argument. The exact dimension of the dual is given by the Erdős–Kaplansky theorem. Bilinear products and dual spaces If V is finite-dimensional, then V is isomorphic to V∗. But there is in general no natural isomorphism between these two spaces. Any bilinear form on V gives a mapping of V into its dual space via where the right hand side is defined as the functional on V taking each to . In other words, the bilinear form determines a linear mapping defined by If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of V∗. If V is finite-dimensional, then this is an isomorphism onto all of V∗. Conversely, any isomorphism from V to a subspace of V∗ (resp., all of V∗ if V is finite dimensional) defines a unique nondegenerate bilinear form on V by Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V∗ and nondegenerate bilinear forms on V. If the vector space V is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms. In that case, a given sesquilinear form determines an isomorphism of V with the complex conjugate of the dual space The conjugate of the dual space can be identified with the set of all additive complex-valued functionals such that Injection into the double-dual There is a natural homomorphism from into the double dual , defined by for all . In other words, if is the evaluation map defined by , then is defined as the map . This map is always injective; and it is always an isomorphism if is finite-dimensional. Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism. Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals. Transpose of a linear map If is a linear map, then the transpose (or dual) is defined by for every . The resulting functional in is called the pullback of along . The following identity holds for all and : where the bracket [·,·] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint. The assignment produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W to V; this homomorphism is an isomorphism if and only if W is finite-dimensional. If then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that . In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself. It is possible to identify (f) with f using the natural injection into the double dual. 
If the linear map f is represented by the matrix A with respect to two bases of V and W, then f is represented by the transpose matrix AT with respect to the dual bases of W and V, hence the name. Alternatively, as f is represented by A acting on the left on column vectors, f is represented by the same matrix acting on the right on row vectors. These points of view are related by the canonical inner product on Rn, which identifies the space of column vectors with the dual space of row vectors. Quotient spaces and annihilators Let be a subset of . The annihilator of in , denoted here , is the collection of linear functionals such that for all . That is, consists of all linear functionals such that the restriction to vanishes: . Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement. The annihilator of a subset is itself a vector space. The annihilator of the zero vector is the whole dual space: , and the annihilator of the whole space is just the zero covector: . Furthermore, the assignment of an annihilator to a subset of reverses inclusions, so that if , then If and are two subsets of then If is any family of subsets of indexed by belonging to some index set , then In particular if and are subspaces of then and If is finite-dimensional and is a vector subspace, then after identifying with its image in the second dual space under the double duality isomorphism . In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space. If is a subspace of then the quotient space is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional factors through if and only if is in the kernel of . There is thus an isomorphism As a particular consequence, if is a direct sum of two subspaces and , then is a direct sum of and . Dimensional analysis The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector can be paired with a covector by the natural pairing to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to reducing a fraction. Thus while the direct sum is a -dimensional space (if is -dimensional), behaves as an -dimensional space, in the sense that its dimensions can be canceled against the dimensions of . This is formalized by tensor contraction. This arises in physics via dimensional analysis, where the dual space has inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is dimensionless, as expected. For example, in (continuous) Fourier analysis, or more broadly time–frequency analysis: given a one-dimensional vector space with a unit of time , the dual space has units of frequency: occurrences per unit of time (units of ). For example, if time is measured in seconds, the corresponding dual unit is the inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to . Similarly, if the primal space measures length, the dual space measures inverse length. Continuous dual space When dealing with topological vector spaces, the continuous linear functionals from the space into the base field (or ) are particularly important. This gives rise to the notion of the "continuous dual space" or "topological dual" which is a linear subspace of the algebraic dual space , denoted by . 
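As a hedged numerical illustration of the transpose described near the start of this passage (the matrices below are arbitrary, not taken from the article), one can check the defining identity — the pairing of f(v) with a functional equals the pairing of v with the pulled-back functional — when f is represented by a matrix A, its transpose by A^T, and functionals by row vectors acting through matrix multiplication:

import numpy as np

rng = np.random.default_rng(0)
A   = rng.standard_normal((3, 2))   # matrix of a linear map f : R^2 -> R^3
v   = rng.standard_normal(2)        # a vector in R^2
phi = rng.standard_normal(3)        # a functional on R^3, written as a row vector

lhs = phi @ (A @ v)        # pairing of f(v) with phi
rhs = (A.T @ phi) @ v      # pairing of v with the pullback of phi along f
print(np.allclose(lhs, rhs))   # True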
For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps. Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space". For a topological vector space its continuous dual space, or topological dual space, or just dual space (in the sense of the theory of topological vector spaces) is defined as the space of all continuous linear functionals . Important examples for continuous dual spaces are the space of compactly supported test functions and its dual the space of arbitrary distributions (generalized functions); the space of arbitrary test functions and its dual the space of compactly supported distributions; and the space of rapidly decreasing test functions the Schwartz space, and its dual the space of tempered distributions (slowly growing distributions) in the theory of generalized functions. Properties If is a Hausdorff topological vector space (TVS), then the continuous dual space of is identical to the continuous dual space of the completion of . Topologies on the dual There is a standard construction for introducing a topology on the continuous dual of a topological vector space . Fix a collection of bounded subsets of . This gives the topology on of uniform convergence on sets from or what is the same thing, the topology generated by seminorms of the form where is a continuous linear functional on , and runs over the class This means that a net of functionals tends to a functional in if and only if Usually (but not necessarily) the class is supposed to satisfy the following conditions: Each point of belongs to some set : Each two sets and are contained in some set : is closed under the operation of multiplication by scalars: If these requirements are fulfilled then the corresponding topology on is Hausdorff and the sets form its local base. Here are the three most important special cases. The strong topology on is the topology of uniform convergence on bounded subsets in (so here can be chosen as the class of all bounded subsets in ). If is a normed vector space (for example, a Banach space or a Hilbert space) then the strong topology on is normed (in fact a Banach space if the field of scalars is complete), with the norm The stereotype topology on is the topology of uniform convergence on totally bounded sets in (so here can be chosen as the class of all totally bounded subsets in ). The weak topology on is the topology of uniform convergence on finite subsets in (so here can be chosen as the class of all finite subsets in ). Each of these three choices of topology on leads to a variant of reflexivity property for topological vector spaces: If is endowed with the strong topology, then the corresponding notion of reflexivity is the standard one: the spaces reflexive in this sense are just called reflexive. If is endowed with the stereotype dual topology, then the corresponding reflexivity is presented in the theory of stereotype spaces: the spaces reflexive in this sense are called stereotype. If is endowed with the weak topology, then the corresponding reflexivity is presented in the theory of dual pairs: the spaces reflexive in this sense are arbitrary (Hausdorff) locally convex spaces with the weak topology. 
Examples Let 1 < p < ∞ be a real number and consider the Banach space ℓ p of all sequences for which Define the number q by . Then the continuous dual of ℓ p is naturally identified with ℓ q: given an element , the corresponding element of is the sequence where denotes the sequence whose -th term is 1 and all others are zero. Conversely, given an element , the corresponding continuous linear functional on is defined by for all (see Hölder's inequality). In a similar manner, the continuous dual of is naturally identified with (the space of bounded sequences). Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c0 (the sequences converging to zero) are both naturally identified with . By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space. This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics. By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures. Transpose of a continuous linear map If is a continuous linear map between two topological vector spaces, then the (continuous) transpose is defined by the same formula as before: The resulting functional is in . The assignment produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from to . When T and U are composable continuous linear maps, then When V and W are normed spaces, the norm of the transpose in is equal to that of T in . Several properties of transposition depend upon the Hahn–Banach theorem. For example, the bounded linear map T has dense range if and only if the transpose is injective. When T is a compact linear map between two Banach spaces V and W, then the transpose is compact. This can be proved using the Arzelà–Ascoli theorem. When V is a Hilbert space, there is an antilinear isomorphism iV from V onto its continuous dual . For every bounded linear map T on V, the transpose and the adjoint operators are linked by When T is a continuous linear map between two topological vector spaces V and W, then the transpose is continuous when and are equipped with "compatible" topologies: for example, when for and , both duals have the strong topology of uniform convergence on bounded sets of X, or both have the weak-∗ topology of pointwise convergence on X. The transpose is continuous from to , or from to . Annihilators Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in , Then, the dual of the quotient can be identified with W⊥, and the dual of W can be identified with the quotient . Indeed, let P denote the canonical surjection from V onto the quotient ; then, the transpose is an isometric isomorphism from into , with range equal to W⊥. If j denotes the injection map from W into V, then the kernel of the transpose is the annihilator of W: and it follows from the Hahn–Banach theorem that induces an isometric isomorphism . Further properties If the dual of a normed space is separable, then so is the space itself. The converse is not true: for example, the space is separable, but its dual is not. 
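Since the displayed formulas in the ℓ p example above were stripped, the key relations can be restated here in standard textbook form (a reconstruction, not necessarily verbatim from the article): for 1 < p < ∞ and 1/p + 1/q = 1, an element y of ℓ q acts on x in ℓ p by

\[
  y(x) \;=\; \sum_{n=1}^{\infty} x_n y_n ,
  \qquad
  \Bigl| \sum_{n=1}^{\infty} x_n y_n \Bigr|
  \;\le\;
  \Bigl( \sum_{n=1}^{\infty} |x_n|^p \Bigr)^{1/p}
  \Bigl( \sum_{n=1}^{\infty} |y_n|^q \Bigr)^{1/q}
  \;=\; \|x\|_p \, \|y\|_q ,
\]

the inequality being Hölder's inequality, which guarantees that the pairing converges absolutely.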
Double dual In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator from a normed space V into its continuous double dual , defined by As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning for all . Normed spaces for which the map Ψ is a bijection are called reflexive. When V is a topological vector space then Ψ(x) can still be defined by the same formula, for every , however several difficulties arise. First, when V is not locally convex, the continuous dual may be equal to { 0 } and the map Ψ trivial. However, if V is Hausdorff and locally convex, the map Ψ is injective from V to the algebraic dual of the continuous dual, again as a consequence of the Hahn–Banach theorem. Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual , so that the continuous double dual is not uniquely defined as a set. Saying that Ψ maps from V to , or in other words, that Ψ(x) is continuous on for every , is a reasonable minimal requirement on the topology of , namely that the evaluation mappings be continuous for the chosen topology on . Further, there is still a choice of a topology on , and continuity of Ψ depends upon this choice. As a consequence, defining reflexivity in this framework is more involved than in the normed case. See also Covariance and contravariance of vectors Dual module Dual norm Duality (mathematics) Duality (projective geometry) Pontryagin duality Reciprocal lattice – dual space basis, in crystallography Notes References Bibliography . External links Functional analysis Linear algebra Space Linear functionals
Dual space
[ "Mathematics" ]
4,785
[ "Functions and mappings", "Mathematical structures", "Functional analysis", "Mathematical objects", "Mathematical relations", "Category theory", "Duality theories", "Geometry", "Linear algebra", "Algebra" ]
7,990
https://en.wikipedia.org/wiki/Data%20warehouse
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence. Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized so as to make it easy to create reports, query and get insights from the data. Unlike databases, they are intended to be used by analysts and managers to help make organizational decisions. The data stored in the warehouse is uploaded from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing for additional operations to ensure data quality before it is used in the data warehouse for reporting. The two main approaches for building a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT). Components The environment for data warehouses and marts includes the following: Source systems of data (often, the company's operational databases, such as relational databases); Data integration technology and processes to extract data from source systems, transform them, and load them into a data mart or warehouse; Architectures to store data in the warehouse or marts; Tools and applications for varied users; Metadata, data quality, and governance processes. Metadata includes data sources (database, table, and column names), refresh schedules and data usage measures. Related systems Operational databases Operational databases are optimized for the preservation of data integrity and speed of recording of business transactions through use of database normalization and an entity–relationship model. Operational system designers generally follow Codd's 12 rules of database normalization to ensure data integrity. Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged. Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse. Online analytical processing (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. 
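As a toy illustration of the row-oriented versus column-oriented distinction drawn above (the data and layouts are invented for illustration; real systems store these layouts on disk with compression and indexing), the same three sales records can be held either way, and a single-column aggregate of the kind common in analytic queries only needs to touch one array in the columnar layout:

# Row-oriented layout (typical of OLTP systems): one record per transaction.
rows = [
    {"order_id": 1, "region": "EMEA", "amount": 120.50},
    {"order_id": 2, "region": "APAC", "amount": 80.00},
    {"order_id": 3, "region": "EMEA", "amount": 15.25},
]

# Column-oriented layout (typical of analytic systems): one array per field.
columns = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APAC", "EMEA"],
    "amount":   [120.50, 80.00, 15.25],
}

print(sum(r["amount"] for r in rows))   # row store: scans every whole record
print(sum(columns["amount"]))           # column store: scans just one column
# Both print 215.75; the difference is how much data each layout must touch.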
The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing & dicing. Online transaction processing (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining data integrity in multi-access environments. For OLTP systems, performance is the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF). Normalization is the norm for data modeling techniques in this system. Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models in order to predict future outcomes. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for customer relationship management (CRM). Data marts A data mart is a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data. As with warehouses, stored data is usually not normalized. Types of data marts include dependent, independent, and hybrid data marts. Variants ETL The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data. The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support. However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata. ELT ELT-based data warehousing dispenses with a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and then loaded directly into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself. Finally, the transformed data is loaded into target tables in the same data warehouse. Benefits A data warehouse maintains a copy of information from the source transaction systems. 
This architectural complexity provides the opportunity to: Integrate data from multiple sources into a single database and data model. Congregate more data into a single database so that a single query engine can be used to present data in an operational data store. Mitigate the problem of isolation-level lock contention in transaction processing systems caused by long-running analysis queries in transaction processing databases. Maintain data history, even if the source transaction systems do not. Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization grows through mergers. Improve data quality, by providing consistent codes and descriptions, flagging or even fixing bad data. Present the organization's information consistently. Provide a single common data model for all data of interest regardless of data source. Restructure the data so that it makes sense to the business users. Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems. Add value to operational business applications, notably customer relationship management (CRM) systems. Make decision-support queries easier to write. Organize and disambiguate repetitive data. History The concept of data warehousing dates back to the late 1980s when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually from long-term existing operational systems (usually referred to as legacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "data marts" that was tailored for ready access by users. Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise. Key developments in early years of data warehousing: 1960s – General Mills and Dartmouth College, in a joint research project, develop the terms dimensions and facts. 1970s – ACNielsen and IRI provide dimensional data marts for retail sales. 
1970s – Bill Inmon begins to define and discuss the term Data Warehouse. 1975 – Sperry Univac introduces MAPPER (MAintain, Prepare, and Produce Executive Reports), a database management and reporting system that includes the world's first 4GL. It is the first platform designed for building Information Centers (a forerunner of contemporary data warehouse technology). 1983 – Teradata introduces the DBC/1012 database computer specifically designed for decision support. 1984 – Metaphor Computer Systems, founded by David Liddle and Don Massaro, releases a hardware/software package and GUI for business users to create a database management and analytic system. 1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and information system" where they introduce the term "business data warehouse". 1990 – Red Brick Systems, founded by Ralph Kimball, introduces Red Brick Warehouse, a database management system specifically for data warehousing. 1991 – James M. Kerr authors The IRM Imperative, which suggests data resources could be reported as an asset on a balance sheet, furthering commercial interest in the establishment of data warehouses. 1991 – Prism Solutions, founded by Bill Inmon, introduces Prism Warehouse Manager, software for developing a data warehouse. 1992 – Bill Inmon publishes the book Building the Data Warehouse. 1995 – The Data Warehousing Institute, a for-profit organization that promotes data warehousing, is founded. 1996 – Ralph Kimball publishes the book The Data Warehouse Toolkit. 1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling approach, with Patrik Lager as one of the main drivers. 2000 – Dan Linstedt releases in the public domain the Data vault modeling, conceived in 1990 as an alternative to Inmon and Kimball to provide long-term historical storage of data coming in from multiple operational systems, with emphasis on tracing, auditing and resilience to change of the source data model. 2008 – Bill Inmon, along with Derek Strauss and Genia Neushloss, publishes "DW 2.0: The Architecture for the Next Generation of Data Warehousing", explaining his top-down approach to data warehousing and coining the term, data-warehousing 2.0. 2008 – Anchor modeling was formalized in a paper presented at the International Conference on Conceptual Modeling, and won the best paper award 2012 – Bill Inmon develops and makes public technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard data base format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of textual ETL. Textual disambiguation is useful wherever raw text is found, such as in documents, Hadoop, email, and so forth. 2013 – Data vault 2.0 was released, having some minor changes to the modeling method, as well as integration with best practices from other methodologies, architectures and implementations including agile and CMMI principles Data organization Facts A fact is a value or measurement in the system being managed. Raw facts are ones reported by the reporting entity. 
For example, in a mobile telephone system, if a base transceiver station (BTS) receives 1,000 requests for traffic channel allocation, allocates 820, and rejects the rest, it could report three facts to a management system: Raw facts are aggregated to higher levels in various dimensions to extract information more relevant to the service or business. These are called aggregated facts or summaries. For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension. For example: Dimensional versus normalized approach for storage of data The two most important approaches to store data in a warehouse are dimensional and normalized. The dimensional approach uses a star schema as proposed by Ralph Kimball. The normalized approach, also called the third normal form (3NF), is an entity-relational normalized model proposed by Bill Inmon. Dimensional approach In a dimensional approach, transaction data is partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order. This dimensional approach makes data easier to understand and speeds up data retrieval. Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not involve a relational database every time. Thus, this type of modeling technique is very useful for end-user queries in the data warehouse. The model of facts and dimensions can also be understood as a data cube, where dimensions are the categorical coordinates in a multi-dimensional cube, and the fact is a value corresponding to the coordinates. The main disadvantages of the dimensional approach are: It is complicated to maintain the integrity of facts and dimensions, loading the data warehouse with data from different operational systems. It is difficult to modify the warehouse structure if the organization changes the way it does business. Normalized approach In the normalized approach, the data in the warehouse are stored following, to a degree, database normalization rules. Normalized relational database tables are grouped into subject areas (for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and access the information without a precise understanding of the data sources and the data structure of the data warehouse. Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. 
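To make the dimensional model described above concrete, here is a minimal Python sketch of a star schema: a fact table holding numeric measures keyed to dimension tables that supply the context. It is only illustrative and not taken from the article; all table names, fields, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class CustomerDim:        # dimension: who
    customer_key: int
    name: str
    city: str

@dataclass
class DateDim:            # dimension: when
    date_key: int
    year: int
    month: int

@dataclass
class SalesFact:          # fact: foreign keys to dimensions plus numeric measures
    customer_key: int
    date_key: int
    units_ordered: int
    total_price: float

customers = {1: CustomerDim(1, "Acme Ltd", "Oslo")}
dates = {20240105: DateDim(20240105, 2024, 1)}
facts = [SalesFact(1, 20240105, 3, 299.0)]

# A typical star-schema query: total revenue for Oslo customers in January 2024.
total = sum(
    f.total_price
    for f in facts
    if dates[f.date_key].year == 2024
    and dates[f.date_key].month == 1
    and customers[f.customer_key].city == "Oslo"
)
print(total)  # 299.0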
Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008). In Information-Driven Business, Robert Hillard compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure. Design methods Bottom-up design In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts. Top-down design The top-down approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse. Hybrid design Data warehouses often resemble the hub and spokes architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning, generating large amounts of data. To consolidate these various data models, and facilitate the extract transform load process, data warehouses often make use of an operational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse. A hybrid (also called ensemble) data warehouse database is kept on third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a master data management repository where operational (not static) information could reside. The data vault modeling components follow hub and spokes architecture. This modeling style is a hybrid design, consisting of the best practices from both third normal form and star schema. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be end-user accessible, which, when built, still requires the use of a data mart or star schema-based release area for business purposes. Characteristics There are basic features that define the data in the data warehouse that include subject orientation, data integration, time-variant, nonvolatile data, and data granularity. Subject-oriented Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. 
Subject orientation is not database normalization. Subject orientation can be really useful for decision-making. Gathering the required objects is called subject-oriented. Integrated The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed. Consistencies include naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth. Time-variant While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to 10 years) which means it stores mostly historical data. It is mainly meant for data mining and forecasting. (E.g. if a user is searching for a buying pattern of a specific customer, the user needs to look at data on the current and past purchases.) Nonvolatile The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so). Options Aggregation In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start looking at the total sale units of a product in an entire region. Then the user looks at the states in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of details. Virtualization With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources creating a virtual data warehouse. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach. Architecture The different methods used to construct/organize a data warehouse specified by an organization are numerous. The hardware utilized, software created and data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses have multiple phases in which the requirements of the organization are modified and fine-tuned. Evolution in organization use These terms refer to the level of sophistication of a data warehouse: Offline operational data warehouse Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems and the data is stored in an integrated reporting-oriented database. Offline data warehouse Data warehouses at this stage are updated from data in the operational systems on a regular basis and the data warehouse data are stored in a data structure designed to facilitate reporting. 
On-time data warehouse Online integrated data warehousing represents the real-time stage of data warehouse evolution: data in the warehouse is updated for every transaction performed on the source data. Integrated data warehouse These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems. See also List of business intelligence software References Further reading Davenport, Thomas H. and Harris, Jeanne G. Competing on Analytics: The New Science of Winning (2007) Harvard Business School Press. Ganczarski, Joe. Data Warehouse Implementations: Critical Implementation Factors Study (2009) VDM Verlag. Kimball, Ralph and Ross, Margy. The Data Warehouse Toolkit Third Edition (2013) Wiley. Linstedt, Graziano, Hultgren. The Business of Data Vault Modeling Second Edition (2010) Dan Linstedt. Inmon, William. Building the Data Warehouse (2005) John Wiley and Sons. Data engineering
Data warehouse
[ "Engineering" ]
4,723
[ "Software engineering", "Data engineering" ]
7,991
https://en.wikipedia.org/wiki/Disperser
A disperser is a one-sided extractor. Where an extractor requires that every event gets the same probability under the uniform distribution and the extracted distribution, only the latter is required for a disperser. So for a disperser, an event A ⊆ {0,1}^m with Pr_{U_m}[A] > ε must receive nonzero probability under the extracted distribution. Definition (Disperser): A (k, ε)-disperser is a function Dis: {0,1}^n × {0,1}^d → {0,1}^m such that for every distribution X on {0,1}^n with H_∞(X) ≥ k, the support of the distribution Dis(X, U_d) is of size at least (1 − ε)2^m. Graph theory An (N, M, D, K, e)-disperser is a bipartite graph with N vertices on the left side, each with degree D, and M vertices on the right side, such that every subset of K vertices on the left side is connected to more than (1 − e)M vertices on the right. An extractor is a related type of graph that guarantees an even stronger property; every (N, M, D, K, e)-extractor is also an (N, M, D, K, e)-disperser. Other meanings A disperser is a high-speed mixing device used to disperse or dissolve pigments and other solids into a liquid. See also Expander graph References Graph families
Disperser
[ "Mathematics" ]
240
[ "Combinatorics stubs", "Combinatorics" ]
8,007
https://en.wikipedia.org/wiki/Diameter
In geometry, a diameter of a circle is any straight line segment that passes through the centre of the circle and whose endpoints lie on the circle. It can also be defined as the longest chord of the circle. Both definitions are also valid for the diameter of a sphere. In more modern usage, the length of a diameter is also called the diameter. In this sense one speaks of the diameter rather than a diameter (which refers to the line segment itself), because all diameters of a circle or sphere have the same length, this being twice the radius. The word "diameter" is derived from (), "diameter of a circle", from (), "across, through" and (), "measure". It is often abbreviated or Constructions With straightedge and compass, a diameter of a given circle can be constructed as the perpendicular bisector of an arbitrary chord. Drawing two diameters in this way can be used to locate the centre of a circle, as their crossing point. To construct a diameter parallel to a given line, choose the chord to be perpendicular to the line. The circle having a given line segment as its diameter can be constructed by straightedge and compass, by finding the midpoint of the segment and then drawing the circle centered at the midpoint through one of the ends of the line segment. Symbol The symbol or variable for diameter, , is sometimes used in technical drawings or specifications as a prefix or suffix for a number (e.g. "⌀ 55 mm"), indicating that it represents diameter. Photographic filter thread sizes are often denoted in this way. The symbol has a code point in Unicode at , in the Miscellaneous Technical set. It should not be confused with several other characters (such as or ) that resemble it but have unrelated meanings. It has the compose sequence . Generalizations The definitions given above are only valid for circles and spheres. However, they are special cases of a more general definition that is valid for any kind of -dimensional object, or a set of scattered points. The diameter of a set is the least upper bound of the set of all distances between pairs of points in the set. A different and incompatible definition is sometimes used for the diameter of a conic section. In this context, a diameter is any chord which passes through the conic's centre. A diameter of an ellipse is any line passing through the centre of the ellipse. Half of any such diameter may be called a semidiameter, although this term is most often a synonym for the radius of a circle or sphere. The longest diameter is called the major axis. Conjugate diameters are a pair of diameters where one is parallel to a tangent to the ellipse at the endpoint of the other diameter. Several kinds of object can be measured by equivalent diameter, the diameter of a circular or spherical approximation to the object. This includes hydraulic diameter, the equivalent diameter of a channel or pipe through which liquid flows, and the Sauter mean diameter of a collection of particles. The diameter of a circle is exactly twice its radius. However, this is true only for a circle, and only in the Euclidean metric. Jung's theorem provides more general inequalities relating the diameter to the radius. See also Caliper, micrometer, tools for measuring diameters Eratosthenes, who calculated the diameter of the Earth around 240 BC. References Elementary geometry Length Circles
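The set-diameter definition in the Generalizations section above can be written compactly; the following display is a standard restatement rather than a quotation from the article:
\operatorname{diam}(A) \;=\; \sup_{x,\,y \in A} d(x, y)
where d is the distance function of the ambient metric space; for a disc or ball of radius r the supremum is attained and equals 2r, consistent with the circle case discussed above.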
Diameter
[ "Physics", "Mathematics" ]
695
[ "Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Elementary mathematics", "Elementary geometry", "Length", "Wikipedia categories named after physical quantities", "Circles", "Pi" ]
8,078
https://en.wikipedia.org/wiki/Dynamite
Dynamite is an explosive made of nitroglycerin, sorbents (such as powdered shells or clay), and stabilizers. It was invented by the Swedish chemist and engineer Alfred Nobel in Geesthacht, Northern Germany, and was patented in 1867. It rapidly gained wide-scale use as a more robust alternative to the traditional black powder explosives. It allows the use of nitroglycerine's favorable explosive properties while greatly reducing its risk of accidental detonation. History Dynamite was invented by Swedish chemist Alfred Nobel in 1866 and was the first safely manageable explosive stronger than black powder. Alfred Nobel's father, Immanuel Nobel, was an industrialist, engineer, and inventor. He built bridges and buildings in Stockholm and founded Sweden's first rubber factory. His construction work inspired him to research new methods of blasting rock that were more effective than black powder. After some bad business deals in Sweden, in 1838 Immanuel moved his family to Saint Petersburg, where Alfred and his brothers were educated privately under Swedish and Russian tutors. At the age of 17, Alfred Nobel was sent abroad for two years; in the United States he met Swedish engineer John Ericsson and in France studied under famed chemist Théophile-Jules Pelouze and his pupil Ascanio Sobrero, who had first synthesized nitroglycerin in 1847. Pelouze cautioned Nobel against using nitroglycerine as a commercial explosive because of its great sensitivity to shock. In 1857, Nobel filed the first of several hundred patents, mostly concerning air pressure, gas and fluid gauges, but remained fascinated with nitroglycerin's potential as an explosive. Nobel, along with his father and brother Emil, experimented with various combinations of nitroglycerin and black powder. Nobel came up with a way to safely detonate nitroglycerin by inventing the detonator, or blasting cap, that allowed a controlled explosion set off from a distance using a fuse. In 1863 Nobel performed his first successful detonation of pure nitroglycerin, using a blasting cap made of a copper percussion cap and mercury fulminate. In 1864, Alfred Nobel filed patents for both the blasting cap and his method of synthesizing nitroglycerin, using sulfuric acid, nitric acid and glycerin. On 3 September 1864, while experimenting with nitroglycerin, Emil and several others were killed in an explosion at the factory at Immanuel Nobel's estate at Heleneborg. After this, Alfred founded the company Nitroglycerin Aktiebolaget in Vinterviken to continue work in a more isolated area and the following year moved to Germany, where he founded another company, Dynamit Nobel. Despite the invention of the blasting cap, the instability of nitroglycerin rendered it useless as a commercial explosive. To solve this problem, Nobel sought to combine it with another substance that would make it safe for transport and handling but would not reduce its effectiveness as an explosive. He tried combinations of cement, coal, and sawdust, but was unsuccessful. Finally, he tried diatomaceous earth, which is fossilized algae, that he brought from the Elbe River near his factory in Hamburg, which successfully stabilized the nitroglycerin into a portable explosive. Nobel obtained patents for his inventions in England on 7 May 1867 and in Sweden on 19 October 1867. After its introduction, dynamite rapidly gained wide-scale use as a safe alternative to black powder and nitroglycerin. Nobel tightly controlled the patents, and unlicensed duplicating companies were quickly shut down. 
A few American businessmen got around the patent by using absorbents other than diatomaceous earth, such as resin. Nobel originally sold dynamite as "Nobel's Blasting Powder" and later changed the name to dynamite, from the Ancient Greek word dýnamis (), meaning "power". Manufacture Form Dynamite is usually sold in the form of cardboard cylinders about long and about in diameter, with a mass of about . A stick of dynamite thus produced contains roughly 1 MJ (megajoule) of energy. Other sizes also exist, rated by either portion (Quarter-Stick or Half-Stick) or by weight. Dynamite is usually rated by "weight strength" (the amount of nitroglycerin it contains), usually from 20% to 60%. For example, 40% dynamite is composed of 40% nitroglycerin and 60% "dope" (the absorbent storage medium mixed with the stabilizer and any additives). Storage considerations The maximum shelf life of nitroglycerin-based dynamite is recommended as one year from the date of manufacture under good storage conditions. Over time, regardless of the sorbent used, sticks of dynamite will "weep" or "sweat" nitroglycerin, which can then pool in the bottom of the box or storage area. For that reason, explosive manuals recommend the regular up-ending of boxes of dynamite in storage. Crystals will form on the outside of the sticks, causing them to be even more sensitive to shock, friction, and temperature. Therefore, while the risk of an explosion without the use of a blasting cap is minimal for fresh dynamite, old dynamite is dangerous. Modern packaging helps eliminate this by placing the dynamite into sealed plastic bags and using wax-coated cardboard. Dynamite is moderately sensitive to shock. Shock resistance tests are usually carried out with a drop-hammer: about 100 mg of explosive is placed on an anvil, upon which a weight of between is dropped from different heights until detonation is achieved. With a hammer of 2 kg, mercury fulminate detonates with a drop distance of 1 to 2 cm, nitroglycerin with 4 to 5 cm, dynamite with 15 to 30 cm, and ammoniacal explosives with 40 to 50 cm. Major manufacturers South Africa For several decades beginning in the 1940s, the largest producer of dynamite in the world was the Union of South Africa. There, the De Beers company established a factory in 1902 at Somerset West. The explosives factory was later operated by AECI (African Explosives and Chemical Industries). The demand for the product came mainly from the country's vast gold mines, centered on the Witwatersrand. The factory at Somerset West was in operation in 1903 and by 1907 it was already producing 340,000 cases, each, annually. A rival factory at Modderfontein was producing another 200,000 cases per year. There were two large explosions at the Somerset West plant during the 1960s. Some workers died, but the loss of life was limited by the modular design of the factory and its earth works, and the planting of trees that directed the blasts upward. There were several other explosions at the Modderfontein factory. After 1985, pressure from trade unions forced AECI to phase out the production of dynamite. The factory then went on to produce ammonium nitrate emulsion-based explosives that are safer to manufacture and handle. United States Dynamite was first manufactured in the US by the Giant Powder Company of San Francisco, California, whose founder had obtained the exclusive rights from Nobel in 1867. 
Giant was eventually acquired by DuPont, which produced dynamite under the Giant name until Giant was dissolved by DuPont in 1905. Thereafter, DuPont produced dynamite under its own name until 1911–12, when its explosives monopoly was broken up by the U.S. Circuit Court in the "Powder Case". Two new companies were formed upon the breakup, the Hercules Powder Company and the Atlas Powder Company, which took up the manufacture of dynamite (in different formulations). Currently, only Dyno Nobel manufactures dynamite in the US. The only facility producing it is located in Carthage, Missouri, but the material is purchased from Dyno Nobel by other manufacturers who put their labels on the dynamite and boxes. Non-dynamite explosives Other explosives are often referred to or confused with dynamite: TNT Trinitrotoluene (TNT) is often assumed to be the same as (or confused for) dynamite largely because of the ubiquity of both explosives during the 20th century. This incorrect connection between TNT and dynamite was enhanced by cartoons such as Bugs Bunny, where animators labeled any kind of bomb (ranging from sticks of dynamite to kegs of black powder) as TNT, because the acronym was shorter and more memorable and did not require literacy to recognize that TNT meant "bomb". Aside from both being high explosives, TNT and dynamite have little in common. TNT is a second generation castable explosive adopted by the military, while dynamite, in contrast, has never been popular in warfare because it degenerates quickly under severe conditions and can be detonated by either fire or a wayward bullet. The German armed forces adopted TNT as a filling for artillery shells in 1902, some 40 years after the invention of dynamite, which is a first generation phlegmatized explosive primarily intended for civilian earthmoving. TNT has never been popular or widespread in civilian earthmoving, as it is considerably more expensive and less powerful by weight than dynamite, as well as being slower to mix and pack into boreholes. TNT's primary asset is its remarkable insensitivity and stability: it is waterproof and incapable of detonating without the extreme shock and heat provided by a blasting cap (or a sympathetic detonation); this stability also allows it to be melted at , poured into high explosive shells and allowed to re-solidify, with no extra danger or change in the TNT's characteristics. Accordingly, more than 90% of the TNT produced in America was always for the military market, with most TNT used for filling shells, hand grenades and aerial bombs, and the remainder being packaged in brown "bricks" (not red cylinders) for use as demolition charges by combat engineers. "Extra" dynamite In the United States, in 1885, the chemist Russell S. Penniman invented "ammonium dynamite", a form of explosive that used ammonium nitrate as a substitute for the more costly nitroglycerin. Ammonium nitrate has only 85% of the chemical energy of nitroglycerin. It is rated by either "weight strength" (the amount of ammonium nitrate in the medium) or "cartridge strength" (the potential explosive strength generated by an amount of explosive of a certain density and grain size used in comparison to the explosive strength generated by an equivalent density and grain size of a standard explosive). For example, high-explosive 65% Extra dynamite has a weight strength of 65% ammonium nitrate and 35% "dope" (the absorbent medium mixed with the stabilizers and additives). 
Its "cartridge strength" would be its weight in pounds times its strength in relation to an equal amount of ANFO (the civilian baseline standard) or TNT (the military baseline standard). For example, 65% ammonium dynamite with a 20% cartridge strength would mean the stick was equal to an equivalent weight strength of 20% ANFO. "Military dynamite" "Military dynamite" (or M1 dynamite) is a dynamite substitute made with more stable ingredients than nitroglycerin. It contains 75% RDX, 15% TNT and 10% desensitizers and plasticizers. It has only 60% equivalent strength as commercial dynamite, but is much safer to store and handle. Regulation Various countries around the world have enacted explosives laws and require licenses to manufacture, distribute, store, use, and possess explosives or ingredients. See also Blast fishing Blasting machine Dynamite gun Nobel Prize Relative effectiveness factor References Further reading Cartwright, A. P. (1964). The dynamite Company: The Story of African Explosives and Chemical Industries Limited. Cape Town: Purnell & Sons (S.A.) (Pty) Ltd. Schück, H. and Sohlman, R. (1929). The Life of Alfred Nobel. London: William Heinemann Ltd. External links Alfred Nobel’s dynamite companies Oregon State Police – Arson and Explosives Section (Handling instructions and photos) (Dynamite US patent) Dynamite and TNT at The Periodic Table of Videos (University of Nottingham) Alfred Nobel Explosives Swedish inventions 1867 introductions 19th-century inventions
Dynamite
[ "Chemistry" ]
2,493
[ "Explosives", "Explosions" ]
8,080
https://en.wikipedia.org/wiki/List%20of%20decades%2C%20centuries%2C%20and%20millennia
The list below includes links to articles with further details for each decade, century, and millennium from 15,000 BC to AD 3000. Notes See also List of years Timelines of world history List of timelines Chronology See calendar and list of calendars for other groupings of years. See history, history by period, and periodization for different organizations of historical events. For earlier time periods, see Timeline of the Big Bang, Geologic time scale, Timeline of evolution, and Logarithmic timeline. Decades Historical timelines
List of decades, centuries, and millennia
[ "Physics" ]
108
[ "Physical quantities", "Time", "Lists by time", "Wikipedia timelines", "Spacetime" ]
8,082
https://en.wikipedia.org/wiki/Diamond
Diamond is a solid form of the element carbon with its atoms arranged in a crystal structure called diamond cubic. As a form of carbon, diamond is a tasteless, odourless, strong, brittle solid; it is colourless in pure form, a poor conductor of electricity, and insoluble in water. Another solid form of carbon known as graphite is the chemically stable form of carbon at room temperature and pressure, but diamond is metastable and converts to it at a negligible rate under those conditions. Diamond has the highest hardness and thermal conductivity of any natural material, properties that are used in major industrial applications such as cutting and polishing tools. They are also the reason that diamond anvil cells can subject materials to pressures found deep in the Earth. Because the arrangement of atoms in diamond is extremely rigid, few types of impurity can contaminate it (two exceptions are boron and nitrogen). Small numbers of defects or impurities (about one per million of lattice atoms) can color a diamond blue (boron), yellow (nitrogen), brown (defects), green (radiation exposure), purple, pink, orange, or red. Diamond also has a very high refractive index and a relatively high optical dispersion. Most natural diamonds have ages between 1 billion and 3.5 billion years. Most were formed at depths between in the Earth's mantle, although a few have come from as deep as . Under high pressure and temperature, carbon-containing fluids dissolved various minerals and replaced them with diamonds. Much more recently (hundreds of millions to tens of millions of years ago), they were carried to the surface in volcanic eruptions and deposited in igneous rocks known as kimberlites and lamproites. Synthetic diamonds can be grown from high-purity carbon under high pressures and temperatures or from hydrocarbon gases by chemical vapor deposition (CVD). Properties Diamond is a solid form of pure carbon with its atoms arranged in a crystal. Solid carbon comes in different forms known as allotropes depending on the type of chemical bond. The two most common allotropes of pure carbon are diamond and graphite. In graphite, the bonds are sp2 orbital hybrids and the atoms form in planes, with each bound to three nearest neighbors, 120 degrees apart. In diamond, they are sp3 and the atoms form tetrahedra, with each bound to four nearest neighbors. Tetrahedra are rigid, the bonds are strong, and, of all known substances, diamond has the greatest number of atoms per unit volume, which is why it is both the hardest and the least compressible. It also has a high density, ranging from 3150 to 3530 kilograms per cubic metre (over three times the density of water) in natural diamonds and 3520 kg/m³ in pure diamond. In graphite, the bonds between nearest neighbors are even stronger, but the bonds between parallel adjacent planes are weak, so the planes easily slip past each other. Thus, graphite is much softer than diamond. However, the stronger bonds make graphite less flammable. Diamond has been adopted for many uses because of its exceptional physical characteristics. It has the highest thermal conductivity and the highest sound velocity. It has low adhesion and friction, and its coefficient of thermal expansion is extremely low. Its optical transparency extends from the far infrared to the deep ultraviolet and it has high optical dispersion. It also has high electrical resistance. It is chemically inert, not reacting with most corrosive substances, and has excellent biological compatibility. 
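As a consistency check on the density figure quoted above (not part of the original article), one can estimate the density of pure diamond from the diamond cubic unit cell described in the Crystal structure section below, using eight carbon atoms per cell, the lattice constant a = 3.567 Å, and the standard atomic mass of carbon, 12.011 u:
\rho \;\approx\; \frac{8 \times 12.011 \times 1.661 \times 10^{-27}\,\text{kg}}{(3.567 \times 10^{-10}\,\text{m})^{3}} \;\approx\; \frac{1.60 \times 10^{-25}\,\text{kg}}{4.54 \times 10^{-29}\,\text{m}^{3}} \;\approx\; 3.5 \times 10^{3}\,\text{kg/m}^{3}
in agreement with the roughly 3520 kg/m³ value for pure diamond.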
Thermodynamics The equilibrium pressure and temperature conditions for a transition between graphite and diamond are well established theoretically and experimentally. The equilibrium pressure varies linearly with temperature, between at and at (the diamond/graphite/liquid triple point). However, the phases have a wide region about this line where they can coexist. At standard temperature and pressure, and , the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. However, at temperatures above about , diamond rapidly converts to graphite. Rapid conversion of graphite to diamond requires pressures well above the equilibrium line: at , a pressure of is needed. Above the graphite–diamond–liquid carbon triple point, the melting point of diamond increases slowly with increasing pressure; but at pressures of hundreds of GPa, it decreases. At high pressures, silicon and germanium have a BC8 body-centered cubic crystal structure, and a similar structure is predicted for carbon at high pressures. At , the transition is predicted to occur at . Results published in an article in the scientific journal Nature Physics in 2010 suggest that, at ultra-high pressures and temperatures (about 10 million atmospheres or 1 TPa and 50,000 °C), diamond melts into a metallic fluid. The extreme conditions required for this to occur are present in the ice giants Neptune and Uranus. Both planets are made up of approximately 10 percent carbon and could hypothetically contain oceans of liquid carbon. Since large quantities of metallic fluid can affect the magnetic field, this could serve as an explanation as to why the geographic and magnetic poles of the two planets are unaligned. Crystal structure The most common crystal structure of diamond is called diamond cubic. It is formed of unit cells stacked together. A unit cell drawn with all of its atom positions shows 18 atoms, but each corner atom is shared by eight unit cells and each atom in the center of a face is shared by two, so there are a total of eight atoms per unit cell. The length of each side of the unit cell is denoted by a and is 3.567 angstroms (0.3567 nm). The nearest-neighbor distance in the diamond lattice is √3a/4 ≈ 1.732a/4, where a is the lattice constant. A diamond cubic lattice can be thought of as two interpenetrating face-centered cubic lattices with one displaced by of the diagonal along a cubic cell, or as one lattice with two atoms associated with each lattice point. Viewed from a crystallographic direction, it is formed of layers stacked in a repeating ABCABC ... pattern. Diamonds can also form an ABAB ... structure, which is known as hexagonal diamond or lonsdaleite, but this is far less common and is formed under different conditions from cubic carbon. Crystal habit Diamonds occur most often as euhedral or rounded octahedra and twinned octahedra known as macles. As diamond's crystal structure has a cubic arrangement of the atoms, they have many facets that belong to a cube, octahedron, rhombic dodecahedron, tetrakis hexahedron, or disdyakis dodecahedron. The crystals can have rounded-off and unexpressive edges and can be elongated. Diamonds (especially those with rounded crystal faces) are commonly found coated in nyf, an opaque gum-like skin. Some diamonds contain opaque fibers. They are referred to as opaque if the fibers grow from a clear substrate or fibrous if they occupy the entire crystal. 
Their colors range from yellow to green or gray, sometimes with cloud-like white to gray impurities. Their most common shape is cuboidal, but they can also form octahedra, dodecahedra, macles, or combined shapes. The structure is the result of numerous impurities with sizes between 1 and 5 microns. These diamonds probably formed in kimberlite magma and sampled the volatiles. Diamonds can also form polycrystalline aggregates. There have been attempts to classify them into groups with names such as boart, ballas, stewartite, and framesite, but there is no widely accepted set of criteria. Carbonado, a type in which the diamond grains were sintered (fused without melting by the application of heat and pressure), is black in color and tougher than single crystal diamond. It has never been observed in a volcanic rock. There are many theories for its origin, including formation in a star, but no consensus. Mechanical Hardness Diamond is the hardest material on the qualitative Mohs scale. To conduct the quantitative Vickers hardness test, samples of materials are struck with a pyramid of standardized dimensions using a known force – a diamond crystal is used for the pyramid to permit a wide range of materials to be tested. From the size of the resulting indentation, a Vickers hardness value for the material can be determined. Diamond's great hardness relative to other materials has been known since antiquity, and is the source of its name. This does not mean that it is infinitely hard, indestructible, or unscratchable. Indeed, diamonds can be scratched by other diamonds and worn down over time even by softer materials, such as vinyl phonograph records. Diamond hardness depends on its purity, crystalline perfection, and orientation: hardness is higher for flawless, pure crystals oriented to the <111> direction (along the longest diagonal of the cubic diamond lattice). Therefore, whereas it might be possible to scratch some diamonds with other materials, such as boron nitride, the hardest diamonds can only be scratched by other diamonds and nanocrystalline diamond aggregates. The hardness of diamond contributes to its suitability as a gemstone. Because it can only be scratched by other diamonds, it maintains its polish extremely well. Unlike many other gems, it is well-suited to daily wear because of its resistance to scratching—perhaps contributing to its popularity as the preferred gem in engagement or wedding rings, which are often worn every day. The hardest natural diamonds mostly originate from the Copeton and Bingara fields located in the New England area in New South Wales, Australia. These diamonds are generally small, perfect to semiperfect octahedra, and are used to polish other diamonds. Their hardness is associated with the crystal growth form, which is single-stage crystal growth. Most other diamonds show more evidence of multiple growth stages, which produce inclusions, flaws, and defect planes in the crystal lattice, all of which affect their hardness. It is possible to treat regular diamonds under a combination of high pressure and high temperature to produce diamonds that are harder than the diamonds used in hardness gauges. Diamonds cut glass, but this does not positively identify a diamond because other materials, such as quartz, also lie above glass on the Mohs scale and can also cut it. Diamonds can scratch other diamonds, but this can result in damage to one or both stones. 
Hardness tests are infrequently used in practical gemology because of their potentially destructive nature. The extreme hardness and high value of diamond mean that gems are typically polished slowly, using painstaking traditional techniques and greater attention to detail than is the case with most other gemstones; these tend to result in extremely flat, highly polished facets with exceptionally sharp facet edges. Diamonds also possess an extremely high refractive index and fairly high dispersion. Taken together, these factors affect the overall appearance of a polished diamond and most diamantaires still rely upon skilled use of a loupe (magnifying glass) to identify diamonds "by eye". Toughness Somewhat related to hardness is another mechanical property, toughness, which is a material's ability to resist breakage from forceful impact. The toughness of natural diamond has been measured as 50–65 MPa·m^(1/2). This value is good compared to other ceramic materials, but poor compared to most engineering materials such as engineering alloys, which typically exhibit toughness over 80 MPa·m^(1/2). As with any material, the macroscopic geometry of a diamond contributes to its resistance to breakage. Diamond has a cleavage plane and is therefore more fragile in some orientations than others. Diamond cutters use this attribute to cleave some stones before faceting them. "Impact toughness" is one of the main indexes to measure the quality of synthetic industrial diamonds. Yield strength Diamond has a compressive yield strength of 130–140 GPa. This exceptionally high value, along with the hardness and transparency of diamond, is among the reasons that diamond anvil cells are the main tool for high pressure experiments. These anvils have reached pressures of . Much higher pressures may be possible with nanocrystalline diamonds. Elasticity and tensile strength Usually, attempting to deform bulk diamond crystal by tension or bending results in brittle fracture. However, when single crystalline diamond is in the form of micro/nanoscale wires or needles (~100–300 nanometers in diameter, micrometers long), they can be elastically stretched by as much as 9–10 percent tensile strain without failure, with a maximum local tensile stress of about , very close to the theoretical limit for this material. Electrical conductivity Other specialized applications also exist or are being developed, including use as semiconductors: some blue diamonds are natural semiconductors, in contrast to most diamonds, which are excellent electrical insulators. The conductivity and blue color originate from boron impurity. Boron substitutes for carbon atoms in the diamond lattice, donating a hole into the valence band. Substantial conductivity is commonly observed in nominally undoped diamond grown by chemical vapor deposition. This conductivity is associated with hydrogen-related species adsorbed at the surface, and it can be removed by annealing or other surface treatments. Thin needles of diamond can be made to vary their electronic band gap from the normal 5.6 eV to near zero by selective mechanical deformation. High-purity diamond wafers 5 cm in diameter exhibit perfect resistance in one direction and perfect conductance in the other, creating the possibility of using them for quantum data storage. The material contains only 3 parts per million of nitrogen. The diamond was grown on a stepped substrate, which eliminated cracking. 
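As a quick illustration (not from the article) of how the roughly 5.6 eV band gap mentioned above relates to the deep-ultraviolet absorption edge discussed in the Color section below, the standard photon-energy relation gives:
\lambda \;=\; \frac{hc}{E_g} \;\approx\; \frac{1240\ \text{eV·nm}}{5.5\text{–}5.6\ \text{eV}} \;\approx\; 220\text{–}225\ \text{nm}
so a defect-free crystal absorbs only in the deep ultraviolet and transmits the entire visible spectrum, which is why pure diamond appears colorless.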
Surface property Diamonds are naturally lipophilic and hydrophobic, which means the diamonds' surface cannot be wet by water, but can be easily wet and stuck by oil. This property can be utilized to extract diamonds using oil when making synthetic diamonds. However, when diamond surfaces are chemically modified with certain ions, they are expected to become so hydrophilic that they can stabilize multiple layers of water ice at human body temperature. The surface of diamonds is partially oxidized. The oxidized surface can be reduced by heat treatment under hydrogen flow. That is to say, this heat treatment partially removes oxygen-containing functional groups. But diamonds (sp3C) are unstable against high temperature (above about ) under atmospheric pressure. The structure gradually changes into sp2C above this temperature. Thus, diamonds should be reduced below this temperature. Chemical stability At room temperature, diamonds do not react with any chemical reagents including strong acids and bases. In an atmosphere of pure oxygen, diamond has an ignition point that ranges from to ; smaller crystals tend to burn more easily. It increases in temperature from red to white heat and burns with a pale blue flame, and continues to burn after the source of heat is removed. By contrast, in air the combustion will cease as soon as the heat is removed because the oxygen is diluted with nitrogen. A clear, flawless, transparent diamond is completely converted to carbon dioxide; any impurities will be left as ash. Heat generated from cutting a diamond will not ignite the diamond, and neither will a cigarette lighter, but house fires and blow torches are hot enough. Jewelers must be careful when molding the metal in a diamond ring. Diamond powder of an appropriate grain size (around 50microns) burns with a shower of sparks after ignition from a flame. Consequently, pyrotechnic compositions based on synthetic diamond powder can be prepared. The resulting sparks are of the usual red-orange color, comparable to charcoal, but show a very linear trajectory which is explained by their high density. Diamond also reacts with fluorine gas above about . Color Diamond has a wide band gap of corresponding to the deep ultraviolet wavelength of 225nanometers. This means that pure diamond should transmit visible light and appear as a clear colorless crystal. Colors in diamond originate from lattice defects and impurities. The diamond crystal lattice is exceptionally strong, and only atoms of nitrogen, boron, and hydrogen can be introduced into diamond during the growth at significant concentrations (up to atomic percents). Transition metals nickel and cobalt, which are commonly used for growth of synthetic diamond by high-pressure high-temperature techniques, have been detected in diamond as individual atoms; the maximum concentration is 0.01% for nickel and even less for cobalt. Virtually any element can be introduced to diamond by ion implantation. Nitrogen is by far the most common impurity found in gem diamonds and is responsible for the yellow and brown color in diamonds. Boron is responsible for the blue color. Color in diamond has two additional sources: irradiation (usually by alpha particles), that causes the color in green diamonds, and plastic deformation of the diamond crystal lattice. Plastic deformation is the cause of color in some brown and perhaps pink and red diamonds. 
In order of increasing rarity, yellow diamond is followed by brown, colorless, then by blue, green, black, pink, orange, purple, and red. "Black", or carbonado, diamonds are not truly black, but rather contain numerous dark inclusions that give the gems their dark appearance. Colored diamonds contain impurities or structural defects that cause the coloration, while pure or nearly pure diamonds are transparent and colorless. Most diamond impurities replace a carbon atom in the crystal lattice, known as a carbon flaw. The most common impurity, nitrogen, causes a slight to intense yellow coloration depending upon the type and concentration of nitrogen present. The Gemological Institute of America (GIA) classifies low saturation yellow and brown diamonds as diamonds in the normal color range, and applies a grading scale from "D" (colorless) to "Z" (light yellow). Yellow diamonds of high color saturation or a different color, such as pink or blue, are called fancy colored diamonds and fall under a different grading scale. In 2008, the Wittelsbach Diamond, a blue diamond once belonging to the King of Spain, fetched over US$24 million at a Christie's auction. In May 2009, a blue diamond fetched the highest price per carat ever paid for a diamond when it was sold at auction for 10.5 million Swiss francs (6.97 million euros, or US$9.5 million at the time). That record was, however, beaten the same year: a vivid pink diamond was sold for US$10.8 million in Hong Kong on December 1, 2009. Clarity Clarity is one of the 4C's (color, clarity, cut and carat weight) that helps in identifying the quality of diamonds. The Gemological Institute of America (GIA) developed 11 clarity scales to decide the quality of a diamond for its sale value. The GIA clarity scale spans from Flawless (FL) to included (I) having internally flawless (IF), very, very slightly included (VVS), very slightly included (VS) and slightly included (SI) in between. Impurities in natural diamonds are due to the presence of natural minerals and oxides. The clarity scale grades the diamond based on the color, size, location of impurity and quantity of clarity visible under 10x magnification. Inclusions in diamond can be extracted by optical methods. The process is to take pre-enhancement images, identifying the inclusion removal part and finally removing the diamond facets and noises. Fluorescence Between 25% and 35% of natural diamonds exhibit some degree of fluorescence when examined under invisible long-wave ultraviolet light or higher energy radiation sources such as X-rays and lasers. Incandescent lighting will not cause a diamond to fluoresce. Diamonds can fluoresce in a variety of colors including blue (most common), orange, yellow, white, green and very rarely red and purple. Although the causes are not well understood, variations in the atomic structure, such as the number of nitrogen atoms present are thought to contribute to the phenomenon. Thermal conductivity Diamonds can be identified by their high thermal conductivity (900–). Their high refractive index is also indicative, but other materials have similar refractivity. Geology Diamonds are extremely rare, with concentrations of at most parts per billion in source rock. Before the 20th century, most diamonds were found in alluvial deposits. Loose diamonds are also found along existing and ancient shorelines, where they tend to accumulate because of their size and density. 
Rarely, they have been found in glacial till (notably in Wisconsin and Indiana), but these deposits are not of commercial quality. These types of deposit were derived from localized igneous intrusions through weathering and transport by wind or water. Most diamonds come from the Earth's mantle, and most of this section discusses those diamonds. However, there are other sources. Some blocks of the crust, or terranes, have been buried deep enough as the crust thickened so they experienced ultra-high-pressure metamorphism. These have evenly distributed microdiamonds that show no sign of transport by magma. In addition, when meteorites strike the ground, the shock wave can produce high enough temperatures and pressures for microdiamonds and nanodiamonds to form. Impact-type microdiamonds can be used as an indicator of ancient impact craters. Popigai impact structure in Russia may have the world's largest diamond deposit, estimated at trillions of carats, and formed by an asteroid impact. A common misconception is that diamonds form from highly compressed coal. Coal is formed from buried prehistoric plants, and most diamonds that have been dated are far older than the first land plants. It is possible that diamonds can form from coal in subduction zones, but diamonds formed in this way are rare, and the carbon source is more likely carbonate rocks and organic carbon in sediments, rather than coal. Surface distribution Diamonds are far from evenly distributed over the Earth. A rule of thumb known as Clifford's rule states that they are almost always found in kimberlites on the oldest part of cratons, the stable cores of continents with typical ages of 2.5billion years or more. However, there are exceptions. The Argyle diamond mine in Australia, the largest producer of diamonds by weight in the world, is located in a mobile belt, also known as an orogenic belt, a weaker zone surrounding the central craton that has undergone compressional tectonics. Instead of kimberlite, the host rock is lamproite. Lamproites with diamonds that are not economically viable are also found in the United States, India, and Australia. In addition, diamonds in the Wawa belt of the Superior province in Canada and microdiamonds in the island arc of Japan are found in a type of rock called lamprophyre. Kimberlites can be found in narrow (1 to 4 meters) dikes and sills, and in pipes with diameters that range from about 75 m to 1.5 km. Fresh rock is dark bluish green to greenish gray, but after exposure rapidly turns brown and crumbles. It is hybrid rock with a chaotic mixture of small minerals and rock fragments (clasts) up to the size of watermelons. They are a mixture of xenocrysts and xenoliths (minerals and rocks carried up from the lower crust and mantle), pieces of surface rock, altered minerals such as serpentine, and new minerals that crystallized during the eruption. The texture varies with depth. The composition forms a continuum with carbonatites, but the latter have too much oxygen for carbon to exist in a pure form. Instead, it is locked up in the mineral calcite (). All three of the diamond-bearing rocks (kimberlite, lamproite and lamprophyre) lack certain minerals (melilite and kalsilite) that are incompatible with diamond formation. In kimberlite, olivine is large and conspicuous, while lamproite has Ti-phlogopite and lamprophyre has biotite and amphibole. 
They are all derived from magma types that erupt rapidly from small amounts of melt, are rich in volatiles and magnesium oxide, and are less oxidizing than more common mantle melts such as basalt. These characteristics allow the melts to carry diamonds to the surface before they dissolve. Exploration Kimberlite pipes can be difficult to find. They weather quickly (within a few years after exposure) and tend to have lower topographic relief than surrounding rock. If they are visible in outcrops, the diamonds are never visible because they are so rare. In any case, kimberlites are often covered with vegetation, sediments, soils, or lakes. In modern searches, geophysical methods such as aeromagnetic surveys, electrical resistivity, and gravimetry help identify promising regions to explore. This is aided by isotopic dating and modeling of the geological history. Then surveyors must go to the area and collect samples, looking for kimberlite fragments or indicator minerals. The latter have compositions that reflect the conditions where diamonds form, such as extreme melt depletion or high pressures in eclogites. However, indicator minerals can be misleading; a better approach is geothermobarometry, where the compositions of minerals are analyzed as if they were in equilibrium with mantle minerals. Finding kimberlites requires persistence, and only a small fraction contain diamonds that are commercially viable. The only major discoveries since about 1980 have been in Canada. Since existing mines have lifetimes of as little as 25 years, there could be a shortage of new natural diamonds in the future. Ages Diamonds are dated by analyzing inclusions using the decay of radioactive isotopes. Depending on the elemental abundances, one can look at the decay of rubidium to strontium, samarium to neodymium, uranium to lead, argon-40 to argon-39, or rhenium to osmium. Those found in kimberlites have ages ranging from , and there can be multiple ages in the same kimberlite, indicating multiple episodes of diamond formation. The kimberlites themselves are much younger. Most of them have ages between tens of millions and 300 million years, although there are some older exceptions (Argyle, Premier and Wawa). Thus, the kimberlites formed independently of the diamonds and served only to transport them to the surface. Kimberlites are also much younger than the cratons they have erupted through. The reason for the lack of older kimberlites is unknown, but it suggests there was some change in mantle chemistry or tectonics. No kimberlite has erupted in human history. Origin in mantle Most gem-quality diamonds come from depths of 150–250 km in the lithosphere. Such depths occur below cratons in mantle keels, the thickest part of the lithosphere. These regions have high enough pressure and temperature to allow diamonds to form and they are not convecting, so diamonds can be stored for billions of years until a kimberlite eruption samples them. Host rocks in a mantle keel include harzburgite and lherzolite, two types of peridotite. The dominant rock type in the upper mantle, peridotite is an igneous rock consisting mostly of the minerals olivine and pyroxene; it is low in silica and high in magnesium. However, diamonds in peridotite rarely survive the trip to the surface. Another common source that does keep diamonds intact is eclogite, a metamorphic rock that typically forms from basalt as an oceanic plate plunges into the mantle at a subduction zone. 
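The radiometric dating of inclusions described under Ages above boils down to the standard decay-age relation t = ln(1 + D/P)/λ, where D/P is the ratio of radiogenic daughter to remaining parent isotope and λ is the decay constant. The following is a minimal, illustrative Python sketch rather than a description of any particular laboratory's procedure; the decay constant is the commonly cited value for rubidium-87, and the example ratio is made up.

```python
import math

# Sketch of the generic decay-age relation used in dating diamond inclusions:
#   t = ln(1 + D/P) / lambda
# where D/P is the radiogenic daughter-to-parent ratio and lambda the decay constant.
RB87_DECAY_CONSTANT_PER_YEAR = 1.42e-11  # commonly cited value for Rb-87 -> Sr-87

def radiometric_age_years(daughter_to_parent: float,
                          decay_constant: float = RB87_DECAY_CONSTANT_PER_YEAR) -> float:
    """Age in years implied by a measured radiogenic daughter/parent ratio."""
    return math.log(1.0 + daughter_to_parent) / decay_constant

# Hypothetical example: a radiogenic 87Sr/87Rb ratio of 0.05 implies an age of about 3.4 billion years.
print(f"{radiometric_age_years(0.05):.2e} years")
```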
A smaller fraction of diamonds (about 150 have been studied) come from depths of 330–660 km, a region that includes the transition zone. They formed in eclogite but are distinguished from diamonds of shallower origin by inclusions of majorite (a form of garnet with excess silicon). A similar proportion of diamonds comes from the lower mantle at depths between 660 and 800 km. Diamond is thermodynamically stable at high pressures and temperatures, with the phase transition from graphite occurring at greater temperatures as the pressure increases. Thus, underneath continents it becomes stable at temperatures of 950degrees Celsius and pressures of 4.5 gigapascals, corresponding to depths of 150kilometers or greater. In subduction zones, which are colder, it becomes stable at temperatures of 800 °C and pressures of 3.5gigapascals. At depths greater than 240 km, iron–nickel metal phases are present and carbon is likely to be either dissolved in them or in the form of carbides. Thus, the deeper origin of some diamonds may reflect unusual growth environments. In 2018 the first known natural samples of a phase of ice called Ice VII were found as inclusions in diamond samples. The inclusions formed at depths between 400 and 800 km, straddling the upper and lower mantle, and provide evidence for water-rich fluid at these depths. Carbon sources The mantle has roughly one billion gigatonnes of carbon (for comparison, the atmosphere-ocean system has about 44,000 gigatonnes). Carbon has two stable isotopes, 12C and 13C, in a ratio of approximately 99:1 by mass. This ratio has a wide range in meteorites, which implies that it also varied a lot in the early Earth. It can also be altered by surface processes like photosynthesis. The fraction is generally compared to a standard sample using a ratio δ13C expressed in parts per thousand. Common rocks from the mantle such as basalts, carbonatites, and kimberlites have ratios between −8 and −2. On the surface, organic sediments have an average of −25 while carbonates have an average of 0. Populations of diamonds from different sources have distributions of δ13C that vary markedly. Peridotitic diamonds are mostly within the typical mantle range; eclogitic diamonds have values from −40 to +3, although the peak of the distribution is in the mantle range. This variability implies that they are not formed from carbon that is primordial (having resided in the mantle since the Earth formed). Instead, they are the result of tectonic processes, although (given the ages of diamonds) not necessarily the same tectonic processes that act in the present. Diamond-forming carbon originates in the top 700 kilometers (430 mi) or so of the upper mantle closest to the surface, known as the asthenosphere. Formation and growth Diamonds in the mantle form through a metasomatic process where a C–O–H–N–S fluid or melt dissolves minerals in a rock and replaces them with new minerals. (The vague term C–O–H–N–S is commonly used because the exact composition is not known.) Diamonds form from this fluid either by reduction of oxidized carbon (e.g., CO2 or CO3) or oxidation of a reduced phase such as methane. Using probes such as polarized light, photoluminescence, and cathodoluminescence, a series of growth zones can be identified in diamonds. The characteristic pattern in diamonds from the lithosphere involves a nearly concentric series of zones with very thin oscillations in luminescence and alternating episodes where the carbon is resorbed by the fluid and then grown again. 
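For reference, the δ13C notation used under Carbon sources above compares a sample's 13C/12C ratio with that of a reference standard and expresses the deviation in parts per thousand. A minimal Python sketch follows; the standard ratio is the commonly quoted value for the VPDB reference, and the example sample ratio is hypothetical.

```python
# Sketch of the delta-13C calculation: delta13C = (R_sample / R_standard - 1) * 1000, in per mil.
VPDB_13C_TO_12C = 0.011237  # commonly quoted 13C/12C ratio of the VPDB standard (assumed here)

def delta_13c_permil(sample_ratio: float, standard_ratio: float = VPDB_13C_TO_12C) -> float:
    """Deviation of a sample's 13C/12C ratio from the standard, in parts per thousand."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Hypothetical sample slightly depleted in 13C, giving a mantle-like value of about -5 per mil:
print(round(delta_13c_permil(0.011181), 1))
```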
Diamonds from below the lithosphere have a more irregular, almost polycrystalline texture, reflecting the higher temperatures and pressures as well as the transport of the diamonds by convection. Transport to the surface Geological evidence supports a model in which kimberlite magma rises at 4–20 meters per second, creating an upward path by hydraulic fracturing of the rock. As the pressure decreases, a vapor phase exsolves from the magma, and this helps to keep the magma fluid. At the surface, the initial eruption explodes out through fissures at high speeds (over ). Then, at lower pressures, the rock is eroded, forming a pipe and producing fragmented rock (breccia). As the eruption wanes, there is pyroclastic phase and then metamorphism and hydration produces serpentinites. Double diamonds In rare cases, diamonds have been found that contain a cavity within which is a second diamond. The first double diamond, the Matryoshka, was found by Alrosa in Yakutia, Russia, in 2019. Another one was found in the Ellendale Diamond Field in Western Australia in 2021. In space Although diamonds on Earth are rare, they are very common in space. In meteorites, about three percent of the carbon is in the form of nanodiamonds, having diameters of a few nanometers. Sufficiently small diamonds can form in the cold of space because their lower surface energy makes them more stable than graphite. The isotopic signatures of some nanodiamonds indicate they were formed outside the Solar System in stars. High pressure experiments predict that large quantities of diamonds condense from methane into a "diamond rain" on the ice giant planets Uranus and Neptune. Some extrasolar planets may be almost entirely composed of diamond. Diamonds may exist in carbon-rich stars, particularly white dwarfs. One theory for the origin of carbonado, the toughest form of diamond, is that it originated in a white dwarf or supernova. Diamonds formed in stars may have been the first minerals. Industry The most familiar uses of diamonds today are as gemstones used for adornment, and as industrial abrasives for cutting hard materials. The markets for gem-grade and industrial-grade diamonds value diamonds differently. Gem-grade diamonds The dispersion of white light into spectral colors is the primary gemological characteristic of gem diamonds. In the 20th century, experts in gemology developed methods of grading diamonds and other gemstones based on the characteristics most important to their value as a gem. Four characteristics, known informally as the four Cs, are now commonly used as the basic descriptors of diamonds: these are its mass in carats (a carat being equal to 0.2grams), cut (quality of the cut is graded according to proportions, symmetry and polish), color (how close to white or colorless; for fancy diamonds how intense is its hue), and clarity (how free is it from inclusions). A large, flawless diamond is known as a paragon. A large trade in gem-grade diamonds exists. Although most gem-grade diamonds are sold newly polished, there is a well-established market for resale of polished diamonds (e.g. pawnbroking, auctions, second-hand jewelry stores, diamantaires, bourses, etc.). One hallmark of the trade in gem-quality diamonds is its remarkable concentration: wholesale trade and diamond cutting is limited to just a few locations; in 2003, 92% of the world's diamonds were cut and polished in Surat, India. 
Other important centers of diamond cutting and trading are the Antwerp diamond district in Belgium, where the International Gemological Institute is based, London, the Diamond District in New York City, the Diamond Exchange District in Tel Aviv and Amsterdam. One contributory factor is the geological nature of diamond deposits: several large primary kimberlite-pipe mines each account for significant portions of market share (such as the Jwaneng mine in Botswana, which is a single large-pit mine that can produce between of diamonds per year). Secondary alluvial diamond deposits, on the other hand, tend to be fragmented amongst many different operators because they can be dispersed over many hundreds of square kilometers (e.g., alluvial deposits in Brazil). The production and distribution of diamonds is largely consolidated in the hands of a few key players, and concentrated in traditional diamond trading centers, the most important being Antwerp, where 80% of all rough diamonds, 50% of all cut diamonds and more than 50% of all rough, cut and industrial diamonds combined are handled. This makes Antwerp a de facto "world diamond capital". The city of Antwerp also hosts the Antwerpsche Diamantkring, created in 1929 to become the first and biggest diamond bourse dedicated to rough diamonds. Another important diamond center is New York City, where almost 80% of the world's diamonds are sold, including auction sales. The De Beers company, as the world's largest diamond mining company, holds a dominant position in the industry, and has done so since soon after its founding in 1888 by the British businessman Cecil Rhodes. De Beers is currently the world's largest operator of diamond production facilities (mines) and distribution channels for gem-quality diamonds. The Diamond Trading Company (DTC) is a subsidiary of De Beers and markets rough diamonds from De Beers-operated mines. De Beers and its subsidiaries own mines that produce some 40% of annual world diamond production. For most of the 20th century over 80% of the world's rough diamonds passed through De Beers, but by 2001–2009 the figure had decreased to around 45%, and by 2013 the company's market share had further decreased to around 38% in value terms and even less by volume. De Beers sold off the vast majority of its diamond stockpile in the late 1990s – early 2000s and the remainder largely represents working stock (diamonds that are being sorted before sale). This was well documented in the press but remains little known to the general public. As a part of reducing its influence, De Beers withdrew from purchasing diamonds on the open market in 1999 and ceased, at the end of 2008, purchasing Russian diamonds mined by the largest Russian diamond company Alrosa. As of January 2011, De Beers states that it only sells diamonds from the following four countries: Botswana, Namibia, South Africa and Canada. Alrosa had to suspend their sales in October 2008 due to the global energy crisis, but the company reported that it had resumed selling rough diamonds on the open market by October 2009. Apart from Alrosa, other important diamond mining companies include BHP, which is the world's largest mining company; Rio Tinto, the owner of the Argyle (100%), Diavik (60%), and Murowa (78%) diamond mines; and Petra Diamonds, the owner of several major diamond mines in Africa. Further down the supply chain, members of The World Federation of Diamond Bourses (WFDB) act as a medium for wholesale diamond exchange, trading both polished and rough diamonds. 
The WFDB consists of independent diamond bourses in major cutting centers such as Tel Aviv, Antwerp, Johannesburg and other cities across the US, Europe and Asia. In 2000, the WFDB and The International Diamond Manufacturers Association established the World Diamond Council to prevent the trading of diamonds used to fund war and inhumane acts. WFDB's additional activities include sponsoring the World Diamond Congress every two years, as well as the establishment of the International Diamond Council (IDC) to oversee diamond grading. Once purchased by Sightholders (which is a trademark term referring to the companies that have a three-year supply contract with DTC), diamonds are cut and polished in preparation for sale as gemstones ('industrial' stones are regarded as a by-product of the gemstone market; they are used for abrasives). The cutting and polishing of rough diamonds is a specialized skill that is concentrated in a limited number of locations worldwide. Traditional diamond cutting centers are Antwerp, Amsterdam, Johannesburg, New York City, and Tel Aviv. Recently, diamond cutting centers have been established in China, India, Thailand, Namibia and Botswana. Cutting centers with lower cost of labor, notably Surat in Gujarat, India, handle a larger number of smaller carat diamonds, while smaller quantities of larger or more valuable diamonds are more likely to be handled in Europe or North America. The recent expansion of this industry in India, employing low cost labor, has allowed smaller diamonds to be prepared as gems in greater quantities than was previously economically feasible. Diamonds prepared as gemstones are sold on diamond exchanges called bourses. There are 28 registered diamond bourses in the world. Bourses are the final tightly controlled step in the diamond supply chain; wholesalers and even retailers are able to buy relatively small lots of diamonds at the bourses, after which they are prepared for final sale to the consumer. Diamonds can be sold already set in jewelry, or sold unset ("loose"). According to the Rio Tinto, in 2002 the diamonds produced and released to the market were valued at US$9 billion as rough diamonds, US$14 billion after being cut and polished, US$28 billion in wholesale diamond jewelry, and US$57 billion in retail sales. Cutting Mined rough diamonds are converted into gems through a multi-step process called "cutting". Diamonds are extremely hard, but also brittle and can be split up by a single blow. Therefore, diamond cutting is traditionally considered as a delicate procedure requiring skills, scientific knowledge, tools and experience. Its final goal is to produce a faceted jewel where the specific angles between the facets would optimize the diamond luster, that is dispersion of white light, whereas the number and area of facets would determine the weight of the final product. The weight reduction upon cutting is significant and can be of the order of 50%. Several possible shapes are considered, but the final decision is often determined not only by scientific, but also practical considerations. For example, the diamond might be intended for display or for wear, in a ring or a necklace, singled or surrounded by other gems of certain color and shape. Some of them may be considered as classical, such as round, pear, marquise, oval, hearts and arrows diamonds, etc. Some of them are special, produced by certain companies, for example, Phoenix, Cushion, Sole Mio diamonds, etc. 
The most time-consuming part of the cutting is the preliminary analysis of the rough stone. It needs to address a large number of issues, bears much responsibility, and therefore can last years in the case of unique diamonds. The following issues are considered: The hardness of diamond and its ability to cleave strongly depend on the crystal orientation. Therefore, the crystallographic structure of the diamond to be cut is analyzed using X-ray diffraction to choose the optimal cutting directions. Most diamonds contain visible non-diamond inclusions and crystal flaws. The cutter has to decide which flaws are to be removed by the cutting and which could be kept. Splitting a diamond with a hammer blow is difficult: a well-calculated, angled blow can cleave the diamond piece by piece, but a misjudged blow can ruin the stone. Alternatively, it can be cut with a diamond saw, which is a more reliable method. After initial cutting, the diamond is shaped in numerous stages of polishing. Unlike cutting, which is a responsible but quick operation, polishing removes material by gradual erosion and is extremely time-consuming. The associated technique is well developed; it is considered routine and can be performed by technicians. After polishing, the diamond is reexamined for possible flaws, either remaining or induced by the process. Those flaws are concealed through various diamond enhancement techniques, such as repolishing, crack filling, or clever arrangement of the stone in the jewelry. Remaining non-diamond inclusions are removed through laser drilling and filling of the voids produced. Marketing Marketing has significantly affected the image of diamond as a valuable commodity. N. W. Ayer & Son, the advertising firm retained by De Beers in the mid-20th century, succeeded in reviving the American diamond market, and the firm created new markets in countries where no diamond tradition had existed before. N. W. Ayer's marketing included product placement, advertising focused on the diamond product itself rather than the De Beers brand, and associations with celebrities and royalty. By not advertising the De Beers brand, De Beers was effectively advertising its competitors' diamond products as well, but this was not a concern as De Beers dominated the diamond market throughout the 20th century. De Beers' market share dipped temporarily to second place in the global market below Alrosa in the aftermath of the global economic crisis of 2008, down to less than 29% in terms of carats mined, rather than sold. The campaign lasted for decades but was effectively discontinued by early 2011. De Beers still advertises diamonds, but the advertising now mostly promotes its own brands, or licensed product lines, rather than completely "generic" diamond products. The campaign was perhaps best captured by the slogan "a diamond is forever". This slogan is now being used by De Beers Diamond Jewelers, a jewelry firm which is a 50/50 joint venture between the De Beers mining company and LVMH, the luxury goods conglomerate. Brown-colored diamonds constituted a significant part of the diamond production, and were predominantly used for industrial purposes. They were seen as worthless for jewelry (not even being assessed on the diamond color scale). After the development of the Argyle diamond mine in Australia in 1986, and an accompanying marketing campaign, brown diamonds have become acceptable gems. 
The change was mostly due to the numbers: the Argyle mine, with its of diamonds per year, makes about one-third of global production of natural diamonds; 80% of Argyle diamonds are brown. Industrial-grade diamonds Industrial diamonds are valued mostly for their hardness and thermal conductivity, making many of the gemological characteristics of diamonds, such as the 4 Cs, irrelevant for most applications. Eighty percent of mined diamonds (equal to about annually) are unsuitable for use as gemstones and are used industrially. In addition to mined diamonds, synthetic diamonds found industrial applications almost immediately after their invention in the 1950s; in 2014, of synthetic diamonds were produced, 90% of which were produced in China. Approximately 90% of diamond grinding grit is currently of synthetic origin. The boundary between gem-quality diamonds and industrial diamonds is poorly defined and partly depends on market conditions (for example, if demand for polished diamonds is high, some lower-grade stones will be polished into low-quality or small gemstones rather than being sold for industrial use). Within the category of industrial diamonds, there is a sub-category comprising the lowest-quality, mostly opaque stones, which are known as bort. Industrial use of diamonds has historically been associated with their hardness, which makes diamond the ideal material for cutting and grinding tools. As the hardest known naturally occurring material, diamond can be used to polish, cut, or wear away any material, including other diamonds. Common industrial applications of this property include diamond-tipped drill bits and saws, and the use of diamond powder as an abrasive. Less expensive industrial-grade diamonds (bort) with more flaws and poorer color than gems, are used for such purposes. Diamond is not suitable for machining ferrous alloys at high speeds, as carbon is soluble in iron at the high temperatures created by high-speed machining, leading to greatly increased wear on diamond tools compared to alternatives. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances being made in the production of synthetic diamonds, future applications are becoming feasible. The high thermal conductivity of diamond makes it suitable as a heat sink for integrated circuits in electronics. Mining Approximately of diamonds are mined annually, with a total value of nearly US$9 billion, and about are synthesized annually. Roughly 49% of diamonds originate from Central and Southern Africa, although significant sources of the mineral have been discovered in Canada, India, Russia, Brazil, and Australia. They are mined from kimberlite and lamproite volcanic pipes, which can bring diamond crystals, originating from deep within the Earth where high pressures and temperatures enable them to form, to the surface. The mining and distribution of natural diamonds are subjects of frequent controversy such as concerns over the sale of blood diamonds or conflict diamonds by African paramilitary groups. The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world. Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care is required not to destroy larger diamonds, and then sorted by density. 
Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore. Historically, diamonds were found only in alluvial deposits in Guntur and Krishna district of the Krishna River delta in Southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725. Currently, one of the most prominent Indian mines is located at Panna. Diamond extraction from primary deposits (kimberlites and lamproites) started in the 1870s after the discovery of the Diamond Fields in South Africa. Production has increased over time and now an accumulated total of have been mined since that date. Twenty percent of that amount has been mined in the last five years, and during the last 10 years, nine new mines have started production; four more are waiting to be opened soon. Most of these mines are located in Canada, Zimbabwe, Angola, and one in Russia. In the U.S., diamonds have been found in Arkansas, Colorado, New Mexico, Wyoming, and Montana. In 2004, the discovery of a microscopic diamond in the U.S. led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana. The Crater of Diamonds State Park in Arkansas is open to the public, and is the only mine in the world where members of the public can dig for diamonds. Today, most commercially viable diamond deposits are in Russia (mostly in Sakha Republic, for example Mir pipe and Udachnaya pipe), Botswana, Australia (Northern and Western Australia) and the Democratic Republic of the Congo. In 2005, Russia produced almost one-fifth of the global diamond output, according to the British Geological Survey. Australia boasts the richest diamantiferous pipe, with production from the Argyle diamond mine reaching peak levels of 42metric tons per year in the 1990s. There are also commercial deposits being actively mined in the Northwest Territories of Canada and Brazil. Diamond prospectors continue to search the globe for diamond-bearing kimberlite and lamproite pipes. Political issues In some of the more politically unstable central African and west African countries, revolutionary groups have taken control of diamond mines, using proceeds from diamond sales to finance their operations. Diamonds sold through this process are known as conflict diamonds or blood diamonds. In response to public concerns that their diamond purchases were contributing to war and human rights abuses in central and western Africa, the United Nations, the diamond industry and diamond-trading nations introduced the Kimberley Process in 2002. The Kimberley Process aims to ensure that conflict diamonds do not become intermixed with the diamonds not controlled by such rebel groups. This is done by requiring diamond-producing countries to provide proof that the money they make from selling the diamonds is not used to fund criminal or revolutionary activities. Although the Kimberley Process has been moderately successful in limiting the number of conflict diamonds entering the market, some still find their way in. 
According to the International Diamond Manufacturers Association, conflict diamonds constitute 2–3% of all diamonds traded. Two major flaws still hinder the effectiveness of the Kimberley Process: (1) the relative ease of smuggling diamonds across African borders, and (2) the violent nature of diamond mining in nations that are not in a technical state of war and whose diamonds are therefore considered "clean". The Canadian Government has set up a body known as the Canadian Diamond Code of Conduct to help authenticate Canadian diamonds. This is a stringent tracking system of diamonds and helps protect the "conflict free" label of Canadian diamonds. Mineral resource exploitation in general causes irreversible environmental damage, which must be weighed against the socio-economic benefits to a country. Synthetics, simulants, and enhancements Synthetics Synthetic diamonds are diamonds manufactured in a laboratory, as opposed to diamonds mined from the Earth. The gemological and industrial uses of diamond have created a large demand for rough stones. This demand has been satisfied in large part by synthetic diamonds, which have been manufactured by various processes for more than half a century. However, in recent years it has become possible to produce gem-quality synthetic diamonds of significant size. It is possible to make colorless synthetic gemstones that, on a molecular level, are identical to natural stones and so visually similar that only a gemologist with special equipment can tell the difference. The majority of commercially available synthetic diamonds are yellow and are produced by so-called high-pressure high-temperature (HPHT) processes. The yellow color is caused by nitrogen impurities. Other colors may also be reproduced such as blue, green or pink, which are a result of the addition of boron or from irradiation after synthesis. Another popular method of growing synthetic diamond is chemical vapor deposition (CVD). The growth occurs under low pressure (below atmospheric pressure). It involves feeding a mixture of gases (typically to hydrogen) into a chamber and splitting them into chemically active radicals in a plasma ignited by microwaves, hot filament, arc discharge, welding torch, or laser. This method is mostly used for coatings, but can also produce single crystals several millimeters in size (see picture). As of 2010, nearly all 5,000 million carats (1,000tonnes) of synthetic diamonds produced per year are for industrial use. Around 50% of the 133 million carats of natural diamonds mined per year end up in industrial use. Mining companies' expenses average 40 to 60 US dollars per carat for natural colorless diamonds, while synthetic manufacturers' expenses average for synthetic, gem-quality colorless diamonds. However, a purchaser is more likely to encounter a synthetic when looking for a fancy-colored diamond because only 0.01% of natural diamonds are fancy-colored, while most synthetic diamonds are colored in some way. Simulants A diamond simulant is a non-diamond material that is used to simulate the appearance of a diamond, and may be referred to as diamante. Cubic zirconia is the most common. The gemstone moissanite (silicon carbide) can be treated as a diamond simulant, though more costly to produce than cubic zirconia. Both are produced synthetically. 
Enhancements Diamond enhancements are specific treatments performed on natural or synthetic diamonds (usually those already cut and polished into a gem), which are designed to improve the gemological characteristics of the stone in one or more ways. These include laser drilling to remove inclusions, application of sealants to fill cracks, treatments to improve a white diamond's color grade, and treatments to give fancy color to a white diamond. Coatings are increasingly used to give a diamond simulant such as cubic zirconia a more "diamond-like" appearance. One such substance is diamond-like carbon, an amorphous carbonaceous material that has some physical properties similar to those of diamond. Advertising suggests that such a coating would transfer some of these diamond-like properties to the coated stone, hence enhancing the diamond simulant. Techniques such as Raman spectroscopy should easily identify such a treatment. Identification Early diamond identification tests included a scratch test relying on the superior hardness of diamond. This test is destructive, as a diamond can scratch another diamond, and is rarely used nowadays. Instead, diamond identification relies on its superior thermal conductivity. Electronic thermal probes are widely used in gemological centers to separate diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about two to three seconds. Whereas the thermal probe can separate diamonds from most of their simulants, distinguishing between various types of diamond, for example synthetic or natural, irradiated or non-irradiated, etc., requires more advanced, optical techniques. Those techniques are also used for some diamond simulants, such as silicon carbide, which pass the thermal conductivity test. Optical techniques can distinguish between natural diamonds and synthetic diamonds. They can also identify the vast majority of treated natural diamonds. "Perfect" crystals (at the atomic lattice level) have never been found, so both natural and synthetic diamonds always possess characteristic imperfections, arising from the circumstances of their crystal growth, that allow them to be distinguished from each other. Laboratories use techniques such as spectroscopy, microscopy, and luminescence under shortwave ultraviolet light to determine a diamond's origin. They also use specially made instruments to aid them in the identification process. Two screening instruments are the DiamondSure and the DiamondView, both produced by the DTC and marketed by the GIA. Several methods for identifying synthetic diamonds can be performed, depending on the method of production and the color of the diamond. CVD diamonds can usually be identified by an orange fluorescence. D–J colored diamonds can be screened through the Swiss Gemmological Institute's Diamond Spotter. Stones in the D–Z color range can be examined through the DiamondSure UV/visible spectrometer, a tool developed by De Beers. Similarly, natural diamonds usually have minor imperfections and flaws, such as inclusions of foreign material, that are not seen in synthetic diamonds. 
Screening devices based on diamond type detection can be used to make a distinction between diamonds that are certainly natural and diamonds that are potentially synthetic. Those potentially synthetic diamonds require more investigation in a specialized lab. Examples of commercial screening devices are D-Screen (WTOCD / HRD Antwerp), Alpha Diamond Analyzer (Bruker / HRD Antwerp), and D-Secure (DRC Techno). Etymology, earliest use and composition discovery The name diamond is derived from (adámas), 'proper, unalterable, unbreakable, untamed', from ἀ- (a-), 'not' + (damáō), 'to overpower, tame'. Diamonds are thought to have been first recognized and mined in India, where significant alluvial deposits of the stone could be found many centuries ago along the rivers Penner, Krishna, and Godavari. Diamonds have been known in India for at least 3,000years but most likely 6,000years. Diamonds have been treasured as gemstones since their use as religious icons in ancient India. Their usage in engraving tools also dates to early human history. The popularity of diamonds has risen since the 19th century because of increased supply, improved cutting and polishing techniques, growth in the world economy, and innovative and successful advertising campaigns. In 1772, the French scientist Antoine Lavoisier used a lens to concentrate the rays of the sun on a diamond in an atmosphere of oxygen, and showed that the only product of the combustion was carbon dioxide, proving that diamond is composed of carbon. Later, in 1797, the English chemist Smithson Tennant repeated and expanded that experiment. By demonstrating that burning diamond and graphite releases the same amount of gas, he established the chemical equivalence of these substances. See also Deep carbon cycle Diamondoid List of diamonds List of largest rough diamonds List of minerals Superhard material Citations General and cited references Further reading External links Properties of diamond: Ioffe database Abrasives Articles containing video clips Crystals Cubic minerals Economic geology Group IV semiconductors Impact event minerals Industrial minerals Luminescent minerals Minerals in space group 227 Native element minerals Transparent materials
Diamond
[ "Physics", "Chemistry", "Materials_science" ]
12,365
[ "Physical phenomena", "Luminescence", "Semiconductor materials", "Luminescent minerals", "Group IV semiconductors", "Optical phenomena", "Materials", "Crystallography", "Crystals", "Transparent materials", "Matter" ]
8,095
https://en.wikipedia.org/wiki/Donald%20Knuth
Donald Ervin Knuth ( ; born January 10, 1938) is an American computer scientist and mathematician. He is a professor emeritus at Stanford University. He is the 1974 recipient of the ACM Turing Award, informally considered the Nobel Prize of computer science. Knuth has been called the "father of the analysis of algorithms". Knuth is the author of the multi-volume work The Art of Computer Programming. He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. In the process, he also popularized the asymptotic notation. In addition to fundamental contributions in several branches of theoretical computer science, Knuth is the creator of the TeX computer typesetting system, the related METAFONT font definition language and rendering system, and the Computer Modern family of typefaces. As a writer and scholar, Knuth created the WEB and CWEB computer programming systems designed to encourage and facilitate literate programming, and designed the MIX/MMIX instruction set architectures. He strongly opposes the granting of software patents, and has expressed his opinion to the United States Patent and Trademark Office and European Patent Organisation. Biography Early life Donald Knuth was born in Milwaukee, Wisconsin, to Ervin Henry Knuth and Louise Marie Bohning. He describes his heritage as "Midwestern Lutheran German". His father owned a small printing business and taught bookkeeping. While a student at Milwaukee Lutheran High School, Knuth thought of ingenious ways to solve problems. For example, in eighth grade, he entered a contest to find the number of words that the letters in "Ziegler's Giant Bar" could be rearranged to create; the judges had identified 2,500 such words. With time gained away from school due to a fake stomachache, Knuth used an unabridged dictionary and determined whether each dictionary entry could be formed using the letters in the phrase. Using this algorithm, he identified over 4,500 words, winning the contest. As prizes, the school received a new television and enough candy bars for all of his schoolmates to eat. Education Knuth received a scholarship in physics to the Case Institute of Technology (now part of Case Western Reserve University) in Cleveland, Ohio, enrolling in 1956. He also joined the Beta Nu Chapter of the Theta Chi fraternity. While studying physics at Case, Knuth was introduced to the IBM 650, an early commercial computer. After reading the computer's manual, Knuth decided to rewrite the assembly and compiler code for the machine used in his school because he believed he could do it better. In 1958, Knuth created a program to help his school's basketball team win its games. He assigned "values" to players in order to gauge their probability of scoring points, a novel approach that Newsweek and CBS Evening News later reported on. Knuth was one of the founding editors of the Case Institute's Engineering and Science Review, which won a national award as best technical magazine in 1959. He then switched from physics to mathematics, and received two degrees from Case in 1960: his Bachelor of Science, and simultaneously a master of science by a special award of the faculty, who considered his work exceptionally outstanding. At the end of his senior year at Case in 1960, Knuth proposed to Burroughs Corporation to write an ALGOL compiler for the B205 for $5,500. 
The proposal was accepted and he worked on the ALGOL compiler between graduating from Case and going to Caltech. In 1963, with mathematician Marshall Hall as his adviser, he earned a PhD in mathematics from the California Institute of Technology, with a thesis titled Finite Semifields and Projective Planes. Early work In 1963, after receiving his PhD, Knuth joined Caltech's faculty as an assistant professor. While at Caltech, and after the success of the Burroughs B205 ALGOL compiler, he became a consultant to Burroughs Corporation, joining the Product Planning Department. At Caltech he was operating as a mathematician, but at Burroughs as a programmer, working with the people he considered to have written the best software of the time in the ALGOL compiler for the B220 computer (the successor to the B205). He was offered a $100,000 contract to write compilers at Green Tree Corporation but turned it down, deciding not to optimize for income, and continued at Caltech and Burroughs. He received a National Science Foundation Fellowship and a Woodrow Wilson Foundation Fellowship, but both carried the condition that the recipient do nothing other than study as a graduate student, which would have prevented him from continuing as a consultant to Burroughs. He chose to turn down the fellowships and continued with Burroughs. In the summer of 1962, he wrote a FORTRAN compiler for Univac, though he felt that in doing so "I sold my soul to the devil". After graduating, Knuth returned to Burroughs in June 1961 but did not tell them he had graduated with a master's degree, rather than the expected bachelor's degree. Impressed by the ALGOL syntax chart, symbol table, recursive-descent approach and the separation of the scanning, parsing and emitting functions of the compiler, Knuth suggested an extension to the symbol table so that one symbol could stand for a string of symbols. This became the basis of the DEFINE in Burroughs ALGOL, which has since been adopted by other languages. However, some strongly disliked the idea and wanted DEFINE removed; the last person to call it a terrible idea was Edsger Dijkstra, on a visit to Burroughs. Knuth worked on simulation languages at Burroughs, producing SOL ("Simulation Oriented Language"), an improvement on the state of the art, co-designed with J. McNeeley. He attended a conference in Norway in May 1967 organized by the people who invented the Simula language, and influenced Burroughs to use Simula. Knuth had a long association with Burroughs as a consultant from 1960 to 1968, before his move into more academic work at Stanford in 1969. In 1962, Knuth accepted a commission from Addison-Wesley to write a book on computer programming language compilers. While working on this project, he decided that he could not adequately treat the topic without first developing a fundamental theory of computer programming, which became The Art of Computer Programming. He originally planned to publish this as a single book, but as he developed his outline for the book, he concluded that he required six volumes, and then seven, to thoroughly cover the subject. He published the first volume in 1968. Just before publishing the first volume of The Art of Computer Programming, Knuth left Caltech to accept employment with the Institute for Defense Analyses' Communications Research Division, then situated on the Princeton campus, which was performing mathematical research in cryptography to support the National Security Agency. 
In 1967, Knuth attended a Society for Industrial and Applied Mathematics conference and someone asked what he did. At the time, computer science was partitioned into numerical analysis, artificial intelligence, and programming languages. Based on his study and The Art of Computer Programming book, Knuth decided the next time someone asked he would say, "Analysis of algorithms". In 1969, Knuth left his position at Princeton to join the Stanford University faculty, where he became Fletcher Jones Professor of Computer Science in 1977. He became Professor of The Art of Computer Programming in 1990, and has been emeritus since 1993. Writings Knuth is a writer as well as a computer scientist. The Art of Computer Programming (TAOCP) In the 1970s, Knuth called computer science "a totally new field with no real identity. And the standard of available publications was not that high. A lot of the papers coming out were quite simply wrong. ... So one of my motivations was to put straight a story that had been very badly told." From 1972 to 1973, Knuth spent a year at the University of Oslo among people such as Ole-Johan Dahl. This is where he had originally intended to write the seventh volume in his book series, which was to deal with programming languages. But Knuth had finished only the first two volumes when he came to Oslo, and thus spent the year on the third volume, next to teaching. The third volume came out just after Knuth returned to Stanford in 1973. Concrete Mathematics: A Foundation for Computer Science originated with an expansion of the mathematical preliminaries section of Volume 1 of TAoCP. Knuth found that there were mathematical tools necessary for Volume 1, but missing from his repertoire, and decided that a course introducing those tools to computer science students would be useful. Knuth introduced the course at Stanford in 1970. Course notes developed by Oren Patashnik evolved into the 1988 text, with authors Ronald Graham, Knuth, and Patashnik. A second edition of Concrete Mathematics was published in 1994. By 2011, Volume 4A of TAoCP had been published. In April 2020, Knuth said he anticipated that Volume 4 of TAoCP will have at least parts A through F. Volume 4B was published in October 2022. Other works Knuth is also the author of Surreal Numbers, a mathematical novelette on John Horton Conway's set theory construction of an alternate system of numbers. Instead of simply explaining the subject, the book seeks to show the development of the mathematics. Knuth wanted the book to prepare students for doing original, creative research. In 1995, Knuth wrote the foreword to the book A=B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger. He also occasionally contributes language puzzles to Word Ways: The Journal of Recreational Linguistics. Knuth has delved into recreational mathematics. He contributed articles to the Journal of Recreational Mathematics beginning in the 1960s, and was acknowledged as a major contributor in Joseph Madachy's Mathematics on Vacation. Knuth also appears in a number of Numberphile and Computerphile videos on YouTube, where he discusses topics from writing Surreal Numbers to why he does not use email. Knuth had proposed the name "algorithmics" as a better name for the discipline of computer science. 
Works about his religious beliefs In addition to his writings on computer science, Knuth, a Lutheran, is also the author of 3:16 Bible Texts Illuminated, in which he examines the Bible by a process of systematic sampling, namely an analysis of chapter 3, verse 16 of each book. Each verse is accompanied by a rendering in calligraphic art, contributed by a group of calligraphers led by Hermann Zapf. Knuth was invited to give a set of lectures at MIT on the views on religion and computer science behind his 3:16 project, resulting in another book, Things a Computer Scientist Rarely Talks About, where he published the lectures God and Computer Science. Opinion on software patents Knuth strongly opposes granting software patents to trivial solutions that should be obvious, but has expressed more nuanced views for nontrivial solutions such as the interior-point method of linear programming. He has expressed his disagreement directly to both the United States Patent and Trademark Office and European Patent Organisation. Programming Digital typesetting In the 1970s, the publishers of TAOCP abandoned Monotype in favor of phototypesetting. Knuth became so frustrated with the inability of the latter system to approach the quality of the previous volumes, which were typeset using the older system, that he took time out to work on digital typesetting and created TeX and Metafont. Literate programming While developing TeX, Knuth created a new methodology of programming, which he called literate programming, because he believed that programmers should think of programs as works of literature: Knuth embodied the idea of literate programming in the WEB system. The same WEB source is used to weave a TeX file, and to tangle a Pascal source file. These in their turn produce a readable description of the program and an executable binary respectively. A later iteration of the system, CWEB, replaces Pascal with C, C++, and Java. Knuth used WEB to program TeX and METAFONT, and published both programs as books, both originally published the same year: TeX: The Program (1986); and METAFONT: The Program (1986). Around the same time, LaTeX, the now-widely adopted macro package based on TeX, was first developed by Leslie Lamport, who later published its first user manual in 1986. Personal life Donald Knuth married Nancy Jill Carter on 24 June 1961, while he was a graduate student at the California Institute of Technology. They have two children: John Martin Knuth and Jennifer Sierra Knuth. Knuth gives informal lectures a few times a year at Stanford University, which he calls "Computer Musings". He was a visiting professor at the Oxford University Department of Computer Science in the United Kingdom until 2017 and an Honorary Fellow of Magdalen College. Knuth is an organist and a composer. He and his father served as organists for Lutheran congregations. Knuth and his wife have a 16-rank organ in their home. In 2016 he completed a piece for organ, Fantasia Apocalyptica, which he calls a "translation of the Greek text of the Revelation of Saint John the Divine into music". It was premièred in Sweden on January 10, 2018. Chinese name Knuth's Chinese name is Gao Dena (). He was given this name in 1977 by Frances Yao shortly before making a three-week trip to China. In the 1980 Chinese translation of Volume 1 of The Art of Computer Programming (), Knuth explains that he embraced his Chinese name because he wanted to be known by the growing numbers of computer programmers in China at the time. 
In 1989, his Chinese name was placed atop the Journal of Computer Science and Technology header, which Knuth says "makes me feel close to all Chinese people although I cannot speak your language". Humor Knuth used to pay a finder's fee of $2.56 for any typographical errors or mistakes discovered in his books, because "256 pennies is one hexadecimal dollar", and $0.32 for "valuable suggestions". According to an article in the Massachusetts Institute of Technology's Technology Review, these Knuth reward checks are "among computerdom's most prized trophies". Knuth had to stop sending real checks in 2008 due to bank fraud, and now gives each error finder a "certificate of deposit" from a publicly listed balance in his fictitious "Bank of San Serriffe". He once warned a correspondent, "Beware of bugs in the above code; I have only proved it correct, not tried it." Knuth published his first "scientific" article in a school magazine in 1957 under the title "The Potrzebie System of Weights and Measures". In it, he defined the fundamental unit of length as the thickness of Mad No. 26, and named the fundamental unit of force "whatmeworry". Mad published the article in issue No. 33 (June 1957). To demonstrate the concept of recursion, Knuth intentionally made the index entries "Circular definition" and "Definition, circular" refer to each other in The Art of Computer Programming, Volume 1. The preface of Concrete Mathematics likewise contains a humorous paragraph. At the TUG 2010 Conference, Knuth announced a satirical XML-based successor to TeX, titled "iTeX" (, performed with a bell ringing), which would support features such as arbitrarily scaled irrational units, 3D printing, input from seismographs and heart monitors, animation, and stereophonic sound. Awards and honors In 1971, Knuth received the first ACM Grace Murray Hopper Award. He has received various other awards, including the Turing Award, the National Medal of Science, the John von Neumann Medal, and the Kyoto Prize. Knuth was elected a Distinguished Fellow of the British Computer Society (DFBCS) in 1980 in recognition of his contributions to the field of computer science. In 1990, he was awarded the one-of-a-kind academic title Professor of The Art of Computer Programming; the title has since been revised to Professor Emeritus of The Art of Computer Programming. Knuth was elected to the National Academy of Sciences in 1975. He was also elected a member of the National Academy of Engineering in 1981 for organizing vast subject areas of computer science so that they are accessible to all segments of the computing community. In 1992, he became an associate of the French Academy of Sciences. Also that year, he retired from regular research and teaching at Stanford University in order to finish The Art of Computer Programming. He was elected a Foreign Member of the Royal Society (ForMemRS) in 2003. Knuth was elected as a Fellow (first class of Fellows) of the Society for Industrial and Applied Mathematics in 2009 for his outstanding contributions to mathematics. He is a member of the Norwegian Academy of Science and Letters. In 2012, he became a fellow of the American Mathematical Society and a member of the American Philosophical Society. Other awards and honors include: First ACM Grace Murray Hopper Award, 1971 Turing Award, 1974 Lester R. 
Ford Award, 1975 and 1993; Josiah Willard Gibbs Lecturer, 1978; National Medal of Science, 1979; Golden Plate Award of the American Academy of Achievement, 1985; Franklin Medal, 1988; John von Neumann Medal, 1995; Harvey Prize from the Technion, 1995; Kyoto Prize, 1996; Fellow of the Computer History Museum "for his fundamental early work in the history of computing algorithms, development of the TeX typesetting language, and for major contributions to mathematics and computer science", 1998; Asteroid 21656 Knuth, named in his honor in May 2001; Katayanagi Prize, 2010; BBVA Foundation Frontiers of Knowledge Award in the category of Information and Communication Technologies, 2010; Turing Lecture, 2011; Stanford University School of Engineering Hero Award, 2011; Flajolet Lecture Prize, 2014. Publications A short list of his publications includes: The Art of Computer Programming. Computers and Typesetting (all books are hardcover unless otherwise noted). Books of collected papers: Donald E. Knuth, Selected Papers on Design of Algorithms (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 191), 2010 (cloth and paperback). Donald E. Knuth, Selected Papers on Fun and Games (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 192), 2011 (cloth and paperback). Donald E. Knuth, Companion to the Papers of Donald Knuth (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 202), 2011 (cloth and paperback). Other books: Donald E. Knuth, The Stanford GraphBase: A Platform for Combinatorial Computing (New York: ACM Press), 1993; second paperback printing 2009. Donald E. Knuth, 3:16 Bible Texts Illuminated (Madison, Wisconsin: A-R Editions), 1990. Donald E. Knuth, Things a Computer Scientist Rarely Talks About (Center for the Study of Language and Information—CSLI Lecture Notes, no. 136), 2001. Donald E. Knuth, MMIXware: A RISC Computer for the Third Millennium (Heidelberg: Springer-Verlag—Lecture Notes in Computer Science, no. 1750), 1999. viii+550pp. Donald E. Knuth and Silvio Levy, The CWEB System of Structured Documentation (Reading, Massachusetts: Addison-Wesley), 1993. iv+227pp. Third printing 2001 with hypertext support, ii+237 pp. Donald E. Knuth, Tracy L. Larrabee, and Paul M. Roberts, Mathematical Writing (Washington, D.C.: Mathematical Association of America), 1989. ii+115pp. Daniel H. Greene and Donald E. Knuth, Mathematics for the Analysis of Algorithms (Boston: Birkhäuser), 1990. viii+132pp. Donald E. Knuth, Stable Marriage and Its Relation to Other Combinatorial Problems: An Introduction to the Mathematical Analysis of Algorithms. Donald E. Knuth, Axioms and Hulls (Heidelberg: Springer-Verlag—Lecture Notes in Computer Science, no. 606), 1992. ix+109pp.
See also Asymptotic notation Attribute grammar CC system Dancing Links Knuth -yllion Knuth–Bendix completion algorithm Knuth Prize Knuth shuffle Knuth's Algorithm X Knuth's Simpath algorithm Knuth's up-arrow notation Knuth–Morris–Pratt algorithm Davis–Knuth dragon Bender–Knuth involution Trabb Pardo–Knuth algorithm Fisher–Yates shuffle Robinson–Schensted–Knuth correspondence Man or boy test Plactic monoid Quater-imaginary base TeX Termial The Complexity of Songs Uniform binary search List of pioneers in computer science List of science and religion scholars References Bibliography External links Donald Knuth's home page at Stanford University. Knuth discusses software patenting, structured programming, collaboration and his development of TeX. Biography of Donald Knuth from the Institute for Operations Research and the Management Sciences Donald Ervin Knuth – Stanford Lectures (Archive) Interview with Donald Knuth by Lex Fridman Siobhan Roberts, The Yoda of Silicon Valley. The New York Times, 17 December 2018. American computer scientists American computer programmers Mathematics popularizers American people of German descent American technology writers 1938 births Living people Combinatorialists Free software programmers Programming language designers Scientists from California Writers from California Turing Award laureates Grace Murray Hopper Award laureates National Medal of Science laureates 1994 fellows of the Association for Computing Machinery Fellows of the American Mathematical Society Fellows of the British Computer Society Fellows of the Society for Industrial and Applied Mathematics Kyoto laureates in Advanced Technology Donegall Lecturers of Mathematics at Trinity College Dublin Members of the United States National Academy of Engineering Members of the United States National Academy of Sciences Foreign members of the Royal Society Foreign members of the Russian Academy of Sciences Members of the French Academy of Sciences Members of the Norwegian Academy of Science and Letters Members of the Department of Computer Science, University of Oxford Stanford University School of Engineering faculty Stanford University Department of Computer Science faculty California Institute of Technology alumni Case Western Reserve University alumni Scientists from Milwaukee American Lutherans American typographers and type designers Writers from Palo Alto, California 20th-century American mathematicians 21st-century American mathematicians 20th-century American scientists 21st-century American scientists American computer science educators Mad (magazine) people Burroughs Corporation people American organists American composers Academic staff of the University of Oslo Recipients of Franklin Medal
Donald Knuth
[ "Mathematics" ]
4,654
[ "Combinatorialists", "Combinatorics" ]
8,102
https://en.wikipedia.org/wiki/Dysprosium
Dysprosium is a chemical element; it has symbol Dy and atomic number 66. It is a rare-earth element in the lanthanide series with a metallic silver luster. Dysprosium is never found in nature as a free element, though, like other lanthanides, it is found in various minerals, such as xenotime. Naturally occurring dysprosium is composed of seven isotopes, the most abundant of which is 164Dy. Dysprosium was first identified in 1886 by Paul Émile Lecoq de Boisbaudran, but it was not isolated in pure form until the development of ion-exchange techniques in the 1950s. Dysprosium has relatively few applications where it cannot be replaced by other chemical elements. It is used for its high thermal neutron absorption cross-section in making control rods in nuclear reactors, for its high magnetic susceptibility in data-storage applications, and as a component of Terfenol-D (a magnetostrictive material). Soluble dysprosium salts are mildly toxic, while the insoluble salts are considered non-toxic. Characteristics Physical properties Dysprosium is a rare-earth element and has a metallic, bright silver luster. It is quite soft and can be machined without sparking if overheating is avoided. Dysprosium's physical characteristics can be greatly affected by even small amounts of impurities. Dysprosium and holmium have the highest magnetic strengths of the elements, especially at low temperatures. Dysprosium has a simple ferromagnetic ordering at temperatures below its Curie temperature, at which point it undergoes a first-order phase transition from the orthorhombic crystal structure to hexagonal close-packed (hcp). Above the Curie temperature it has a helical antiferromagnetic state, in which all of the atomic magnetic moments in a particular basal plane layer are parallel and oriented at a fixed angle to the moments of adjacent layers. This unusual antiferromagnetism transforms into a disordered (paramagnetic) state at the Néel temperature. It transforms from the hcp phase to the body-centered cubic phase at a much higher temperature. Chemical properties Dysprosium metal retains its luster in dry air but it will tarnish slowly in moist air, and it burns readily to form dysprosium(III) oxide: 4 Dy + 3 O2 → 2 Dy2O3 Dysprosium is quite electropositive and reacts slowly with cold water (and quickly with hot water) to form dysprosium hydroxide: 2 Dy (s) + 6 H2O (l) → 2 Dy(OH)3 (aq) + 3 H2 (g) Dysprosium hydroxide decomposes to form DyO(OH) at elevated temperatures, which then decomposes again to dysprosium(III) oxide. Dysprosium metal reacts vigorously with all the halogens above 200 °C: 2 Dy (s) + 3 F2 (g) → 2 DyF3 (s) [green] 2 Dy (s) + 3 Cl2 (g) → 2 DyCl3 (s) [white] 2 Dy (s) + 3 Br2 (l) → 2 DyBr3 (s) [white] 2 Dy (s) + 3 I2 (g) → 2 DyI3 (s) [green] Dysprosium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Dy(III) ions, which exist as a [Dy(OH2)9]3+ complex: 2 Dy (s) + 3 H2SO4 (aq) → 2 Dy3+ (aq) + 3 SO42− (aq) + 3 H2 (g) The resulting compound, dysprosium(III) sulfate, is noticeably paramagnetic. Compounds Dysprosium halides, such as DyF3 and DyBr3, tend to take on a yellow color. Dysprosium oxide, also known as dysprosia, is a white powder that is highly magnetic, more so than iron oxide. Dysprosium combines with various non-metals at high temperatures to form binary compounds with varying composition and oxidation states +3 and sometimes +2, such as DyN, DyP, DyH2 and DyH3; DyS, DyS2, Dy2S3 and Dy5S7; DyB2, DyB4, DyB6 and DyB12, as well as Dy3C and Dy2C3.
Dysprosium carbonate, Dy2(CO3)3, and dysprosium sulfate, Dy2(SO4)3, result from similar reactions. Most dysprosium compounds are soluble in water, though dysprosium carbonate tetrahydrate (Dy2(CO3)3·4H2O) and dysprosium oxalate decahydrate (Dy2(C2O4)3·10H2O) are both insoluble in water. Two of the most abundant dysprosium carbonates, Dy2(CO3)3·2–3H2O (similar to the mineral tengerite-(Y)), and DyCO3(OH) (similar to minerals kozoite-(La) and kozoite-(Nd)), are known to form via a poorly ordered (amorphous) precursor phase with a formula of Dy2(CO3)3·4H2O. This amorphous precursor consists of highly hydrated spherical nanoparticles of 10–20 nm diameter that are exceptionally stable under dry treatment at ambient and high temperatures. Dysprosium forms several intermetallics, including the dysprosium stannides. Isotopes Naturally occurring dysprosium is composed of seven isotopes: 156Dy, 158Dy, 160Dy, 161Dy, 162Dy, 163Dy, and 164Dy. These are all considered stable, although only the last two are theoretically stable: the others can theoretically undergo alpha decay. Of the naturally occurring isotopes, 164Dy is the most abundant at 28%, followed by 162Dy at 26%. The least abundant is 156Dy at 0.06%. Dysprosium is the heaviest element with isotopes that are predicted to be truly stable; all heavier elements have only observationally stable isotopes that are predicted to be radioactive. Twenty-nine radioisotopes have been synthesized, ranging in atomic mass from 138 to 173. The most stable of these is 154Dy, with a half-life of approximately 3 million years, followed by 159Dy with a half-life of 144.4 days. The least stable is 138Dy, with a half-life of 200 ms. As a general rule, isotopes that are lighter than the stable isotopes tend to decay primarily by β+ decay, while those that are heavier tend to decay by β− decay. However, 154Dy decays primarily by alpha decay, and 152Dy and 159Dy decay primarily by electron capture. Dysprosium also has at least 11 metastable isomers, ranging in atomic mass from 140 to 165. The most stable of these is 165mDy, which has a half-life of 1.257 minutes. 149Dy has two metastable isomers, the second of which, 149m2Dy, has a half-life of 28 ns. History In 1878, erbium ores were found to contain the oxides of holmium and thulium. French chemist Paul Émile Lecoq de Boisbaudran, while working with holmium oxide, separated dysprosium oxide from it in Paris in 1886. His procedure for isolating the dysprosium involved dissolving dysprosium oxide in acid, then adding ammonia to precipitate the hydroxide. He was only able to isolate dysprosium from its oxide after more than 30 attempts at his procedure. On succeeding, he named the element dysprosium from the Greek dysprositos (δυσπρόσιτος), meaning "hard to get". The element was not isolated in relatively pure form until after the development of ion exchange techniques by Frank Spedding at Iowa State University in the early 1950s. Due to its role in permanent magnets used for wind turbines, it has been argued that dysprosium will be one of the main objects of geopolitical competition in a world running on renewable energy. But this perspective has been criticised for failing to recognise that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production. In 2021, dysprosium was used to create a two-dimensional supersolid quantum gas.
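The half-lives quoted in the Isotopes section above translate into surviving fractions through the standard radioactive decay law N/N0 = (1/2)^(t/T½). The short sketch below only illustrates that textbook formula; it uses the 144.4-day half-life of 159Dy given above together with an arbitrarily chosen elapsed time of one year, so the timescale and the code itself are illustrative assumptions rather than material from the article.

half_life_days = 144.4    # half-life of 159Dy, as quoted in the Isotopes section
elapsed_days = 365.0      # illustrative elapsed time of one year (an assumption for this example)

# Standard exponential decay: the surviving fraction halves once per half-life.
remaining_fraction = 0.5 ** (elapsed_days / half_life_days)
print(round(remaining_fraction, 3))   # about 0.173, i.e. roughly 17% of a 159Dy sample survives one year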
Occurrence While dysprosium is never encountered as a free element, it is found in many minerals, including xenotime, fergusonite, gadolinite, euxenite, polycrase, blomstrandine, monazite and bastnäsite, often with erbium and holmium or other rare earth elements. No dysprosium-dominant mineral (that is, with dysprosium prevailing over other rare earths in the composition) has yet been found. In the high-yttrium version of these, dysprosium happens to be the most abundant of the heavy lanthanides, comprising up to 7–8% of the concentrate (as compared to about 65% for yttrium). The concentration of Dy in the Earth's crust is about 5.2 mg/kg and in sea water 0.9 ng/L. Production Dysprosium is obtained primarily from monazite sand, a mixture of various phosphates. The metal is obtained as a by-product in the commercial extraction of yttrium. In isolating dysprosium, most of the unwanted metals can be removed magnetically or by a flotation process. Dysprosium can then be separated from other rare earth metals by an ion exchange displacement process. The resulting dysprosium ions can then react with either fluorine or chlorine to form dysprosium fluoride, DyF3, or dysprosium chloride, DyCl3. These compounds can be reduced using either calcium or lithium metals in the following reactions: 3 Ca + 2 DyF3 → 2 Dy + 3 CaF2 3 Li + DyCl3 → Dy + 3 LiCl The components are placed in a tantalum crucible and fired in a helium atmosphere. As the reaction progresses, the resulting halide compounds and molten dysprosium separate due to differences in density. When the mixture cools, the dysprosium can be cut away from the impurities. About 100 tonnes of dysprosium are produced worldwide each year, with 99% of that total produced in China. Dysprosium prices have climbed nearly twentyfold, from $7 per pound in 2003 to $130 a pound in late 2010. The price increased to $1,400/kg in 2011 but fell to $240/kg in 2015, largely due to illegal production in China which circumvented government restrictions. Currently, most dysprosium is obtained from the ion-adsorption clay ores of southern China. The Browns Range Project pilot plant, 160 km south east of Halls Creek, Western Australia, has also begun production. According to the United States Department of Energy, the wide range of its current and projected uses, together with the lack of any immediately suitable replacement, makes dysprosium the single most critical element for emerging clean energy technologies; even its most conservative projections predicted a shortfall of dysprosium before 2015. As of late 2015, there was a nascent rare earth (including dysprosium) extraction industry in Australia. Applications Dysprosium is used, in conjunction with vanadium and other elements, in making laser materials and commercial lighting. Because of dysprosium's high thermal-neutron absorption cross-section, dysprosium-oxide–nickel cermets are used in neutron-absorbing control rods in nuclear reactors. Dysprosium–cadmium chalcogenides are sources of infrared radiation, which is useful for studying chemical reactions. Because dysprosium and its compounds are highly susceptible to magnetization, they are employed in various data-storage applications, such as in hard disks. Dysprosium is increasingly in demand for the permanent magnets used in electric-car motors and wind-turbine generators.
Neodymium–iron–boron magnets can have up to 6% of the neodymium substituted by dysprosium to raise the coercivity for demanding applications, such as drive motors for electric vehicles and generators for wind turbines. This substitution would require up to 100 grams of dysprosium per electric car produced. Based on Toyota's projected 2 million units per year, the use of dysprosium in applications such as this would quickly exhaust its available supply. The dysprosium substitution may also be useful in other applications because it improves the corrosion resistance of the magnets. Dysprosium is one of the components of Terfenol-D, along with iron and terbium. Terfenol-D has the highest room-temperature magnetostriction of any known material, which is employed in transducers, wide-band mechanical resonators, and high-precision liquid-fuel injectors. Dysprosium is used in dosimeters for measuring ionizing radiation. Crystals of calcium sulfate or calcium fluoride are doped with dysprosium. When these crystals are exposed to radiation, the dysprosium atoms become excited and luminescent. The luminescence can be measured to determine the degree of exposure to which the dosimeter has been subjected. Nanofibers of dysprosium compounds have high strength and a large surface area. Therefore, they can be used to reinforce other materials and act as a catalyst. Fibers of dysprosium oxide fluoride can be produced by heating an aqueous solution of DyBr3 and NaF to 450 °C at 450 bars for 17 hours. This material is remarkably robust, surviving over 100 hours in various aqueous solutions at temperatures exceeding 400 °C without redissolving or aggregating. Additionally, dysprosium has been used to create a two dimensional supersolid in a laboratory environment. Supersolids are expected to exhibit unusual properties, including superfluidity. Dysprosium iodide and dysprosium bromide are used in high-intensity metal-halide lamps. These compounds dissociate near the hot center of the lamp, releasing isolated dysprosium atoms. The latter re-emit light in the green and red part of the spectrum, thereby effectively producing bright light. Several paramagnetic crystal salts of dysprosium (dysprosium gallium garnet, DGG; dysprosium aluminium garnet, DAG; dysprosium iron garnet, DyIG) are used in adiabatic demagnetization refrigerators. The trivalent dysprosium ion (Dy3+) has been studied due to its downshifting luminescence properties. Dy-doped yttrium aluminium garnet (Dy:YAG) excited in the ultraviolet region of the electromagnetic spectrum results in the emission of photons of longer wavelength in the visible region. This idea is the basis for a new generation of UV-pumped white light-emitting diodes. The stable isotopes of dysprosium have been laser cooled and confined in magneto-optical traps for quantum physics experiments. The first Bose and Fermi quantum degenerate gases of an open shell lanthanide were created with dysprosium. Because dysprosium is highly magnetic—indeed it is the most magnetic fermionic element and nearly tied with terbium for most magnetic bosonic atom—such gases serve as the basis for quantum simulation with strongly dipolar atoms. Due to its strong magnetic properties, Dysprosium alloys are used in the marine industry's sound navigation and ranging (SONAR) system. The inclusion of dysprosium alloys in the design of SONAR transducers and receivers can improve sensitivity and accuracy by providing more stable and efficient magnetic fields. 
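As a rough illustration of the supply pressure described in the permanent-magnet discussion above, the figures quoted in this article (up to 100 grams of dysprosium per electric car, a projected 2 million vehicles per year, and roughly 100 tonnes of annual world production) can be multiplied out in a short back-of-the-envelope calculation. The sketch below simply combines those quoted numbers; it is an illustrative estimate under those assumptions, not a sourced projection.

grams_per_car = 100          # upper-bound dysprosium content per electric car quoted above
cars_per_year = 2_000_000    # projected annual vehicle production mentioned above
world_production_t = 100     # approximate annual world dysprosium output, in tonnes

# Convert the projected magnet demand to tonnes (1 tonne = 1,000,000 g).
demand_t = grams_per_car * cars_per_year / 1_000_000
print(demand_t)                          # 200.0 tonnes per year
print(demand_t / world_production_t)     # 2.0, i.e. about twice current world output

On these assumptions, the magnets for a single manufacturer's electric cars alone would require roughly twice the present annual world production, which is why the text above describes the available supply as quickly exhausted.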
Precautions Like many powders, dysprosium powder may present an explosion hazard when mixed with air and when an ignition source is present. Thin foils of the substance can also be ignited by sparks or by static electricity. Dysprosium fires cannot be extinguished with water. It can react with water to produce flammable hydrogen gas. Dysprosium chloride fires can be extinguished with water. Dysprosium fluoride and dysprosium oxide are non-flammable. Dysprosium nitrate, Dy(NO3)3, is a strong oxidizing agent and readily ignites on contact with organic substances. Soluble dysprosium salts, such as dysprosium chloride and dysprosium nitrate are mildly toxic when ingested. Based on the toxicity of dysprosium chloride to mice, it is estimated that the ingestion of 500 grams or more could be fatal to a human (c.f. lethal dose of 300 grams of common table salt for a 100 kilogram human). The insoluble salts are non-toxic. References External links It's Elemental – Dysprosium Chemical elements Chemical elements with hexagonal close-packed structure Lanthanides Energy development Ferromagnetic materials Reducing agents Renewable energy technology
Dysprosium
[ "Physics", "Chemistry" ]
3,810
[ "Chemical elements", "Redox", "Ferromagnetic materials", "Reducing agents", "Materials", "Atoms", "Matter" ]
8,104
https://en.wikipedia.org/wiki/Desertification
Desertification is a type of gradual land degradation in which fertile land becomes arid desert due to a combination of natural processes and human activities. The immediate cause of desertification is the loss of most vegetation. This is driven by a number of factors, alone or in combination, such as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation for fuel or construction materials. Though vegetation plays a major role in determining the biological composition of the soil, studies have shown that, in many environments, the rate of erosion and runoff decreases exponentially with increased vegetation cover. Unprotected, dry soil surfaces blow away with the wind or are washed away by flash floods, leaving infertile lower soil layers that bake in the sun and become an unproductive hardpan. This spread of arid areas is caused by a variety of factors, such as overexploitation of soil as a result of human activity and the effects of climate change. At least 90% of the inhabitants of drylands live in developing countries, where they also suffer from poor economic and social conditions. This situation is exacerbated by land degradation because of the reduction in productivity, the precariousness of living conditions and the difficulty of access to resources and opportunities. Geographic areas most affected are located in Africa (Sahel region), Asia (Gobi Desert and Mongolia) and parts of South America. Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. Effects of desertification include sand and dust storms, food insecurity, and poverty. Methods of mitigating or reversing desertification include improving soil quality, greening deserts, managing grazing, and tree-planting (reforestation and afforestation). Throughout geological history, the development of deserts has occurred naturally over long intervals of time. The modern study of desertification emerged from the study of the 1980s drought in the Sahel. Definitions Desertification is a gradual process of increased soil aridity. Desertification has been defined in the text of the United Nations Convention to Combat Desertification (UNCCD) as "land degradation in arid, semi-arid and dry sub-humid regions resulting from various factors, including climatic variations and human activities." A desert itself may be defined as an area of the earth where total precipitation (the sum of rain and snowfall) is much lower than in other areas, with an annual average rainfall of less than 25 cm. A 1995 United Nations definition similarly describes desertification as land degradation in arid, semi-arid and dry sub-humid areas due to climate change and human activities. As of 2005, considerable controversy existed over the proper definition of the term desertification with more than 100 formal definitions in existence. The most widely accepted of these was that of the Princeton University Dictionary which defined it as "the process of fertile land transforming into desert typically as a result of deforestation, drought or improper/inappropriate agriculture". This definition clearly demonstrated the interconnectedness of desertification and human activities, in particular land use and land management practices. It also highlighted the economic, social and environmental implications of desertification. However, this original understanding that desertification involved the physical expansion of deserts has been rejected as the concept has further evolved since then.
There exists also controversy around the sub-grouping of types of desertification, including, for example, the validity and usefulness of such terms as "man-made desert" and "non-pattern desert". Causes Immediate causes The immediate cause of desertification is the loss of most vegetation. This is driven by a number of factors, alone or in combination, such as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation for fuel or construction materials. Though vegetation plays a major role in determining the biological composition of the soil, studies have shown that, in many environments, the rate of erosion and runoff decreases exponentially with increased vegetation cover. Unprotected, dry soil surfaces blow away with the wind or are washed away by flash floods, leaving infertile lower soil layers that bake in the sun and become an unproductive hardpan. Influence of human activities Early studies argued one of the most common causes of desertification was overgrazing, over consumption of vegetation by cattle or other livestock. However, the role of local overexploitation in driving desertification in the recent past is controversial. Drought in the Sahel region is now thought to be principally the result of seasonal variability in rainfall caused by large-scale sea surface temperature variations, largely driven by natural variability and anthropogenic emissions of aerosols (reflective sulphate particles) and greenhouse gases. As a result, changing ocean temperature and reductions in sulfate emissions have caused a re-greening of the region. This has led some scholars to argue that agriculture-induced vegetation loss is a minor factor in desertification. Human population dynamics have a considerable impact on overgrazing, over-farming and deforestation, as previously acceptable techniques have become unsustainable. There are multiple reasons farmers use intensive farming as opposed to extensive farming but the main reason is to maximize yields. By increasing productivity, they require a lot more fertilizer, pesticides, and labor to upkeep machinery. This continuous use of the land rapidly depletes the nutrients of the soil causing desertification to spread. Natural variations Scientists agree that the existence of a desert in the place where the Sahara desert is now located is due to natural variations in solar insolation due to orbital precession of the Earth. Such variations influence the strength of the West African Monsoon, inducing feedback in vegetation and dust emission that amplify the cycle of wet and dry Sahara climate. There is also a suggestion the transition of the Sahara from savanna to desert during the mid-Holocene was partially due to overgrazing by the cattle of the local population. Climate change Research into desertification is complex, and there is no single metric which can define all aspects. However, more intense climate change is still expected to increase the current extent of drylands on the Earth's continents: from 38% in late 20th century to 50% or 56% by the end of the century, under the "moderate" and high-warming Representative Concentration Pathways 4.5 and 8.5. Most of the expansion will be seen over regions such as "southwest North America, the northern fringe of Africa, southern Africa, and Australia". Drylands cover 41% of the earth's land surface and include 45% of the world's agricultural land. 
These regions are among the most vulnerable ecosystems to anthropogenic climate and land use change and are under threat of desertification. An observation-based attribution study of desertification was carried out in 2020 which accounted for climate change, climate variability, CO2 fertilization as well as both the gradual and rapid ecosystem changes caused by land use. The study found that, between 1982 and 2015, 6% of the world's drylands underwent desertification driven by unsustainable land use practices compounded by anthropogenic climate change. Despite an average global greening, anthropogenic climate change has degraded 12.6% (5.43 million km2) of drylands, contributing to desertification and affecting 213 million people, 93% of whom live in developing economies. Effects Sand and dust storms There has been a 25% increase in global annual dust emissions between the late nineteenth century and the present day. The increase in desertification has also increased the amount of loose sand and dust that the wind can pick up, ultimately resulting in dust storms. For example, dust storms in the Middle East “are becoming more frequent and intense in recent years” because “long-term reductions in rainfall [cause] lower soil moisture and vegetative cover”. Dust storms can contribute to health problems such as pneumonia, skin irritations and asthma, among others. They can pollute open water, reduce the effectiveness of clean energy efforts, and halt most forms of transportation. Dust and sand storms can have a negative effect on the climate, which can make desertification worse. Dust particles in the air scatter incoming radiation from the sun (Hassan, 2012). The dust can temporarily shade the ground, but the atmospheric temperature will increase. This can deform and shorten the lifetime of clouds, which can result in less rainfall. Food insecurity Global food security is being threatened by desertification. The more the population grows, the more food has to be grown. The agricultural business is being displaced from one country to another. For example, Europe on average imports over 50% of its food. Meanwhile, 44% of agricultural land is located in drylands and it supplies 60% of the world's food production. Desertification is decreasing the amount of sustainable land for agricultural uses, but demand is continuously growing. In the near future, demand will outstrip supply. The violent herder–farmer conflicts in Nigeria, Sudan, Mali and other countries in the Sahel region have been exacerbated by climate change, land degradation and population growth. Increasing poverty At least 90% of the inhabitants of drylands live in developing countries, where they also suffer from poor economic and social conditions. This situation is exacerbated by land degradation because of the reduction in productivity, the precariousness of living conditions and the difficulty of access to resources and opportunities. Many underdeveloped countries are affected by overgrazing, land exhaustion and overdrafting of groundwater due to pressures to exploit marginal drylands for farming. Decision-makers are understandably averse to investing in arid zones with low potential. This absence of investment contributes to the marginalization of these zones.
When unfavorable agri-climatic conditions are combined with an absence of infrastructure and access to markets, as well as poorly adapted production techniques and an underfed and undereducated population, most such zones are excluded from development. Desertification often causes rural lands to become unable to support the same sized populations that previously lived there. This results in mass migrations out of rural areas and into urban areas particularly in Africa creating unemployment and slums. The number of these environmental refugees grows every year, with projections for sub-Saharan Africa showing a probable increase from 14 million in 2010 to nearly 200 million by 2050. This presents a future crisis for the region, as neighboring nations do not always have the ability to support large populations of refugees. In Mongolia, the land is 90% fragile dry land, which causes many herders to migrate to the city for work. With very limited resources, the herders that stay on the dry land graze very carefully in order to preserve the land. Agriculture is a main source of income for many desert communities. The increase in desertification in these regions has degraded the land to such an extent where people can no longer productively farm and make a profit. This has negatively impacted the economy and increased poverty rates. There is, however, increased global advocacy e.g. the UN SDG 15 to combat desertification and restore affected lands. Geographic areas affected Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. It has been estimated that some 10–20% of drylands are already degraded, the total area affected by desertification being between 6 and 12 million square kilometers, that about 1–6% of the inhabitants of drylands live in desertified areas, and that a billion people are under threat from further desertification. Sahel The impact of climate change and human activities on desertification are exemplified in the Sahel region of Africa. The region is characterized by a dry hot climate, high temperatures and low rainfall (100–600 mm per year). So, droughts are the rule in the Sahel region. The Sahel has lost approximately 650,000 km2 of its productive agricultural land over the past 50 years; the propagation of desertification in this area is considerable. The climate of the Sahara has undergone enormous variations over the last few hundred thousand years, oscillating between wet (grassland) and dry (desert) every 20,000 years (a phenomenon believed to be caused by long-term changes in the North African climate cycle that alters the path of the North African Monsoon, caused by an approximately 40,000-year cycle in which the axial tilt of the earth changes between 22° and 24.5°). Some statistics have shown that, since 1900, the Sahara has expanded by 250 km to the south over a stretch of land from west to east 6,000 km long. Lake Chad, located in the Sahel region, has undergone desiccation due to water withdrawal for irrigation and decrease in rainfall. The lake has shrunk by over 90% since 1987, displacing millions of inhabitants. Recent efforts have managed to make some progress toward its restoration, but it is still considered to be at risk of disappearing entirely. To limit desertification, the Great Green Wall (Africa) initiative was started in 2007 involving the planting of vegetation along a stretch of 7,775 km, 15 km wide, involving 22 countries to 2030. 
The purpose of this mammoth planting initiative is to enhance retention of water in the ground following the seasonal rainfall, thus promoting land rehabilitation and future agriculture. Senegal has already contributed to the project by planting 50,000 acres of trees. It is said to have improved land quality and caused an increase in economic opportunity in the region. Gobi Desert and Mongolia Another major area that is being impacted by desertification is the Gobi Desert located in Northern China and Southern Mongolia. The Gobi Desert is the fastest expanding desert on Earth, as it transforms over of grassland into wasteland annually. Although the Gobi Desert itself is still a distance away from Beijing, reports from field studies state there are large sand dunes forming only 70 km (43.5 mi) outside the city. In Mongolia, around 90% of grassland is considered vulnerable to desertification by the UN. An estimated 13% of desertification in Mongolia is caused by natural factors; the rest is due to human influence particularly overgrazing and increased erosion of soils in cultivated areas. During the period 1940 to 2015, the mean air temperature increased by 2.24 °C. The warmest ten-year period was during the latest decade to 2021. Precipitation has decreased by 7% over this period resulting in increased arid conditions throughout Mongolia. The Gobi desert continues to expand northward, with over 70% of Mongolia's land degraded through overgrazing, deforestation, and climate change. In addition, the Mongolia government has listed forest fires, blights, unsustainable forestry and mining activities as leading causes of desertification in the country. The transition from sheep to goat farming in order to meet export demands for cashmere wool has caused degradation of grazing lands. Compared to sheep, goats do more damage to grazing lands by eating roots and flowers. To mitigate the financial impact of desertification in Inner Mongolia, Bai Jingying teaches women how to do traditional embroidery, which they then sell to provide additional income. South America South America is another area vulnerable by desertification, as 25% of the land is classified as drylands and over 68% of the land area has undergone soil erosion as a result of deforestation and overgrazing. 27 to 43% of the land areas in Bolivia, Chile, Ecuador and Peru are at risk due to desertification. In Argentina, Mexico and Paraguay, greater than half the land area is degraded by desertification and cannot be used for agriculture. In Central America, drought has caused increased unemployment and decreased food security - also causing migration of people. Similar impacts have been seen in rural parts of Mexico where about 1,000 km2 of land have been lost yearly due to desertification. In Argentina, desertification has the potential to disrupt the nation's food supply. Reversing desertification Techniques and countermeasures exist for mitigating or reversing desertification. For some of these measures, there are numerous barriers to their implementation. Yet for others, the solution simply requires the exercise of human reason. One proposed barrier is that the costs of adopting sustainable agricultural practices sometimes exceed the benefits for individual farmers, even while they are socially and environmentally beneficial. Another issue is a lack of political will, and lack of funding to support land reclamation and anti-desertification programs. Desertification is recognized as a major threat to biodiversity. 
Some countries have developed biodiversity action plans to counter its effects, particularly in relation to the protection of endangered flora and fauna. Improving soil quality Techniques focus on two aspects: provisioning of water, and fixation and hyper-fertilizing soil. Fixating the soil is often done through the use of shelter belts, woodlots and windbreaks. Windbreaks are made from trees and bushes and are used to reduce soil erosion and evapotranspiration. Some soils (for example, clay), due to lack of water can become consolidated rather than porous (as in the case of sandy soils). Some techniques as zaï or tillage are then used to still allow the planting of crops. Another technique that is useful is contour trenching. This involves the digging of 150 m long, 1 m deep trenches in the soil. The trenches are made parallel to the height lines of the landscape, preventing the water from flowing within the trenches and causing erosion. Stone walls are placed around the trenches to prevent the trenches from closing up again. This method was invented by Peter Westerveld. Enriching of the soil and restoration of its fertility is often achieved by plants. Of these, leguminous plants which extract nitrogen from the air and fix it in the soil, succulents (such as Opuntia), and food crops/trees as grains, barley, beans and dates are the most important. Sand fences can also be used to control drifting of soil and sand erosion. Another way to restore soil fertility is through the use of nitrogen-rich fertilizer. Due to the higher cost of this fertilizer, many smallholder farmers are reluctant to use it, especially in areas where subsistence farming is common. Several nations, including India, Zambia, and Malawi have responded to this by implementing subsidies to help encourage adoption of this technique. Some research centres (such as Bel-Air Research Center IRD/ISRA/UCAD) are also experimenting with the inoculation of tree species with mycorrhiza in arid zones. The mycorrhiza are basically fungi attaching themselves to the roots of the plants. They hereby create a symbiotic relation with the trees, increasing the surface area of the tree's roots greatly (allowing the tree to gather much more nutrient from the soil). The bioengineering of soil microbes, particularly photosynthesizers, has also been suggested and theoretically modeled as a method to protect drylands. The aim would be to enhance the existing cooperative loops between soil microbes and vegetation. Desert greening As there are many different types of deserts, there are also different types of desert reclamation methodologies. An example for this is the salt flats in the Rub' al Khali desert in Saudi Arabia. These salt flats are one of the most promising desert areas for seawater agriculture and could be revitalized without the use of freshwater or much energy. Farmer-managed natural regeneration (FMNR) is another technique that has produced successful results for desert reclamation. Since 1980, this method to reforest degraded landscape has been applied with some success in Niger. This simple and low-cost method has enabled farmers to regenerate some 30,000 square kilometers in Niger. The process involves enabling native sprouting tree growth through selective pruning of shrub shoots. The residue from pruned trees can be used to provide mulching for fields thus increasing soil water retention and reducing evaporation. Additionally, properly spaced and pruned trees can increase crop yields. 
The Humbo Assisted Regeneration Project, which uses FMNR techniques in Ethiopia, has received money from the World Bank's BioCarbon Fund, which supports projects that sequester or conserve carbon in forests or agricultural ecosystems. The Food and Agriculture Organization of the United Nations launched the FAO Drylands Restoration Initiative in 2012 to draw together knowledge and experience on dryland restoration. In 2015, FAO published global guidelines for the restoration of degraded forests and landscapes in drylands, in collaboration with the Turkish Ministry of Forestry and Water Affairs and the Turkish Cooperation and Coordination Agency. The "Green Wall of China" is a high-profile example of one method that has been finding success in this battle with desertification. This wall is a much larger-scale version of what American farmers did in the 1930s to stop the great Midwest dust bowl. This plan was proposed in the late 1970s, and has become a major ecological engineering project that is not predicted to end until the year 2055. According to Chinese reports, nearly 66 billion trees have been planted in China's great green wall. The green wall of China has decreased desert land in China by an annual average of 1,980 square km. The frequency of sandstorms nationwide has fallen 20% due to the green wall. Due to the success that China has been finding in stopping the spread of desertification, plans are currently being made in Africa to start a "wall" along the borders of the Sahara desert as well, to be financed by the United Nations Global Environment Facility trust. In 2007 the African Union started the Great Green Wall of Africa project in order to combat desertification in 20 countries. The wall is 8,000 km long, stretching across the entire width of the continent, and has 8 billion dollars in support of the project. The project has restored 36 million hectares of land, and by 2030 the initiative plans to restore a total of 100 million hectares. The Great Green Wall has created many job opportunities for the participating countries, with over 20,000 jobs created in Nigeria alone. Better managed grazing Restored grasslands store CO2 from the atmosphere as organic plant material. Grazing livestock, usually not left to wander, consume the grass and minimize its growth. A method proposed to restore grasslands uses fences with many small paddocks, moving herds from one paddock to another after a day or two in order to mimic natural grazers and allowing the grass to grow optimally. Proponents of managed grazing methods estimate that wider adoption of this method could increase the carbon content of the soils in the world's 3.5 billion hectares of agricultural grassland and offset nearly 12 years of CO2 emissions. History The world's most noted deserts have been formed by natural processes interacting over long intervals of time. During most of these times, deserts have grown and shrunk independently of human activities. Paleodeserts are large sand seas now inactive because they are stabilized by vegetation, some extending beyond the present margins of core deserts, such as the Sahara, the largest hot desert. Historical evidence shows that the serious and extensive land deterioration occurring several centuries ago in arid regions had three centers: the Mediterranean, the Mesopotamian Valley, and the Loess Plateau of China, where population was dense.
The earliest known discussion of the topic arose soon after the French colonization of West Africa, when the Comité d'Etudes commissioned a study on desséchement progressif to explore the prehistoric expansion of the Sahara Desert. The modern study of desertification emerged from the study of the 1980s drought in the Sahel. See also Aridification Oasification Soil retrogression and degradation Water scarcity World Day to Combat Desertification and Drought References Sources External links Official website of the Secretariat of the United Nations Convention to Combat Desertification (UNCCD) Procedural history and related documents on the UNCCD, from the United Nations Audiovisual Library of International Law Official website of Action Against Desertification, a United Nations Food and Agriculture Organization initiative of the African, Caribbean and Pacific Group of States Global Deserts Outlook (2006), thematic assessment report in the Global Environment Outlook (GEO) series of the United Nations Environment Program (UNEP). Environmental soil science Paleoclimatology
Desertification
[ "Environmental_science" ]
4,916
[ "Environmental soil science" ]
8,179
https://en.wikipedia.org/wiki/Dye
A dye is a colored substance that chemically bonds to the substrate to which it is being applied. This distinguishes dyes from pigments which do not chemically bind to the material they color. Dye is generally applied in an aqueous solution and may require a mordant to improve the fastness of the dye on the fiber. The majority of natural dyes are derived from non-animal sources such as roots, berries, bark, leaves, wood, fungi and lichens. However, due to large-scale demand and technological improvements, most dyes used in the modern world are synthetically produced from substances such as petrochemicals. Some are extracted from insects and/or minerals. Synthetic dyes are produced from various chemicals. The great majority of dyes are obtained in this way because of their superior cost, optical properties (color), and resilience (fastness, mordancy). Both dyes and pigments are colored, because they absorb only some wavelengths of visible light. Dyes are usually soluble in some solvent, whereas pigments are insoluble. Some dyes can be rendered insoluble with the addition of salt to produce a lake pigment. History Textile dyeing dates back to the Neolithic period. Throughout history, people have dyed their textiles using common, locally available materials. Scarce dyestuffs that produced brilliant and permanent colors such as the natural invertebrate dyes Tyrian purple and crimson kermes were highly prized luxury items in the ancient and medieval world. Plant-based dyes such as woad, indigo, saffron, and madder were important trade goods in the economies of Asia and Europe. Across Asia and Africa, patterned fabrics were produced using resist dyeing techniques to control the absorption of color in piece-dyed cloth. Dyes from the New World such as cochineal and logwood were brought to Europe by the Spanish treasure fleets, and the dyestuffs of Europe were carried by colonists to America. Dyed flax fibers have been found in the Republic of Georgia in a prehistoric cave dated to 36,000 BP. Archaeological evidence shows that, particularly in India and Phoenicia, dyeing has been widely carried out for over 5,000 years. Early dyes were obtained from animal, vegetable or mineral sources, with no to very little processing. By far the greatest source of dyes has been from the plant kingdom, notably roots, berries, bark, leaves and wood, only few of which are used on a commercial scale. Early industrialization was conducted by J. Pullar and Sons in Scotland. The first synthetic dye, mauve, was discovered serendipitously by William Henry Perkin in 1856. The discovery of mauveine started a surge in synthetic dyes and in organic chemistry in general. Other aniline dyes followed, such as fuchsine, safranine, and induline. Many thousands of synthetic dyes have since been prepared. The discovery of mauve also led to developments within immunology and chemotherapy. In 1863 the forerunner to Bayer AG was formed in what became Wuppertal, Germany. In 1891, Paul Ehrlich discovered that certain cells or organisms took up certain dyes selectively. He then reasoned that a sufficiently large dose could be injected to kill pathogenic microorganisms, if the dye did not affect other cells. Ehrlich went on to use a compound to target syphilis, the first time a chemical was used in order to selectively kill bacteria in the body. He also used methylene blue to target the plasmodium responsible for malaria. 
Chemistry The color of a dye is dependent upon the ability of the substance to absorb light within the visible region of the electromagnetic spectrum (380–750 nm). An earlier theory known as Witt theory stated that a colored dye had two components, a chromophore which imparts color by absorbing light in the visible region (some examples are nitro, azo, quinoid groups) and an auxochrome which serves to deepen the color. This theory has been superseded by modern electronic structure theory which states that the color in dyes is due to excitation of valence π-electrons by visible light. Types Dyes are classified according to their solubility and chemical properties. Acid dyes are water-soluble anionic dyes that are applied to fibers such as silk, wool, nylon and modified acrylic fibers using neutral to acid dye baths. Attachment to the fiber is attributed, at least partly, to salt formation between anionic groups in the dyes and cationic groups in the fiber. Acid dyes are not substantive to cellulosic fibers. Most synthetic food colors fall in this category. Examples of acid dye are Alizarine Pure Blue B, Acid red 88, etc. Basic dyes are water-soluble cationic dyes that are mainly applied to acrylic fibers, but find some use for wool and silk. Usually acetic acid is added to the dye bath to help the uptake of the dye onto the fiber. Basic dyes are also used in the coloration of paper. Direct or substantive dyeing is normally carried out in a neutral or slightly alkaline dye bath, at or near boiling point, with the addition of either sodium chloride (NaCl) or sodium sulfate (Na2SO4) or sodium carbonate (Na2CO3). Direct dyes are used on cotton, paper, leather, wool, silk and nylon. They are also used as pH indicators and as biological stains. Laser dyes are used in the production of some lasers, optical media (CD-R), and camera sensors (color filter array). Mordant dyes require a mordant, which improves the fastness of the dye against water, light and perspiration. The choice of mordant is very important as different mordants can change the final color significantly. Most natural dyes are mordant dyes and there is therefore a large literature base describing dyeing techniques. The most important mordant dyes are the synthetic mordant dyes, or chrome dyes, used for wool; these comprise some 30% of dyes used for wool, and are especially useful for black and navy shades. The mordant potassium dichromate is applied as an after-treatment. It is important to note that many mordants, particularly those in the heavy metal category, can be hazardous to health and extreme care must be taken in using them. Vat dyes are essentially insoluble in water and incapable of dyeing fibres directly. However, reduction in alkaline liquor produces the water-soluble alkali metal salt of the dye. This form is often colorless, in which case it is referred to as a Leuco dye, and has an affinity for the textile fibre. Subsequent oxidation reforms the original insoluble dye. The color of denim is due to indigo, the original vat dye. Reactive dyes utilize a chromophore attached to a substituent that is capable of directly reacting with the fiber substrate. The covalent bonds that attach reactive dye to natural fibers make them among the most permanent of dyes. "Cold" reactive dyes, such as Procion MX, Cibacron F, and Drimarene K, are very easy to use because the dye can be applied at room temperature. Reactive dyes are by far the best choice for dyeing cotton and other cellulose fibers at home or in the art studio. 
Disperse dyes were originally developed for the dyeing of cellulose acetate, and are water-insoluble. The dyes are finely ground in the presence of a dispersing agent and sold as a paste, or spray-dried and sold as a powder. Their main use is to dye polyester, but they can also be used to dye nylon, cellulose triacetate, and acrylic fibers. In some cases, an elevated dyeing temperature is required, and a pressurized dyebath is used. The very fine particle size gives a large surface area that aids dissolution to allow uptake by the fiber. The dyeing rate can be significantly influenced by the choice of dispersing agent used during the grinding. Azoic dyeing is a technique in which an insoluble azo dye is produced directly onto or within the fiber. This is achieved by treating a fiber with both diazoic and coupling components. With suitable adjustment of dyebath conditions the two components react to produce the required insoluble azo dye. This technique of dyeing is unique, in that the final color is controlled by the choice of the diazoic and coupling components. This method of dyeing cotton is declining in importance due to the toxic nature of the chemicals used. Sulfur dyes are inexpensive dyes used to dye cotton with dark colors. Dyeing is effected by heating the fabric in a solution of an organic compound, typically a nitrophenol derivative, and sulfide or polysulfide. The organic compound reacts with the sulfide source to form dark colors that adhere to the fabric. Sulfur Black 1, the largest selling dye by volume, does not have a well-defined chemical structure. Food dyes One other class that describes the role of dyes, rather than their mode of use, is the food dye. Because food dyes are classed as food additives, they are manufactured to a higher standard than some industrial dyes. Food dyes can be direct, mordant and vat dyes, and their use is strictly controlled by legislation. Many are azo dyes, although anthraquinone and triphenylmethane compounds are used for colors such as green and blue. Some naturally occurring dyes are also used. Other important dyes A number of other classes have also been established, including: Oxidation bases, mainly for hair and fur Laser dyes: rhodamine 6G and coumarin dyes. Leather dyes, for leather Fluorescent brighteners, for textile fibres and paper Solvent dyes, for wood staining and producing colored lacquers, solvent inks, coloring oils, waxes. Contrast dyes, injected for magnetic resonance imaging, are essentially the same as clothing dye except they are coupled to an agent that has strong paramagnetic properties.
Mayhems dye, used in water cooling for looks, often rebranded RIT dye Chromophoric dyes By the nature of their chromophore, dyes are divided into: Acridine dyes, derivatives of acridine Anthraquinone dyes, derivatives of anthraquinone Arylmethane dyes Diarylmethane dyes, based on diphenyl methane Triarylmethane dyes, derivatives of triphenylmethane Azo dyes, based on the -N=N- azo structure Phthalocyanine dyes, derivatives of phthalocyanine Quinone-imine dyes, derivatives of quinone Azin dyes Eurhodin dyes Safranin dyes, derivatives of safranin Indamins Indophenol dyes, derivatives of indophenol Oxazin dyes, derivatives of oxazin Oxazone dyes, derivatives of oxazone Thiazine dyes Thiazole dyes Xanthene dyes Fluorene dyes, derivatives of fluorene Pyronin dyes Fluorone dyes, based on fluorone Rhodamine dyes, derivatives of rhodamine Pollution Dyes produced by the textile, printing and paper industries are a source of pollution of rivers and waterways. An estimated 700,000 tons of dyestuffs are produced annually (1990 data). The disposal of that material has received much attention, and both chemical and biological treatment methods are used. Vital dyes A "vital dye" or stain is a dye capable of penetrating living cells or tissues without causing immediate visible degenerative changes. Such dyes are useful in medical and pathological fields in order to selectively color certain structures (such as cells) in order to distinguish them from surrounding tissue and thus make them more visible for study (for instance, under a microscope). As the visibility is meant to allow study of the cells or tissues, it is usually important that the dye not have other effects on the structure or function of the tissue that might impair objective observation. A distinction is drawn between dyes that are meant to be used on cells that have been removed from the organism prior to study (supravital staining) and dyes that are used within a living body - administered by injection or other means (intravital staining) - as the latter is (for instance) subject to higher safety standards, and must typically be a chemical known to avoid causing adverse effects on any biochemistry (until cleared from the tissue), not just to the tissue being studied, or in the short term. The term "vital stain" is occasionally used interchangeably with both intravital and supravital stains, the underlying concept in either case being that the cells examined are still alive. In a stricter sense, the term "vital staining" means the polar opposite of "supravital staining." If living cells absorb the stain during supravital staining, they exclude it during "vital staining"; for example, they color negatively while only dead cells color positively, and thus viability can be determined by counting the percentage of total cells that stain negatively. See also Biological pigment, any colored substance in organisms Blue Wool Scale Hair coloring Industrial dye degradation J-aggregate Laser dyes List of dyes Oxidant Phototendering Stain Natural dyes Pigments Inorganic pigments Organic pigments References Further reading Abelshauser, Werner.
German History and Global Enterprise: BASF: The History of a Company (2004), covers 1865 to 2000. Beer, John J. The Emergence of the German Dye Industry (1959).
Dye
[ "Chemistry" ]
3,014
[ "nan" ]
8,197
https://en.wikipedia.org/wiki/Desmond%20Morris
Desmond John Morris FLS hon. caus. (born 24 January 1928) is an English zoologist, ethologist and surrealist painter, as well as a popular author in human sociobiology. He is known for his 1967 book The Naked Ape, and for his television programmes such as Zoo Time. Early life and education Morris was born in Purton, Wiltshire, to Marjorie (née Hunt) and children's fiction author Harry Morris. In 1933, the Morrises moved to Swindon where Desmond developed an interest in natural history and writing. He was educated at Dauntsey's School, a boarding school in Wiltshire. In 1946, Morris joined the British Army for two years of national service, becoming a lecturer in fine arts at the Chiseldon Army College in Wiltshire. After being demobilised in 1948, he held his first one-man show of his own paintings at the Swindon Arts Centre, and studied zoology at the University of Birmingham. In 1950 he held a surrealist art exhibition with Joan Miró at the London Gallery. He held many other exhibitions in later years. Also in 1950, Desmond Morris wrote and directed two surrealist films, Time Flower and The Butterfly and the Pin. In 1951 he began a doctorate at the Department of Zoology, University of Oxford, in animal behaviour. In 1954, he earned a Doctor of Philosophy for his work on the reproductive behaviour of the ten-spined stickleback. Career Morris stayed at Oxford, researching the reproductive behaviour of birds. In 1956 he moved to London as Head of the Granada TV and Film Unit for the Zoological Society of London, and studied the picture-making abilities of apes. The work included creating programmes for film and television on animal behaviour and other zoology topics. He hosted Granada TV's weekly Zoo Time programme until 1959, scripting and hosting 500 programmes, and 100 episodes of the show Life in the Animal World for BBC2. In 1957 he organised an exhibition at the Institute of Contemporary Arts in London, showing paintings and drawings composed by common chimpanzees. In 1958 he co-organised an exhibition, The Lost Image, which compared pictures by infants, human adults, and apes, at the Royal Festival Hall in London. In 1959 he left Zoo Time to become the Zoological Society's Curator of Mammals. In 1964, he delivered the Royal Institution Christmas Lecture on Animal Behaviour. In 1967 he spent a year as executive director of the London Institute of Contemporary Arts. Morris's books include The Naked Ape: A Zoologist's Study of the Human Animal, published in 1967. The book sold well enough for Morris to move to Malta in 1968 to write a sequel and other books. In 1973 he returned to Oxford to work for the ethologist Niko Tinbergen. From 1973 to 1981, Morris was a Research Fellow at Wolfson College, Oxford. In 1979 he undertook a television series for Thames TV, The Human Race, followed in 1982 by Man Watching in Japan, The Animals Road Show in 1986 and then several other series. Morris wrote and presented the BBC documentary The Human Animal and its accompanying book in 1994. National Life Stories conducted an oral history interview (C1672/16) with Morris, in 2015, for its Science and Religion collection held by the British Library. Morris is a Fellow honoris causa of the Linnean Society of London. Personal life Morris's father suffered lung damage in World War I, and died when Morris was 14. He was not allowed to go to the funeral and said later; "It was the beginning of a lifelong hatred of the establishment. 
The church, the government and the military were all on my hate list and have remained there ever since." His grandfather William Morris, an enthusiastic Victorian naturalist and founder of the Swindon local newspaper, greatly influenced him during his time living in Swindon. In July 1952, Morris married Ramona Baulch; they had one son, Jason. In 1978 Morris was elected vice-chairman of Oxford United. While a director of the club, he designed its ox-head badge based on a Minoan-style bull's head, which remains in use to this day. Morris lived in the same house in North Oxford as the 19th-century lexicographer James Murray, who worked on the Oxford English Dictionary. He has exhibited at the Taurus Gallery in North Parade, Oxford, close to where he lived. He is the patron of the Friends of Swindon Museum and Art Gallery and gave a talk to launch the charity in 1993. Since the death of his wife in 2018, he has lived with his son and family in Ireland. Bibliography Books The Big Cats (1965) – part of The Bodley Head Natural Science Picture Books, looking at the habits of the five Big Cats. The Mammals: A Guide to the Living Species (1965) – a listing of mammal genera, non-rodent non-bat species, and additional information on select species. Men and Pandas (1966) with Ramona Morris – third volume in the Ramona and Desmond Morris animal series. The Naked Ape (1967) – a look at humanity's animalistic qualities and its similarity with other apes. In 2011, Time magazine placed it on its list of the 100 best or most influential non-fiction books written in English since 1923. Men and Snakes (1968) with Ramona Morris – an exploration of the various complex relationships between humans and snakes. The Human Zoo (1969) – a continuation of The Naked Ape, analysing human behaviour in big modern societies and their resemblance to animal behaviour in captivity. Patterns of Reproductive Behavior (1970) Intimate Behaviour (1971) – a study of the human side of intimate behaviour, examining how natural selection shaped human physical contact. 
Manwatching: A Field Guide to Human Behaviour (1978) – includes discussion of topic "Tie Signs" Gestures: Their Origin and Distribution (1979) Animal Days (1979) The Soccer Tribe (1981) Pocket Guide to Manwatching (1982) Inrock (1983) Bodywatching – A Field Guide to the Human Species (1985) The Book of Ages: Who Did What When (1985) The Art of Ancient Cyprus (1985) Catwatching and Cat Lore (1986) Dogwatching (1986) Horsewatching (1989) Animalwatching (1990) Babywatching (1991) Christmas Watching (1992) Bodytalk (1994) The Human Animal (1994) – book and BBC documentary TV series The Human Sexes (1997) – Discovery/BBC documentary TV series Cat World: A Feline Encyclopedia (1997) The Secret Surrealist: The Paintings of Desmond Morris (1999) Body Guards: Protective Amulets and Charms (1999) The Naked Eye (2001) Dogs: The Ultimate Dictionary of over 1,000 Dog Breeds (2001) Peoplewatching: The Desmond Morris Guide to Body Language (2002) The Naked Woman: A Study of the Female Body (2004) Linguaggio muto (Dumb Language) (2004) The Nature of Happiness (2004) Watching (2006) – autobiography Fantastic Cats (2007) The Naked Man: A Study of the Male Body (2008) Baby: A Portrait of the First Two Years of Life (2008) Planet Ape (2009) (co-authored with [Steve Parker]) Owl (2009) – Part of the Reaktion Books Animal series The Artistic Ape (2013) Monkey (2013) – Part of the Reaktion Books Animal series Leopard (2014) – Part of the Reaktion Books Animal series Bison (2015) – Part of the Reaktion Books Animal series Cats in Art (2017) – Part of the Reaktion Books Animal series The Lives of the Surrealists (2018) Postures: Body Language in Art (2019) The British Surrealists (2022) "101 Surrealists" (2024) Book reviews Filmography Zootime (Weekly, 1956–67) Life (1965–67) The Human Race (1982) The Animals Roadshow (1987–89) The Animal Contract (1989) Animal Country (1991–96) The Human Animal (1994) The Human Sexes'' (1997) Criticism Some of Morris's theories have been criticised as untestable. For instance, geneticist Adam Rutherford writes that Morris commits "the scientific sin of the 'just-so' story – speculation that sounds appealing but cannot be tested or is devoid of evidence". However, this is also a criticism of adaptationism in evolutionary biology, not just of Morris. Morris is also criticised for suggesting that gender roles have an evolutionary rather than a purely cultural background. References External links Official website including a complete biography Dinjet il-Qattus/Catlore by Desmond Morris, translated into Maltese by Toni Aquilina, D es Litt. 1928 births Military personnel from Wiltshire 20th-century British Army personnel Living people 20th-century British artists 20th-century English writers 21st-century English painters 21st-century English writers English contemporary artists Alumni of Magdalen College, Oxford Alumni of the University of Birmingham English curators English painters English science writers English television presenters English zoologists Ethologists Fellows of the Zoological Society of London Fellows of Wolfson College, Oxford Founding members of the World Cultural Council Human evolution theorists People educated at Dauntsey's School People from Purton English surrealist artists The New York Review of Books people British Army soldiers
Desmond Morris
[ "Biology" ]
1,890
[ "Ethology", "Behavior", "Ethologists" ]
8,200
https://en.wikipedia.org/wiki/Discovery%20of%20chemical%20elements
The discoveries of the 118 chemical elements known to exist as of 2025 are presented here in chronological order. The elements are listed generally in the order in which each was first defined as the pure element, as the exact date of discovery of most elements cannot be accurately determined. There are plans to synthesize more elements, and it is not known how many elements are possible. Each element's name, atomic number, year of first report, name of the discoverer, and notes related to the discovery are listed. Periodic table of elements Graphical timeline Cumulative diagram Pre-modern and early modern discoveries Modern discoveries For 18th-century discoveries, around the time that Antoine Lavoisier first questioned the phlogiston theory, the recognition of a new "earth" has been regarded as being equivalent to the discovery of a new element (as was the general practice then). For some elements (e.g. Be, B, Na, Mg, Al, Si, K, Ca, Mn, Co, Ni, Zr, Mo), this presents further difficulties as their compounds were widely known since medieval or even ancient times, even though the elements themselves were not. Since the true nature of those compounds was sometimes only gradually discovered, it is sometimes very difficult to name one specific discoverer. In such cases the first publication on their chemistry is noted, and a longer explanation given in the notes. See also History of the periodic table Periodic table Extended periodic table The Mystery of Matter: Search for the Elements (2014/2015 PBS film) Transfermium Wars References External links History of the Origin of the Chemical Elements and Their Discoverers Last updated by Boris Pritychenko on March 30, 2004 History of Elements of the Periodic Table Timeline of Element Discoveries The Historyscoper Discovery of the Elements – The Movie – YouTube (1:18) The History Of Metals Timeline . A timeline showing the discovery of metals and the development of metallurgy. —Eric Scerri, 2007, The periodic table: Its story and its significance, Oxford University Press, New York, Elements, discoveries Timeline History of chemistry History of physics Discovery
Discovery of chemical elements
[ "Chemistry" ]
427
[ "Lists of chemical elements" ]
8,214
https://en.wikipedia.org/wiki/Decimal
The decimal numeral system (also called the base-ten positional numeral system and denary or decanary) is the standard system for denoting integer and non-integer numbers. It is the extension to non-integer numbers (decimal fractions) of the Hindu–Arabic numeral system. The way of denoting numbers in the decimal system is often referred to as decimal notation. A decimal numeral (also often just decimal or, less correctly, decimal number) refers generally to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by a decimal separator (usually "." or ",", as in 3.14 or 3,14). Decimal may also refer specifically to the digits after the decimal separator, such as in "3.14 is the approximation of π to two decimals". Zero-digits after a decimal separator serve the purpose of signifying the precision of a value. The numbers that may be represented in the decimal system are the decimal fractions, that is, fractions of the form a/10^n, where a is an integer and n is a non-negative integer. Decimal fractions also result from the addition of an integer and a fractional part; the resulting sum sometimes is called a fractional number. Decimals are commonly used to approximate real numbers. By increasing the number of digits after the decimal separator, one can make the approximation errors as small as one wants, when one has a method for computing the new digits. Originally and in most uses, a decimal has only a finite number of digits after the decimal separator. However, the decimal system has been extended to infinite decimals for representing any real number, by using an infinite sequence of digits after the decimal separator (see decimal representation). In this context, the usual decimals, with a finite number of non-zero digits after the decimal separator, are sometimes called terminating decimals. A repeating decimal is an infinite decimal that, after some place, repeats indefinitely the same sequence of digits (e.g., 0.333..., which represents 1/3). An infinite decimal represents a rational number, the quotient of two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits. Origin Many numeral systems of ancient civilizations use ten and its powers for representing numbers, possibly because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly the Egyptian numerals, then the Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of the Hindu–Arabic numeral system for representing integers. This system has been extended to represent some non-integer numbers, called decimal fractions or decimal numbers, for forming the decimal numeral system. Decimal notation For writing numbers, the decimal system uses ten decimal digits, a decimal mark, and, for negative numbers, a minus sign "−". The decimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9; the decimal separator is the dot "." in many countries (mostly English-speaking), and a comma "," in other countries. For representing a non-negative number, a decimal numeral consists of either a (finite) sequence of digits (such as "2017"), where the entire sequence represents an integer, or a decimal mark separating two sequences of digits (such as "20.70828"). 
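The positional principle just described, in which each digit contributes its value times a power of ten determined by its position, can be illustrated with a small sketch. Python is assumed here, and the helper name decimal_value is hypothetical rather than part of any standard library:

```python
def decimal_value(numeral: str) -> float:
    """Evaluate a non-negative decimal numeral such as "20.70828" by positional weights."""
    integer_part, _, fractional_part = numeral.partition(".")
    value = 0.0
    # Digits left of the separator carry weights 10**0, 10**1, ... counted from the right.
    for k, digit in enumerate(reversed(integer_part)):
        value += int(digit) * 10 ** k
    # Digits right of the separator carry weights 10**-1, 10**-2, ... counted from the left.
    for k, digit in enumerate(fractional_part, start=1):
        value += int(digit) * 10 ** -k
    return value

print(decimal_value("2017"))      # 2017.0
print(decimal_value("20.70828"))  # 20.70828 (up to binary floating-point rounding)
```

The same weighting underlies the further conventions described next, such as why leading and trailing zeros do not change the represented value.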
If the first sequence of digits (the integer part) contains at least two digits, it is generally assumed that its first digit is not zero. In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 015 and 15 represent the same number. Similarly, if the final digit on the right of the decimal mark is zero, it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number; for example, 3.14 and 3.1400 represent the same number. For representing a negative number, a minus sign is placed before the numeral, which then represents the negative of the corresponding non-negative number. The integer part or integral part of a decimal numeral is the integer written to the left of the decimal separator (see also truncation). For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part from the decimal separator to the right is the fractional part, which equals the difference between the numeral and its integer part. When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written (for example, .1234 instead of 0.1234). In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation. In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system. Decimal fractions Decimal fractions (sometimes called decimal numbers, especially in contexts involving explicit fractions) are the rational numbers that may be expressed as a fraction whose denominator is a power of ten. For example, the decimal expressions 0.8, 14.89, and 0.00024 represent the fractions 8/10, 1489/100, and 24/100000, and therefore denote decimal fractions. An example of a fraction that cannot be represented by a decimal expression (with a finite number of digits) is 1/3, 3 not being a power of 10. More generally, a decimal with n digits after the separator (a point or comma) represents the fraction with denominator 10^n, whose numerator is the integer obtained by removing the separator. It follows that a number is a decimal fraction if and only if it has a finite decimal representation. Expressed as fully reduced fractions, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are 1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, and so on. Approximation using decimal numbers Decimal numerals do not allow an exact representation for all real numbers. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates π, being less than 10^−5 off; so decimals are widely used in science, engineering and everyday life. More precisely, for every real number x and every positive integer n, there are two decimals L and u with at most n digits after the decimal mark such that L ≤ x ≤ u and u − L = 10^−n. Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with n digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10^−n. In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds. For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. 
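Returning to the decimal fractions characterized above: whether a given fraction has a finite (terminating) decimal representation can be checked mechanically by stripping factors of 2 and 5 from its reduced denominator. The following is a minimal sketch, again assuming Python; the function name is_decimal_fraction is hypothetical:

```python
from math import gcd

def is_decimal_fraction(numerator: int, denominator: int) -> bool:
    """True if numerator/denominator has a finite (terminating) decimal expansion.

    After reducing the fraction, the expansion terminates exactly when the
    denominator has no prime factors other than 2 and 5.
    """
    denominator //= gcd(numerator, denominator)  # fully reduce the fraction
    for p in (2, 5):
        while denominator % p == 0:
            denominator //= p
    return denominator == 1

print(is_decimal_fraction(3, 8))     # True:  3/8 = 0.375
print(is_decimal_fraction(1, 3))     # False: 1/3 = 0.333... (repeating)
print(is_decimal_fraction(21, 300))  # True:  21/300 = 7/100 = 0.07
```

The same test explains the examples above: 8 = 2^3 gives a terminating expansion for 3/8, while the factor 3 left in the denominator of 1/3 forces the expansion to repeat.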
In both of these cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796 (see also significant figures). Infinite decimal expansion For a real number x and an integer n ≥ 0, let [x]_n denote the (finite) decimal expansion of the greatest number that is not greater than x and that has exactly n digits after the decimal mark. Let d_n denote the last digit of [x]_n. It is straightforward to see that [x]_n may be obtained by appending d_n to the right of [x]_{n−1}. This way one has [x]_n = [x]_{n−1} + d_n/10^n, and the difference of [x]_{n−1} and [x]_n amounts to d_n·10^−n, which is either 0, if d_n = 0, or gets arbitrarily small as n tends to infinity. According to the definition of a limit, x is the limit of [x]_n when n tends to infinity. This is written as x = lim_{n→∞} [x]_n or x = [x]_0.d_1d_2...d_n..., which is called an infinite decimal expansion of x. Conversely, for any integer [x]_0 and any sequence of digits d_1, d_2, d_3, ..., the (infinite) expression [x]_0.d_1d_2...d_n... is an infinite decimal expansion of a real number x. This expansion is unique if neither all d_n are equal to 9 nor all d_n are equal to 0 for n large enough (for all n greater than some natural number N). If all d_n for n > N equal 9, the limit of the sequence ([x]_n) is the decimal fraction obtained by replacing the last digit that is not a 9, i.e. d_N, by d_N + 1, and replacing all subsequent 9s by 0s (see 0.999...). Any such decimal fraction, i.e. one with d_n = 0 for n > N, may be converted to its equivalent infinite decimal expansion by replacing d_N by d_N − 1 and replacing all subsequent 0s by 9s (see 0.999...). In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of [x]_n, and the other containing only 9s after some place, which is obtained by defining [x]_n as the greatest number that is less than x, having exactly n digits after the decimal mark. Rational numbers Long division allows computing the infinite decimal expansion of a rational number. If the rational number is a decimal fraction, the division stops eventually, producing a decimal numeral, which may be prolongated into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has a repeating decimal. For example, 1/81 = 0.012345679012... (with the group 012345679 indefinitely repeating). The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational. For example, if x = 0.4156156156..., then 10,000x = 4156.156156... and 10x = 4.156156..., so 10,000x − 10x = 9990x = 4152, and therefore x = 4152/9990 or, dividing both numerator and denominator by 6, 692/1665. Decimal computation Most modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally). For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems. For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.) 
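Returning to the long-division argument above, it can be turned directly into a procedure that yields either the terminating expansion or the repeating block of a fraction, by watching for the first remainder that reappears. This is a rough sketch under the same assumption of Python; expand_fraction is a hypothetical name, not a library routine:

```python
def expand_fraction(numerator: int, denominator: int, max_digits: int = 30) -> str:
    """Return the decimal expansion of numerator/denominator by long division.

    A repeating block, if one appears within max_digits, is written in parentheses,
    e.g. 1/3 -> "0.(3)" and 1/8 -> "0.125".
    """
    integer_part, remainder = divmod(numerator, denominator)
    digits = []
    seen = {}  # remainder -> position in digits where it first occurred
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        digits.append(str(digit))
    if not digits:
        return str(integer_part)
    if remainder in seen:  # same remainder again: the quotient digits repeat from here on
        start = seen[remainder]
        return f"{integer_part}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"
    return f"{integer_part}." + "".join(digits)

print(expand_fraction(1, 8))        # 0.125
print(expand_fraction(1, 3))        # 0.(3)
print(expand_fraction(1, 81))       # 0.(012345679)
print(expand_fraction(4152, 9990))  # 0.4(156)
```

Because every remainder is smaller than the divisor, the loop must either reach remainder 0 (a terminating expansion) or revisit a remainder and start repeating, which is exactly the argument made in the text.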
Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal, especially in database implementations, but there are other decimal representations in use (including decimal floating point such as in newer revisions of the IEEE 754 Standard for Floating-Point Arithmetic). Decimal arithmetic is used in computers so that decimal fractional results of adding (or subtracting) values with a fixed length of their fractional part always are computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for bookkeeping purposes. This is not possible in binary, because the negative powers of 10 have no finite binary fractional representation; and it is generally impossible for multiplication (or division). See Arbitrary-precision arithmetic for exact calculations. History Many ancient cultures calculated with numerals based on ten, perhaps because two human hands have ten fingers. Standardized weights used in the Indus Valley Civilisation were based on the ratios 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, while their standardized ruler – the Mohenjo-daro ruler – was divided into ten equal parts. Egyptian hieroglyphs, in evidence since around 3000 BCE, used a purely decimal system, as did the Linear A script of the Minoans and the Linear B script (c. 1400–1200 BCE) of the Mycenaeans. The Únětice culture in central Europe (2300–1600 BC) used standardised weights and a decimal system in trade. The number system of classical Greece also used powers of ten, including an intermediate base of 5, as did Roman numerals. Notably, the polymath Archimedes (c. 287–212 BCE) invented a decimal positional system in his Sand Reckoner which was based on 10^8. Hittite hieroglyphs (since the 15th century BCE) were also strictly decimal. The Egyptian hieratic numerals, the Greek alphabet numerals, the Hebrew alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 to 90, 100, 200 to 900, 1000, 2000, 3000, 4000, to 10,000. The world's earliest positional decimal system was the Chinese rod calculus. 
Positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350 but did not develop any notation to represent them. The Persian mathematician Jamshid al-Kashi used, and claimed to have discovered, decimal fractions in the 15th century. A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century. Stevin's influential booklet De Thiende ("the art of tenths") was first published in Dutch in 1585 and translated into French as La Disme. John Napier introduced using the period (.) to separate the integer part of a decimal number from the fractional part in his book on constructing tables of logarithms, published posthumously in 1620. Natural languages A method of expressing every possible natural number using a set of ten symbols emerged in India. Several Indian languages show a straightforward decimal system. Dravidian languages have numbers between 10 and 20 expressed in a regular pattern of addition to 10. The Hungarian language also uses a straightforward decimal system. All numbers between 10 and 20 are formed regularly (e.g. 11 is expressed as "tizenegy" literally "one on ten"), as with those between 20 and 100 (23 as "huszonhárom" = "three on twenty"). A straightforward decimal rank system with a word for each order (10 , 100 , 1000 , 10,000 ), and in which 11 is expressed as ten-one and 23 as two-ten-three, and 89,345 is expressed as 8 (ten thousands) 9 (thousand) 3 (hundred) 4 (tens) 5 is found in Chinese, and in Vietnamese with a few irregularities. Japanese, Korean, and Thai have imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and decades. For example, in English 11 is "eleven" not "ten-one" or "one-teen". Incan languages such as Quechua and Aymara have an almost straightforward decimal system, in which 11 is expressed as ten with one and 23 as two-ten with three. Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability. Other bases Some cultures do, or did, use other bases of numbers. Pre-Columbian Mesoamerican cultures such as the Maya used a base-20 system (perhaps based on using all twenty fingers and toes). The Yuki language in California and the Pamean languages in Mexico have octal (base-8) systems because the speakers count using the spaces between their fingers rather than the fingers themselves. The existence of a non-decimal base in the earliest traces of the Germanic languages is attested by the presence of words and glosses meaning that the count is in decimal (cognates to "ten-count" or "tenty-wise"); such would be expected if normal counting is not decimal, and unusual if it were. Where this counting system is known, it is based on the "long hundred" = 120, and a "long thousand" of 1200. The descriptions like "long" only appear after the "small hundred" of 100 appeared with the Christians. Gordon's Introduction to Old Norse p. 293, gives number names that belong to this system. An expression cognate to 'one hundred and eighty' translates to 200, and the cognate to 'two hundred' translates to 240. Goodare details the use of the long hundred in Scotland in the Middle Ages, giving examples such as calculations where the carry implies i C (i.e. one hundred) as 120, etc. 
That the general population were not alarmed to encounter such numbers suggests common enough use. It is also possible to avoid hundred-like numbers by using intermediate units, such as stones and pounds, rather than a long count of pounds. Goodare gives examples of numbers like vii score, where one avoids the hundred by using extended scores. There is also a paper by W.H. Stevenson, on 'Long Hundred and its uses in England'. Many or all of the Chumashan languages originally used a base-4 counting system, in which the names for numbers were structured according to multiples of 4 and 16. Many languages use quinary (base-5) number systems, including Gumatj, Nunggubuyu, Kuurn Kopan Noot and Saraveca. Of these, Gumatj is the only true 5–25 language known, in which 25 is the higher group of 5. Some Nigerians use duodecimal systems. So did some small communities in India and Nepal, as indicated by their languages. The Huli language of Papua New Guinea is reported to have base-15 numbers. Ngui means 15, ngui ki means 15 × 2 = 30, and ngui ngui means 15 × 15 = 225. Umbu-Ungu, also known as Kakoli, is reported to have base-24 numbers. Tokapu means 24, tokapu talu means 24 × 2 = 48, and tokapu tokapu means 24 × 24 = 576. Ngiti is reported to have a base-32 number system with base-4 cycles. The Ndom language of Papua New Guinea is reported to have base-6 numerals. Mer means 6, mer an thef means 6 × 2 = 12, nif means 36, and nif thef means 36×2 = 72. See also Notes References Elementary arithmetic Fractions (mathematics) Positional numeral systems
Decimal
[ "Mathematics" ]
4,150
[ "Fractions (mathematics)", "Elementary arithmetic", "Mathematical objects", "Elementary mathematics", "Numeral systems", "Arithmetic", "Numbers", "Positional numeral systems" ]
8,221
https://en.wikipedia.org/wiki/Death
Death is the end of life; the irreversible cessation of all biological functions that sustain a living organism. The remains of a former organism normally begin to decompose shortly after death. Death eventually and inevitably occurs in all organisms. Some organisms, such as Turritopsis dohrnii, are biologically immortal; however, they can still die from means other than aging. Death is generally applied to whole organisms; the equivalent for individual components of an organism, such as cells or tissues, is necrosis. Something that is not considered an organism, such as a virus, can be physically destroyed but is not said to die, as a virus is not considered alive in the first place. As of the early 21st century, 56 million people die per year. The most common reason is aging, followed by cardiovascular disease, which is a disease that affects the heart or blood vessels. As of 2022, an estimated total of almost 110 billion humans have died, or roughly 94% of all humans to have ever lived. A substudy of gerontology known as biogerontology seeks to eliminate death by natural aging in humans, often through the application of natural processes found in certain organisms. However, as humans do not have the means to apply this to themselves, they have to use other ways to reach the maximum lifespan for a human, often through lifestyle changes, such as calorie reduction, dieting, and exercise. The idea of lifespan extension is considered and studied as a way for people to live longer. Determining when a person has definitively died has proven difficult. Initially, death was defined as occurring when breathing and the heartbeat ceased, a status still known as clinical death. However, the development of cardiopulmonary resuscitation (CPR) meant that such a state was no longer strictly irreversible. Brain death was then considered a more fitting option, but several definitions exist for this. Some people believe that all brain functions must cease. Others believe that even if the brainstem is still alive, the personality and identity are irretrievably lost, so therefore, the person should be considered entirely dead. Brain death is sometimes used as a legal definition of death. For all organisms with a brain, death can instead be focused on this organ. The cause of death is usually considered important, and an autopsy can be done. There are many causes, from accidents to diseases. Many cultures and religions have a concept of an afterlife that may hold the idea of judgment of good and bad deeds in one's life. There are also different customs for honoring the body, such as a funeral, cremation, or sky burial. After a death, an obituary may be posted in a newspaper, and the "survived by" kin and friends usually go through the grieving process. Diagnosis Definition There are many scientific approaches and various interpretations of the concept. Additionally, the advent of life-sustaining therapy and the numerous criteria for defining death from both a medical and legal standpoint have made it difficult to create a single unifying definition. Defining life to define death One of the challenges in defining death is in distinguishing it from life. As a point in time, death seems to refer to the moment when life ends. Determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing precise conceptual boundaries between life and death. 
This is difficult due to there being little consensus on how to define life. It is possible to define life in terms of consciousness. When consciousness ceases, an organism can be said to have died. One of the flaws in this approach is that there are many organisms that are alive but probably not conscious. Another problem is in defining consciousness, which has many different definitions given by modern scientists, psychologists and philosophers. Additionally, many religious traditions, including Abrahamic and Dharmic traditions, hold that death does not (or may not) entail the end of consciousness. In certain cultures, death is more of a process than a single event. It implies a slow shift from one spiritual state to another. Other definitions for death focus on the character of cessation of organismic functioning and human death, which refers to irreversible loss of personhood. More specifically, death occurs when a living entity experiences irreversible cessation of all functioning. As it pertains to human life, death is an irreversible process where someone loses their existence as a person. Definition of death by heartbeat and breath Historically, attempts to define the exact moment of a human's death have been subjective or imprecise. Death was defined as the cessation of heartbeat (cardiac arrest) and breathing, but the development of CPR and prompt defibrillation have rendered that definition inadequate because breathing and heartbeat can sometimes be restarted. This type of death where circulatory and respiratory arrest happens is known as the circulatory definition of death (CDD). Proponents of the CDD believe this definition is reasonable because a person with permanent loss of circulatory and respiratory function should be considered dead. Critics of this definition state that while cessation of these functions may be permanent, it does not mean the situation is irreversible because if CPR is applied fast enough, the person could be revived. Thus, the arguments for and against the CDD boil down to defining the actual words "permanent" and "irreversible," which further complicates the challenge of defining death. Furthermore, events causally linked to death in the past no longer kill in all circumstances; without a functioning heart or lungs, life can sometimes be sustained with a combination of life support devices, organ transplants, and artificial pacemakers. Brain death Today, where a definition of the moment of death is required, doctors and coroners usually turn to "brain death" or "biological death" to define a person as being dead; people are considered dead when the electrical activity in their brain ceases. It is presumed that an end of electrical activity indicates the end of consciousness. Suspension of consciousness must be permanent and not transient, as occurs during certain sleep stages, and especially a coma. In the case of sleep, electroencephalograms (EEGs) are used to tell the difference. The category of "brain death" is seen as problematic by some scholars. For instance, Dr. Franklin Miller, a senior faculty member at the Department of Bioethics, National Institutes of Health, notes: "By the late 1990s... the equation of brain death with death of the human being was increasingly challenged by scholars, based on evidence regarding the array of biological functioning displayed by patients correctly diagnosed as having this condition who were maintained on mechanical ventilation for substantial periods of time. 
These patients maintained the ability to sustain circulation and respiration, control temperature, excrete wastes, heal wounds, fight infections and, most dramatically, to gestate fetuses (in the case of pregnant "brain-dead" women)." While "brain death" is viewed as problematic by some scholars, there are proponents of it that believe this definition of death is the most reasonable for distinguishing life from death. The reasoning behind the support for this definition is that brain death has a set of criteria that is reliable and reproducible. Also, the brain is crucial in determining our identity or who we are as human beings. The distinction should be made that "brain death" cannot be equated with one in a vegetative state or coma, in that the former situation describes a state that is beyond recovery. EEGs can detect spurious electrical impulses, while certain drugs, hypoglycemia, hypoxia, or hypothermia can suppress or even stop brain activity temporarily; because of this, hospitals have protocols for determining brain death involving EEGs at widely separated intervals under defined conditions. Neocortical brain death People maintaining that only the neo-cortex of the brain is necessary for consciousness sometimes argue that only electrical activity should be considered when defining death. Eventually, the criterion for death may be the permanent and irreversible loss of cognitive function, as evidenced by the death of the cerebral cortex. All hope of recovering human thought and personality is then gone, given current and foreseeable medical technology. Even by whole-brain criteria, the determination of brain death can be complicated. Total brain death At present, in most places, the more conservative definition of death (irreversible cessation of electrical activity in the whole brain, as opposed to just in the neo-cortex) has been adopted. One example is the Uniform Determination Of Death Act in the United States. In the past, the adoption of this whole-brain definition was a conclusion of the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research in 1980. They concluded that this approach to defining death sufficed in reaching a uniform definition nationwide. A multitude of reasons was presented to support this definition, including uniformity of standards in law for establishing death, consumption of a family's fiscal resources for artificial life support, and legal establishment for equating brain death with death to proceed with organ donation. Problems in medical practice Aside from the issue of support of or dispute against brain death, there is another inherent problem in this categorical definition: the variability of its application in medical practice. In 1995, the American Academy of Neurology (AAN) established the criteria that became the medical standard for diagnosing neurologic death. At that time, three clinical features had to be satisfied to determine "irreversible cessation" of the total brain, including coma with clear etiology, cessation of breathing, and lack of brainstem reflexes. These criteria were updated again, most recently in 2010, but substantial discrepancies remain across hospitals and medical specialties. 
Donations The problem of defining death is especially imperative as it pertains to the dead donor rule, which could be understood as one of the following interpretations: there must be an official declaration of death in a person before starting organ procurement, or organ procurement cannot result in the death of the donor. A great deal of controversy has surrounded the definition of death and the dead donor rule. Advocates of the rule believe that the rule is legitimate in protecting organ donors while also countering any moral or legal objection to organ procurement. Critics, on the other hand, believe that the rule does not uphold the best interests of the donors and that the rule does not effectively promote organ donation. Signs Signs of death or strong indications that a warm-blooded animal is no longer alive are: Respiratory arrest (no breathing) Cardiac arrest (no pulse) Brain death (no neuronal activity) The stages that follow after death are: Pallor mortis, paleness which happens in 15–120 minutes after death Algor mortis, the reduction in body temperature following death. This is generally a steady decline until matching ambient temperature Rigor mortis, the limbs of the corpse become stiff (Latin rigor) and difficult to move or manipulate Livor mortis, a settling of the blood in the lower (dependent) portion of the body Putrefaction, the beginning signs of decomposition Decomposition, the reduction into simpler forms of matter, accompanied by a strong, unpleasant odor. Skeletonization, the end of decomposition, where all soft tissues have decomposed, leaving only the skeleton. Fossilization, the natural preservation of the skeletal remains formed over a very long period Legal The death of a person has legal consequences that may vary between jurisdictions. Most countries follow the whole-brain death criteria, where all functions of the brain must have completely ceased. However, in other jurisdictions, some follow the brainstem version of brain death. Afterward, a death certificate is issued in most jurisdictions, either by a doctor or by an administrative office, upon presentation of a doctor's declaration of death. Misdiagnosis There are many anecdotal references to people being declared dead by physicians and then "coming back to life," sometimes days later in their coffin or when embalming procedures are about to begin. From the mid-18th century onwards, there was an upsurge in the public's fear of being mistakenly buried alive and much debate about the uncertainty of the signs of death. Various suggestions were made to test for signs of life before burial, ranging from pouring vinegar and pepper into the corpse's mouth to applying red hot pokers to the feet or into the rectum. Writing in 1895, the physician J.C. Ouseley claimed that as many as 2,700 people were buried prematurely each year in England and Wales, although some estimates peg the figure to be closer to 800. In cases of electric shock, cardiopulmonary resuscitation (CPR) for an hour or longer can allow stunned nerves to recover, allowing an apparently dead person to survive. People found unconscious under icy water may survive if their faces are kept continuously cold until they arrive at an emergency room. This "diving response," in which metabolic activity and oxygen requirements are minimal, is something humans share with cetaceans; it is known as the mammalian diving reflex. 
As medical technologies advance, ideas about when death occurs may have to be reevaluated in light of the ability to restore a person to vitality after longer periods of apparent death (as happened when CPR and defibrillation showed that cessation of heartbeat is inadequate as a decisive indicator of death). The lack of electrical brain activity may not be enough to consider someone scientifically dead. Therefore, the concept of information-theoretic death has been suggested as a better means of defining when true death occurs, though the concept has few practical applications outside the field of cryonics. Causes The leading cause of human death in developing countries is infectious disease. The leading causes in developed countries are atherosclerosis (heart disease and stroke), cancer, and other diseases related to obesity and aging. By an extremely wide margin, the largest unifying cause of death in the developed world is biological aging, leading to various complications known as aging-associated diseases. These conditions cause loss of homeostasis, leading to cardiac arrest, causing loss of oxygen and nutrient supply, causing irreversible deterioration of the brain and other tissues. Of the roughly 150,000 people who die each day across the globe, about two thirds die of age-related causes. In industrialized nations, the proportion is much higher, approaching 90%. With improved medical capability, dying has become a condition to be managed. In developing nations, inferior sanitary conditions and lack of access to modern medical technology make death from infectious diseases more common than in developed countries. One such disease is tuberculosis, a bacterial disease that killed 1.8 million people in 2015. In 2004, malaria caused about 2.7 million deaths annually. The AIDS death toll in Africa may reach 90–100 million by 2025. According to Jean Ziegler, the United Nations Special Reporter on the Right to Food, 2000 – Mar 2008, mortality due to malnutrition accounted for 58% of the total mortality rate in 2006. Ziegler says worldwide, approximately 62 million people died from all causes and of those deaths, more than 36 million died of hunger or diseases due to deficiencies in micronutrients. Tobacco smoking killed 100 million people worldwide in the 20th century and could kill 1 billion people worldwide in the 21st century, a World Health Organization report warned. Many leading developed world causes of death can be postponed by diet and physical activity, but the accelerating incidence of disease with age still imposes limits on human longevity. The evolutionary cause of aging is, at best, only beginning to be understood. It has been suggested that direct intervention in the aging process may now be the most effective intervention against major causes of death. Selye proposed a unified non-specific approach to many causes of death. He demonstrated that stress decreases the adaptability of an organism and proposed to describe adaptability as a special resource, adaptation energy. The animal dies when this resource is exhausted. Selye assumed that adaptability is a finite supply presented at birth. Later, Goldstone proposed the concept of production or income of adaptation energy which may be stored (up to a limit) as a capital reserve of adaptation. In recent works, adaptation energy is considered an internal coordinate on the "dominant path" in the model of adaptation. It is demonstrated that oscillations of well-being appear when the reserve of adaptability is almost exhausted. 
In 2012, suicide overtook car crashes as the leading cause of human injury deaths in the U.S., followed by poisoning, falls, and murder. Accidents and disasters, from nuclear disasters to structural collapses, also claim lives. One of the deadliest incidents of all time is the 1975 Banqiao Dam Failure, with varying estimates, up to 240,000 dead. Other incidents with high death tolls are the Wanggongchang explosion (when a gunpowder factory ended up with 20,000 deaths), a collapse of a wall of Circus Maximus that killed 13,000 people, and the Chernobyl disaster that killed between 95 and 4,000 people. Natural disasters kill around 45,000 people annually, although this number can vary to millions to thousands on a per-decade basis. Some of the deadliest natural disasters are the 1931 China floods, which killed an estimated 4 million people, although estimates widely vary; the 1887 Yellow River flood, which killed an estimated 2 million people in China; and the 1970 Bhola cyclone, which killed as many as 500,000 people in Pakistan. If naturally occurring famines are considered natural disasters, the Chinese famine of 1906–1907, which killed 15–20 million people, can be considered the deadliest natural disaster in recorded history. In animals, predation can be a common cause of death. Livestock have a 6% death rate from predation. However, younger animals are more susceptible to predation. For example, 50% of young foxes die to birds, bobcats, coyotes, and other foxes as well. Young bear cubs in the Yellowstone National Park only have a 40% chance to survive to adulthood from other bears and predators. Autopsy An autopsy, also known as a postmortem examination or an obduction, is a medical procedure that consists of a thorough examination of a human corpse to determine the cause and manner of a person's death and to evaluate any disease or injury that may be present. It is usually performed by a specialized medical doctor called a pathologist. Autopsies are either performed for legal or medical purposes. A forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes. Autopsies can be further classified into cases where external examination suffices, and those where the body is dissected and an internal examination is conducted. Permission from next of kin may be required for internal autopsy in some cases. Once an internal autopsy is complete the body is generally reconstituted by sewing it back together. A necropsy, which is not always a medical procedure, was a term previously used to describe an unregulated postmortem examination. In modern times, this term is more commonly associated with the corpses of animals. Death before birth Death before birth can happen in several ways: stillbirth, when the fetus dies before or during the delivery process; miscarriage, when the embryo dies before independent survival; and abortion, the artificial termination of the pregnancy. Stillbirth and miscarriage can happen for various reasons, while abortion is carried out purposely. Stillbirth Stillbirth can happen right before or after the delivery of a fetus. It can result from defects of the fetus or risk factors present in the mother. Reductions of these factors, caesarean sections when risks are present, and early detection of birth defects have lowered the rate of stillbirth. 
However, 1% of births in the United States end in a stillbirth. Miscarriage A miscarriage is defined by the World Health Organization as, "The expulsion or extraction from its mother of an embryo or fetus weighing 500g or less." Miscarriage is one of the most frequent problems in pregnancy, and is reported in around 12–15% of all clinical pregnancies; however, by including pregnancy losses during menstruation, it could be up to 17–22% of all pregnancies. There are many risk-factors involved in miscarriage; consumption of caffeine, tobacco, alcohol, drugs, having a previous miscarriage, and the use of abortion can increase the chances of having a miscarriage. Abortion An abortion may be performed for many reasons, such as pregnancy from rape, financial constraints of having a child, teenage pregnancy, and the lack of support from a significant other. There are two forms of abortion: a medical abortion and an in-clinic abortion or sometimes referred to as a surgical abortion. A medical abortion involves taking a pill that will terminate the pregnancy no more than 11 weeks past the last period, and an in-clinic abortion involves a medical procedure using suction to empty the uterus; this is possible after 12 weeks, but it may be more difficult to find an operating doctor who will go through with the procedure. Senescence Senescence refers to a scenario when a living being can survive all calamities but eventually dies due to causes relating to old age. Conversely, premature death can refer to a death that occurs before old age arrives, for example, human death before a person reaches the age of 75. Animal and plant cells normally reproduce and function during the whole period of natural existence, but the aging process derives from the deterioration of cellular activity and the ruination of regular functioning. The aptitude of cells for gradual deterioration and mortality means that cells are naturally sentenced to stable and long-term loss of living capacities, even despite continuing metabolic reactions and viability. In the United Kingdom, for example, nine out of ten of all the deaths that occur daily relates to senescence, while around the world, it accounts for two-thirds of 150,000 deaths that take place daily. Almost all animals who survive external hazards to their biological functioning eventually die from biological aging, known in life sciences as "senescence." Some organisms experience negligible senescence, even exhibiting biological immortality. These include the jellyfish Turritopsis dohrnii, the hydra, and the planarian. Unnatural causes of death include suicide and predation. Of all causes, roughly 150,000 people die around the world each day. Of these, two-thirds die directly or indirectly due to senescence, but in industrialized countries – such as the United States, the United Kingdom, and Germany – the rate approaches 90% (i.e., nearly nine out of ten of all deaths are related to senescence). Physiological death is now seen as a process, more than an event: conditions once considered indicative of death are now reversible. Where in the process, a dividing line is drawn between life and death depends on factors beyond the presence or absence of vital signs. In general, clinical death is neither necessary nor sufficient for a determination of legal death. A patient with working heart and lungs determined to be brain dead can be pronounced legally dead without clinical death occurring. 
Life extension Life extension refers to an increase in maximum or average lifespan, especially in humans, by slowing or reversing aging processes through anti-aging measures. Aging is the most common cause of death worldwide. Aging is seen as inevitable, so according to Aubrey de Grey little is spent on research into anti-aging therapies, a phenomenon known as pro-aging trance. The average lifespan is determined by vulnerability to accidents and age or lifestyle-related afflictions such as cancer or cardiovascular disease. Extension of lifespan can be achieved by good diet, exercise, and avoidance of hazards such as smoking. Maximum lifespan is determined by the rate of aging for a species inherent in its genes. A recognized method of extending maximum lifespan is calorie restriction. Theoretically, the extension of the maximum lifespan can be achieved by reducing the rate of aging damage, by periodic replacement of damaged tissues, molecular repair, or rejuvenation of deteriorated cells and tissues. A United States poll found religious and irreligious people, as well as men and women and people of different economic classes, have similar rates of support for life extension, while Africans and Hispanics have higher rates of support than white people. 38% said they would desire to have their aging process cured. Researchers of life extension can be known as "biomedical gerontologists." They try to understand aging, and develop treatments to reverse aging processes, or at least slow them for the improvement of health and maintenance of youthfulness. Those who use life extension findings and apply them to themselves are called "life extensionists" or "longevists." The primary life extension strategy currently is to apply anti-aging methods to attempt to live long enough to benefit from a cure for aging. Cryonics Cryonics (from Greek κρύος 'kryos-' meaning 'icy cold') is the low-temperature preservation of animals, including humans, who cannot be sustained by contemporary medicine, with the hope that healing and resuscitation may be possible in the future. Cryopreservation of people and other large animals is not reversible with current technology. The stated rationale for cryonics is that people who are considered dead by current legal or medical definitions, may not necessarily be dead according to the more stringent 'information-theoretic' definition of death. Some scientific literature is claimed to support the feasibility of cryonics. Medical science and cryobiologists generally regard cryonics with skepticism. Location Around 1930, most people in Western countries died in their own homes, surrounded by family, and comforted by clergy, neighbors, and doctors making house calls. By the mid-20th century, half of all Americans died in a hospital. By the start of the 21st century, only about 20 to 25% of people in developed countries died outside of a medical institution. The shift from dying at home towards dying in a professional medical environment has been termed the "Invisible Death." This shift occurred gradually over the years until most deaths now occur outside the home. Psychology Death studies is a field within psychology. To varying degrees people inherently fear death, both the process and the eventuality; it is hard wired and part of the 'survival instinct' of all animals. Discussing, thinking about, or planning for their deaths causes them discomfort. 
This fear may cause people to put off financial planning, preparing a will and testament, or requesting help from a hospice organization. Mortality salience is the awareness that death is inevitable. However, self-esteem and culture are ways to reduce the anxiety this awareness can cause. The awareness of one's own death can cause a deepened bond within one's in-group as a defense mechanism. It can also cause the person to become very judgmental. In one study, participants were divided into two groups: one group was asked to reflect upon their mortality and the other was not; afterwards, both groups were asked to set a bond for a prostitute. The group that did not reflect on death set an average bond of $50, while the group reminded of their death set an average of $455. Different people have different responses to the idea of their deaths. Philosopher Galen Strawson writes that the death that many people wish for is an instant, painless, unexperienced annihilation. In this unlikely scenario, the person dies without realizing it and without being able to fear it. One moment the person is walking, eating, or sleeping, and the next moment, the person is dead. Strawson reasons that this type of death would not take anything away from the person, as he believes a person cannot have a legitimate claim to ownership of the future. Society and culture In society, the nature of death and humanity's awareness of its mortality have, for millennia, been a concern of the world's religious traditions and philosophical inquiry. Responses include belief in resurrection or an afterlife (associated with Abrahamic religions), in reincarnation or rebirth (associated with Dharmic religions), or in the permanent cessation of consciousness, known as eternal oblivion (associated with secular humanism). Commemoration ceremonies after death may include various mourning, funeral practices, and ceremonies of honoring the deceased. The physical remains of a person, commonly known as a corpse or body, are usually interred whole or cremated, though among the world's cultures, there are a variety of other methods of mortuary disposal. In the English language, blessings directed towards a dead person include rest in peace (originally the Latin requiescat in pace) or its initialism RIP. Death is the center of many traditions and organizations; customs relating to death are a feature of every culture around the world. Much of this revolves around the care of the dead, as well as the afterlife and the disposal of bodies upon the onset of death. The disposal of human corpses does, in general, begin with the last offices before significant time has passed, and ritualistic ceremonies often occur, most commonly interment or cremation. This is not a unified practice; in Tibet, for instance, the body is given a sky burial and left on a mountain top. Proper preparation for death and techniques and ceremonies for producing the ability to transfer one's spiritual attainments into another body (reincarnation) are subjects of detailed study in Tibet. Mummification or embalming is also prevalent in some cultures to retard the rate of decay. The rise of secularism has resulted in a decline in material mementos of death. Some aspects of death in culture are legally regulated, with laws governing matters such as the issuing of a death certificate, the settlement of the deceased's estate, and the issues of inheritance and, in some countries, inheritance taxation. Capital punishment is also a culturally divisive aspect of death. 
In most jurisdictions where capital punishment is carried out today, the death penalty is reserved for premeditated murder, espionage, treason, or as part of military justice. In some countries, sexual crimes, such as adultery and sodomy, carry the death penalty, as do religious crimes, such as apostasy, the formal renunciation of one's religion. In many retentionist countries, drug trafficking is also a capital offense. In China, human trafficking and serious cases of corruption are also punished by the death penalty. In militaries around the world, courts-martial have imposed death sentences for offenses such as cowardice, desertion, insubordination, and mutiny. Mutiny is punishable by death in the United States. Death in warfare and suicide attacks also have cultural links, and the idea of dulce et decorum est pro patria mori, which translates to "It is sweet and proper to die for one's country", dates to antiquity. Additionally, grieving relatives of dead soldiers and death notification are embedded in many cultures. Recently in the Western world—with the increase in terrorism following the September 11 attacks but also further back in time with suicide bombings, kamikaze missions in World War II, and suicide missions in a host of other conflicts in history—death for a cause by way of suicide attack, including martyrdom, has had significant cultural impacts. Suicide, in general, and particularly euthanasia, are also points of cultural debate. Both acts are understood very differently in different cultures. In Japan, for example, ending a life with honor by seppuku was considered a desirable death, whereas according to traditional Christian and Islamic cultures, suicide is viewed as a sin. Death is personified in many cultures, with such symbolic representations as the Grim Reaper, Azrael, the Hindu god Yama, and Father Time. The Grim Reaper, or figures similar to it, is the most popular depiction of death in Western cultures. In Brazil, death is counted officially when it is registered by existing family members at a cartório, a government-authorized registry. Before being able to file for an official death, the deceased must have been registered for an official birth at the cartório. Though a Public Registry Law guarantees all Brazilian citizens the right to register deaths regardless of the financial means of their family members (often children), the Brazilian government has not taken away the burden, the hidden costs, and the fees of filing for a death. For many impoverished families, the indirect costs and burden of filing for a death make an unofficial, local, and cultural burial more appealing, which, in turn, raises the debate about inaccurate mortality rates. Talking about death and witnessing it are difficult issues in most cultures. Western societies may like to treat the dead with the utmost material respect, with an official embalmer and associated rites. Eastern societies (like India) may be more open to accepting it as a fait accompli, with a funeral procession of the dead body ending in an open-air burning-to-ashes. Origins of death The origin of death is a theme or myth of how death came to be. It is present in nearly all cultures across the world, as death is a universal occurrence. This makes it an origin myth, a myth that describes how a feature of the natural or social world appeared. There are often similarities between the myths of different cultures. 
In North American mythology, the theme of a man who wants to be immortal and a man who wants to die can be seen across many Indigenous peoples. In Christianity, death is the result of the fall of man after eating the fruit from the tree of the knowledge of good and evil. In Greek mythology, the opening of Pandora's box releases death upon the world. Consciousness Much interest and debate surround the question of what happens to one's consciousness as one's body dies. The belief in the permanent loss of consciousness after death is often called eternal oblivion. The belief that the stream of consciousness is preserved after physical death is described by the term afterlife. Near-death experiences (NDEs) describe the subjective experiences associated with impending death. Some survivors of such experiences report it as "seeing the afterlife while they were dying". Seeing a being of light and talking with it, life flashing before the eyes, and the confirmation of cultural beliefs of the afterlife are common themes in NDEs. In biology After death, the remains of a former organism become part of the biogeochemical cycle, during which animals may be consumed by a predator or a scavenger. Organic material may then be further decomposed by detritivores, organisms that recycle detritus, returning it to the environment for reuse in the food chain, where these chemicals may eventually end up being consumed and assimilated into the cells of an organism. Examples of detritivores include earthworms, woodlice, and millipedes. Microorganisms also play a vital role, raising the temperature of the decomposing matter as they break it down into yet simpler molecules. Not all materials need to be fully decomposed. Coal, a fossil fuel formed over vast tracts of time in swamp ecosystems, is one example. Natural selection Contemporary evolutionary theory sees death as an important part of the process of natural selection. Organisms less adapted to their environment are considered more likely to die, having produced fewer offspring and thereby reduced their contribution to the gene pool. Their genes are thus eventually bred out of a population, leading at worst to extinction and, more positively, making possible the process referred to as speciation. Frequency of reproduction plays an equally important role in determining species survival: an organism that dies young but leaves numerous offspring displays, according to Darwinian criteria, much greater fitness than a long-lived organism leaving only one. Death also has a role in competition: if one species out-competes another, the losing population risks dying out, especially where the two are directly fighting over resources. Extinction Death plays a role in extinction, the cessation of existence of a species or group of taxa, which reduces biodiversity; extinction is generally considered to be the death of the last individual of a species (although the capacity to breed and recover may have been lost before this point). Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. Evolution of aging and mortality Inquiry into the evolution of aging aims to explain why so many living things and the vast majority of animals weaken and die with age. However, there are exceptions, such as Hydra and the jellyfish Turritopsis dohrnii, which research shows to be biologically immortal. 
Organisms showing only asexual reproduction, such as bacteria, some protists, like the euglenoids and many amoebozoans, and unicellular organisms with sexual reproduction, colonial or not, like the volvocine algae Pandorina and Chlamydomonas, are "immortal" to some extent, dying only due to external hazards, like being eaten or meeting with a fatal accident. In multicellular organisms and also in multinucleate ciliates with a Weismannist development, that is, with a division of labor between mortal somatic (body) cells and "immortal" germ (reproductive) cells, death becomes an essential part of life, at least for the somatic line. The Volvox algae are among the simplest organisms to exhibit that division of labor between two completely different cell types, and as a consequence, include the death of the somatic line as a regular, genetically regulated part of their life history. Grief in animals Animals have sometimes shown grief for their partners or "friends." When two chimpanzees form a bond together, sexual or not, and one of them dies, the surviving chimpanzee will show signs of grief, ripping out its hair in anger and beginning to cry; if the body is removed, the survivor will resist, and although it will eventually go quiet once the body is gone, upon seeing the body again the chimp will return to a violent state. Furthermore, anthropologist Barbara J. King has suggested that one way to evaluate the expression of grief in animals is to look for altered behaviors such as social withdrawal, disrupted eating or sleeping, expression of affect, or increased stress reactions in response to the death of a family member, mate, or friend. These criteria do not assume the ability to anticipate death, understand its finality, or experience emotions equivalent to those of humans, but at the same time do not rule out the possibility of those abilities existing in some animals or that different kinds of emotional experiences might constitute grief. Based on these criteria, King gives examples of observed potential mourning behaviors in animals such as cetaceans, apes and monkeys, elephants, domesticated animals (including dogs, cats, rabbits, horses, and farmed animals), giraffes, peccaries, donkeys, prairie voles, and some species of birds. Death of abiotic factors Some non-living things can be considered dead. For example, a volcano, batteries, electrical components, and stars are all nonliving things that can "die," whether from destruction or cessation of function. A volcano, a break in the earth's crust that allows lava, ash, and gases to escape, may be in one of three states: active, dormant, or extinct. An active volcano is currently erupting or has erupted recently; a dormant volcano has not erupted for a significant amount of time but may erupt again; an extinct volcano has been cut off from its supply of lava and is never expected to erupt again, so it can be considered dead. A battery can be considered dead once its charge is fully used up. Electrical components are similar: when a component can no longer be used, such as after water has been spilled on it, it can be considered dead. Stars also have a lifespan and, therefore, can die. As a star starts to run out of fuel, it expands, which can be seen as analogous to aging. After it exhausts all its fuel, it may explode in a supernova, collapse into a black hole, or turn into a neutron star. 
Religious views Buddhism In Buddhist doctrine and practice, death plays an important role. Awareness of death motivated Prince Siddhartha to strive to find the "deathless" and finally attain enlightenment. In Buddhist doctrine, death functions as a reminder of the value of having been born as a human being. Rebirth as a human being is considered the only state in which one can attain enlightenment. Therefore, death helps remind oneself that one should not take life for granted. The belief in rebirth among Buddhists does not necessarily remove death anxiety since all existence in the cycle of rebirth is considered filled with suffering, and being reborn many times does not necessarily mean that one progresses. Death is part of several key Buddhist tenets, such as the Four Noble Truths and dependent origination. Christianity While there are different sects of Christianity with different branches of belief, the overarching ideology on death grows from the knowledge of the afterlife. After death, the individual will undergo a separation from mortality to immortality; their soul leaves the body, entering a realm of spirits. Following this separation of body and spirit (death), resurrection will occur. Representing the same transformation Jesus Christ embodied after his body was placed in the tomb for three days, each person's body will be resurrected, reuniting the spirit and body in a perfect form. This process allows the individual's soul to withstand death and transform into life after death. Hinduism In Hindu texts, death is described as the individual eternal spiritual jiva-atma (soul or conscious self) exiting the current temporary material body. The soul exits this body when the body can no longer sustain the conscious self (life), which may be due to mental or physical reasons or, more accurately, the inability to act on one's kama (material desires). During conception, the soul enters a compatible new body based on the remaining merits and demerits of one's karma (good/bad material activities based on dharma) and the state of one's mind (impressions or last thoughts) at the time of death. Usually, the process of reincarnation makes one forget all memories of one's previous life. Because nothing really dies and the temporary material body is always changing, both in this life and the next, death means forgetfulness of one's previous experiences. Islam The Islamic view is that death is the separation of the soul from the body as well as the beginning of the afterlife. The afterlife, or akhirah, is one of the six main beliefs in Islam. Rather than seeing death as the end of life, Muslims consider death as a continuation of life in another form. In Islam, life on earth right now is a short, temporary life and a testing period for every soul. True life begins with the Day of Judgement when all people will be divided into two groups. The righteous believers will be welcomed to janna (heaven), and the disbelievers and evildoers will be punished in jahannam (hellfire). Muslims believe death to be wholly natural and predetermined by God. Only God knows the exact time of a person's death. The Quran emphasizes that death is inevitable, no matter how much people try to escape death, it will reach everyone. (Q50:16) Life on earth is the one and only chance for people to prepare themselves for the life to come and choose to either believe or not believe in God, and death is the end of that learning opportunity. 
Judaism There are a variety of beliefs about the afterlife within Judaism, but none of them contradict the preference for life over death. This is partially because death puts a cessation to the possibility of fulfilling any commandments. Language The word "death" comes from Old English dēaþ, which in turn comes from Proto-Germanic *dauþuz (reconstructed by etymological analysis). This comes from the Proto-Indo-European stem *dheu- meaning the "process, act, condition of dying." The concept and symptoms of death, and varying degrees of delicacy used in discussion in public forums, have generated numerous scientific, legal, and socially acceptable terms or euphemisms. When a person has died, it is also said they have "passed away", "passed on", "expired", or "gone", among other socially accepted, religiously specific, slang, and irreverent terms. As a formal reference to a dead person, it has become common practice to use the participle form of "decease", as in "the deceased"; another noun form is "decedent". Bereft of life, the dead person is a "corpse", "cadaver", "body", "set of remains", or when all flesh is gone, a "skeleton". The terms "carrion" and "carcass" are also used, usually for dead non-human animals. The ashes left after a cremation are lately called "cremains". See also Deathbed Death drive Death row Death trajectory Dying declaration End-of-life care Eschatology Faked death Karōshi Last rites List of expressions related to death Spiritual death Survivalism (life after death) Taboo on the dead Thanatology References Bibliography Further reading External links "Death" Stanford Encyclopedia of Philosophy "Death" (video; 10:18) by Timothy Ferris, producer of the Voyager Golden Record for NASA. 2021 A biologist explains life and death in different kinds of organisms, in relation to evolution. How the medical profession categorized causes of death. Interviews with people dying in hospices, and portraits of them before and shortly after, death. Senescence
Death
[ "Chemistry", "Biology" ]
9,457
[ "Senescence", "Metabolism", "Cellular processes" ]
8,230
https://en.wikipedia.org/wiki/Demeter
In ancient Greek religion and mythology, Demeter (; Attic: Dēmḗtēr ; Doric: Dāmā́tēr) is the Olympian goddess of the harvest and agriculture, presiding over crops, grains, food, and the fertility of the earth. Although Demeter is mostly known as a grain goddess, she also appeared as a goddess of health, birth, and marriage, and had connections to the Underworld. She is also called Deo ( Dēṓ). In Greek tradition, Demeter is the second child of the Titans Rhea and Cronus, and sister to Hestia, Hera, Hades, Poseidon, and Zeus. Like her other siblings except Zeus, she was swallowed by her father as an infant and rescued by Zeus. Through her brother Zeus, she became the mother of Persephone, a fertility goddess and resurrection deity. One of the most notable Homeric Hymns, the Homeric Hymn to Demeter, tells the story of Persephone's abduction by Hades and Demeter's search for her. When Hades, the King of the Underworld, wished to make Persephone his wife, he abducted her from a field while she was picking flowers, with Zeus' leave. Demeter searched everywhere to find her missing daughter to no avail until she was informed that Hades had taken her to the Underworld. In response, Demeter neglected her duties as goddess of agriculture, plunging the earth into a deadly famine where nothing would grow, causing mortals to die. Zeus ordered Hades to return Persephone to her mother to avert the disaster. However, because Persephone had eaten food from the Underworld, she could not stay with Demeter forever, but had to divide the year between her mother and her husband, explaining the seasonal cycle as Demeter does not let plants grow while Persephone is gone. Her cult titles include Sito (), "she of the Grain", as the giver of food or grain, and Thesmophoros (, thesmos: divine order, unwritten law; , phoros: bringer, bearer), "giver of customs" or "legislator", in association with the secret female-only festival called the Thesmophoria. Though Demeter is often described simply as the goddess of the harvest, she presided also over the sacred law and the cycle of life and death. She and her daughter Persephone were the central figures of the Eleusinian Mysteries, a religious tradition that predated the Olympian pantheon and which may have its roots in the Mycenaean period –1200 BC. Demeter was often considered to be the same figure as the Anatolian goddess Cybele, and she was identified with the Roman goddess Ceres. Etymology Demeter may appear in Linear A as da-ma-te on three documents (AR Zf 1 and 2, and KY Za 2), all three dedicated to religious situations and all three bearing just the name (i-da-ma-te on AR Zf 1 and 2). It is unlikely that Demeter appears as da-ma-te in a Linear B (Mycenean Greek) inscription (PY En 609); the word , da-ma-te, probably refers to "households". On the other hand, , si-to-po-ti-ni-ja, "Potnia of the Grain", is regarded as referring to her Bronze Age predecessor or to one of her epithets. Demeter's character as mother-goddess is identified in the second element of her name meter () derived from Proto-Indo-European (PIE) *méh₂tēr (mother). In antiquity, different explanations were already proffered for the first element of her name. It is possible that Da (), a word which corresponds to Gē () in Attic, is the Doric form of De (), "earth", the old name of the chthonic earth-goddess, and that Demeter is "Mother-Earth". 
Liddell & Scott find this "improbable" and Beekes writes, "there is no indication that [da] means 'earth', although it has also been assumed in the name of Poseidon found in the Linear B inscription E-ne-si-da-o-ne, 'earth-shaker'". John Chadwick also argues that the dā element in the name of Demeter is not so simply equated with "earth". M. L. West has proposed that the word Demeter, initially Damater, could be a borrowing from an Illyrian deity attested in the Messapic goddess Damatura, with a form dā- ("earth", from PIE *dʰǵʰ(e)m-) attached to -matura ("mother"), akin to the Illyrian god Dei-paturos (dei-, "sky", attached to -paturos, "father"). The Lesbian form Dō- may simply reflect a different colloquial pronunciation of the non-Greek name. Another theory suggests that the element De- might be connected with Deo, an epithet of Demeter, and it could derive from the Cretan word dea (), Ionic zeia ()—variously identified with emmer, spelt, rye, or other grains by modern scholars—so that she is the mother and the giver of food generally. This view is shared by British scholar Jane Ellen Harrison, who suggests that Demeter's name means Grain-Mother, instead of Earth-Mother. An alternative Proto-Indo-European etymology comes through Potnia and Despoina, where Des- represents a derivative of PIE *dem (house, dome), and Demeter is "mother of the house" (from PIE *dems-méh₂tēr). R. S. P. Beekes rejects a Greek interpretation, but not necessarily an Indo-European one. Iconography Demeter was frequently associated with images of the harvest, including flowers, fruit, and grain. She was also sometimes pictured with her daughter Persephone. However, Demeter is not generally portrayed with any of her consorts; the exception is Iasion, the youth of Crete who lay with her in a thrice-ploughed field and was killed afterward by a jealous Zeus with a thunderbolt. Demeter is assigned the zodiac constellation Virgo, the Virgin, by Marcus Manilius in his 1st-century Roman work Astronomicon. In art, the constellation Virgo holds Spica, a sheaf of wheat, in her hand and sits beside the constellation Leo the Lion. In Arcadia, she was known as "Black Demeter". She was said to have taken the form of a mare to escape the pursuit of her younger brother, Poseidon, and having been raped by him despite her disguise, she dressed all in black and retreated into a cave to mourn and to purify herself. She was consequently depicted with the head of a horse in this region. A sculpture of the Black Demeter was made by Onatas. Description In the earliest conceptions of Demeter she is the goddess of grain and threshing; however, her functions were extended beyond the fields, and she was often identified with the earth goddess (Gaia). Some of the epithets of Gaia and Demeter are similar, showing the identity of their nature. In most of her myths and cults, Demeter is the "Grain-Mother" or the "Earth-Mother". In the older chthonic cults the earth goddess was related to the Underworld, and in the secret rites (mysteries) Demeter and Persephone share the double function of death and fertility. Demeter is the giver of the secret rites and the giver of the laws of cereal agriculture. She was occasionally identified with the Great Mother Rhea-Cybele, who was worshipped in Crete and Asia Minor with the music of cymbals and violent rites. It seems that poppies were connected with the cult of the Great Mother. 
As an agricultural goddess In epic poetry and Hesiod's Theogony, Demeter is the Grain-Mother, the goddess of cereals who provides grain for bread and blesses its harvesters. In Homer's Iliad, the light-haired Demeter separates the grain from the chaff with the help of the wind. Homer mentions the Thalysia, a Greek harvest festival of first fruits in honour of Demeter. In Hesiod, prayers to Zeus-Chthonios (chthonic Zeus) and Demeter help the crops grow full and strong. This was her main function at Eleusis, and she became panhellenic. In Cyprus, "grain-harvesting" was damatrizein. Demeter was the zeidoros aroura, the Homeric "Mother Earth aroura" who gave the gift of cereals (zeai or deai). Most of the epithets of Demeter describe her as a goddess of grain. Her name Deo in literature probably relates her to deai, a Cretan word for cereals. In Attica she was called Haloas (of the threshing floor) according to the earliest conception of Demeter as the Corn-Mother. She was sometimes called Chloe (ripe-grain or fresh-green) and sometimes Ioulo (ioulos: grain sheaf). Chloe was the goddess of young corn and young vegetation, and "Iouloi" were harvest songs in honour of the goddess. The reapers called Demeter Amallophoros (bringer of sheaves) and Amaia (reaper). The goddess was the giver of abundance of food, and she was known as Sito (of the grain) and Himalis (of abundance). The bread from the first harvest-fruits was called thalysian bread (Thalysia) in honour of Demeter. The sacrificial cakes burned on the altar were called "ompniai", and in Attica the goddess was known as Ompnia (related to corn). These cakes were offered to all the gods. In some festivals big loaves (artoi) were offered to the goddess, and in Boeotia she was known as Megalartos (of the big loaf) and Megalomazos (of the big mass, or big porridge). Her function was extended to vegetation generally and to all fruits, and she had the epithets eukarpos (of good crop), karpophoros (bringer of fruits), malophoros (apple bearer), and sometimes Oria (all the fruits of the season). These epithets show an identity in nature with the earth goddess. The central theme in the Eleusinian Mysteries was the reunion of Persephone with her mother, Demeter, when new crops were reunited with the old seed, a form of eternity. According to the Athenian rhetorician Isocrates, Demeter's greatest gifts to humankind were agriculture, which gave to men a civilized way of life, and the Mysteries, which give the initiate higher hopes in this life and the afterlife. These two gifts were intimately connected in Demeter's myths and mystery cults. Demeter is the giver of mystic rites and the giver of the civilized way of life (teaching the laws of agriculture). Her epithet Eleusinia relates her to the Eleusinian mysteries; however, at Sparta Eleusinia had an early use, and it was probably a name rather than an epithet. Demeter Thesmophoros (law-giving) is closely associated with the laws of cereal agriculture. The festival Thesmophoria was celebrated throughout Greece and was connected to a form of agrarian magic. Near Pheneus in Arcadia she was known as Demeter-Thesmia (lawful), and she received rites according to the local version. Demeter's emblem is the poppy, a bright red flower that grows among the barley. As an earth and underworld goddess In addition to her role as an agricultural goddess, Demeter was often worshipped more generally as a goddess of the earth, from which crops spring up. Her individuality was rooted in the less developed personality of Gaia (earth). 
In Arcadia Demeter Melaina (the black Demeter) was represented as snake-haired with a horse's head, holding a dove and a dolphin, perhaps to symbolize her power over the Underworld, the air, and the water. The cult of Demeter in the region was related to Despoina, a very old chthonic divinity. Demeter shares the double function of death and fertility with her daughter Persephone. Demeter and Persephone were called Despoinai (the mistresses) and Demeters. This duality was also used in the classical period (Thesmophoroi, Double named goddesses) and particularly in an oath: "By the two goddesses". In the cult of Phlya she was worshipped as Anesidora, who sends up gifts from the Underworld. In Sparta, she was known as Demeter-Chthonia (chthonic Demeter). After each death, mourning was to end with a sacrifice to the goddess. Pausanias believes that her cult was introduced from Hermione, where Demeter was associated with Hades. In a local legend a hollow in the earth was the entrance to the underworld, by which the souls could pass easily. In Elis she was called Demeter-Chamyne (goddess of the ground), in an old chthonic cult associated with the descent to Hades. At Levadia the goddess was known as Demeter-Europa, and she was associated with Trophonius, an old divinity of the underworld. The oracle of Trophonius was famous in antiquity. Pindar uses the rare epithet Chalkokrotos (bronze sounding). Brazen musical instruments were used in the mysteries of Demeter, and the Great Mother Rhea-Cybele was also worshipped with the music of cymbals. In central Greece Demeter was known as Amphictyonis (of the dwellers-round), in a cult of the goddess at Anthele near Thermopylae (hot gates). She was the patron goddess of an ancient Amphictyony. Thermopylae is a place of hot springs that were considered entrances to Hades, and Demeter was a chthonic goddess in the older local cults. The Athenians called the dead "Demetrioi", and this may reflect a link between Demeter and the ancient cult of the dead, linked to the agrarian belief that a new life would sprout from the dead body, as a new plant arises from buried seed. This was most likely a belief shared by initiates in Demeter's mysteries, as interpreted by Pindar: "Blessed is he who has seen before he goes under the earth; for he knows the end of life and knows also its divine beginning." In Arcadia Demeter had the epithets Erinys (fury) and Melaina (black), which are associated with the myth of Demeter's rape by Poseidon. The epithets stress the darker side of her character and her relation to the dark underworld, in an old chthonic cult associated with wooden structures (xoana). Erinys had a similar function to the avenging Dike (Justice). In the mysteries of Pheneus the goddess was known as Cidaria. Her priest would put on the mask of Demeter, which was kept secret. The cult may have been connected with both the Underworld and a form of agrarian magic. As a poppy goddess Theocritus described one of Demeter's earlier roles as that of a goddess of poppies. Karl Kerényi asserted that poppies were connected with a Cretan cult which was eventually carried to the Eleusinian Mysteries in Classical Greece. In a clay statuette from Gazi, the Minoan poppy goddess wears the seed capsules, sources of nourishment and narcosis, in her diadem. 
According to Kerényi, "It seems probable that the Great Mother Goddess who bore the names Rhea and Demeter, brought the poppy with her from her Cretan cult to Eleusis and it is almost certain that in the Cretan cult sphere opium was prepared from poppies." Worship In Crete In an older tradition in Crete the vegetation cult was related to the deity of the cave. During the Bronze Age, a goddess of nature dominated both Minoan and Mycenaean cults. In the Linear B inscriptions po-ti-ni-ja (potnia) refers to the goddess of nature who was concerned with birth and vegetation and had certain chthonic aspects. Some scholars believe that she was the universal mother goddess. A Linear B inscription at Knossos mentions the potnia of the labyrinth da-pu-ri-to-jo po-ti-ni-ja. Poseidon was often given the title wa-na-ka (wanax) in Linear B inscriptions in his role as King of the Underworld, and his title E-ne-si-da-o-ne indicates his chthonic nature. He was the male companion (paredros) of the goddess in the Minoan and probably Mycenaean cult. In the cave of Amnisos, Enesidaon is associated with the cult of Eileithyia, the goddess of childbirth, who was involved with the annual birth of the divine child. Elements of this early form of worship survived in the Eleusinian cult, where the following words were uttered: "the mighty Potnia had borne a strong son." On the Greek mainland Tablets from Pylos of BC record sacrificial goods destined for "the Two Queens and Poseidon" ("to the Two Queens and the King": wa-na-ssoi, wa-na-ka-te). The "Two Queens" may be related to Demeter and Persephone or their precursors, goddesses who were no longer associated with Poseidon in later periods. In Pylos, potnia (mistress) is the major goddess of the city, and the "wanax" in the tablets has a nature similar to that of her male consort in the Minoan cult. Potnia retained some chthonic cults, and in popular religion these were related to the goddess Demeter. In Greek religion potniai (mistresses) appear in the plural (like the Erinyes) and are closely related to the Eleusinian Demeter. Major cults to Demeter are known at Eleusis in Attica, Hermion (in Crete), Megara, Celeae, Lerna, Aegila, Munychia, Corinth, Delos, Priene, Akragas, Iasos, Pergamon, Selinus, Tegea, Thoricus, Dion (in Macedonia), Lykosoura, Mesembria, Enna, and Samothrace. Probably the earliest Amphictyony, centred on the cult of Demeter at Anthele (Ἀνθήλη), lay on the coast of Malis south of Thessaly, near Thermopylae. Mysian Demeter had a seven-day festival at Pellené in Arcadia. The geographer Pausanias passed the shrine to Mysian Demeter on the road from Mycenae to Argos and reports that, according to Argive tradition, the shrine was founded by an Argive named Mysius who venerated Demeter. "Saint Demetra" Even after Theodosius I issued the Edict of Thessalonica and banned paganism throughout the Roman Empire, people throughout Greece continued to pray to Demeter as "Saint Demetra", patron saint of agriculture. Around 1765–1766, the antiquary Richard Chandler, alongside the architect Nicholas Revett and the painter William Pars, visited Eleusis and mentioned a statue of a caryatid as well as the folklore that surrounded it; they stated that it was considered sacred by the locals because it protected their crops. They called the statue "Saint Demetra", a saint whose story had many similarities to the myth of Demeter and Persephone, except that her daughter had been abducted by the Turks and not by Hades. 
The locals covered the statue with flowers to ensure the fertility of their fields. This tradition continued until 1801, when the statue was forcibly removed by Edward Daniel Clarke and donated to the University of Cambridge. The statue is now located in the Fitzwilliam Museum, the art and antiquities museum of the University of Cambridge. Festivals Demeter's two major festivals were sacred mysteries. Her Thesmophoria festival (11–13 October) was women-only. Her Eleusinian mysteries were open to initiates of any gender or social class. At the heart of both festivals were myths concerning Demeter as the mother and Persephone as her daughter. Conflation with other goddesses In the Roman period, Demeter became conflated with the Roman agricultural goddess Ceres through interpretatio romana. The worship of Demeter was formally merged with that of Ceres around 205 BC, along with the ritus graecia cereris, a Greek-inspired form of cult, as part of Rome's general religious recruitment of deities as allies against Carthage, towards the end of the Second Punic War. The cult originated in southern Italy (part of Magna Graecia) and was probably based on the Thesmophoria, a mystery cult dedicated to Demeter and Persephone as "Mother and Maiden". It arrived along with its Greek priestesses, who were granted Roman citizenship so that they could pray to the gods "with a foreign and external knowledge, but with a domestic and civil intention". The new cult was installed in the already ancient Temple of Ceres, Liber and Libera, Rome's Aventine patrons of the plebs; from the end of the 3rd century BC, Demeter's temple at Enna, in Sicily, was acknowledged as Ceres' oldest, most authoritative cult centre, and Libera was recognized as Proserpina, the Roman equivalent to Persephone. Their joint cult recalls Demeter's search for Persephone after the latter's abduction into the Underworld by Hades. At the Aventine, the new cult took its place alongside the old. It did not refer to Liber, whose open and gender-mixed cult played a central role in plebeian culture as a patron and protector of plebeian rights, freedoms and values. The exclusively female initiates and priestesses of the new "Greek-style" mysteries of Ceres and Proserpina were expected to uphold Rome's traditional, patrician-dominated social hierarchy and traditional morality. Unmarried girls should emulate the chastity of Proserpina, the maiden; married women should seek to emulate Ceres, the devoted and fruitful mother. Their rites were intended to secure a good harvest and increase the fertility of those who partook in the mysteries. Beginning in the 5th century BC in Asia Minor, Demeter was also considered equivalent to the Phrygian goddess Cybele. Demeter's festival of Thesmophoria was popular throughout Asia Minor, and the myth of Persephone and Adonis in many ways mirrors the myth of Cybele and Attis. Some late antique sources syncretized several "great goddess" figures into a single deity. For example, the Platonist philosopher Apuleius, writing in the late 2nd century, identified Ceres (Demeter) with Isis, having her declare: I, mother of the universe, mistress of all the elements, first-born of the ages, highest of the gods, queen of the shades, first of those who dwell in heaven, representing in one shape all gods and goddesses. 
My will controls the shining heights of heaven, the health-giving sea winds, and the mournful silences of hell; the entire world worships my single godhead in a thousand shapes, with divers rites, and under many a different name. The Phrygians, first-born of mankind, call me the Pessinuntian Mother of the gods; ... the ancient Eleusinians Actaean Ceres; ... and the Egyptians who excel in ancient learning, honour me with the worship which is truly mine and call me by my true name: Queen Isis. --Apuleius, translated by E. J. Kenny. The Golden Ass Mythology Lineage, consorts, and offspring Alongside the rest of her siblings, with the exception of her youngest brother Zeus, she was swallowed as a newborn by her father due to his fear of being overthrown by one of his children; she was later freed when Zeus made Cronus disgorge all of his children by giving him a special potion. Demeter is notable as the mother of Persephone, described by both Hesiod and in the Homeric Hymn to Demeter as the result of a union with her younger brother Zeus. An alternate recounting of the matter appears in a fragment of the lost Orphic theogony, which preserves part of a myth in which Zeus mates with his mother, Rhea, in the form of a snake, explaining the origin of the symbol on Hermes' staff. Their daughter is said to be Persephone, whom Zeus, in turn, mates with to conceive Dionysus. According to the Orphic fragments, "After becoming the mother of Zeus, she who was formerly Rhea became Demeter." There is some evidence that the figures of the Queen of the Underworld and the daughter of Demeter were initially considered separate goddesses. However, they must have become conflated by the time of Hesiod in the 7th century BC. Demeter and Persephone were often worshipped together and were often referred to by joint cultic titles. In their cult at Eleusis, they were referred to simply as "the goddesses", usually distinguished as "the older" and "the younger"; in Rhodes and Sparta, they were worshipped as "the Demeters"; in the Thesmophoria, they were known as "the thesmophoroi" ("the legislators"). In Arcadia they were known as "the Great Goddesses" and "the mistresses". In Mycenaean Pylos, Demeter and Persephone were probably called the "queens" (wa-na-ssoi). Both Homer and Hesiod, writing c. 700 BC, described Demeter making love with the agricultural hero Iasion in a ploughed field during the marriage of Cadmus and Harmonia. According to Hesiod, this union resulted in the birth of Plutus. According to Diodorus Siculus, in his Bibliotheca historica written in the 1st century BC, Demeter and Zeus were also the parents of Dionysus. Diodorus described the myth of Dionysus' double birth (once from the earth, i.e. Demeter, when the plant sprouts) and once from the vine (when the fruit sprouts from the plant). Diodorus also related a version of the myth of Dionysus' destruction by the Titans ("sons of Gaia"), who boiled him, and how Demeter gathered up his remains so that he could be born a third time (Diod. iii.62). Diodorus states that Dionysus' birth from Zeus and his older sister Demeter was somewhat of a minority belief, possibly via conflation of Demeter with her daughter, as most sources state that the parents of Dionysus were Zeus and Persephone, and later Zeus and Semele. Hesiod's Theogony (c. 700 BC) describes Demeter as the second daughter of Cronus and Rhea, and the sister of Hestia, Hera, Hades, Poseidon, and Zeus. 
In Arcadia, a major local deity known as Despoina ("Mistress") was said to be the daughter of Demeter and Poseidon. According to Pausanias, a Thelpusian tradition said that during Demeter's search for Persephone, Poseidon pursued her. Demeter turned into a horse to avoid her younger brother's advances. However, he turned into a stallion and mated with the goddess, resulting in the birth of the horse god Arion and a daughter "whose name they are not wont to divulge to the uninitiated". Elsewhere, he says that the Phigalians assert that the offspring of Poseidon and Demeter was not a horse, but Despoina, "as the Arcadians call her". In Orphic literature, Demeter seems to be the mother of the witchcraft goddess Hecate. The goddess took Mecon, a young Athenian, as a lover; he was at some point transformed into a poppy flower. The following is a list of Demeter's offspring, by various fathers. Beside each offspring, the earliest source to record the parentage is given, along with the century to which the source (in some cases approximately) dates. Abduction of Persephone Demeter's daughter Persephone was abducted to the Underworld by Hades, who received permission from her father Zeus to take her as his bride. Demeter searched for her ceaselessly for nine days, preoccupied with her grief. Hecate then approached her and said that while she had not seen what happened to Persephone, she had heard her screams. Together the two goddesses went to Helios, the sun god, who witnessed everything that happened on earth thanks to his lofty position. Helios then revealed to Demeter that Hades had snatched a screaming Persephone to make her his wife with the permission of Zeus, the girl's father. Demeter was then filled with anger. The seasons halted; living things ceased their growth and began to die. Faced with the extinction of all life on earth, Zeus sent his messenger Hermes to the Underworld to bring Persephone back. Hades agreed to release her if she had eaten nothing while in his realm, but Persephone had eaten a small number of pomegranate seeds. This bound her to Hades and the Underworld for certain months of every year, most likely the dry Mediterranean summer, when plant life is threatened by drought, despite the popular belief that it is autumn or winter. There are several variations on the basic myth; the earliest account, the Homeric Hymn to Demeter, relates that Persephone is secretly slipped a pomegranate seed by Hades; in Ovid's version, Persephone willingly and secretly eats the pomegranate seeds, thinking to deceive Hades, but is discovered and made to stay. Contrary to popular perception, Persephone's time in the Underworld does not correspond with the unfruitful seasons of the ancient Greek calendar, nor her return to the upper world with springtime. Demeter's descent to retrieve Persephone from the Underworld is connected to the Eleusinian Mysteries. The myth of the capture of Persephone seems to be pre-Greek. In the Greek version, Ploutos (πλούτος, wealth) represents the wealth of the corn that was stored in underground silos or ceramic jars (pithoi). Similar subterranean pithoi were used in ancient times for funerary practices. At the beginning of the autumn, when the corn of the old crop is laid on the fields, Persephone ascends and is reunited with her mother, Demeter, for at this time, the old crop and the new meet each other. In the Orphic tradition, while she was searching for her daughter, a mortal woman named Baubo received Demeter as her guest and offered her a meal and wine. 
Demeter declined them both because she mourned the loss of Persephone. Baubo then, thinking she had displeased the goddess, lifted her skirt and showed her genitalia to the goddess, simultaneously revealing Iacchus, Demeter's son. Demeter was most pleased with the sight and delighted she accepted the food and wine. This tale survives in the account of Clement of Alexandria, an early Christian writer who wrote about pagan practices and mythology. Several Baubo figurines (figurines of women revealing their vulvas) have been discovered, supporting the story. Demeter at Eleusis Demeter's search for her daughter Persephone took her to the palace of Celeus, the King of Eleusis in Attica. She assumed the form of an old woman and asked him for shelter. He took her in, to nurse Demophon and Triptolemus, his sons by Metanira. To reward his kindness, she planned to make Demophon immortal; she secretly anointed the boy with ambrosia and laid him in the hearth's flames to gradually burn away his mortal self. But Metanira walked in, saw her son in the fire and screamed in fright. Demeter abandoned the attempt. Instead, she taught Triptolemus the secrets of agriculture, and he, in turn, taught them to any who wished to learn them. Thus, humanity learned how to plant, grow and harvest grain. The myth has several versions; some are linked to figures such as Eleusis, Rarus and Trochilus. The Demophon element may be based on an earlier folk tale. Demeter and Iasion Homer's Odyssey (c. late 8th century BC) contains perhaps the earliest direct references to the myth of Demeter and her consort Iasion, a Samothracian hero whose name may refer to bindweed, a small white flower that frequently grows in wheat fields. In the Odyssey, Calypso describes how Demeter, "without disguise", made love to Iasion. "So it was when Demeter of the braided tresses followed her heart and lay in love with Iasion in the triple-furrowed field; Zeus was aware of it soon enough and hurled the bright thunderbolt and killed him." However, Ovid states that Iasion lived up to old age as the husband of Demeter. In ancient Greek culture, part of the opening of each agricultural year involved the cutting of three furrows in the field to ensure its fertility. Hesiod expanded on the basics of this myth. According to him, the liaison between Demeter and Iasion took place at the wedding of Cadmus and Harmonia in Crete. Demeter, in this version, had lured Iasion away from the other revellers. Hesiod says that Demeter subsequently gave birth to Plutus. Demeter and Poseidon In Arcadia, located in what is now southern Greece, the major goddess Despoina was considered the daughter of Demeter and Poseidon Hippios ("Horse-Poseidon"). In the associated myths, Poseidon represents the river spirit of the Underworld, and he appears as a horse, as often happens in northern European folklore. The myth describes how he pursued his older sister, Demeter, who hid from him among the horses of the king Onkios, but even in the form of a mare, she could not conceal her divinity. Poseidon caught and raped his older sister in the form of a stallion. Demeter was furious at Poseidon's assault; in this furious form, she became known as Demeter Erinys. Her anger at Poseidon drove her to dress all in black and retreat into a cave to purify herself, an act which was the cause of a universal famine. 
Demeter's absence caused the death of crops, livestock, and eventually of the people who depended on them (later Arcadian tradition held that it was both her rage at Poseidon and her loss of her daughter that caused the famine, merging the two myths). Demeter washed away her anger in the River Ladon, becoming Demeter Lousia, the "bathed Demeter". "In her alliance with Poseidon," Kerényi noted, "she was Earth, who bears plants and beasts, and could therefore assume the shape of an ear of grain or a mare." Moreover, she bore a daughter, Despoina (the "Mistress"), whose name should not be uttered outside the Arcadian Mysteries, and a horse named Arion, with a black mane and tail. At Phigaleia, a xoanon (wood-carved statue) of Demeter was erected in a cave which, tradition held, was the cave into which Black Demeter retreated. The statue depicted a Medusa-like figure with a horse's head and snake-like hair, holding a dove and a dolphin, which probably represented her power over air and water. Demeter and Erysichthon Another myth involving Demeter's rage resulting in famine is that of Erysichthon, king of Thessaly. The myth tells of Erysichthon ordering all of the trees in one of Demeter's sacred groves to be cut down, as he wanted to build an extension of his palace and hold feasts there. One tree, a huge oak, was covered with votive wreaths, symbols of the prayers Demeter had granted, so Erysichthon's men refused to cut it down. The king used an axe to cut it down, killing a dryad nymph in the process. The nymph's dying words were a curse on Erysichthon. Demeter punished the king by calling upon Limos, the spirit of unrelenting and insatiable hunger, to enter his stomach. The more the king ate, the hungrier he became. Erysichthon sold all his possessions to buy food but was still hungry. Finally, he sold his daughter, Mestra, into slavery. Mestra was freed from slavery by her former lover, Poseidon, who gave her the gift of shape-shifting into any creature to escape her bonds. Erysichthon used her shape-shifting ability to sell her numerous times to make more money to feed himself, but no amount of food was enough. Eventually, Erysichthon ate himself. In a variation, Erysichthon tore down a temple of Demeter, wishing to build a roof for his house; she punished him the same way, and near the end of his life, she sent a snake to plague him. Afterwards, Demeter put him among the stars (the constellation Ophiuchus), as she did the snake, to continue to inflict its punishment on Erysichthon. On the Pergamon Altar, which depicts the battle of the gods against the Giants (Gigantomachy), remains survive of what seems to have been Demeter fighting a Giant labelled "Erysichthon." Demeter is also depicted fighting against the Giants next to Hermes in the Suessula Gigantomachy vase, now housed in the Louvre Museum. Ancient depictions of the Gigantomachy usually tend to exclude Demeter due to her non-martial nature. Wrath myths While travelling far and wide looking for her daughter, Demeter arrived exhausted in Attica. A woman named Misme took her in and offered her a cup of water with pennyroyal and barley groats, for it was a hot day. Demeter, in her thirst, swallowed the drink clumsily. Witnessing that, Misme's son Ascalabus laughed, mocked her, and asked her if she would like a deep jar of that drink. Demeter then poured her drink over him and turned him into a gecko, hated by both men and gods. It was said that Demeter showed her favour to those who killed geckos. 
Before Hades abducted her daughter, he had kept the nymph Minthe as his mistress. But after he married Persephone, he set Minthe aside. Minthe would often brag about being lovelier than Persephone and say Hades would soon come back to her and kick Persephone out of his halls. Demeter, hearing that, grew angry and trampled Minthe; from the earth then sprang a lovely-smelling herb named after the nymph. In other versions, Persephone herself is the one who kills and turns Minthe into a plant for sleeping with Hades. In an Argive myth, when Demeter arrived in Argolis, a man named Colontas refused to receive her in his house, whereas his daughter Chthonia disapproved of his actions. Colontas was punished by being burnt along with his house, while Demeter took Chthonia to Hermione, where she built a sanctuary for the goddess. Demeter pinned Ascalaphus under a rock for reporting, as sole witness, to Hades that Persephone had consumed some pomegranate seeds. Later, after Heracles rolled the stone off Ascalaphus, Demeter turned him into a short-eared owl instead. Demeter also turned the Sirens into half-bird monsters for not helping her daughter Persephone when she was abducted by Hades. Once, the Colchian princess Medea ended a famine that plagued Corinth by making sacrifices to Demeter and the nymphs. Favour myths Demeter gave Triptolemus her serpent-drawn chariot (one of the serpents that drew this chariot was Kykreides) and seed and bade him scatter it across the earth (that is, to teach humankind the knowledge of agriculture). Triptolemus rode through Europe and Asia until he came to the land of Lyncus, a Scythian king. Lyncus pretended to offer him the customary hospitality, but once Triptolemus fell asleep, he attacked him with a dagger, wanting to take credit for his work. Demeter then saved Triptolemus by turning Lyncus into a lynx and ordered Triptolemus to return home airborne. Hyginus records a very similar myth, in which Demeter saves Triptolemus from an evil king named Carnabon, who seized Triptolemus' chariot and killed one of the dragons so that he might not escape; Demeter restored the chariot to Triptolemus, substituted the dead dragon with another one, and punished Carnabon by putting him among the stars holding a dragon as if to kill it. During her wanderings, Demeter came upon the town of Pheneus; to the Pheneates, who received her warmly and offered her shelter, she gave all sorts of pulses, except for beans, deeming them impure. Two of the Pheneates, Trisaules and Damithales, had a temple of Demeter built for her. Demeter also gifted a fig tree to Phytalus, an Eleusinian man, for welcoming her in his home. In the tale of Eros and Psyche, Demeter, along with her sister Hera, visited Aphrodite, who was raging with fury about the girl who had married her son. Aphrodite asked the two to search for her; they tried to talk sense into her, arguing that her son was not a little boy, although he might appear as one, and that there was no harm in his falling in love with Psyche. Aphrodite took offence at their words. Sometime later, Psyche in her wanderings came across an abandoned shrine of Demeter, and sorted out the neglected sickles and harvest implements she found there. As she was doing so, Demeter appeared to her and called from afar; she warned the girl of Aphrodite's great wrath and her plan to take revenge on her. 
Then Psyche begged the goddess to help her, but Demeter answered that she could not interfere and incur Aphrodite's anger at her, and for that reason, Psyche had to leave the shrine or else be kept as a captive of hers. When her son Philomelus invented the plough and used it to cultivate the fields, Demeter was so impressed by his good work that she immortalized him in the sky by turning him into a constellation, the Boötes. Hierax, a man of justice and distinction, set up sanctuaries for Demeter and received plenteous harvests from her in return. When the tribe neglected Poseidon favour of Demeter, the sea god destroyed all of her crops, so Hierax sent them instead his own food and was transformed into a hawk by Poseidon. Besides giving gifts to those who were welcoming to her, Demeter was also a goddess who nursed the young; all of Plemaeus's children born by his first wife died in a cradle; Demeter took pity on him and reared herself his son Orthopolis. Plemaeus built a temple to her to thank her. Demeter also raised Trophonius, the prophetic son of either Apollo or Erginus. Other accounts Demeter seems to have accompanied Dionysus when he descended into the Underworld to retrieve his mother Semele in order to visit her now married daughter, and perhaps lead her back to the land of the living for the remainder of the year. In many vases from Athens Dionysus is seen in the company of mother and daughter. Once Tantalus, a son of Zeus, invited the gods over for dinner. Tantalus, wanting to test them, cut his son Pelops, cooked him and offered him as a meal to them. They all saw through Tantalus' crime except Demeter, who ate Pelops' shoulder before the gods brought him back to life. Genealogy See also Family tree of the Greek gods Greek mythology in popular culture Isis and Osiris Demophon of Eleusis Notes References Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project. Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Apuleius, The golden ass, or, Metamorphoses. E. J. Kenney. 2004. London: Penguin Books. Burkert, Walter, Greek Religion, Harvard University Press, 1985. . Callimachus, Callimachus and Lycophron with an English Translation by A. W. Mair; Aratus, with an English Translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Internet Archive. Cole.S.G, Demeter in the ancient Greek city and the countryside in eds S. Alcock, R. Osborn Placing the gods.Sanctuaries and secret spaces in Ancient Greece(Oxford 1994), p. 199-216 Diodorus Siculus, Library of History, Volume III: Books 4.59-8, translated by C. H. Oldfather, Loeb Classical Library No. 340. Cambridge, Massachusetts, Harvard University Press, 1939. . Online version at Internet Archive. Online version by Bill Thayer. Farnell Lewis Richard, The cults of the Greek city states Vol III, Oxford at the Clarendon Press. 1907 Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Graf, Fritz. "Demeter," Brill's New Pauly, Ed. Hubert Cancik and et al. Brill Reference Online. Web. 27 September 2017. Graves, Robert; The Greek Myths, Moyer Bell Ltd; Unabridged edition (December 1988), . 
Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. . Halieutica in Oppian, Colluthus, Tryphiodorus. Oppian, Colluthus, and Tryphiodorus. Translated by A. W. Mair. Loeb Classical Library 219. Cambridge, MA: Harvard University Press, 1928. Online version at topos text. Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. Harrison, Jane Ellen (1908), Prolegomena to the Study of Greek Religion, second edition, Cambridge: Cambridge University Press, 1908. Internet Archive. Harrison, Jane Ellen (1928), Myths of Greece and Rome, Garden City, New York, Doubleday, Doran & Company, Inc., 1928. Online version at Internet Sacred Text Archive. Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Hesiod, Works and Days, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Homeric Hymn 2 to Demeter, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Hyginus, Gaius Julius, Astronomica from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Kerényi, Karl (1951), The Gods of the Greeks, Thames and Hudson, London, 1951. Kerényi, Karl (1967), Eleusis: Archetypal Image of Mother and Daughter, Princeton University Press, 1991. . Kerényi, Karl (1976), Dionysos: Archetypal Image of Indestructible Life, Princeton University Press, 1996. . Kern, Otto. Orphicorum Fragmenta, Berlin, 1922. Internet Archive. Lycophron, Alexandra in Callimachus and Lycophron with an English translation by A. W. Mair; Aratus, with an English translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Internet Archive. McKay, Kenneth John, Erysichthon, Brill Archive, 1962. Morford, Mark P. O., Robert J. Lenardon, Classical Mythology, Eighth Edition, Oxford University Press, 2007. . Martin P. Nilsson, Greek Popular Religion, 1940. Sacred-texts.com Nilsson Martin P. Die Geschichte der Griechieschen Religion Vol I, C.H Beck's Verlag Munchen, 1967 Ovid. Metamorphoses, Volume I: Books 1-8. Translated by Frank Justus Miller. Revised by G. P. Goold. Loeb Classical Library No. 42. Cambridge, Massachusetts: Harvard University Press, 1977, first published 1916. . Online version at Harvard University Press. The Oxford Classical Dictionary, second edition, Hammond, N.G.L. 
and Howard Hayes Scullard (editors), Oxford University Press, 1992. . Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Robertson N.D, New light in Demeters mysteries. The festival Petrosia in GRBS37 (1996) p. 319-379 Servius, Servii grammatici qui feruntur in Vergilii carmina commentarii, Volume III, edited by Georgius Thilo and Hermannus Hagen, Bibliotheca Teubneriana, Leipzig, Teubner, 1881. Online version at the Perseus Digital Library. Smith, William, Dictionary of Greek and Roman Biography and Mythology, London (1873) Online version at the Perseus Digital Library. Stalmith A.B, The name of Demeter Thesmophoros in GRBS48 (2008) p. 115-131 Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). . West, M. L. (1983), The Orphic Poems, Clarendon Press Oxford, 1983. . West, M. L. (2007), Indo-European Poetry and Myth, OUP Oxford, 2007. . Google Books. External links Hymn to Demeter, Ancient Greek and English text, Interlinear Translation edited & adapted from the 1914 prose translation by Hugh G. Evelyn-White, with Greek-English glossary, notes and illustrations. Foley P. Helene, The Homeric hymn to Demeter: translation, commentary, and interpretive essays, Princeton Univers. Press, 1994. with Ancient Greek text and English translation. Text of Homeric Hymn to Demeter Online book of Martin P. Nilsson, Greek Popular Religion "The Political Cosmology of the Homeric Hymn to Demeter" "The Sophian Prayer to Demeter" The Warburg Institute Iconographic Database (images of Demeter) Abundance goddesses Agricultural goddesses Children of Cronus Chthonic beings Deities in the Iliad Divine women of Zeus Earth goddesses Fertility goddesses Food goddesses Greek goddesses Greek underworld Horse deities Justice goddesses Kourotrophoi Metamorphoses characters Mother goddesses Mythological rape victims Nature goddesses Primordial teachers Rape of Persephone Seasons Shapeshifters in Greek mythology Spring deities Nursemaids in Greek mythology Supernatural beings identified with Christian saints Twelve Olympians Underworld goddesses Women of Helios Women of Poseidon
Demeter
[ "Physics" ]
11,430
[ "Physical phenomena", "Earth phenomena", "Seasons" ]
8,263
https://en.wikipedia.org/wiki/Dissociation%20constant
In chemistry, biochemistry, and pharmacology, a dissociation constant (KD) is a specific type of equilibrium constant that measures the propensity of a larger object to separate (dissociate) reversibly into smaller components, as when a complex falls apart into its component molecules, or when a salt splits up into its component ions. The dissociation constant is the inverse of the association constant. In the special case of salts, the dissociation constant can also be called an ionization constant. For a general reaction: A_\mathit{x} B_\mathit{y} <=> \mathit{x} A{} + \mathit{y} B in which a complex breaks down into x A subunits and y B subunits, the dissociation constant is defined as

K_D = \frac{[A]^x [B]^y}{[A_x B_y]}

where [A], [B], and [Ax By] are the equilibrium concentrations of A, B, and the complex Ax By, respectively. One reason for the popularity of the dissociation constant in biochemistry and pharmacology is that in the frequently encountered case where x = y = 1, KD has a simple physical interpretation: when [A] = KD, then [B] = [AB] or, equivalently, [AB]/([B] + [AB]) = 1/2. That is, KD, which has the dimensions of concentration, equals the concentration of free A at which half of the total molecules of B are associated with A. This simple interpretation does not apply for higher values of x or y. It also presumes the absence of competing reactions, though the derivation can be extended to explicitly allow for and describe competitive binding. It is useful as a quick description of the binding of a substance, in the same way that EC50 and IC50 describe the biological activities of substances. Concentration of bound molecules Molecules with one binding site Experimentally, the concentration of the molecule complex [AB] is obtained indirectly from the measurement of the concentration of the free molecules, either [A] or [B]. In principle, the total amounts of molecule [A]0 and [B]0 added to the reaction are known. They separate into free and bound components according to the mass conservation principle:

[A]_0 = [A] + [AB]
[B]_0 = [B] + [AB]

To track the concentration of the complex [AB], one substitutes the concentration of the free molecules ([A] or [B]) in the respective conservation equation with the definition of the dissociation constant,

[A]_0 = K_D \frac{[AB]}{[B]} + [AB]

This yields the concentration of the complex related to the concentration of either one of the free molecules:

[AB] = \frac{[A]_0 [B]}{K_D + [B]} = \frac{[B]_0 [A]}{K_D + [A]}

Macromolecules with identical independent binding sites Many biological proteins and enzymes can possess more than one binding site. Usually, when a ligand binds to a macromolecule, it can influence the binding kinetics of other ligands binding to the macromolecule. A simplified mechanism can be formulated if the affinity of all binding sites can be considered independent of the number of ligands bound to the macromolecule. This is valid for macromolecules composed of more than one, mostly identical, subunit. It can then be assumed that each of these subunits is identical and symmetric and that it possesses only a single binding site. Then the concentration of bound ligands [L]_{bound} becomes

[L]_{bound} = \frac{n [M]_0 [L]}{K_D + [L]}

In this case, [L]_{bound} \ne [LM], but comprises all partially saturated forms of the macromolecule:

[L]_{bound} = [LM] + 2[L_2 M] + 3[L_3 M] + \ldots + n[L_n M]

where the saturation occurs stepwise:

L + M <=> LM, \qquad K'_1 = \frac{[L][M]}{[LM]}
L + LM <=> L_2 M, \qquad K'_2 = \frac{[L][LM]}{[L_2 M]}
\ldots
L + L_{n-1} M <=> L_n M, \qquad K'_n = \frac{[L][L_{n-1} M]}{[L_n M]}

For the derivation of the general binding equation a saturation function r is defined as the quotient of the portion of bound ligand to the total amount of the macromolecule:

r = \frac{[L]_{bound}}{[M]_0} = \frac{\sum_{i=1}^{n} i [L_i M]}{[M] + \sum_{i=1}^{n} [L_i M]}

K′n are so-called macroscopic or apparent dissociation constants and can result from multiple individual reactions. For example, if a macromolecule M has three binding sites, K′1 describes a ligand being bound to any of the three binding sites.
In this example, K′2 describes two molecules being bound and K′3 three molecules being bound to the macromolecule. The microscopic or individual dissociation constant describes the equilibrium of ligands binding to specific binding sites. Because we assume identical binding sites with no cooperativity, the microscopic dissociation constant must be equal for every binding site and can be abbreviated simply as KD. In our example, K′1 is the amalgamation of a ligand binding to either of the three possible binding sites (I, II and III), hence three microscopic dissociation constants and three distinct states of the ligand–macromolecule complex. For K′2 there are six different microscopic dissociation constants (I–II, I–III, II–I, II–III, III–I, III–II) but only three distinct states (it does not matter whether you bind pocket I first and then II or II first and then I). For K′3 there are three different dissociation constants — there are only three possibilities for which pocket is filled last (I, II or III) — and one state (I–II–III). Even when the microscopic dissociation constant is the same for each individual binding event, the macroscopic outcome (K′1, K′2 and K′3) is not equal. This can be understood intuitively for our example of three possible binding sites. K′1 describes the reaction from one state (no ligand bound) to three states (one ligand bound to either of the three binding sides). The apparent K′1 would therefore be three times smaller than the individual KD. K′2 describes the reaction from three states (one ligand bound) to three states (two ligands bound); therefore, K′2 would be equal to KD. K′3 describes the reaction from three states (two ligands bound) to one state (three ligands bound); hence, the apparent dissociation constant K′3 is three times bigger than the microscopic dissociation constant KD. The general relationship between both types of dissociation constants for n binding sites is Hence, the ratio of bound ligand to macromolecules becomes where is the binomial coefficient. Then the first equation is proved by applying the binomial rule Protein–ligand binding The dissociation constant is commonly used to describe the affinity between a ligand L (such as a drug) and a protein P; i.e., how tightly a ligand binds to a particular protein. Ligand–protein affinities are influenced by non-covalent intermolecular interactions between the two molecules such as hydrogen bonding, electrostatic interactions, hydrophobic and van der Waals forces. Affinities can also be affected by high concentrations of other macromolecules, which causes macromolecular crowding. The formation of a ligand–protein complex LP can be described by a two-state process L + P <=> LP the corresponding dissociation constant is defined where [P], [L], and [LP] represent molar concentrations of the protein, ligand, and protein–ligand complex, respectively. The dissociation constant has molar units (M) and corresponds to the ligand concentration [L] at which half of the proteins are occupied at equilibrium, i.e., the concentration of ligand at which the concentration of protein with ligand bound [LP] equals the concentration of protein with no ligand bound [P]. The smaller the dissociation constant, the more tightly bound the ligand is, or the higher the affinity between ligand and protein. For example, a ligand with a nanomolar (nM) dissociation constant binds more tightly to a particular protein than a ligand with a micromolar (μM) dissociation constant. 
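The working relations from the preceding sections can be collected in a short numerical sketch. The snippet below is illustrative only (the function names and example concentrations are invented here, not taken from the article): it solves the 1:1 mass-conservation/KD system exactly for [AB], evaluates the fractional occupancy [L]/(KD + [L]), which equals one half when [L] = KD, and lists the macroscopic constants K′i implied by a single microscopic KD together with the statistical factors discussed above.

```python
import math

def complex_concentration(a_total, b_total, kd):
    """Exact equilibrium [AB] for 1:1 binding.

    Substituting [A] = a_total - [AB] and [B] = b_total - [AB] into
    KD = [A][B]/[AB] gives a quadratic in [AB]; the smaller root is
    the physically meaningful one.
    """
    s = a_total + b_total + kd
    return (s - math.sqrt(s * s - 4.0 * a_total * b_total)) / 2.0

def fraction_occupied(ligand_free, kd):
    """Fraction of protein (or of one independent site) bound: [L]/(KD + [L])."""
    return ligand_free / (kd + ligand_free)

def macroscopic_constants(kd, n):
    """Apparent constants K'_i = KD * i / (n - i + 1) for n identical,
    independent sites (statistical factors only, no cooperativity)."""
    return [kd * i / (n - i + 1) for i in range(1, n + 1)]

kd = 1.0e-7   # 100 nM, an arbitrary example value
print(complex_concentration(a_total=1e-6, b_total=5e-7, kd=kd))
print(fraction_occupied(ligand_free=kd, kd=kd))    # 0.5 at [L] = KD
print(macroscopic_constants(kd, n=3))              # [KD/3, KD, 3*KD]
```

At [L] = KD the occupancy comes out as exactly one half, which is the interpretation of KD given at the start of the article, and for n = 3 the macroscopic constants reproduce the factor-of-three pattern described above.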
Sub-picomolar dissociation constants as a result of non-covalent binding interactions between two molecules are rare. Nevertheless, there are some important exceptions. Biotin and avidin bind with a dissociation constant of roughly 10−15 M = 1 fM = 0.000001 nM. Ribonuclease inhibitor proteins may also bind to ribonuclease with a similar 10−15 M affinity. The dissociation constant for a particular ligand–protein interaction can change with solution conditions (e.g., temperature, pH and salt concentration). The effect of different solution conditions is to effectively modify the strength of any intermolecular interactions holding a particular ligand–protein complex together. Drugs can produce harmful side effects through interactions with proteins for which they were not meant to or designed to interact. Therefore, much pharmaceutical research is aimed at designing drugs that bind to only their target proteins (negative design) with high affinity (typically 0.1–10 nM) or at improving the affinity between a particular drug and its in vivo protein target (positive design). Antibodies In the specific case of antibodies (Ab) binding to antigen (Ag), usually the term affinity constant refers to the association constant. Ab + Ag <=> AbAg This chemical equilibrium is also the ratio of the on-rate (kforward or ka) and off-rate (kback or kd) constants. Two antibodies can have the same affinity, but one may have both a high on- and off-rate constant, while the other may have both a low on- and off-rate constant. Acid–base reactions For the deprotonation of acids, K is known as Ka, the acid dissociation constant. Strong acids, such as sulfuric or phosphoric acid, have large dissociation constants; weak acids, such as acetic acid, have small dissociation constants. The symbol Ka, used for the acid dissociation constant, can lead to confusion with the association constant, and it may be necessary to see the reaction or the equilibrium expression to know which is meant. Acid dissociation constants are sometimes expressed by pKa, which is defined by This notation is seen in other contexts as well; it is mainly used for covalent dissociations (i.e., reactions in which chemical bonds are made or broken) since such dissociation constants can vary greatly. A molecule can have several acid dissociation constants. In this regard, that is depending on the number of the protons they can give up, we define monoprotic, diprotic and triprotic acids. The first (e.g., acetic acid or ammonium) have only one dissociable group, the second (e.g., carbonic acid, bicarbonate, glycine) have two dissociable groups and the third (e.g., phosphoric acid) have three dissociable groups. In the case of multiple pK values they are designated by indices: pK1, pK2, pK3 and so on. For amino acids, the pK1 constant refers to its carboxyl (–COOH) group, pK2 refers to its amino (–NH2) group and the pK3 is the pK value of its side chain. Dissociation constant of water The dissociation constant of water is denoted Kw: The concentration of water, [H2O], is omitted by convention, which means that the value of Kw differs from the value of Keq that would be computed using that concentration. The value of Kw varies with temperature, as shown in the table below. This variation must be taken into account when making precise measurements of quantities such as pH. {| class="wikitable" style="text-align:center;" |- ! Water temperature ! Kw ! 
pKw
|-
| 0 °C || 0.112 × 10−14 || 14.95
|-
| 25 °C || 1.023 × 10−14 || 13.99
|-
| 50 °C || 5.495 × 10−14 || 13.26
|-
| 75 °C || 19.95 × 10−14 || 12.70
|-
| 100 °C || 56.23 × 10−14 || 12.25
|}
See also Acid Equilibrium constant Ki Database Competitive inhibition pH Scatchard plot Ligand binding Avidity References Equilibrium chemistry Enzyme kinetics
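The Kw and pKw columns of the table above are related by pKw = −log10 Kw, the same convention as pKa. As a quick illustrative check (not part of the source text), the tabulated values can be recomputed in a few lines of Python, reading the Kw column on the 10−14 scale implied by the pKw values:

```python
import math

# (temperature in deg C, Kw in units of 1e-14), values from the table above
table = [(0, 0.112), (25, 1.023), (50, 5.495), (75, 19.95), (100, 56.23)]

for t, kw_e14 in table:
    kw = kw_e14 * 1e-14
    pkw = -math.log10(kw)
    print(f"{t:>3} degC  Kw = {kw:.3e}  pKw = {pkw:.2f}")
```

This reproduces the listed pKw values and makes the temperature dependence noted in the text easy to explore numerically.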
Dissociation constant
[ "Chemistry" ]
2,481
[ "Equilibrium chemistry", "Chemical kinetics", "Enzyme kinetics" ]
8,267
https://en.wikipedia.org/wiki/Dimensional%20analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822. Formulation The Buckingham π theorem describes how every physically meaningful equation involving variables can be equivalently rewritten as an equation of dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables. A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below. The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary". There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity is given by where , , , , , , are the dimensional exponents. 
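In the SI convention just described, the dimension of a quantity Q is a product of powers of the base dimensions, dim Q = T^a L^b M^c I^d Θ^e N^f J^g, so a dimension can be modelled as a vector of exponents: multiplying quantities adds the vectors and dividing subtracts them. The following sketch is an illustration of that bookkeeping (the class and names are invented for this example and are not an established library):

```python
from dataclasses import dataclass

BASE = ("T", "L", "M", "I", "Theta", "N", "J")   # SI dimension symbols

@dataclass(frozen=True)
class Dim:
    exps: tuple   # integer exponent for each base dimension, in BASE order

    def __mul__(self, other):
        return Dim(tuple(a + b for a, b in zip(self.exps, other.exps)))

    def __truediv__(self, other):
        return Dim(tuple(a - b for a, b in zip(self.exps, other.exps)))

    def __str__(self):
        s = " ".join(f"{b}^{e}" for b, e in zip(BASE, self.exps) if e)
        return s or "1"

T = Dim((1, 0, 0, 0, 0, 0, 0))
L = Dim((0, 1, 0, 0, 0, 0, 0))
M = Dim((0, 0, 1, 0, 0, 0, 0))

speed = L / T                 # T^-1 L^1
force = M * L / (T * T)       # T^-2 L^1 M^1, the dimension of a newton
print(speed)
print(force)
```

Checking dimensional homogeneity then amounts to checking that two exponent vectors are equal, which is also the idea behind the unit-aware type systems mentioned later in the article.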
Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI. A quantity that has only L (with all other exponents zero) is known as a geometric quantity. A quantity that has only T and L is known as a kinematic quantity. A quantity that has only T, L and M is known as a dynamic quantity. A quantity that has all exponents null is said to have dimension one. The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis. Simple cases As examples, the dimension of the physical quantity speed is T−1L. The dimension of the physical quantity acceleration is T−2L. The dimension of the physical quantity force is T−2LM. The dimension of the physical quantity pressure is T−2L−1M. The dimension of the physical quantity energy is T−2L2M. The dimension of the physical quantity power is T−3L2M. The dimension of the physical quantity electric charge is TI. The dimension of the physical quantity voltage is T−3L2MI−1. The dimension of the physical quantity capacitance is T4L−2M−1I2. Rayleigh's method In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. The method involves the following steps: Gather all the independent variables that are likely to influence the dependent variable. If X is a variable that depends upon independent variables X1, X2, X3, ..., Xn, then the functional equation can be written as X = F(X1, X2, X3, ..., Xn). Write the above equation in the form X = C X1^a X2^b X3^c ... Xn^m, where C is a dimensionless constant and a, b, c, ..., m are arbitrary exponents. Express each of the quantities in the equation in some base units in which the solution is required. By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents a, b, c, ..., m. Solve these equations to obtain the values of the exponents a, b, c, ..., m. Substitute the values of the exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents. As a drawback, Rayleigh's method does not provide any information regarding the number of dimensionless groups to be obtained as a result of dimensional analysis. Concrete numbers and base units Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h.
Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units. Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as . Percentages, derivatives and integrals Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since . Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus: position () has the dimension L (length); derivative of position with respect to time (, velocity) has dimension T−1L—length from position, time due to the gradient; the second derivative (, acceleration) has dimension . Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator. force has the dimension (mass multiplied by acceleration); the integral of force with respect to the distance () the object has travelled (, work) has dimension . In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged. Dimensional homogeneity (commensurability) The most basic rule of dimensional analysis is that of dimensional homogeneity. However, the dimensions form an abelian group under multiplication, so: For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. 
For example, if , and denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression is meaningful, but the heterogeneous expression is meaningless. However, is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions. Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension , they are fundamentally different physical quantities. To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use to convert 35 yards to 32.004 m. A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres. Conversion factor In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and . The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to . Since any quantity can be multiplied by 1 without changing it, the expression "" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, because , and bar/bar cancels out, so . Applications Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well. Mathematics A simple application of dimensional analysis to mathematics is in computing the form of the volume of an -ball (the solid ball in n dimensions), or the area of its surface, the -sphere: being an -dimensional figure, the volume scales as , while the surface area, being -dimensional, scales as . Thus the volume of the -ball in terms of the radius is , for some constant . Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone. Finance, economics, and accounting In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios. For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid". In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year). Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year. 
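The stock/flow distinction above can be made concrete with a toy calculation. This is an illustrative sketch with invented figures: dividing a debt stock (currency) by GDP (currency per year) leaves a quantity measured in years, and dividing GDP by the money supply leaves a quantity measured in 1/years.

```python
# Illustrative, made-up figures in billions of dollars.
debt = 30_000.0          # a stock: dollars
gdp = 25_000.0           # a flow: dollars per year
money_supply = 21_000.0  # a stock: dollars

debt_to_gdp = debt / gdp            # dollars / (dollars/year) = years
velocity = gdp / money_supply       # (dollars/year) / dollars = 1/years

print(f"debt-to-GDP ratio: {debt_to_gdp:.2f} years")
print(f"velocity of money: {velocity:.2f} per year")
```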
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.) In financial analysis, bond duration can be defined as , where is the value of a bond (or portfolio), is the continuously compounded interest rate and is a derivative. From the previous point, the dimension of is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because is in the "denominator" of the derivative. Fluid mechanics In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include: Reynolds number (), generally important in all types of fluid problems: Froude number (), modeling flow with a free surface: Euler number (), used in problems in which pressure is of interest: Mach number (), important in high speed flows where the velocity approaches or exceeds the local speed of sound: where is the local speed of sound. History The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science. This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem. Simeon Poisson also treated the same problem of the parallelogram law by Daviet, in his treatise of 1811 and 1833 (vol I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term dimension instead of the Daviet homogeneity. In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like should be independent of the units employed to measure the physical variables. James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant is taken as unity, thereby defining . 
By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were , which, after substituting his equation for mass, results in charge having the same dimensions as mass, viz. . Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound. The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents. Examples A simple example: period of a harmonic oscillator What is the period of oscillation of a mass attached to an ideal linear spring with spring constant suspended in gravity of strength ? That period is the solution for of some dimensionless equation in the variables , , , and . The four quantities have the following dimensions: [T]; [M]; [M/T2]; and [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, , and putting for some dimensionless constant gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines with , , and , because is the only quantity that involves the dimension L. This implies that in this problem the is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of : it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: , for some dimensionless constant (equal to from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable (, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as . A more complex example: energy of a vibrating wire Consider the case of a vibrating wire of length (L) vibrating with an amplitude (L). 
The wire has a linear density (M/L) and is under tension (LM/T2), and we want to know the energy (L2M/T2) in the wire. Let and be two dimensionless products of powers of the variables chosen, given by The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation where is some unknown function, or, equivalently as where is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form for the unknown function . But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to , and so infer that . The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, the set of variables involved are not apparent, and the underlying equations hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. A third example: demand versus capacity for a rotating disc Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness (L) and radius (L). The disc has a density (M/L3), rotates at an angular velocity (T−1) and this leads to a stress (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lame, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined though consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following () non-dimensional groups: demand/capacity = thickness/radius or aspect ratio = Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs. 
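The dimensionless groups used in examples such as the vibrating wire can be found mechanically: write the exponents of T, L and M for each variable as a column of a dimensional matrix, and a basis of its null space gives the exponents of the π groups. This is the computational content of the Buckingham π theorem cited in the Formulation section. The SymPy sketch below is an illustration using the wire variables and the dimensions stated in the text; the basis it returns is one valid choice, and any two independent combinations (including whichever pair the original article used) span the same two groups.

```python
from sympy import Matrix

# Columns: length l, amplitude A, linear density rho, tension s, energy E.
# Rows: exponents of T, L, M (dimensions as given in the text).
D = Matrix([
    [0, 0,  0, -2, -2],   # T
    [1, 1, -1,  1,  2],   # L
    [0, 0,  1,  1,  1],   # M
])

for v in D.nullspace():
    print(list(v))
# [-1, 1, 0, 0, 0]   -> A / l         (a dimensionless group)
# [-1, 0, 0, -1, 1]  -> E / (l * s)   (energy appears with the first power of tension)
```

The zero exponent obtained for the linear density matches the remark above that it does not enter the groups, and the second group shows directly why the energy is proportional to the first power of the tension.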
Properties Mathematical properties The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; , and the inverse of L is 1/L or L−1. L raised to any integer power is a member of the group, having an inverse of L or 1/L. The operation of the group is multiplication, having the usual rules for handling exponents (). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol corresponding to the tuple . When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one other, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, . In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like . However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions. One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions and , one has the vector spaces and , and can define as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., ) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, . (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity can be expressed in the general form Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form Knowing this restriction can be a powerful tool for obtaining new insight into the system. Mechanics The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. 
For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space). Velocity, being expressible in terms of length and time (), is redundant (the set is not linearly independent). Other fields of physics and chemistry Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ ) is also defined as a base dimension, N. In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features. Polynomials and transcendental functions Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the square of certain dimensioned quantities are dimensionless.) While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity , where the logarithm is taken in any base, holds for dimensionless numbers and , but it does not hold if and are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not. 
Similarly, while one can evaluate monomials () of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for , the expression makes sense (as an area), while for , the expression does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, This is the height to which an object rises in time  if the acceleration of gravity is 9.8 and the initial upward speed is 500 . It is not necessary for to be in seconds. For example, suppose  = 0.01 minutes. Then the first term would be Combining units and numerical values The value of a dimensional physical quantity is written as the product of a unit [] within the dimension and a dimensionless numerical value or numerical factor, . When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: is identical to The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. Quantity equations A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities. In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical values is referenced to a specific unit. For example, a quantity equation for displacement as speed multiplied by time difference would be: for = 5 m/s, where and may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be: where is the numeric value of when expressed in seconds and is the numeric value of when expressed in metres. Generally, the use of numerical-value equations is discouraged. Dimensionless concepts Constants The dimensionless constants that arise in the results obtained, such as the in the Poiseuille's Law problem and the in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc. 
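The points made under "Combining units and numerical values" and "Quantity equations" above can be illustrated in a few lines. This is a sketch with invented helper names; the conversion factor and the 5 m/s example follow the text.

```python
M_PER_FT = 0.3048          # the conversion factor used in the text (exact)

# Adding like-dimensioned quantities expressed in different units:
total_length_m = 1.0 + 1.0 * M_PER_FT      # 1 m + 1 ft = 1.3048 m
print(total_length_m)

# A quantity equation d = v * dt is valid in any units, provided the
# units are converted consistently; here v = 5 m/s and dt = 3 hours.
v_m_per_s = 5.0
dt_s = 3.0 * 3600.0
d_m = v_m_per_s * dt_s
print(d_m / 1000.0, "km")                  # 54.0 km
```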
Formalisms Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be , where is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants: , , and , in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants , , and (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit , and . In problems involving a gravitational field the latter limit should be taken such that the field stays finite. Dimensional equivalences Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force. SI units Programming languages Dimensional correctness as part of type checking has been studied since 1977. Implementations for Ada and C++ were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, and Rust, Python, and a code checker for Fortran. Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices. McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure. Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit such as 1 J named UnitDimensions. Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities named DimensionalCombations. Mathematica can also factor out certain dimension with UnitDimensions by specifying an argument to the function UnityDimensions. For example, you can use UnityDimensions to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions. Geometry: position vs. displacement Affine quantities Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. 
In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable: adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward), adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection), subtracting two positions should yield a displacement, but one may not add two positions. This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement). Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity. Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity. Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine quantity, one must not only choose a unit of measurement, but also a point of reference, while assigning a number to a vector quantity requires only a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero, −273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F, where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, 1 K = 1 °C ≠ 1 °F = 1 °R. (Here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by a constant factor, e.g., 1.8 °F per 1 K (whereas the ratio of absolute temperature readings on two scales is not, in general, a constant). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. 
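The affine/vector distinction for temperature can be made concrete with a small Python sketch (illustrative only, not from the article): absolute temperatures need both a scale factor and an offset, while temperature differences need only the scale factor.

```python
# Illustrative sketch: absolute temperatures are affine (scale factor plus offset),
# temperature differences are vector-like (scale factor only).
def c_to_f_absolute(temp_c):
    return temp_c * 9 / 5 + 32      # offset required: the scales have different origins

def c_to_f_difference(delta_c):
    return delta_c * 9 / 5          # no offset: only the size of the degree matters

print(c_to_f_absolute(0.0))     # 32.0 -> 0 °C corresponds to 32 °F
print(c_to_f_difference(1.0))   # 1.8  -> a change of 1 °C is a change of 1.8 °F
```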
Orientation and frame of reference Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley's extensions Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix. He introduced two approaches: The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent. Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia. Directed dimensions As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; vx and vy, both dimensioned as T−1L; and g, the downward acceleration of gravity, with dimension T−2L. With these four quantities, we may conclude that the equation for the range may be written R ∝ vx^a vy^b g^c, or dimensionally L = (T−1L)^(a+b) (T−2L)^c, from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation. However, if we use directed length dimensions, then vx will be dimensioned as T−1Lx, vy as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes Lx = (T−1Lx)^a (T−1Ly)^b (T−2Ly)^c, and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions however has some serious limitations: It does not deal well with vector equations involving the cross product, nor does it handle well the use of angles as physical variables. It is also often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: It is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? 
Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. Quantity of matter In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity proportional to inertial mass, but not implicating inertial properties. No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: the mass flow rate ṁ with dimension MT−1, the pressure gradient along the pipe |dp/dx| with dimension ML−2T−2, the density ρ with dimension ML−3, the dynamic fluid viscosity μ with dimension ML−1T−1, and the radius of the pipe r with dimension L. There are three fundamental variables, so these five quantities will yield two independent dimensionless variables. If we distinguish between inertial mass with dimension M and quantity of matter with dimension Q, then the mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written ṁ = C ρ r⁴ |dp/dx| / μ, where now only C is an undetermined dimensionless constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Siano's extension: orientational analysis Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v², but offers no insight into the relationship between R and θ. Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1_x, 1_y, 1_z to denote vector directions, and an orientationless symbol 1_0. Thus, Huntley's Lx becomes L·1_x, with L specifying the dimension of length, and 1_x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1_i^−1 = 1_i, the following multiplication table for the orientation symbols results: 1_0 acts as the identity, 1_x·1_x = 1_y·1_y = 1_z·1_z = 1_0, and 1_x·1_y = 1_z, 1_y·1_z = 1_x, 1_z·1_x = 1_y. The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1_z. For angles, consider an angle θ that lies in the xy-plane. 
Form a right triangle in the xy-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1_x and the side opposite has an orientation 1_y. Since the ratio of the opposite to the adjacent side has orientation 1_y/1_x = 1_z, and tan(θ) ≃ θ for small angles (using ≃ to indicate orientational equivalence), we conclude that an angle in the xy-plane must have an orientation 1_z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1_z while cos(θ) has orientation 1_0. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars. An expression such as sin(θ + π/2) = cos(θ) is not dimensionally inconsistent since it is a special case of the sum of angles formula sin(a + b) = sin(a)cos(b) + cos(a)sin(b), which for a = θ and b = π/2 yields sin(θ + π/2) = cos(θ). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 1_0. The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1_z, and the range of the projectile R will be of the form R = g^a v^b θ^c. Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1_z^c = 1_z. In other words, that c must be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ. It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis. See also Buckingham π theorem Dimensionless numbers in fluid mechanics Fermi estimate – used to teach dimensional analysis Numerical-value equation Rayleigh's method of dimensional analysis Similitude – an application of dimensional analysis System of measurement Related areas of mathematics Covariance and contravariance of vectors Exterior algebra Geometric algebra Quantity calculus Notes References Wilson, Edwin B. 
(1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive Further reading External links List of dimensions for variety of physical quantities Unicalc Live web calculator doing units conversion by dimensional analysis A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries Buckingham's pi-theorem Quantity System calculator for units conversion based on dimensional approach Units, quantities, and fundamental constants project dimensional analysis maps Measurement Conversion of units of measurement Chemical engineering Mechanical engineering Environmental engineering
Dimensional analysis
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
10,504
[ "Applied and interdisciplinary physics", "Physical quantities", "Dimensional analysis", "Chemical engineering", "Quantity", "Measurement", "Size", "Environmental engineering", "Civil engineering", "Mechanical engineering", "nan", "Conversion of units of measurement", "Units of measurement" ]
8,271
https://en.wikipedia.org/wiki/Digital%20television
Digital television (DTV) is the transmission of television signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high-definition television (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio (commonly 16:9) in contrast to the narrower format (4:3) of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000. Different digital television broadcasting standards have been adopted in different parts of the world; below are the more widely used standards: Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing (OFDM) modulation and supports hierarchical transmission. This standard has been adopted in Europe, Africa, Asia and Australia, for a total of approximately 60 countries. Advanced Television Systems Committee (ATSC) standard uses eight-level vestigial sideband (8VSB) for terrestrial broadcasting. This standard has been adopted by 9 countries: the United States, Canada, Mexico, South Korea, Bahamas, Jamaica, the Dominican Republic, Haiti and Suriname. Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good reception to fixed receivers and also portable or mobile receivers. It utilizes OFDM and two-dimensional interleaving. It supports hierarchical transmission of up to three layers and uses MPEG-2 video and Advanced Audio Coding. This standard has been adopted in Japan and the Philippines. ISDB-T International is an adaptation of this standard using H.264/MPEG-4 AVC, which has been adopted in most of South America as well as Botswana and Angola. Digital Terrestrial Multimedia Broadcast (DTMB) adopts time-domain synchronous (TDS) OFDM technology with a pseudo-random signal frame to serve as the guard interval (GI) of the OFDM block and the training symbol. The DTMB standard has been adopted in China, including Hong Kong and Macau. Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national information technology project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems. History Background Digital television's roots are tied to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility. Digital television was previously not practically feasible due to the impractically high bandwidth requirements of uncompressed video, requiring bit rates on the order of hundreds of megabits per second for a standard-definition television (SDTV) signal, and over a gigabit per second for high-definition television (HDTV). Development In the mid-1980s, Toshiba released a television set with digital capabilities, using integrated circuit chips such as a microprocessor to convert analog television broadcast signals to digital video signals, enabling features such as freezing pictures and showing two channels at once. In 1986, Sony and NEC Home Electronics announced their own similar TV sets with digital video capabilities. 
However, they still relied on analog TV broadcast signals, with true digital TV broadcasts not yet being available at the time. A digital TV broadcast service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunications (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to practically implement such a digital TV service until the adoption of motion-compensated DCT video compression formats such as MPEG made it possible in the early 1990s. In the mid-1980s, Japanese consumer electronics firms forged ahead with the development of HDTV technology, and the MUSE analog format was proposed by Japan's public broadcaster NHK as a worldwide standard. Japanese advancements were seen as pacesetters that threatened to eclipse US electronics companies. Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration. Between 1988 and 1991, several European organizations were working on DCT-based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that broadcast SDTV and near-studio-quality HDTV at greatly reduced bit rates. RAI demonstrated this with a 1990 FIFA World Cup broadcast in March 1990. An American company, General Instrument, also demonstrated the feasibility of a digital television signal in 1990. This led to the FCC being persuaded to delay its decision on an advanced television (ATV) standard until a digitally based standard could be developed. When, in March 1990, it became evident that a digital standard might be achievable, the FCC took several important actions. First, the Commission declared that the new TV standard must be more than an enhanced analog signal, but be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being simulcast on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements. The FCC's final standard did not mandate a universal standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequence, from top to bottom. The computer industry argued that progressive scanning is superior because it does not flicker in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. 
For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format. Inaugural launches DirecTV in the US launched the first commercial digital satellite platform in May 1994, using the Digital Satellite System (DSS) standard. Digital cable broadcasts were tested and launched in the US in 1996 by TCI and Time Warner. The first digital terrestrial platform was launched in November 1998 as ONdigital in the UK, using the DVB-T standard. Technical information Formats and bandwidth Digital television supports many different picture formats defined by the broadcast television systems which are a combination of size and aspect ratio (width to height ratio). With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high-definition television (HDTV) for the transmission of high-definition video and standard-definition television (SDTV). These terms by themselves are not very precise and many subtle intermediate cases exist. Among the several different HDTV formats that can be transmitted over DTV are 1280 × 720 pixels in progressive scan mode (abbreviated 720p) and 1920 × 1080 pixels in interlaced video mode (1080i). Each of these uses a 16:9 aspect ratio. HDTV cannot be transmitted over analog television channels because of channel capacity issues. SDTV, by comparison, may use one of several different formats taking the form of various aspect ratios depending on the technology used in the country of broadcast. NTSC-based systems can deliver a 480-line picture in both 4:3 and 16:9, while PAL-based systems can give a 576-line picture in both 4:3 and 16:9. However, broadcasters may choose to reduce these resolutions to reduce bit rate (e.g., many DVB-T channels in the UK use a horizontal resolution of 544 or 704 pixels per line). Each commercial broadcasting terrestrial television DTV channel in North America is allocated enough bandwidth to broadcast up to 19 megabits per second. However, the broadcaster does not need to use this entire bandwidth for just one broadcast channel. Instead, the broadcast can use Program and System Information Protocol and subdivide across several video subchannels (a.k.a. feeds) of varying quality and compression rates, including non-video datacasting services. A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into multiple digital subchannels (similar to what most FM radio stations offer with HD Radio), providing multiple feeds of entirely different television programming on the same channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is often referred to as distributing one's bit budget or multicasting. This can sometimes be arranged automatically, using a statistical multiplexer. With some implementations, image resolution may be less directly limited by bandwidth; for example in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option to reduce the transmission bit rate and make reception easier for more distant or mobile viewers. Reception There are several different ways to receive digital television. 
One of the oldest means of receiving DTV (and TV in general) is from terrestrial transmitters using an antenna (known as an aerial in some countries). This delivery method is known as digital terrestrial television (DTT). With DTT, viewers are limited to channels that have a terrestrial transmitter within range of their antenna. Other delivery methods include digital cable and digital satellite. In some countries where transmissions of TV signals are normally achieved by microwaves, digital multichannel multipoint distribution service is used. Other standards, such as digital multimedia broadcasting (DMB) and digital video broadcasting - handheld (DVB-H), have been devised to allow handheld devices such as mobile phones to receive TV signals. Another way is Internet Protocol television (IPTV), which is the delivery of TV over a computer network. Finally, an alternative way is to receive digital TV signals via the open Internet (Internet television), whether from a central streaming service or a P2P (peer-to-peer) system. Some signals are protected by encryption and backed up with the force of law under the WIPO Copyright Treaty and national legislation implementing it, such as the US Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable card, for example via the Common Interface or CableCard. Protection parameters Digital television signals must not interfere with each other and they must also coexist with analog television until it is phased out. The following table gives allowable signal-to-noise and signal-to-interference ratios for various interference scenarios. This table is a crucial regulatory tool for controlling the placement and power levels of stations. Digital TV is more tolerant of interference than analog TV. Interaction People can interact with a DTV system in various ways. One can, for example, browse the electronic program guide. Modern DTV systems sometimes use a return path providing feedback from the end user to the broadcaster. This is possible over cable TV or through an Internet connection but is not possible with a standard antenna alone. Some of these systems support video on demand using a communication channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite). 1seg 1seg (1-segment) is a special form of ISDB. Each channel is further divided into 13 segments. Twelve are allocated for HDTV and the other for narrow-band receivers such as mobile televisions and cell phones. Comparison to analog DTV has several advantages over analog television, the most significant being that digital channels take up less bandwidth and the bandwidth allocations are flexible depending on the level of compression and resolution of the transmitted image. This means that digital broadcasters can provide more digital channels in the same space, provide high-definition television service, or provide other non-television services such as multimedia or interactivity. DTV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source to broadcasters. Digital and analog signals react to interference differently. 
For example, common problems with analog television include ghosting of images, noise from weak signals and other problems that degrade the quality of the image and sound, although the program material may still be watchable. With digital television, because of the cliff effect, reception of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable. Analog TV began with monophonic sound and later developed multichannel television sound with two independent audio signal channels. DTV allows up to 5 audio signal channels plus a subwoofer bass channel, producing broadcasts similar in quality to movie theaters and DVDs. Digital TV signals require less transmission power than analog TV signals to be broadcast and received satisfactorily. Compression artifacts, picture quality monitoring and allocated bandwidth DTV images have some picture defects that are not present on analog television or motion picture cinema, because of present-day limitations of bit rate and compression algorithms such as MPEG-2. This defect is sometimes referred to as mosquito noise. Because of the way the human visual system works, defects in an image that are localized to particular features of the image or that come and go are more perceptible than defects that are uniform and constant. However, the DTV system is designed to take advantage of other limitations of the human visual system to help mask these flaws, e.g., by allowing more compression artifacts during fast motion where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still backgrounds that, because time allows, may be closely examined in a scene. Broadcast, cable, satellite and Internet DTV operators control the picture quality of television signal encoders using sophisticated, neuroscience-based algorithms, such as the structural similarity index measure (SSIM) video quality measurement tool. Another tool called visual information fidelity (VIF), is used in the Netflix VMAF video quality monitoring system. Quantising effects can create contours—rather than smooth gradations—on areas with small graduations in amplitude. Typically, a very flat scene, such as a cloudless sky, will exhibit visible steps across its expanse, often appearing as concentric circles or ellipses. This is known as color banding. Similar effects can be seen in very dark scenes, where true black backgrounds are overlaid by dark gray areas. These transitions may be smooth, or may show a scattering effect as the digital processing dithers and is unable to consistently allocate a value of either absolute black or the next step up the greyscale. Effects of poor reception Changes in signal reception from factors such as degrading antenna connections or changing weather conditions may gradually reduce the quality of analog TV. The nature of digital TV results in a perfectly decodable video initially, until the receiving equipment starts picking up interference that overpowers the desired signal or if the signal is too weak to decode. Some equipment will show a garbled picture with significant damage, while other devices may go directly from perfectly decodable video to no video at all or lock up. This phenomenon is known as the digital cliff effect. Block errors may occur when transmission is done with compressed images. A block error in a single frame often results in black boxes in several subsequent frames, making viewing difficult. 
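The contrast between analog degradation and the digital cliff effect described above can be sketched with a toy model in Python; the threshold value and quality scaling below are purely illustrative assumptions, not measurements of any real receiver.

```python
# Toy illustration (not a real channel model) of why analog reception degrades
# gradually while digital reception exhibits a "cliff": below a threshold
# signal-to-noise ratio the decoder fails outright.
def analog_quality(snr_db):
    # quality falls off smoothly with decreasing SNR (illustrative scaling)
    return max(0.0, min(1.0, snr_db / 40.0))

def digital_quality(snr_db, threshold_db=15.0):
    # essentially perfect above the decoding threshold, unusable below it
    return 1.0 if snr_db >= threshold_db else 0.0

for snr in (35, 25, 16, 14, 5):
    print(snr, round(analog_quality(snr), 2), digital_quality(snr))
```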
For remote locations, distant channels that, as analog signals, were previously usable in a snowy and degraded state may, as digital signals, be perfectly decodable or may become completely unavailable. The use of higher frequencies add to these problems, especially in cases where a clear line-of-sight from the receiving antenna to the transmitter is not available because usually higher frequency signals can't pass through obstacles as easily. Effect on old analog technology Television sets with only analog tuners cannot decode digital transmissions. When analog broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of programming (e.g., cable, recorded media) or may purchase set-top converter boxes to tune in the digital signals. In the United States, a government-sponsored coupon was available to offset the cost of an external converter box. The digital television transition began around the late 1990s and has been completed on a country-by-country basis in most parts of the world. Disappearance of TV-audio receivers Prior to the conversion to digital TV, analog television broadcast audio for TV channels on a separate FM carrier signal from the video signal. This FM audio signal could be heard using standard radios equipped with the appropriate tuning circuits. However, after the digital television transition, no portable radio manufacturer has yet developed an alternative method for portable radios to play just the audio signal of digital TV channels; DTV radio is not the same thing. Environmental issues The adoption of a broadcast standard incompatible with existing analog receivers has created the problem of large numbers of analog receivers being discarded. One superintendent of public works was quoted in 2009 saying; "some of the studies I’ve read in the trade magazines say up to a quarter of American households could be throwing a TV out in the next two years following the regulation change." In Michigan in 2009, one recycler estimated that as many as one household in four would dispose of or recycle a TV set in the following year. The digital television transition, migration to high-definition television receivers and the replacement of CRTs with flat screens are all factors in the increasing number of discarded analog CRT-based television receivers. In 2009, an estimated 99 million analog TV receivers were sitting unused in homes in the US alone and, while some obsolete receivers are being retrofitted with converters, many more are simply dumped in landfills where they represent a source of toxic metals such as lead as well as lesser amounts of materials such as barium, cadmium and chromium. See also Autoroll Digital television in the United Kingdom Digital television in the United States Text to speech in digital television References Further reading Hart, Jeffrey A., Television, technology, and competition : HDTV and digital TV in the United States, Western Europe, and Japan, New York : Cambridge University Press, 2004. Overview of Digital Television Development Worldwide Proceedings of the IEEE, VOL. 94, NO. 1, JANUARY 2006 (University of Texas at San Antonio) External links The FCC's US consumer-oriented DTV website Television technology Television terminology Television Japanese inventions Telecommunications-related introductions in the 1990s
Digital television
[ "Technology" ]
3,961
[ "Information and communications technology", "Digital technology", "Television technology" ]
8,276
https://en.wikipedia.org/wiki/Digital%20data
Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits) each of which can have one of two values, either 0 or 1. Digital data can be contrasted with analog data, which is represented by a value from a continuous range of real numbers. Analog data is transmitted by an analog signal, which not only takes on continuous values but can vary continuously with time, a continuous real-valued function of time. An example is the air pressure variation in a sound wave. The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for counting. Mathematician George Stibitz of Bell Telephone Laboratories used the word digital in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns in 1942. The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form as in digital audio and digital photography. Symbol to digital conversion Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used. A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are switched. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it. For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word. Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded or converted into a number based on the status of modifier keys and the desired character encoding. A custom encoding can be used for a specific application with no loss of data. 
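The scan-matrix polling just described can be sketched in Python; the matrix layout, assumed key states, and the scan-code numbering are illustrative assumptions rather than any particular keyboard protocol.

```python
# Illustrative sketch of keyboard scan-matrix polling: drive each x line in turn
# and read which y lines respond; a pressed key connects its x and y lines.
pressed = {(0, 0), (1, 2)}          # assumed hardware state: keys at (x, y) held down

def scan_matrix(num_x=3, num_y=3):
    """Return the scan codes of all pressed keys, polling one x line at a time."""
    codes = []
    for x in range(num_x):          # activate one x line at a time
        for y in range(num_y):      # detect which y lines carry a signal
            if (x, y) in pressed:
                codes.append(x * num_y + y)   # simple illustrative scan-code scheme
    return codes

print(scan_matrix())   # [0, 5] -> scan codes for the keys at (0, 0) and (1, 2)
```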
However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard. It is estimated that in the year 1986, less than 1% of the world's technological capacity to store information was digital and in 2007 it was already 94%. The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog format (the "beginning of the digital age"). States Digital data come in these three states: data at rest, data in transit, and data in use. The confidentiality, integrity, and availability have to be managed during the entire lifecycle from 'birth' to the destruction of the data. Properties of digital information All digital information possesses common properties that distinguish it from analog data with respect to communications: Synchronization: Since digital information is conveyed by the sequence in which symbols are ordered, all digital schemes have some method for determining the beginning of a sequence. In written or spoken human languages, synchronization is typically provided by pauses (spaces), capitalization, and punctuation. Machine communications typically use special synchronization sequences. Language: All digital communications require a formal language, which in this context consists of all the information that the sender and receiver of the digital communication must both possess, in advance, for the communication to be successful. Languages are generally arbitrary and specify the meaning to be assigned to particular symbol sequences, the allowed range of values, methods to be used for synchronization, etc. Errors: Disturbances (noise) in analog communications invariably introduce some, generally small deviation or error between the intended and actual communication. Disturbances in digital communication only result in errors when the disturbance is so large as to result in a symbol being misinterpreted as another symbol or disturbing the sequence of symbols. It is generally possible to have near-error-free digital communication. Further, techniques such as check codes may be used to detect errors and correct them through redundancy or re-transmission. Errors in digital communications can take the form of substitution errors, in which a symbol is replaced by another symbol, or insertion/deletion errors, in which an extra incorrect symbol is inserted into or deleted from a digital message. Uncorrected errors in digital communications have an unpredictable and generally large impact on the information content of the communication. Copying: Because of the inevitable presence of noise, making many successive copies of an analog communication is infeasible because each generation increases the noise. Because digital communications are generally error-free, copies of copies can be made indefinitely. Granularity: The digital representation of a continuously variable analog value typically involves a selection of the number of symbols to be assigned to that value. The number of symbols determines the precision or resolution of the resulting datum. The difference between the actual analog value and the digital representation is known as quantization error. For example, if the actual temperature is 23.234456544453 degrees, but only two digits (23) are assigned to this parameter in a particular digital representation, the quantizing error is 0.234456544453. This property of digital communication is known as granularity. 
Compressible: According to Miller, "Uncompressed digital data is very large, and in its raw form, it would actually produce a larger signal (therefore be more difficult to transfer) than analog data. However, digital data can be compressed. Compression reduces the amount of bandwidth space needed to send information. Data can be compressed, sent, and then decompressed at the site of consumption. This makes it possible to send much more information and results in, for example, digital television signals offering more room on the airwave spectrum for more television channels." Historical digital systems Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic. DNA genetic code is a naturally occurring form of digital data storage. Written text (due to the limited character set and the use of discrete symbols – the alphabet in most cases) The abacus was created sometime between 1000 BC and 500 BC, it later became a form of calculation frequency. Nowadays it can be used as a very advanced, yet basic digital calculator that uses beads on rows to represent numbers. Beads only have meaning in discrete up and down states, not in analog in-between states. A beacon is perhaps the simplest non-electronic digital signal, with just two states (on and off). In particular, smoke signals are one of the oldest examples of a digital signal, where an analog "carrier" (smoke) is modulated with a blanket to generate a digital signal (puffs) that conveys information. Morse code uses six digital states—dot, dash, intra-character gap (between each dot or dash), short gap (between each letter), medium gap (between words), and long gap (between sentences)—to send messages via a variety of potential carriers such as electricity or light, for example using an electrical telegraph or a flashing light. The Braille uses a six-bit code rendered as dot patterns. Flag semaphore uses rods or flags held in particular positions to send messages to the receiver watching them some distance away. International maritime signal flags have distinctive markings that represent letters of the alphabet to allow ships to send messages to each other. More recently invented, a modem modulates an analog "carrier" signal (such as sound) to encode binary electrical digital information, as a series of binary digital sound pulses. A slightly earlier, surprisingly reliable version of the same concept was to bundle a sequence of audio digital "signal" and "no signal" information (i.e. "sound" and "silence") on magnetic cassette tape for use with early home computers. See also Analog-to-digital converter Barker code Binary number Comparison of analog and digital recording Data (computer science) Data remanence Digital architecture Digital art Digital control Digital divide Digital electronics Digital infinity Digital native Digital physics Digital recording Digital Revolution Digital video Digital-to-analog converter Internet forum References Further reading Tocci, R. 2006. Digital Systems: Principles and Applications (10th Edition). Prentice Hall. Digital media Computer data Digital systems Digital technology Consumer electronics
Digital data
[ "Technology" ]
1,984
[ "Information and communications technology", "Digital systems", "Computer data", "Digital media", "Information systems", "Digital technology", "Data", "Multimedia" ]
8,286
https://en.wikipedia.org/wiki/Domino%20effect
A domino effect is the cumulative effect produced when one event sets off a series of similar or related events, a form of chain reaction. The term is an analogy to a falling row of dominoes. It typically refers to a linked sequence of events where the time between successive events is relatively short. The term can be used literally (about a series of actual collisions) or metaphorically (about causal linkages within systems such as global finance or politics). The literal, mechanical domino effect is exploited in Rube Goldberg machines. In chemistry, the principle applies to a domino reaction, in which one chemical reaction sets up the conditions necessary for a subsequent one that soon follows. In the realm of process safety, a domino-effect accident is an initial undesirable event triggering additional ones in related equipment or facilities, leading to a total incident effect more severe than the primary accident alone. The metaphorical usage implies that an outcome is inevitable or highly likely (as it has already started to happen) – a form of slippery slope argument. When this outcome is actually unlikely (the argument is fallacious), it has also been called the domino fallacy. See also References Further reading Metaphors referring to objects Causality
Domino effect
[ "Physics" ]
245
[]
8,293
https://en.wikipedia.org/wiki/Diffusion%20pump
Diffusion pumps use a high speed jet of vapor to direct gas molecules in the pump throat down into the bottom of the pump and out the exhaust. They were the first type of high vacuum pumps operating in the regime of free molecular flow, where the movement of the gas molecules can be better understood as diffusion than by conventional fluid dynamics. Wolfgang Gaede invented the pump in 1915 and named it a diffusion pump, since his design was based on the finding that gas cannot diffuse against the vapor stream, but will be carried with it to the exhaust. However, the principle of operation might be more precisely described as gas-jet pump, since diffusion also plays a role in other types of high vacuum pumps. In modern textbooks, the diffusion pump is categorized as a momentum transfer pump. The diffusion pump is widely used in both industrial and research applications. Most modern diffusion pumps use silicone oil or polyphenyl ethers as the working fluid. History In the late 19th century, most vacuums were created using a Sprengel pump, which had the advantage of being very simple to operate, and capable of achieving quite good vacuum given enough time. Compared to later pumps, however, the pumping speed was very slow and the vapor pressure of the liquid mercury limited the ultimate vacuum. Following his invention of the molecular pump, Wolfgang Gaede invented the diffusion pump in 1915, and originally used elemental mercury as the working fluid. After its invention, the design was quickly commercialized by Leybold. It was then improved by Irving Langmuir and W. Crawford. Cecil Reginald Burch discovered the possibility of using oil in place of mercury in 1928. Oil diffusion pumps An oil diffusion pump is used to achieve higher vacuum (lower pressure) than is possible by use of positive displacement pumps alone. Although its use has been mainly associated with the high-vacuum range, diffusion pumps today can produce pressures approaching the ultra-high-vacuum range when properly used with modern fluids and accessories. The features that make the diffusion pump attractive for high and ultra-high vacuum use are its high pumping speed for all gases and low cost per unit pumping speed when compared with other types of pump used in the same vacuum range. Diffusion pumps cannot discharge directly into the atmosphere, so a mechanical forepump is typically used to keep the outlet (backing) pressure low enough for the pump to operate. The oil diffusion pump is operated with an oil of low vapor pressure. The high speed jet is generated by boiling the fluid and directing the vapor through a jet assembly. Note that the oil is gaseous when entering the nozzles. Within the nozzles, the flow changes from laminar to supersonic and molecular. Often, several jets are used in series to enhance the pumping action. The outside of the diffusion pump is cooled using either air flow, water lines or a water-filled jacket. As the vapor jet hits the outer cooled shell of the diffusion pump, the working fluid condenses and is recovered and directed back to the boiler. The pumped gases continue flowing to the base of the pump at increased pressure, flowing out through the diffusion pump outlet, where they are compressed to ambient pressure by the secondary mechanical forepump and exhausted. Unlike turbomolecular pumps and cryopumps, diffusion pumps have no moving parts and as a result are quite durable and reliable. They can function over a range of inlet pressures spanning many orders of magnitude. They are driven only by convection and thus have a very low energy efficiency. 
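The free-molecular-flow regime mentioned at the start of this article sets in when the mean free path of the gas molecules exceeds the dimensions of the pump throat. A rough Python estimate using the standard kinetic-theory formula is sketched below; the nitrogen molecular diameter is an illustrative textbook value, not taken from this article.

```python
# Rough estimate of the mean free path of nitrogen at room temperature as a
# function of pressure, using the kinetic-theory formula
#   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
D_N2 = 3.7e-10          # effective diameter of an N2 molecule, m (assumed value)
T = 293.0               # room temperature, K

def mean_free_path(pressure_pa):
    return K_B * T / (math.sqrt(2) * math.pi * D_N2 ** 2 * pressure_pa)

for p in (1e5, 1e2, 1e-1, 1e-4):     # pressures in pascals
    print(f"{p:8.0e} Pa -> {mean_free_path(p):.3e} m")
# At rough vacuum the mean free path is microscopic; at high vacuum it exceeds
# the size of the pump, which is why molecular (diffusion) behaviour dominates.
```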
One major disadvantage of diffusion pumps is the tendency to backstream oil into the vacuum chamber. This oil can contaminate surfaces inside the chamber or upon contact with hot filaments or electrical discharges may result in carbonaceous or siliceous deposits. Due to backstreaming, oil diffusion pumps are not suitable for use with highly sensitive analytical equipment or other applications which require an extremely clean vacuum environment, but mercury diffusion pumps may be in the case of ultra high vacuum chambers used for metal deposition. Often cold traps and baffles are used to minimize backstreaming, although this results in some loss of pumping speed. The oil of a diffusion pump cannot be exposed to the atmosphere when hot. If this occurs, the oil will oxidise and has to be replaced. If a fire occurs, the smoke and residue may contaminate other parts of the system. Oil types The least expensive diffusion pump oils are based on hydrocarbons which have been purified by double-distillation. Compared with the other fluids, they have higher vapor pressure, so are usually limited to a pressure of . They are also the most likely to burn or explode if exposed to oxidizers. The most common silicone oils used in diffusion pumps are trisiloxanes, which contain the chemical group Si-O-Si-O-Si, to which various phenyl groups or methyl groups are attached. These are available as the so-called 702 and 703 blends, which were formerly manufactured by Dow Corning. These can be further separated into 704 and 705 oils, which are made up of the isomers of tetraphenyl tetramethyl trisiloxane and pentaphenyl trimethyl trisiloxane respectively. For pumping reactive species, usually a polyphenyl ether based oil is used. These oils are the most chemical and heat resistant type of diffusion pump oil. Steam ejectors The steam ejector is a popular form of pump for vacuum distillation and freeze-drying. A jet of steam entrains the vapour that must be removed from the vacuum chamber. Steam ejectors can have single or multiple stages, with and without condensers in between the stages. While both steam ejectors and diffusion pumps use jets of vapor to entrain gas, they work on fundamentally different principles - steam ejectors rely on viscous flow and mixing to pump gas, whereas diffusion pumps use molecular diffusion. This has several consequences. In diffusion pumps, the inlet pressure can be much lower than the static pressure of jet, whereas in steam ejectors the two pressures are about the same. Also, diffusion pumps are capable of much higher compression ratios, and cannot discharge directly to atmosphere. See also Turbomolecular pump Vacuum pump Aspirator (pump) References External links An oil diffusion pump built from glass by the Arizona State University Main Further reading Vacuum pumps
Diffusion pump
[ "Physics", "Engineering" ]
1,275
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
8,301
https://en.wikipedia.org/wiki/Distillation
Distillation, also classical distillation, is the process of separating the component substances of a liquid mixture of two or more chemically discrete substances; the separation process is realized by way of the selective boiling of the mixture and the condensation of the vapors in a still. Distillation can operate over a wide range of pressures from 0.14 bar (e.g., ethylbenzene/styrene) to nearly 21 bar (e.g.,propylene/propane) and is capable of separating feeds with high volumetric flowrates and various components that cover a range of relative volatilities from only 1.17 (o-xylene/m-xylene) to 81.2 (water/ethylene glycol). Distillation provides a convenient and time-tested solution to separate a diversity of chemicals in a continuous manner with high purity. However, distillation has an enormous environmental footprint, resulting in the consumption of approximately 25% of all industrial energy use. The key issue is that distillation operates based on phase changes, and this separation mechanism requires vast energy inputs. Dry distillation (thermolysis and pyrolysis) is the heating of solid materials to produce gases that condense either into fluid products or into solid products. The term dry distillation includes the separation processes of destructive distillation and of chemical cracking, breaking down large hydrocarbon molecules into smaller hydrocarbon molecules. Moreover, a partial distillation results in partial separations of the mixture's components, which process yields nearly-pure components; partial distillation also realizes partial separations of the mixture to increase the concentrations of selected components. In either method, the separation process of distillation exploits the differences in the relative volatility of the component substances of the heated mixture. In the industrial applications of classical distillation, the term distillation is used as a unit of operation that identifies and denotes a process of physical separation, not a chemical reaction; thus an industrial installation that produces distilled beverages, is a distillery of alcohol. These are some applications of the chemical separation process that is distillation: Distilling fermented products to yield alcoholic beverages with a high content by volume of ethyl alcohol. Desalination to produce potable water and for medico-industrial applications. Crude oil stabilisation, a partial distillation to reduce the vapor pressure of crude oil, which thus is safe to store and to transport, and thereby reduces the volume of atmospheric emissions of volatile hydrocarbons. Fractional distillation used in the midstream operations of an oil refinery for producing fuels and chemical raw materials for livestock feed. Cryogenic Air separation into the component gases — oxygen, nitrogen, and argon — for use as industrial gases. Chemical synthesis to separate impurities and unreacted materials. History Iron Age Early evidence of distillation was found on Akkadian tablets dated describing perfumery operations. The tablets provided textual evidence that an early, primitive form of distillation was known to the Babylonians of ancient Mesopotamia. Classical antiquity Greek and Roman terminology According to British chemist T. Fairley, neither the Greeks nor the Romans had any term for the modern concept of distillation. Words like "distill" would have referred to something else, in most cases a part of some process unrelated to what now is known as distillation. 
Both Fairley and German chemical engineer Norbert Kockmann have made this point. According to Dutch chemical historian Robert J. Forbes, the word distillare (to drip off) when used by the Romans, e.g. Seneca and Pliny the Elder, was "never used in our sense". Aristotle Aristotle knew that water condensing from evaporating seawater is fresh. Letting seawater evaporate and condense into freshwater cannot be called "distillation", for distillation involves boiling, but the experiment may have been an important step towards distillation. Alexandrian chemists Early evidence of distillation has been found related to alchemists working in Alexandria in Roman Egypt in the 1st century CE. Distilled water has been in use since at least the time of Alexander of Aphrodisias, who described the process. Work on distilling other liquids continued in early Byzantine Egypt under Zosimus of Panopolis in the 3rd century. Ancient India and China (1–500 CE) Distillation was practiced in the ancient Indian subcontinent, which is evident from baked clay retorts and receivers found at Taxila, Shaikhan Dheri, and Charsadda in Pakistan and Rang Mahal in India dating to the early centuries of the Common Era. Frank Raymond Allchin says these terracotta distillation tubes were "made to imitate bamboo". These "Gandhara stills" were only capable of producing very weak liquor, as there was no efficient means of collecting the vapors at low heat. Distillation in China may have begun at the earliest during the Eastern Han dynasty (1st–2nd century CE). Islamic Golden Age Medieval Muslim chemists such as Jābir ibn Ḥayyān (Latin: Geber, ninth century) and Abū Bakr al-Rāzī (Latin: Rhazes) experimented extensively with the distillation of various substances. The fractional distillation of organic substances plays an important role in the works attributed to Jābir, such as in 'The Book of Seventy', translated into Latin by Gerard of Cremona. The Jabirian experiments with fractional distillation of animal and vegetable substances, and to a lesser degree also of mineral substances, are the main topic of an originally Arabic work falsely attributed to Avicenna that was translated into Latin and would go on to form the most important alchemical source for Roger Bacon. The distillation of wine is attested in Arabic works attributed to al-Kindī and to al-Fārābī, and in the 28th book of a work by al-Zahrāwī (Latin: Abulcasis, 936–1013) that was later translated into Latin. In the twelfth century, recipes for the production of "burning water" (i.e., ethanol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists. The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained. Medieval China The distillation of beverages began in the Southern Song (10th–13th century) and Jin (12th–13th century) dynasties, according to archaeological evidence. A still was found in an archaeological site in Qinglong, Hebei province, China, dating back to the 12th century. Distilled beverages were common during the Yuan dynasty (13th–14th century). 
Modern era In 1500, German alchemist Hieronymus Brunschwig published a book translated as The Book of the Art of Distillation out of Simple Ingredients, the first book solely dedicated to the subject of distillation, followed in 1512 by a much expanded version. Soon afterwards, in 1518, the oldest surviving distillery in Europe, The Green Tree Distillery, was founded. In 1651, John French published The Art of Distillation, the first major English compendium on the practice, but it has been claimed that much of it derives from Brunschwig's work. It includes diagrams with people in them, showing the industrial rather than bench scale of the operation. As alchemy evolved into the science of chemistry, vessels called retorts came to be used for distillations. Both alembics and retorts are forms of glassware with long necks pointing to the side at a downward angle to act as air-cooled condensers to condense the distillate and let it drip downward for collection. Later, copper alembics were invented. Riveted joints were often kept tight by using various mixtures, for instance a dough made of rye flour. These alembics often featured a cooling system around the beak, using cold water, for instance, which made the condensation of alcohol more efficient. These were called pot stills. Today, the retorts and pot stills have been largely supplanted by more efficient distillation methods in most industrial processes. However, the pot still is still widely used for the elaboration of some fine alcohols, such as cognac, Scotch whisky, Irish whiskey, tequila, rum, cachaça, and some vodkas. Pot stills made of various materials (wood, clay, stainless steel) are also used by bootleggers in various countries. Small pot stills are also sold for use in the domestic production of flower water or essential oils. Early forms of distillation involved batch processes using one vaporization and one condensation. Purity was improved by further distillation of the condensate. Greater volumes were processed by simply repeating the distillation. Chemists reportedly carried out as many as 500 to 600 distillations in order to obtain a pure compound. In the early 19th century, the basics of modern techniques, including pre-heating and reflux, were developed. In 1822, Anthony Perrier developed one of the first continuous stills, and then, in 1826, Robert Stein improved that design to make his patent still. In 1830, Aeneas Coffey got a patent for improving the design even further. Coffey's continuous still may be regarded as the archetype of modern petrochemical units. The French engineer Armand Savalle developed his steam regulator around 1846. In 1877, Ernest Solvay was granted a U.S. patent for a tray column for ammonia distillation, and the same and subsequent years saw developments on this theme for oils and spirits. With the emergence of chemical engineering as a discipline at the end of the 19th century, scientific rather than empirical methods could be applied. The developing petroleum industry in the early 20th century provided the impetus for the development of accurate design methods, such as the McCabe–Thiele method by Ernest Thiele and the Fenske equation. The first industrial plant in the United States to use distillation as a means of ocean desalination opened in Freeport, Texas in 1961 with the hope of bringing water security to the region. The availability of powerful computers has allowed direct computer simulations of distillation columns. 
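To give a concrete sense of what even the simplest such simulation looks like, the following sketch is an illustrative toy model rather than any particular industrial simulator; the binary mixture, the constant relative volatility of 2.5, and the stage count are all assumptions chosen for the example. It steps an ideal column at total reflux one equilibrium stage at a time, in Python:

def ideal_cascade(x_feed=0.5, alpha=2.5, n_stages=6):
    """Toy simulation of an ideal staged column at total reflux with a constant
    relative volatility: the vapor leaving each stage becomes the liquid on the
    stage above, so each stage applies one equilibrium enrichment step."""
    x = x_feed
    profile = [x]
    for _ in range(n_stages):
        x = alpha * x / (1.0 + (alpha - 1.0) * x)  # equilibrium vapor, then condensed
        profile.append(x)
    return profile

if __name__ == "__main__":
    for stage, x in enumerate(ideal_cascade()):
        print(f"stage {stage}: mole fraction of light component = {x:.3f}")

Each pass through the loop corresponds to one theoretical plate, so the printed profile shows how quickly repeated vapor–liquid equilibration enriches the lighter component.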
Applications The application of distillation can roughly be divided into four groups: laboratory scale, industrial distillation, distillation of herbs for perfumery and medicinals (herbal distillate), and food processing. The latter two are distinctly different from the former two in that distillation is not used as a true purification method but more to transfer all volatiles from the source materials to the distillate in the processing of beverages and herbs. The main difference between laboratory scale distillation and industrial distillation is that laboratory scale distillation is often performed on a batch basis, whereas industrial distillation often occurs continuously. In batch distillation, the composition of the source material, the vapors of the distilling compounds, and the distillate change during the distillation. In batch distillation, a still is charged (supplied) with a batch of feed mixture, which is then separated into its component fractions, which are collected sequentially from most volatile to less volatile, with the bottoms – remaining least or non-volatile fraction – removed at the end. The still can then be recharged and the process repeated. In continuous distillation, the source materials, vapors, and distillate are kept at a constant composition by carefully replenishing the source material and removing fractions from both vapor and liquid in the system. This results in more precise control of the separation process. Idealized model The boiling point of a liquid is the temperature at which the vapor pressure of the liquid equals the pressure around the liquid, enabling bubbles to form without being crushed. A special case is the normal boiling point, where the vapor pressure of the liquid equals the ambient atmospheric pressure. It is a misconception that in a liquid mixture at a given pressure, each component boils at the boiling point corresponding to the given pressure, allowing the vapors of each component to collect separately and purely. However, this does not occur, even in an idealized system. Idealized models of distillation are essentially governed by Raoult's law and Dalton's law and assume that vapor–liquid equilibria are attained. Raoult's law states that the vapor pressure of a solution is dependent on 1) the vapor pressure of each chemical component in the solution and 2) the fraction of solution each component makes up, a.k.a. the mole fraction. This law applies to ideal solutions, or solutions that have different components but whose molecular interactions are the same as or very similar to pure solutions. Dalton's law states that the total pressure is the sum of the partial pressures of each individual component in the mixture. When a multi-component liquid is heated, the vapor pressure of each component will rise, thus causing the total vapor pressure to rise. When the total vapor pressure reaches the pressure surrounding the liquid, boiling occurs and liquid turns to gas throughout the bulk of the liquid. A mixture with a given composition has one boiling point at a given pressure when the components are mutually soluble. A mixture of constant composition does not have multiple boiling points. An implication of one boiling point is that lighter components never cleanly "boil first". At boiling point, all volatile components boil, but for a component, its percentage in the vapor is the same as its percentage of the total vapor pressure. 
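As a rough numerical illustration of how Raoult's and Dalton's laws together fix both the boiling point and the vapor composition of an ideal binary mixture, the sketch below finds the bubble point of an equimolar benzene–toluene liquid by bisection. The Antoine coefficients are approximate literature values, and ideal-solution behavior is an assumption of the example:

# Antoine equation: log10(P_sat / mmHg) = A - B / (C + T/degC)
# Coefficients are approximate literature values (an assumption of this sketch).
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}

def p_sat(component, t_c):
    """Pure-component vapor pressure in mmHg at temperature t_c (Celsius)."""
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_c))

def bubble_point(x_benzene, p_total=760.0):
    """Bubble-point temperature (C) of a benzene/toluene liquid by bisection.
    Raoult's law: each component's partial pressure is x_i * P_sat_i(T);
    Dalton's law: the mixture boils when the partial pressures sum to p_total."""
    x = {"benzene": x_benzene, "toluene": 1.0 - x_benzene}
    lo, hi = 60.0, 130.0  # bracket between the two pure-component boiling points
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(x[c] * p_sat(c, mid) for c in x) > p_total:
            hi = mid  # the mixture already boils below this temperature
        else:
            lo = mid
    t = 0.5 * (lo + hi)
    # Vapor mole fractions follow from each component's share of the total pressure.
    y = {c: x[c] * p_sat(c, t) / p_total for c in x}
    return t, y

if __name__ == "__main__":
    t, y = bubble_point(x_benzene=0.5)
    print(f"50/50 benzene-toluene boils near {t:.1f} C")
    print(f"vapor is about {y['benzene']:.0%} benzene")  # enriched in the lighter component

At atmospheric pressure the equimolar liquid boils near 92 °C, between the two pure-component boiling points, and the vapor it produces is roughly 70% benzene, which is exactly the enrichment that distillation exploits.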
Lighter components have a higher partial pressure and, thus, are concentrated in the vapor, but heavier volatile components also have a (smaller) partial pressure and necessarily vaporize also, albeit at a lower concentration in the vapor. Indeed, batch distillation and fractionation succeed by varying the composition of the mixture. In batch distillation, the batch vaporizes, which changes its composition; in fractionation, liquid higher in the fractionation column contains more lights and boils at lower temperatures. Therefore, starting from a given mixture, it appears to have a boiling range instead of a boiling point, although this is because its composition changes: each intermediate mixture has its own, singular boiling point. The idealized model is accurate in the case of chemically similar liquids, such as benzene and toluene. In other cases, severe deviations from Raoult's law and Dalton's law are observed, most famously in the mixture of ethanol and water. These compounds, when heated together, form an azeotrope, which is when the vapor phase and liquid phase contain the same composition. Although there are computational methods that can be used to estimate the behavior of a mixture of arbitrary components, the only way to obtain accurate vapor–liquid equilibrium data is by measurement. It is not possible to completely purify a mixture of components by distillation, as this would require each component in the mixture to have a zero partial pressure. If ultra-pure products are the goal, then further chemical separation must be applied. When a binary mixture is vaporized and the other component, e.g., a salt, has zero partial pressure for practical purposes, the process is simpler. Batch or differential distillation Heating an ideal mixture of two volatile substances, A and B, with A having the higher volatility, or lower boiling point, in a batch distillation setup (such as in an apparatus depicted in the opening figure) until the mixture is boiling results in a vapor above the liquid that contains a mixture of A and B. The ratio between A and B in the vapor will be different from the ratio in the liquid. The ratio in the liquid will be determined by how the original mixture was prepared, while the ratio in the vapor will be enriched in the more volatile compound, A (due to Raoult's Law, see above). The vapor goes through the condenser and is removed from the system. This, in turn, means that the ratio of compounds in the remaining liquid is now different from the initial ratio (i.e., more enriched in B than in the starting liquid). The result is that the ratio in the liquid mixture is changing, becoming richer in component B. This causes the boiling point of the mixture to rise, which results in a rise in the temperature in the vapor, which results in a changing ratio of A : B in the gas phase (as distillation continues, there is an increasing proportion of B in the gas phase). This results in a slowly changing ratio of A : B in the distillate. If the difference in vapour pressure between the two components A and B is large – generally expressed as the difference in boiling points – the mixture in the beginning of the distillation is highly enriched in component A, and when component A has distilled off, the boiling liquid is enriched in component B. 
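This drift in composition during a simple batch distillation can be made concrete with a small numerical sketch; the constant relative volatility, the starting composition, and the fraction distilled below are all assumptions chosen for illustration:

def batch_distillation(x_a=0.5, alpha=2.5, steps=1000, fraction_distilled=0.7):
    """Track the liquid composition x_a (mole fraction of the more volatile A)
    as a batch is distilled, assuming a constant relative volatility alpha."""
    liquid = 1.0          # total moles charged to the still
    x = x_a               # mole fraction of A in the liquid
    dn = liquid * fraction_distilled / steps  # moles vaporized per step
    for _ in range(steps):
        # Equilibrium vapor composition for a constant relative volatility.
        y = alpha * x / (1.0 + (alpha - 1.0) * x)
        # Remove a small slug of vapor; the liquid loses more A than B.
        moles_a = x * liquid - y * dn
        liquid -= dn
        x = moles_a / liquid
    return x

if __name__ == "__main__":
    x_final = batch_distillation()
    print(f"After distilling 70% of the charge, the still liquid is only "
          f"{x_final:.2f} mole fraction A (started at 0.50)")

As vapor rich in A is withdrawn, the still liquid is progressively depleted in A, which is why the boiling temperature and the distillate composition both shift over the course of the run.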
Continuous distillation Continuous distillation is an ongoing distillation in which a liquid mixture is continuously (without interruption) fed into the process and separated fractions are removed continuously as output streams over the course of the operation. Continuous distillation produces a minimum of two output fractions, including at least one volatile distillate fraction, which has boiled and been separately captured as a vapor and then condensed to a liquid. There is always a bottoms (or residue) fraction, which is the least volatile residue that has not been separately captured as a condensed vapor. Continuous distillation differs from batch distillation in that concentrations do not change over time. Continuous distillation can be run at a steady state for an arbitrary amount of time. For any source material of specific composition, the main variables that affect the purity of products in continuous distillation are the reflux ratio and the number of theoretical equilibrium stages, in practice determined by the number of trays or the height of packing. Reflux is a flow from the condenser back to the column, which generates a recycle that allows a better separation with a given number of trays. Equilibrium stages are ideal steps where compositions achieve vapor–liquid equilibrium, repeating the separation process and allowing better separation given a reflux ratio. A column with a high reflux ratio may have fewer stages, but it refluxes a large amount of liquid, giving a wide column with a large holdup. Conversely, a column with a low reflux ratio must have a large number of stages, thus requiring a taller column. General improvements Both batch and continuous distillations can be improved by making use of a fractionating column on top of the distillation flask. The column improves separation by providing a larger surface area for the vapor and condensate to come into contact, which helps the vapor and condensate remain close to equilibrium for as long as possible. The column can even consist of small subsystems ('trays' or 'dishes') which all contain an enriched, boiling liquid mixture, all with their own vapor–liquid equilibrium. There are differences between laboratory-scale and industrial-scale fractionating columns, but the principles are the same. Examples of laboratory-scale fractionating columns (in increasing efficiency) include: Air condenser Vigreux column (usually laboratory scale only) Packed column (packed with glass beads, metal pieces, or other chemically inert material) Spinning band distillation system. Laboratory procedures Laboratory scale distillations are almost exclusively run as batch distillations. The device used in distillation, sometimes referred to as a still, consists at a minimum of a reboiler or pot in which the source material is heated, a condenser in which the heated vapor is cooled back to the liquid state, and a receiver in which the concentrated or purified liquid, called the distillate, is collected. Several laboratory scale techniques for distillation exist (see also distillation types). A completely sealed distillation apparatus could experience extreme and rapidly varying internal pressure, which could cause it to burst open at the joints. Therefore, some path is usually left open (for instance, at the receiving flask) to allow the internal pressure to equalize with atmospheric pressure. Alternatively, a vacuum pump may be used to keep the apparatus at a lower than atmospheric pressure. 
If the substances involved are air- or moisture-sensitive, the connection to the atmosphere can be made through one or more drying tubes packed with materials that scavenge the undesired air components, or through bubblers that provide a movable liquid barrier. Finally, the entry of undesired air components can be prevented by pumping a low but steady flow of suitable inert gas, like nitrogen, into the apparatus. Simple distillation In simple distillation, the vapor is immediately channeled into a condenser. Consequently, the distillate is not pure but rather its composition is identical to the composition of the vapors at the given temperature and pressure. That concentration follows Raoult's law. As a result, simple distillation is effective only when the liquid boiling points differ greatly (rule of thumb is 25 °C) or when separating liquids from non-volatile solids or oils. For these cases, the vapor pressures of the components are usually different enough that the distillate may be sufficiently pure for its intended purpose. A cutaway schematic of a simple distillation operation is shown at right. The starting liquid 15 in the boiling flask 2 is heated by a combined hotplate and magnetic stirrer 13 via a silicone oil bath (orange, 14). The vapor flows through a short Vigreux column 3, then through a Liebig condenser 5, and is cooled by water (blue) that circulates through ports 6 and 7. The condensed liquid drips into the receiving flask 8, sitting in a cooling bath (blue, 16). The adapter 10 has a connection 9 that may be fitted to a vacuum pump. The components are connected by ground glass joints. Fractional distillation In many cases, the boiling points of the components in the mixture will be sufficiently close that Raoult's law must be taken into consideration. Therefore, fractional distillation must be used to separate the components by repeated vaporization-condensation cycles within a packed fractionating column. This separation, by successive distillations, is also referred to as rectification. As the solution to be purified is heated, its vapors rise to the fractionating column. As it rises, it cools, condensing on the condenser walls and the surfaces of the packing material. Here, the condensate continues to be heated by the rising hot vapors; it vaporizes once more. However, the composition of the fresh vapors is determined once again by Raoult's law. Each vaporization-condensation cycle (called a theoretical plate) will yield a purer solution of the more volatile component. In reality, each cycle at a given temperature does not occur at exactly the same position in the fractionating column; theoretical plate is thus a concept rather than an accurate description. More theoretical plates lead to better separations. A spinning band distillation system uses a spinning band of Teflon or metal to force the rising vapors into close contact with the descending condensate, increasing the number of theoretical plates. Steam distillation Like vacuum distillation, steam distillation is a method for distilling compounds which are heat-sensitive. The temperature of the steam is easier to control than the surface of a heating element and allows a high rate of heat transfer without heating at a very high temperature. This process involves bubbling steam through a heated mixture of the raw material. By Raoult's law, some of the target compound will vaporize (in accordance with its partial pressure). 
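A hedged numerical sketch of why steam distillation keeps temperatures low: for two essentially immiscible liquids, each phase contributes close to its full pure-component vapor pressure, so the pair boils when those pressures sum to the ambient pressure, below either pure boiling point. The Antoine coefficients below are approximate literature values and water/toluene is simply a convenient textbook pair chosen for illustration:

# Antoine equation, log10(P/mmHg) = A - B/(C + T/degC); approximate coefficients.
ANTOINE = {
    "water":   (8.07131, 1730.630, 233.426),
    "toluene": (6.95464, 1344.800, 219.480),
}

def p_sat(component, t_c):
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_c))

def steam_distillation_temperature(p_total=760.0):
    """Find the temperature at which the two immiscible liquids co-boil,
    i.e. where the sum of their pure vapor pressures reaches p_total."""
    lo, hi = 25.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_sat("water", mid) + p_sat("toluene", mid) > p_total:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    t = steam_distillation_temperature()
    ratio = p_sat("toluene", t) / p_sat("water", t)
    print(f"water and toluene co-distill near {t:.0f} C")
    print(f"molar ratio of toluene to water in the vapor is about {ratio:.2f}")

The pair co-distills in the mid-80s °C, well below the boiling point of either pure liquid, and the vapor carries both components in proportion to their vapor pressures.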
The vapor mixture is cooled and condensed, usually yielding a layer of oil and a layer of water. Steam distillation of various aromatic herbs and flowers can result in two products: an essential oil as well as a watery herbal distillate. The essential oils are often used in perfumery and aromatherapy while the watery distillates have many applications in aromatherapy, food processing and skin care. Vacuum distillation Some compounds have very high boiling points. To boil such compounds, it is often better to lower the pressure at which such compounds are boiled instead of increasing the temperature. Once the pressure is lowered to the vapor pressure of the compound (at the given temperature), boiling and the rest of the distillation process can commence. This technique is referred to as vacuum distillation and it is commonly found in the laboratory in the form of the rotary evaporator. This technique is also very useful for compounds which boil beyond their decomposition temperature at atmospheric pressure and which would therefore be decomposed by any attempt to boil them under atmospheric pressure. Molecular distillation Molecular distillation is vacuum distillation below the pressure of 0.01 torr. 0.01 torr is one order of magnitude above high vacuum, where fluids are in the free molecular flow regime, i.e., the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, rate of evaporation no longer depends on pressure. That is, because the continuum assumptions of fluid dynamics no longer apply, mass transport is governed by molecular dynamics rather than fluid dynamics. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between. Molecular distillation is used industrially for purification of oils. Short path distillation Short path distillation is a distillation technique that involves the distillate travelling a short distance, often only a few centimeters, and is normally done at reduced pressure. A classic example would be a distillation involving the distillate travelling from one glass bulb to another, without the need for a condenser separating the two chambers. This technique is often used for compounds which are unstable at high temperatures or to purify small amounts of compound. The advantage is that the heating temperature can be considerably lower (at reduced pressure) than the boiling point of the liquid at standard pressure, and the distillate only has to travel a short distance before condensing. A short path ensures that little compound is lost on the sides of the apparatus. The Kugelrohr apparatus is a kind of short path distillation method which often contains multiple chambers to collect distillate fractions. Air-sensitive vacuum distillation Some compounds have high boiling points as well as being air sensitive. A simple vacuum distillation system as exemplified above can be used, whereby the vacuum is replaced with an inert gas after the distillation is complete. However, this is a less satisfactory system if one desires to collect fractions under a reduced pressure. To do this a "cow" or "pig" adaptor can be added to the end of the condenser, or for better results or for very air sensitive compounds a Perkin triangle apparatus can be used. 
The Perkin triangle has, via a series of glass or Teflon taps, the means to allow fractions to be isolated from the rest of the still without the main body of the distillation being removed from either the vacuum or heat source, so that it can remain in a state of reflux. To do this, the sample is first isolated from the vacuum by means of the taps, the vacuum over the sample is then replaced with an inert gas (such as nitrogen or argon), and the sample can then be stoppered and removed. A fresh collection vessel can then be added to the system, evacuated and linked back into the distillation system via the taps to collect a second fraction, and so on, until all fractions have been collected. Zone distillation Zone distillation is a distillation process carried out in a long container, with partial melting of the refined matter in a moving liquid zone and condensation of the vapor in the solid phase as the condensate is drawn into a cold area. The process has been worked out in theory. When the zone heater moves from the top to the bottom of the container, a solid condensate with an irregular impurity distribution forms. The purest part of the condensate can then be extracted as product. The process may be iterated many times by moving (without turnover) the received condensate to the bottom part of the container in place of the refined matter. The irregularity of the impurity distribution in the condensate (that is, the efficiency of purification) increases with the number of iterations. Zone distillation is the distillation analog of zone recrystallization. The impurity distribution in the condensate is described by the known equations of zone recrystallization, with the crystallization distribution coefficient k replaced by the separation factor α of distillation. Closed-system vacuum distillation (cryovap) Non-condensable gas can be expelled from the apparatus by the vapor of a relatively volatile co-solvent, which spontaneously evaporates during initial pumping; this can be achieved with a regular oil or diaphragm pump. Other types The process of reactive distillation involves using the reaction vessel as the still. In this process, the product is usually significantly lower boiling than its reactants. As the product is formed from the reactants, it is vaporized and removed from the reaction mixture. This technique is an example of a continuous vs. a batch process; advantages include less downtime to charge the reaction vessel with starting material, and less workup. Distillation "over a reactant" could be classified as a reactive distillation. It is typically used to remove a volatile impurity from the distillation feed. For example, a little lime may be added to remove carbon dioxide from water followed by a second distillation with a little sulfuric acid added to remove traces of ammonia. Catalytic distillation is the process by which the reactants are catalyzed while being distilled to continuously separate the products from the reactants. This method is used to assist equilibrium reactions in reaching completion. Pervaporation is a method for the separation of mixtures of liquids by partial vaporization through a non-porous membrane. Extractive distillation is defined as distillation in the presence of a miscible, high boiling, relatively non-volatile component, the solvent, that forms no azeotrope with the other components in the mixture. 
Flash evaporation (or partial evaporation) is the partial vaporization that occurs when a saturated liquid stream undergoes a reduction in pressure by passing through a throttling valve or other throttling device. This process is one of the simplest unit operations, being equivalent to a distillation with only one equilibrium stage. Codistillation is distillation which is performed on mixtures in which the two compounds are not miscible. In the laboratory, the Dean-Stark apparatus is used for this purpose to remove water from synthesis products. The Bleidner apparatus is another example with two refluxing solvents. Membrane distillation is a type of distillation in which vapors of a mixture to be separated are passed through a membrane, which selectively permeates one component of the mixture. Vapor pressure difference is the driving force. It has potential applications in seawater desalination and in removal of organic and inorganic components. The unit process of evaporation may also be called "distillation": In rotary evaporation a vacuum distillation apparatus is used to remove bulk solvents from a sample. Typically the vacuum is generated by a water aspirator or a membrane pump. A Kugelrohr is a short path distillation apparatus typically used (generally in combination with a high vacuum) to distill high-boiling (> 300 °C) compounds. The apparatus consists of an oven in which the compound to be distilled is placed, a receiving portion which is outside of the oven, and a means of rotating the sample. The vacuum is normally generated by using a high vacuum pump. Other uses: Dry distillation or destructive distillation, despite the name, is not truly distillation, but rather a chemical reaction known as pyrolysis in which solid substances are heated in an inert or reducing atmosphere and any volatile fractions, containing high-boiling liquids and products of pyrolysis, are collected. The destructive distillation of wood to give methanol is the root of its common name – wood alcohol. Freeze distillation is an analogous method of purification using freezing instead of evaporation. It is not truly distillation, but a recrystallization where the product is the mother liquor, and does not produce products equivalent to distillation. This process is used in the production of ice beer and ice wine to increase ethanol and sugar content, respectively. It is also used to produce applejack. Unlike distillation, freeze distillation concentrates poisonous congeners rather than removing them; as a result, many countries prohibit such applejack as a health measure. Distillation by evaporation, by contrast, can separate these congeners since they have different boiling points. Distillation by filtration: In early alchemy and chemistry (then known as natural philosophy), a form of "distillation" by capillary filtration was practiced. A series of cups or bowls was set upon a stepped support and connected by "wicks" of cotton or felt-like material wetted with water or another clear liquid; the liquid dripped from step to step through the wetted cloth by capillary action, leaving solid materials behind in the upper bowls and "purifying" the product as it descended. Those using the method called this "distillatio" by filtration. 
Azeotropic process Interactions between the components of the solution create properties unique to the solution, as most processes entail non-ideal mixtures, where Raoult's law does not hold. Such interactions can result in a constant-boiling azeotrope which behaves as if it were a pure compound (i.e., boils at a single temperature instead of a range). At an azeotrope, the solution contains the given component in the same proportion as the vapor, so that evaporation does not change the purity, and distillation does not result in separation. For example, 95.6% ethanol (by mass) in water forms an azeotrope at 78.1 °C. If the azeotrope is not considered sufficiently pure for use, there exist some techniques to break the azeotrope to give a more pure distillate. These techniques are known as azeotropic distillation. Some techniques achieve this by "jumping" over the azeotropic composition (by adding another component to create a new azeotrope, or by varying the pressure). Others work by chemically or physically removing or sequestering the impurity. For example, to purify ethanol beyond 95%, a drying agent (or desiccant, such as potassium carbonate) can be added to convert the soluble water into insoluble water of crystallization. Molecular sieves are often used for this purpose as well. Immiscible liquids, such as water and toluene, easily form azeotropes. Commonly, these azeotropes are referred to as low boiling azeotropes because the boiling point of the azeotrope is lower than the boiling point of either pure component. The temperature and composition of the azeotrope are easily predicted from the vapor pressure of the pure components, without use of Raoult's law. The azeotrope is easily broken in a distillation set-up by using a liquid–liquid separator (a decanter) to separate the two liquid layers that are condensed overhead. Only one of the two liquid layers is refluxed to the distillation set-up. High boiling azeotropes, such as a 20 percent by weight mixture of hydrochloric acid in water, also exist. As implied by the name, the boiling point of the azeotrope is greater than the boiling point of either pure component. Breaking an azeotrope with unidirectional pressure manipulation The boiling points of components in an azeotrope overlap to form a band. By exposing an azeotrope to a vacuum or positive pressure, it is possible to bias the boiling point of one component away from the other by exploiting the differing vapor pressure curves of each; the curves may overlap at the azeotropic point, but are unlikely to remain identical further along the pressure axis to either side of the azeotropic point. When the bias is great enough, the two boiling points no longer overlap and so the azeotropic band disappears. This method can remove the need to add other chemicals to a distillation, but it has two potential drawbacks. Under negative pressure, power for a vacuum source is needed and the reduced boiling points of the distillates require that the condenser be run cooler to prevent distillate vapors being lost to the vacuum source. Increased cooling demands will often require additional energy and possibly new equipment or a change of coolant. Alternatively, if positive pressures are required, standard glassware cannot be used, energy must be used for pressurization and there is a higher chance of side reactions occurring in the distillation, such as decomposition, due to the higher temperatures required to effect boiling. 
A unidirectional distillation will rely on a pressure change in one direction, either positive or negative. Pressure-swing distillation Pressure-swing distillation is essentially the same as the unidirectional distillation used to break azeotropic mixtures, but here both positive and negative pressures may be employed. This improves the selectivity of the distillation and allows a chemist to optimize distillation by avoiding extremes of pressure and temperature that waste energy. This is particularly important in commercial applications. One example of the application of pressure-swing distillation is the industrial purification of ethyl acetate after its catalytic synthesis from ethanol. Industrial process Large scale industrial distillation applications include both batch and continuous fractional, vacuum, azeotropic, extractive, and steam distillation. The most widely used industrial applications of continuous, steady-state fractional distillation are in petroleum refineries, petrochemical and chemical plants and natural gas processing plants. To control and optimize such industrial distillation, a standardized laboratory method, ASTM D86, has been established. This test method covers the atmospheric distillation of petroleum products using a laboratory batch distillation unit to quantitatively determine the boiling range characteristics of petroleum products. Industrial distillation is typically performed in large, vertical cylindrical columns known as distillation towers or distillation columns, with diameters ranging from under a metre to several metres and heights ranging from a few metres to many tens of metres or more. When the process feed has a diverse composition, as in distilling crude oil, liquid outlets at intervals up the column allow for the withdrawal of different fractions or products having different boiling points or boiling ranges. The "lightest" products (those with the lowest boiling point) exit from the top of the columns and the "heaviest" products (those with the highest boiling point) exit from the bottom of the column and are often called the bottoms. Industrial towers use reflux to achieve a more complete separation of products. Reflux refers to the portion of the condensed overhead liquid product from a distillation or fractionation tower that is returned to the upper part of the tower as shown in the schematic diagram of a typical, large-scale industrial distillation tower. Inside the tower, the downflowing reflux liquid provides cooling and condensation of the upflowing vapors thereby increasing the efficiency of the distillation tower. The more reflux that is provided for a given number of theoretical plates, the better the tower's separation of lower boiling materials from higher boiling materials. Alternatively, the more reflux that is provided for a given desired separation, the fewer the number of theoretical plates required. Chemical engineers must choose what combination of reflux rate and number of plates is both economically and physically feasible for the products purified in the distillation column. Such industrial fractionating towers are also used in cryogenic air separation, producing liquid oxygen, liquid nitrogen, and high purity argon. Distillation of chlorosilanes also enables the production of high-purity silicon for use as a semiconductor. Design and operation of a distillation tower depends on the feed and desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. 
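As a sketch of what such a shortcut method looks like, the Fenske equation estimates the minimum number of theoretical stages needed at total reflux for a binary split; the purity targets and the constant average relative volatility below are hypothetical values chosen only to show the calculation:

import math

def fenske_min_stages(x_distillate, x_bottoms, alpha):
    """Minimum number of theoretical stages at total reflux (Fenske equation)
    for a binary split, given the light-key mole fractions in the distillate
    and bottoms and an average relative volatility alpha."""
    separation = (x_distillate / (1 - x_distillate)) * ((1 - x_bottoms) / x_bottoms)
    return math.log(separation) / math.log(alpha)

if __name__ == "__main__":
    # Hypothetical spec: 95% light key overhead, 5% in the bottoms, alpha = 2.5.
    n_min = fenske_min_stages(0.95, 0.05, 2.5)
    print(f"At total reflux roughly {n_min:.1f} theoretical stages are required")

An actual design would pair this with a minimum-reflux estimate and a correlation, or a full stage-by-stage simulation, to trade the number of stages against the reflux ratio.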
For a multi-component feed, simulation models are used both for design and operation. Moreover, the efficiencies of the vapor–liquid contact devices (referred to as "plates" or "trays") used in distillation towers are typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a distillation tower needs more trays than the number of theoretical vapor–liquid equilibrium stages. A variety of models have been postulated to estimate tray efficiencies. In modern industrial uses, a packing material is used in the column instead of trays when low pressure drops across the column are required. Other factors that favor packing are: vacuum systems, smaller diameter columns, corrosive systems, systems prone to foaming, systems requiring low liquid holdup, and batch distillation. Conversely, factors that favor plate columns are: presence of solids in feed, high liquid rates, large column diameters, complex columns, columns with wide feed composition variation, columns with a chemical reaction, absorption columns, columns limited by foundation weight tolerance, low liquid rate, large turn-down ratio and those processes subject to process surges. This packing material can be either random dumped packing, such as Raschig rings, or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors pass across this wetted surface, where mass transfer takes place. Unlike conventional tray distillation in which every tray represents a separate point of vapor–liquid equilibrium, the vapor–liquid equilibrium curve in a packed column is continuous. However, when modeling packed columns, it is useful to compute a number of "theoretical stages" to denote the separation efficiency of the packed column with respect to more traditional trays. Differently shaped packings have different surface areas and void space between packings. Both these factors affect packing performance. Another factor in addition to the packing shape and surface area that affects the performance of random or structured packing is the liquid and vapor distribution entering the packed bed. The number of theoretical stages required to make a given separation is calculated using a specific vapor to liquid ratio. If the liquid and vapor are not evenly distributed across the superficial tower area as they enter the packed bed, the liquid to vapor ratio will not be correct in the packed bed and the required separation will not be achieved. The packing will appear to not be working properly. The height equivalent to a theoretical plate (HETP) will be greater than expected. The problem is not the packing itself but the mal-distribution of the fluids entering the packed bed. Liquid mal-distribution is more frequently the problem than vapor. The design of the liquid distributors used to introduce the feed and reflux to a packed bed is critical to making the packing perform to its maximum efficiency. Methods of evaluating the effectiveness of a liquid distributor to evenly distribute the liquid entering a packed bed can be found in the literature. Considerable work has been done on this topic by Fractionation Research, Inc. (commonly known as FRI). Multi-effect distillation The goal of multi-effect distillation is to increase the energy efficiency of the process, for use in desalination, or in some cases one stage in the production of ultrapure water. 
The number of effects is inversely proportional to the kW·h/m3 of water recovered and refers to the volume of water recovered per unit of energy compared with single-effect distillation, which requires roughly 636 kW·h/m3. Multi-stage flash distillation can achieve more than 20 effects with thermal energy input. Vapor compression evaporation in commercial large-scale units can achieve around 72 effects with electrical energy input, according to manufacturers. There are many other types of multi-effect distillation processes, including one referred to as simply multi-effect distillation (MED), in which multiple chambers, with intervening heat exchangers, are employed. In food processing Beverages Carbohydrate-containing plant materials are allowed to ferment, producing a dilute solution of ethanol in the process. Spirits such as whiskey and rum are prepared by distilling these dilute solutions of ethanol. Components other than ethanol, including water, esters, and other alcohols, are collected in the condensate and account for the flavor of the beverage. Some of these beverages are then stored in barrels or other containers to acquire more flavor compounds and characteristic flavors. See also Atmospheric distillation of crude oil Clyssus Fragrance extraction Low-temperature distillation Microdistillery Sublimation Dixon rings Random column packing
Distillation
[ "Physics", "Chemistry" ]
9,581
[ "Physical phenomena", "Phase transitions", "Unit operations", "Separation processes", "Phases of matter", "Critical phenomena", "Distillation", "Alchemical processes", "nan", "Chemical process engineering", "Statistical mechanics", "Matter" ]
8,302
https://en.wikipedia.org/wiki/David%20Hilbert
David Hilbert (23 January 1862 – 14 February 1943) was a German mathematician and philosopher of mathematics and one of the most influential mathematicians of his time. Hilbert discovered and developed a broad range of fundamental ideas including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, spectral theory of operators and its application to integral equations, mathematical physics, and the foundations of mathematics (particularly proof theory). He adopted and defended Georg Cantor's set theory and transfinite numbers. In 1900, he presented a collection of problems that set a course for mathematical research of the 20th century. Hilbert and his students contributed to establishing rigor and developed important tools used in modern mathematical physics. He was a cofounder of proof theory and mathematical logic. Life Early life and education Hilbert, the first of two children and only son of Otto, a county judge, and Maria Therese Hilbert (née Erdtmann), the daughter of a merchant, was born in the Province of Prussia, Kingdom of Prussia, either in Königsberg (according to Hilbert's own statement) or in Wehlau (known since 1946 as Znamensk) near Königsberg where his father worked at the time of his birth. His paternal grandfather was David Hilbert, a judge and Geheimrat. His mother Maria had an interest in philosophy, astronomy and prime numbers, while his father Otto taught him Prussian virtues. After his father became a city judge, the family moved to Königsberg. David's sister, Elise, was born when he was six. He began his schooling aged eight, two years later than the usual starting age. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium (Collegium fridericianum, the same school that Immanuel Kant had attended 140 years before); but, after an unhappy period, he transferred to (late 1879) and graduated from (early 1880) the more science-oriented Wilhelm Gymnasium. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg, the "Albertina". In early 1882, Hermann Minkowski (two years younger than Hilbert and also a native of Königsberg, but who had gone to Berlin for three semesters) returned to Königsberg and entered the university. Hilbert developed a lifelong friendship with the shy, gifted Minkowski. Career In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius (i.e., an associate professor). An intense and fruitful scientific exchange among the three began, and Minkowski and Hilbert especially would exercise a reciprocal influence over each other at various times in their scientific careers. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen ("On the invariant properties of special binary forms, in particular the spherical harmonic functions"). Hilbert remained at the University of Königsberg as a Privatdozent (senior lecturer) from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world. He remained there for the rest of his life. Göttingen school Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, Ernst Zermelo, and Carl Gustav Hempel. John von Neumann was his assistant. 
At the University of Göttingen, Hilbert was surrounded by a social circle of some of the most important mathematicians of the 20th century, such as Emmy Noether and Alonzo Church. Among his 69 Ph.D. students in Göttingen were many who later became famous mathematicians, including (with date of thesis): Otto Blumenthal (1898), Felix Bernstein (1901), Hermann Weyl (1908), Richard Courant (1910), Erich Hecke (1910), Hugo Steinhaus (1911), and Wilhelm Ackermann (1925). Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen, the leading mathematical journal of the time. He was elected an International Member of the United States National Academy of Sciences in 1907. Personal life In 1892, Hilbert married Käthe Jerosch (1864–1945), who was the daughter of a Königsberg merchant, "an outspoken young lady with an independence of mind that matched [Hilbert's]." While at Königsberg, they had their one child, Franz Hilbert (1893–1969). Franz suffered throughout his life from mental illness, and after he was admitted into a psychiatric clinic, Hilbert said, "From now on, I must consider myself as not having a son." His attitude toward Franz brought Käthe considerable sorrow. Hilbert considered the mathematician Hermann Minkowski to be his "best and truest friend". Hilbert was baptized and raised a Calvinist in the Prussian Evangelical Church. He later left the Church and became an agnostic. He also argued that mathematical truth was independent of the existence of God or other a priori assumptions. When Galileo Galilei was criticized for failing to stand up for his convictions on the Heliocentric theory, Hilbert objected: "But [Galileo] was not an idiot. Only an idiot could believe that scientific truth needs martyrdom; that may be necessary in religion, but scientific results prove themselves in due time." Later years Like Albert Einstein, Hilbert had closest contacts with the Berlin Group whose leading founders had studied under Hilbert in Göttingen (Kurt Grelling, Hans Reichenbach and Walter Dubislav). Around 1925, Hilbert developed pernicious anemia, a then-untreatable vitamin deficiency whose primary symptom is exhaustion; his assistant Eugene Wigner described him as subject to "enormous fatigue" and how he "seemed quite old," and that even after eventually being diagnosed and treated, he "was hardly a scientist after 1925, and certainly not a Hilbert." Hilbert was elected to the American Philosophical Society in 1932. Hilbert lived to see the Nazis purge many of the prominent faculty members at University of Göttingen in 1933. Those forced out included Hermann Weyl (who had taken Hilbert's chair when he retired in 1930), Emmy Noether and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic, and co-authored with him the important book Grundlagen der Mathematik (which eventually appeared in two volumes, in 1934 and 1939). This was a sequel to the Hilbert–Ackermann book Principles of Mathematical Logic from 1928. Hermann Weyl's successor was Helmut Hasse. About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust. Rust asked whether "the Mathematical Institute really suffered so much because of the departure of the Jews." Hilbert replied, "Suffered? It doesn't exist any longer, does it?" Death By the time Hilbert died in 1943, the Nazis had nearly completely restaffed the university, as many of the former faculty had either been Jewish or married to Jews. 
Hilbert's funeral was attended by fewer than a dozen people, only two of whom were fellow academics, among them Arnold Sommerfeld, a theoretical physicist and also a native of Königsberg. News of his death only became known to the wider world several months after he died. The epitaph on his tombstone in Göttingen consists of the famous lines he spoke at the conclusion of his retirement address to the Society of German Scientists and Physicians on 8 September 1930. The words were given in response to the Latin maxim "Ignoramus et ignorabimus" ("We do not know and we shall not know"): "Wir müssen wissen. Wir werden wissen." ("We must know. We shall know.") The day before Hilbert pronounced these phrases at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel—in a round table discussion during the Conference on Epistemology held jointly with the Society meetings—tentatively announced the first expression of his incompleteness theorem. Gödel's incompleteness theorems show that even elementary axiomatic systems such as Peano arithmetic are either self-contradicting or contain logical propositions that are impossible to prove or disprove within that system. Contributions to mathematics and physics Solving Gordan's Problem Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. Attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. To solve what had become known in some circles as Gordan's Problem, Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated Hilbert's basis theorem, showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof—it did not display "an object"—but rather, it was an existence proof and relied on use of the law of excluded middle in an infinite extension. Hilbert sent his results to the Mathematische Annalen. Gordan, the house expert on the theory of invariants for the Mathematische Annalen, could not appreciate the revolutionary nature of Hilbert's theorem and rejected the article, criticizing the exposition because it was insufficiently comprehensive; his oft-quoted comment was that this was not mathematics, but theology. Klein, on the other hand, recognized the importance of the work, and guaranteed that it would be published without any alterations. Encouraged by Klein, Hilbert extended his method in a second article, providing estimations on the maximum degree of the minimum set of generators, and he sent it once more to the Annalen. After having read the manuscript, Klein wrote to him in warmly appreciative terms. Later, after the usefulness of Hilbert's method was universally recognized, Gordan himself would concede its value. For all his successes, the nature of his proof created more trouble than Hilbert could have imagined. Although Kronecker had conceded, Hilbert would later respond to others' similar criticisms that "many different constructions are subsumed under one fundamental idea"—in other words (to quote Reid): "Through a proof of existence, Hilbert had been able to obtain a construction"; "the proof" (i.e. the symbols on the page) was "the object". Not all were convinced. 
While Kronecker would die soon afterwards, his constructivist philosophy would continue with the young Brouwer and his developing intuitionist "school", much to Hilbert's torment in his later years. Indeed, Hilbert would lose his "gifted pupil" Weyl to intuitionism—"Hilbert was disturbed by his former student's fascination with the ideas of Brouwer, which aroused in Hilbert the memory of Kronecker". Brouwer the intuitionist in particular opposed the use of the Law of Excluded Middle over infinite sets (as Hilbert had used it). Hilbert responded that taking the principle of the excluded middle away from the mathematician would be like forbidding the boxer the use of his fists or the astronomer his telescope. Nullstellensatz In the subject of algebra, a field is called algebraically closed if and only if every polynomial over it has a root in it. Under this condition, Hilbert gave a criterion for when a collection of polynomials p1, ..., pk in several variables has a common root: this is the case if and only if there do not exist polynomials q1, ..., qk such that 1 = q1p1 + ... + qkpk. This result is known as the Hilbert root theorem, or "Hilberts Nullstellensatz" in German. He also proved that the correspondence between vanishing ideals and their vanishing sets is bijective between affine varieties and radical ideals in the polynomial ring. Curve In 1890, Giuseppe Peano had published an article in the Mathematische Annalen describing the historically first space-filling curve. In response, Hilbert designed his own construction of such a curve, which is now called the Hilbert curve. Approximations to this curve are constructed iteratively according to simple geometric replacement rules. The curve itself is then the pointwise limit. Axiomatization of geometry The text Grundlagen der Geometrie (tr.: Foundations of Geometry) published by Hilbert in 1899 proposes a formal set, called Hilbert's axioms, substituting for the traditional axioms of Euclid. They avoid weaknesses identified in those of Euclid, whose works at the time were still used textbook-fashion. It is difficult to specify the axioms used by Hilbert without referring to the publication history of the Grundlagen since Hilbert changed and modified them several times. The original monograph was quickly followed by a French translation, in which Hilbert added V.2, the Completeness Axiom. An English translation, authorized by Hilbert, was made by E.J. Townsend and copyrighted in 1902. This translation incorporated the changes made in the French translation and so is considered to be a translation of the 2nd edition. Hilbert continued to make changes in the text and several editions appeared in German. The 7th edition was the last to appear in Hilbert's lifetime. New editions followed the 7th, but the main text was essentially not revised. Hilbert's approach signaled the shift to the modern axiomatic method. In this, Hilbert was anticipated by Moritz Pasch's work from 1882. Axioms are not taken as self-evident truths. Geometry may treat things about which we have powerful intuitions, but it is not necessary to assign any explicit meaning to the undefined concepts. The elements, such as point, line, plane, and others, could be substituted, as Hilbert is reported to have said to Schoenflies and Kötter, by tables, chairs, glasses of beer and other such objects. It is their defined relationships that are discussed. Hilbert first enumerates the undefined concepts: point, line, plane, lying on (a relation between points and lines, points and planes, and lines and planes), betweenness, congruence of pairs of points (line segments), and congruence of angles. 
The axioms unify both the plane geometry and solid geometry of Euclid in a single system. 23 problems Hilbert put forth a highly influential list consisting of 23 unsolved problems at the International Congress of Mathematicians in Paris in 1900. This is generally reckoned as the most successful and deeply considered compilation of open problems ever to be produced by an individual mathematician. After reworking the foundations of classical geometry, Hilbert could have extrapolated to the rest of mathematics. His approach differed from the later "foundationalist" Russell–Whitehead or "encyclopedist" Nicolas Bourbaki, and from his contemporary Giuseppe Peano. The mathematical community as a whole could engage in problems of which he had identified as crucial aspects of important areas of mathematics. The problem set was launched as a talk, "The Problems of Mathematics", presented during the course of the Second International Congress of Mathematicians held in Paris. The introduction of the speech that Hilbert gave said: He presented fewer than half the problems at the Congress, which were published in the acts of the Congress. In a subsequent publication, he extended the panorama, and arrived at the formulation of the now-canonical 23 Problems of Hilbert. See also Hilbert's twenty-fourth problem. The full text is important, since the exegesis of the questions still can be a matter of inevitable debate, whenever it is asked how many have been solved. Some of these were solved within a short time. Others have been discussed throughout the 20th century, with a few now taken to be unsuitably open-ended to come to closure. Some continue to remain challenges. The following are the headers for Hilbert's 23 problems as they appeared in the 1902 translation in the Bulletin of the American Mathematical Society. 1. Cantor's problem of the cardinal number of the continuum. 2. The compatibility of the arithmetical axioms. 3. The equality of the volumes of two tetrahedra of equal bases and equal altitudes. 4. Problem of the straight line as the shortest distance between two points. 5. Lie's concept of a continuous group of transformations without the assumption of the differentiability of the functions defining the group. 6. Mathematical treatment of the axioms of physics. 7. Irrationality and transcendence of certain numbers. 8. Problems of prime numbers (The "Riemann Hypothesis"). 9. Proof of the most general law of reciprocity in any number field. 10. Determination of the solvability of a Diophantine equation. 11. Quadratic forms with any algebraic numerical coefficients 12. Extensions of Kronecker's theorem on Abelian fields to any algebraic realm of rationality 13. Impossibility of the solution of the general equation of 7th degree by means of functions of only two arguments. 14. Proof of the finiteness of certain complete systems of functions. 15. Rigorous foundation of Schubert's enumerative calculus. 16. Problem of the topology of algebraic curves and surfaces. 17. Expression of definite forms by squares. 18. Building up of space from congruent polyhedra. 19. Are the solutions of regular problems in the calculus of variations always necessarily analytic? 20. The general problem of boundary values (Boundary value problems in PDE's). 21. Proof of the existence of linear differential equations having a prescribed monodromy group. 22. Uniformization of analytic relations by means of automorphic functions. 23. Further development of the methods of the calculus of variations. 
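As an illustration of the Curve section above, the iterative construction of the Hilbert curve can be sketched in a few lines of code. The function below is one standard index-to-coordinate mapping, offered purely as an illustration and not taken from Hilbert's own 1891 presentation; applied to d = 0, 1, ..., 4^order − 1 it yields, in order, the vertices of the order-th approximation on a 2^order × 2^order grid.

def hilbert_d2xy(order, d):
    """Map an index d (0 <= d < 4**order) to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                  # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Vertices of the second approximation: 16 grid points, each adjacent to the next.
points = [hilbert_d2xy(2, d) for d in range(16)]

Successive approximations visit every cell of ever finer grids, and the pointwise limit of these piecewise-linear curves is the space-filling Hilbert curve itself.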
Formalism In an account that had become standard by the mid-century, Hilbert's problem set was also a kind of manifesto that opened the way for the development of the formalist school, one of three major schools of mathematics of the 20th century. According to the formalist, mathematics is manipulation of symbols according to agreed upon formal rules. It is therefore an autonomous activity of thought. Program In 1920, Hilbert proposed a research project in metamathematics that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical foundation. He believed that in principle this could be done by showing that: all of mathematics follows from a correctly chosen finite system of axioms; and that some such axiom system is provably consistent through some means such as the epsilon calculus. He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the ignorabimus, still an active issue in his time in German thought, and traced back in that formulation to Emil du Bois-Reymond. This program is still recognizable in the most popular philosophy of mathematics, where it is usually called formalism. For example, the Bourbaki group adopted a watered-down and selective version of it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works, and (b) supporting the axiomatic method as a research tool. This approach has been successful and influential in relation with Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic. Hilbert wrote in 1919: Hilbert published his views on the foundations of mathematics in the 2-volume work, Grundlagen der Mathematik. Gödel's work Hilbert and the mathematicians who worked with him in his enterprise were committed to the project. His attempt to support axiomatized mathematics with definitive principles, which could banish theoretical uncertainties, ended in failure. Gödel demonstrated that any non-contradictory formal system, which was comprehensive enough to include at least arithmetic, cannot demonstrate its completeness by way of its own axioms. In 1931 his incompleteness theorem showed that Hilbert's grand plan was impossible as stated. The second point cannot in any reasonable way be combined with the first point, as long as the axiom system is genuinely finitary. Nevertheless, the subsequent achievements of proof theory at the very least clarified consistency as it relates to theories of central concern to mathematicians. Hilbert's work had started logic on this course of clarification; the need to understand Gödel's work then led to the development of recursion theory and then mathematical logic as an autonomous discipline in the 1930s. The basis for later theoretical computer science, in the work of Alonzo Church and Alan Turing, also grew directly out of this "debate". Functional analysis Around 1909, Hilbert dedicated himself to the study of differential and integral equations; his work had direct consequences for important parts of modern functional analysis. In order to carry out these studies, Hilbert introduced the concept of an infinite dimensional Euclidean space, later called Hilbert space. His work in this part of analysis provided the basis for important contributions to the mathematics of physics in the next two decades, though from an unanticipated direction. 
Later on, Stefan Banach amplified the concept, defining Banach spaces. Hilbert spaces are an important class of objects in the area of functional analysis, particularly of the spectral theory of self-adjoint linear operators, that grew up around it during the 20th century. Physics Until 1912, Hilbert was almost exclusively a pure mathematician. When planning a visit from Bonn, where he was immersed in studying physics, his fellow mathematician and friend Hermann Minkowski joked he had to spend 10 days in quarantine before being able to visit Hilbert. In fact, Minkowski seems responsible for most of Hilbert's physics investigations prior to 1912, including their joint seminar on the subject in 1905. In 1912, three years after his friend's death, Hilbert turned his focus to the subject almost exclusively. He arranged to have a "physics tutor" for himself. He started studying kinetic gas theory and moved on to elementary radiation theory and the molecular theory of matter. Even after the war started in 1914, he continued seminars and classes where the works of Albert Einstein and others were followed closely. By 1907, Einstein had framed the fundamentals of the theory of gravity, but then struggled for nearly 8 years to put the theory into its final form. By early summer 1915, Hilbert's interest in physics had focused on general relativity, and he invited Einstein to Göttingen to deliver a week of lectures on the subject. Einstein received an enthusiastic reception at Göttingen. Over the summer, Einstein learned that Hilbert was also working on the field equations and redoubled his own efforts. During November 1915, Einstein published several papers culminating in The Field Equations of Gravitation (see Einstein field equations). Nearly simultaneously, Hilbert published "The Foundations of Physics", an axiomatic derivation of the field equations (see Einstein–Hilbert action). Hilbert fully credited Einstein as the originator of the theory and no public priority dispute concerning the field equations ever arose between the two men during their lives. See more at priority. Additionally, Hilbert's work anticipated and assisted several advances in the mathematical formulation of quantum mechanics. His work was a key aspect of Hermann Weyl and John von Neumann's work on the mathematical equivalence of Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave equation, and his namesake Hilbert space plays an important part in quantum theory. In 1926, von Neumann showed that, if quantum states were understood as vectors in Hilbert space, they would correspond with both Schrödinger's wave function theory and Heisenberg's matrices. Throughout this immersion in physics, Hilbert worked on putting rigor into the mathematics of physics. While highly dependent on higher mathematics, physicists tended to be "sloppy" with it. To a pure mathematician like Hilbert, this was both ugly, and difficult to understand. As he began to understand physics and how physicists were using mathematics, he developed a coherent mathematical theory for what he found – most importantly in the area of integral equations. When his colleague Richard Courant wrote the now classic Methoden der mathematischen Physik (Methods of Mathematical Physics) including some of Hilbert's ideas, he added Hilbert's name as author even though Hilbert had not directly contributed to the writing. 
Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics was generally beyond them; the Courant–Hilbert book made it easier for them. Number theory Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof that shows there must be solutions for the problem rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area. He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi. Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture, for reasons that are anecdotal. Ernst Hellinger, a student of Hilbert, once told André Weil that Hilbert had announced in his seminar in the early 1900s that he expected the proof of the Riemann Hypothesis would be a consequence of Fredholm's work on integral equations with a symmetric kernel. Works His collected works (Gesammelte Abhandlungen) have been published several times. The original versions of his papers contained "many technical errors of varying degree"; when the collection was first published, the errors were corrected and it was found that this could be done without major changes in the statements of the theorems, with one exception—a claimed proof of the continuum hypothesis. The errors were nonetheless so numerous and significant that it took Olga Taussky-Todd three years to make the corrections. See also Concepts List of things named after David Hilbert Foundations of geometry Hilbert C*-module Hilbert cube Hilbert curve Hilbert matrix Hilbert metric Hilbert–Mumford criterion Hilbert number Hilbert ring Hilbert–Poincaré series Hilbert series and Hilbert polynomial Hilbert space Hilbert spectrum Hilbert system Hilbert transform Hilbert's arithmetic of ends Hilbert's paradox of the Grand Hotel Hilbert–Schmidt operator Hilbert–Smith conjecture Theorems Hilbert–Burch theorem Hilbert's irreducibility theorem Hilbert's Nullstellensatz Hilbert's theorem (differential geometry) Hilbert's Theorem 90 Hilbert's syzygy theorem Hilbert–Speiser theorem Other Brouwer–Hilbert controversy Direct method in the calculus of variations Entscheidungsproblem Geometry and the Imagination General relativity priority dispute Footnotes Citations Sources Primary literature in English translation 1918. "Axiomatic thought," 1114–1115. 1922. "The new grounding of mathematics: First report," 1115–1133. 1923. "The logical foundations of mathematics," 1134–1147. 1930. "Logic and the knowledge of nature," 1157–1165. 1931. "The grounding of elementary number theory," 1148–1156. 1904. "On the foundations of logic and arithmetic," 129–138. 1925. "On the infinite," 367–392. 1927. "The foundations of mathematics," with comment by Weyl and Appendix by Bernays, 464–489. Secondary literature , available at Gallica. 
The "Address" of Gabriel Bertrand of 20 December 1943 at the French Academy: he gives biographical sketches of the lives of recently deceased members, including Pieter Zeeman, David Hilbert and Georges Giraud. Bottazzini Umberto, 2003. Il flauto di Hilbert. Storia della matematica. UTET, Corry, L., Renn, J., and Stachel, J., 1997, "Belated Decision in the Hilbert-Einstein Priority Dispute," Science 278: nn-nn. Dawson, John W. Jr 1997. Logical Dilemmas: The Life and Work of Kurt Gödel. Wellesley MA: A. K. Peters. . Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton Univ. Press. Gray, Jeremy, 2000. The Hilbert Challenge. Mehra, Jagdish, 1974. Einstein, Hilbert, and the Theory of Gravitation. Reidel. Piergiorgio Odifreddi, 2003. Divertimento Geometrico. Le origini geometriche della logica da Euclide a Hilbert. Bollati Boringhieri, . A clear exposition of the "errors" of Euclid and of the solutions presented in the Grundlagen der Geometrie, with reference to non-Euclidean geometry. The definitive English-language biography of Hilbert. Sieg, Wilfried, and Ravaglia, Mark, 2005, "Grundlagen der Mathematik" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 981–99. (in English) Thorne, Kip, 1995. Black Holes and Time Warps: Einstein's Outrageous Legacy, W. W. Norton & Company; Reprint edition. . Georg von Wallwitz: Meine Herren, dies ist keine Badeanstalt. Wie ein Mathematiker das 20. Jahrhundert veränderte. Berenberg Verlag, Berlin 2017, ISBN 978-3-946334-24-8. The definitive German-language biography of Hilbert. External links Hilbert Bernays Project Hilbert's 23 Problems Address ICMM 2014 dedicated to the memory of D.Hilbert Hilbert's radio speech recorded in Königsberg 1930 (in German) , with English translation Wolfram MathWorld – Hilbert'Constant 'From Hilbert's Problems to the Future', lecture by Professor Robin Wilson, Gresham College, 27 February 2008 (available in text, audio and video formats). 1862 births 1943 deaths Scientists from Königsberg People from the Province of Prussia 19th-century German mathematicians 20th-century German mathematicians Foreign members of the Royal Society Foreign associates of the National Academy of Sciences German agnostics Formalism (deductive) Former Protestants German geometers German mathematical analysts German number theorists Operator theorists Recipients of the Pour le Mérite (civil class) German relativity theorists Academic staff of the University of Göttingen University of Königsberg alumni Academic staff of the University of Königsberg Philosophers of mathematics Members of the American Philosophical Society Recipients of the Cothenius Medal
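A closing aside to the Nullstellensatz section above: the stated criterion can be explored computationally with a Gröbner-basis calculation, a tool from much later commutative algebra rather than from Hilbert's own work. Over an algebraically closed field, a finite family of polynomials has no common root exactly when 1 lies in the ideal they generate, which shows up as the reduced Gröbner basis collapsing to {1}. The polynomials below are arbitrary illustrative choices, and the sketch uses the SymPy library.

from sympy import groebner, symbols

x, y = symbols('x y')

# x*y = 1 and x = 0 can never hold together, even over the complex numbers:
print(list(groebner([x*y - 1, x], x, y, order='lex')))        # [1]  -> no common root

# y = x**2 and y = 1 do meet (at x = 1 and x = -1):
print(list(groebner([y - x**2, y - 1], x, y, order='lex')))   # basis != [1] -> common roots exist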
David Hilbert
[ "Mathematics" ]
6,304
[ "Philosophers of mathematics" ]
8,308
https://en.wikipedia.org/wiki/Delft
Delft () is a city and municipality in the province of South Holland, Netherlands. It is located between Rotterdam, to the southeast, and The Hague, to the northwest. Together with them, it is a part of both the Rotterdam–The Hague metropolitan area and the Randstad. Delft is a popular tourist destination in the Netherlands, famous for its historical connections with the reigning House of Orange-Nassau, for its blue pottery, for being home to the painter Jan Vermeer, and for hosting Delft University of Technology (TU Delft). Historically, Delft played a highly influential role in the Dutch Golden Age. In terms of science and technology, thanks to the pioneering contributions of Antonie van Leeuwenhoek and Martinus Beijerinck, Delft can be considered to be the birthplace of microbiology. History Early history The city of Delft came into being beside a canal, the 'Delf', which comes from the word delven, meaning to delve or dig, and this led to the name Delft. At the elevated place where this 'Delf' crossed the creek wall of the silted up river Gantel, a Count established his manor, probably around 1075. Partly because of this, Delft became an important market town, the evidence for which can be seen in the size of its central market square. Having been a rural village in the early Middle Ages, Delft developed into a city, and on 15 April 1246, Count Willem II granted Delft its city charter. Trade and industry flourished. In 1389 the Delfshavensche Schie canal was dug through to the river Maas, where the port of Delfshaven was built, connecting Delft to the sea. Until the 17th century, Delft was one of the major cities of the then county (and later province) of Holland. In 1400, for example, the city had 6,500 inhabitants, making it the third largest city after Dordrecht (8,000) and Haarlem (7,000). In 1560, Amsterdam, with 28,000 inhabitants, had become the largest city, followed by Delft, Leiden and Haarlem, which each had around 14,000 inhabitants. In 1536, a large part of the city was destroyed by the great fire of Delft. The town's association with the House of Orange started when William of Orange (Willem van Oranje), nicknamed William the Silent (Willem de Zwijger), took up residence in 1572 in the former Saint-Agatha convent (subsequently called the Prinsenhof). At the time he was the leader of growing national Dutch resistance against Spanish occupation, known as the Eighty Years' War. By then Delft was one of the leading cities of Holland and was equipped with the necessary city walls to serve as a headquarters. In October 1573, an attack by Spanish forces was repelled in the Battle of Delft. After the Act of Abjuration was proclaimed in 1581, Delft became the de facto capital of the newly independent Netherlands, as the seat of the Prince of Orange. When William was shot dead on 10 July 1584 by Balthazar Gerards in the hall of the Prinsenhof (now the Prinsenhof Museum), the family's traditional burial place in Breda was still in the hands of the Spanish. Therefore, he was buried in the Delft Nieuwe Kerk (New Church), starting a tradition for the House of Orange that has continued to the present day. Around this time, Delft also occupied a prominent position in the field of printing. A number of Italian glazed earthenware makers settled in the city and introduced a new style. The tapestry industry also flourished when famous manufacturer François Spierincx moved to the city. 
In the 17th century, Delft experienced a new heyday, thanks to the presence of an office of the Dutch East India Company (VOC) (opened in 1602) and the manufacture of Delft Blue china. A number of notable artists based themselves in the city, including Leonard Bramer, Carel Fabritius, Pieter de Hoogh, Gerard Houckgeest, Emanuel de Witte, Jan Steen, and Johannes Vermeer. Reinier de Graaf and Antonie van Leeuwenhoek received international attention for their scientific research. Explosion The Delft Explosion, also known in history as the Delft Thunderclap, occurred on 12 October 1654 when a gunpowder store exploded, destroying much of the city. More than a hundred were killed and thousands were injured. About of gunpowder were stored in barrels in a magazine in a former Clarist convent in the Doelenkwartier district, where the Paardenmarkt is now located. Cornelis Soetens, the keeper of the magazine, opened the store to check a sample of the powder and a huge explosion followed. Fortunately, many citizens were away, visiting a market in Schiedam or a fair in The Hague. Today, the explosion is primarily remembered for killing Rembrandt's most promising pupil, Carel Fabritius, and destroying nearly all his works. Delft artist Egbert van der Poel painted several pictures of Delft showing the devastation. The gunpowder store (Dutch: Kruithuis) was subsequently re-housed, a 'cannonball's distance away', outside the city, in a new building designed by architect Pieter Post. Sights The city centre retains a large number of monumental buildings, while in many streets there are canals of which the banks are connected by typical bridges, altogether making this city a notable tourist destination. Historical buildings and other sights of interest include: Oude Kerk (Old Church), constructed between 1246 and 1350. Buried here: Piet Hein, Johannes Vermeer, Antonie van Leeuwenhoek. Nieuwe Kerk (New Church), constructed between 1381 and 1496. It contains the Dutch royal family's burial vault which, between funerals, is sealed with a cover stone. A statue of Hugo Grotius created by in 1886, located on the Markt near the Nieuwe Kerk. The Prinsenhof (Princes' Court), now a museum. City Hall on the Markt. The Oostpoort (Eastern gate), built around 1400. This is the only remaining gate of the old city walls. The Gemeenlandshuis Delfland, or Huyterhuis, built in 1505, which has housed the Delfland regional water authority since 1645. The Vermeer Centre in the re-built Guild house of St. Luke. The historical "Waag" building (Weigh house). Windmill De Roos, a tower mill built . Restored to working order in 2013. Another windmill that formerly stood in Delft, Het Fortuyn, was dismantled in 1917 and re-erected at the Netherlands Open Air Museum, Arnhem, Gelderland in 1920. Royal Delft also known as De Porceleyne Fles, is a great place which showcases Delft ware. Science Center attracts kids as well as adults. Culture Delft is well known for the Delft pottery ceramic products which were styled on the imported Chinese porcelain of the 17th century. The city had an early start in this area since it was a home port of the Dutch East India Company. It can still be seen at the pottery factories De Koninklijke Porceleyne Fles (or Royal Delft) and De Delftse Pauw, while new ceramics and ceramic art can be found at the Gallery Terra Delft. The painter Johannes Vermeer (1632–1675) was born in Delft. Vermeer used Delft streets and home interiors as the subject or background in his paintings. 
Several other famous painters lived and worked in Delft at that time, such as Pieter de Hoogh, Carel Fabritius, Nicolaes Maes, Gerard Houckgeest and Hendrick Cornelisz. van Vliet. They were all members of the Delft School. The Delft School is known for its images of domestic life and views of households, church interiors, courtyards, squares and the streets of Delft. The painters also produced pictures showing historic events, flowers, portraits for patrons and the court as well as decorative pieces of art. Delft supports creative arts' companies. From 2001 the , a building that had been disused since 1951, began to house small companies in the creative arts sector. Its demolition started in December 2009, making way for the new railway tunnel in Delft. The occupants of the building, as well as the name 'Bacinol', moved to another building in the city. The name Bacinol relates to Dutch penicillin research during WWII. Education Delft University of Technology (TU Delft) is one of four universities of technology in the Netherlands. It was founded as an academy for civil engineering in 1842 by King William II. As of 2022, well over 27,000 students are enrolled. The UNESCO-IHE Institute for Water Education, providing postgraduate education for people from developing countries, draws on the strong tradition in water management and hydraulic engineering of the Delft university. The Hague University of Applied Sciences has a building on the Delft University of Technology campus. It opened in 2009 and offers several bachelor's degrees for the Faculty of Technology, Innovation & Society. Inholland University of Applied Sciences also has a building on the Delft University of Technology campus. Several bachelor's degrees for the Agri, Food & Life Sciences faculty and the Engineering, Design and Computing faculty are being taught at the Delft campus. Economy In the local economic field, essential elements are: education; (amongst others Delft University of Technology) ( 21.651 students and 4.939 full-time employees), scientific research; (amongst others "TNO" Netherlands Organisation for Applied Scientific Research), Stichting Deltares, Nederlands Normalisatie-Instituut, UNESCO-IHE Institute for water education, Technopolis Innovation Park; tourism; (about one million registered visitors a year), industry; (DSM Gist Services BV, (Delftware) earthenware production by De Koninklijke Porceleyne Fles, Exact Software Nederland BV, TOPdesk, Ampelmann) retail; (IKEA (Inter IKEA Systems B.V., owner and worldwide franchisor of the IKEA Concept, is based in Delft), Makro, Eneco Energy NV). Nature and recreation East of Delft lies a relatively large nature and recreation area called the "Delftse Hout" ("Delft Wood"). Through the forest lie bike, horse-riding and footpaths. It also includes a vast lake (suitable for swimming and windsurfing), narrow beaches, a restaurant, and community gardens, plus camping ground and other recreational and sports facilities. (There is also a facility for renting bikes from the station.) Inside the city, apart from a central park, there are several smaller town parks, including "Nieuwe Plantage", "Agnetapark", "Kalverbos". There is also the Botanical Garden of the TU and an arboretum in Delftse Hout. Notable people Delft is the birthplace of: Dutch Golden Age Jacob Willemsz Delff the Elder, (ca. 1550–1601), portrait painter Michiel Jansz. van Mierevelt (1567–1641), painter Willem van der Vliet (c. 
1584–1642), painter Adriaen van de Venne (1589–1662), painter Adriaen Cornelisz van Linschoten (1590–1677), painter Daniël Mijtens (ca. 1590–1647/48), portrait painter Leonaert Bramer (1596–1674), painter of genre, religious, and history paintings Pieter Jansz van Asch (1603–ca. 1678), painter Evert van Aelst (1602–1657), still life painter Hendrick Cornelisz. van Vliet (ca. 1611–1675), painter of church interiors Harmen Steenwijck (ca. 1612–ca. 1656), painter of still lifes and fruit Jacob Willemsz Delff the Younger (1619–1661), portrait painter David Beck (1621–1656), portrait painter Egbert van der Poel (1621–1664), genre and landscape painter Daniel Vosmaer (1622–1666), painter Willem van Aelst (1627–1683), artist of still-lifes Hendrick van der Burgh (1627–after 1664), genre painter Johannes Vermeer (1632–1675), painter of domestic interior scenes Ary de Milde (1634–1708), ceramist Public thinking and service Christian van Adrichem (1533–1585), Catholic priest and theological writer Jan Joosten van Lodensteijn (1556–1623), one of the first Dutchmen in Japan Hugo Grotius (1583–1645), humanist, diplomat, lawyer, theologian and jurist who laid the foundations for international law Frederick Henry, Prince of Orange (1584–1647), sovereign prince of Orange and stadtholder of Holland, Zeeland, Utrecht, Guelders & Overijssel from 1625 to 1647 Philippus Baldaeus (1632–1671), minister in Jaffna Diederik Durven (1676–1740), Governor-General of the Dutch East Indies from 1729 to 1732 Abraham van der Weijden (1743–1773), ship's captain, initiated of Freemasonry in South Africa Gerrit Paape (1752–1803), painter of earthenware and stoneware, poet, journalist, novelist, judge, columnist and finally a ministerial civil servant Aegidius van Braam (1758–1822), naval vice-admiral Agneta Matthes (1847–1909), entrepreneur, manufactured yeast using the cooperative movement and housed workers at Agnetapark Henk Zeevalking (1922–2005), politician and jurist Piet Bukman (born 1934), politician and diplomat Klaas de Vries (born 1943), politician and jurist Atzo Nicolaï (born 1960), politician Marja van Bijsterveldt (born 1961), politician, Mayor of Delft since 2016 Alexander Pechtold (born 1965), politician and art historian Science and business Adolphus Vorstius (1597–1663), physician and botanist Martin van den Hove (1605–1639), astronomer and mathematician Antonie van Leeuwenhoek (1632–1723), father of microbiology and developer of the microscope Nicolaas Kruik (1678–1754), land surveyor, cartographer, astronomer, weatherman and eponym of the Museum De Cruquius Bernard Romans (ca. 1720-ca. 1783), land surveyor, artist, naturalist, and author Martin van Marum (1750–1837), physician, inventor, scientist and teacher Jacob Gijsbertus Samuël van Breda (1788–1867), biologist and geologist Philippe-Charles Schmerling (1791–1836), prehistorian, geologist and pioneer in paleontology Martinus Beijerinck (1851–1931), microbiologist, discovered viruses, lived and worked in Delft Guillaume Daniel Delprat CBE (1856–1937), metallurgist, mining engineer and businessman Frederik H. 
Kreuger (1928–2015), high-voltage scientist, academic and inventor Marjo van der Knaap (born 1958), professor of pediatric neurology, white matter researcher Antoni Folkers (born 1960), architect, humanist Peter Schrijver (born 1963), historical linguist Ionica Smeets (born 1979), mathematician, science journalist, TV presenter and academic Boyan Slat (born 1994), inventor and entrepreneur, CEO of The Ocean Cleanup Art Suzanne Manet (1829–1906), pianist, wife and model of painter Édouard Manet Betsy Perk (1833–1906), author of novels and plays, pioneer of the Dutch women's movement Ton Lutz (1919–2009) and Pieter Lutz (1927–2009), brothers and actors Bram Bogart (1921–2012), expressionist painter of the COBRA group Cor Dam (born 1935), sculptor, painter, illustrator and ceramist Kader Abdolah (born 1954), poet and columnist Michèle Van de Roer (born 1956), artist, designer, photographer and engraver Mariska Hulscher (born 1964), TV presenter Emma Kirchner (1830 - 1909), first woman photographer in Delft area Wessel van Diepen (born 1966), radio host, music producer and former TV presenter Rob Das (born 1969), film and TV actor, director and writer Jan-Willem van Ewijk (born 1970), film director, actor and screenwriter Ricky Koole (born 1972) a Dutch singer and film actress Vincent de Moor (born 1973), trance musician and remixer Roel van Velzen (born 1978), singer Marly van der Velden (born 1988), actress and fashion designer Rose Schmits (born c. 1988), potter and trans activist Sport Jan Thomée (1886–1954), footballer, team bronze medallist at the 1908 Summer Olympics Henri van Schaik (1899–1991), horse rider, team silver medallist in the 1936 Summer Olympics Tinus Osendarp (1916–2002), sprint runner, twice bronze medallist at the 1936 Summer Olympics Stien Kaiser (born 1938), speed skater, twice bronze medallist at the 1968 Winter Olympics and gold and silver medallist in the 1972 Winter Olympics Pieter van der Kruk (born 1941), heavyweight weightlifter and shot putter, competed at the 1968 Summer Olympics Jan Timman (born 1951), chess grandmaster, raised in Delft Ria Stalman (born 1951), discus thrower and shot putter, gold medallist in the discus at the 1984 Summer Olympics Frank Leistra (born 1960), field hockey goalkeeper, team bronze medallist at the 1988 Summer Olympics Ken Monkou (born 1964), football player with 356 club caps Eeke van Nes (born 1969), rower, team bronze medallist at the 1996 Summer Olympics and team silver medallist at the 2000 Summer Olympics Thamar Henneken (born 1979), freestyle swimmer, team silver medallist at the 2000 Summer Olympics Ard van Peppen (born 1985), footballer with over 350 club caps Sytske de Groot (born 1986), rower, team bronze medallist at the 2012 Summer Olympics Aaron Meijers (born 1987), footballer with almost 400 club caps Michaëlla Krajicek (born 1989), tennis player Arantxa Rus (born 1990), tennis player Kelly Vollebregt (born 1995), handball player Victoria Pelova (born 1999), football player Tijmen van der Helm (born 2004), racing driver Miscellaneous Nuna is a series of crewed solar-powered vehicles, built by students at the Delft University of Technology, that won the World solar challenge in Australia seven times in the last nine competitions (in 2001, 2003, 2005, 2007, 2013, 2015 and 2017). 
The so-called "Superbus" project aims to develop high-speed coaches capable of speeds of up to together with the supporting infrastructure including special highway lanes constructed separately next to the nation's highways; this project was led by Dutch astronaut professor Wubbo Ockels of the Delft University of Technology. Members of both Delft Student Rowing Clubs Proteus-Eretes and Laga have won many international trophies, including Olympic medals, in the past. Formula Student Team Delft is a student racing team that has won the Formula Student competition format in Germany three times in a row, their workplace is located along the shie. The Human Power Team Delft & Amsterdam, a team consisting mainly of students from the Delft University of Technology, has won The World Human Powered Speed Challenge (WHPSC) four times. This is an international contest for recumbents in the US state of Nevada, the aim of which is to break speed records. They set the world record of 133.78 kilometres an hour (83.13 mph) in 2013. International relations Twin towns Delft is twinned with: Transport Delft railway station; (As of February 2015, located in a new building.) Delft Campus railway station Trains stopping at these stations connect Delft with, among others, the nearby cities of Rotterdam and The Hague, as often as every five minutes, for most of the day. There are several bus routes from Delft to similar destinations. Trams frequently travel between Delft and The Hague via special double tracks crossing the city. The whole city center and adjacent areas are a paid on-street parking area. In 2018, with the day parking fee of 29.5 Euro, it was the most expensive on-street parking area in the Netherlands, with the city centers of Deventer and Dordrecht being second and third, respectively. See also Delftware Delft School (Dutch Golden Age painting) Dutch Golden Age List of films set in Delft RandstadRail Tanthof Bicycle-friendly Gallery Notes References Further reading Vermeer: A View of Delft, Anthony Bailey, Henry Holt & Company, 2001, External links Municipal Website of Delft Radio Netherlands: The day the world came to an end National Gallery, London: A View of Delft after the Explosion of 1654 TU Delft Develop Ambulance Drone Cities in the Netherlands Municipalities of South Holland Populated places in South Holland Industrial fires and explosions
Delft
[ "Chemistry" ]
4,534
[ "Industrial fires and explosions", "Explosions" ]
8,309
https://en.wikipedia.org/wiki/Duesberg%20hypothesis
The Duesberg hypothesis is the claim that AIDS is not caused by HIV, but instead that AIDS is caused by noninfectious factors such as recreational and pharmaceutical drug use and that HIV is merely a harmless passenger virus. The hypothesis was popularized by Peter Duesberg, a professor of biology at University of California, Berkeley, from whom the hypothesis gets its name. The scientific consensus is that the Duesberg hypothesis is incorrect and that HIV is the cause of AIDS. The most prominent supporters of the hypothesis are Duesberg himself, biochemist and vitamin proponent David Rasnick, and journalist Celia Farber. The scientific community generally contends that Duesberg's arguments in favor of the hypothesis are the result of cherry-picking predominantly outdated scientific data and selectively ignoring evidence that demonstrates HIV's role in causing AIDS. Role of legal and illegal drug use Duesberg argues that there is a statistical correlation between trends in recreational drug use and trends in AIDS cases. He argues that the epidemic of AIDS cases in the 1980s corresponds to a supposed epidemic of recreational drug use in the United States and Europe during the same time frame. These claims are not supported by epidemiologic data. The average yearly increase in opioid-related deaths from 1990 to 2002 was nearly three times the yearly increase from 1979 to 1990, with the greatest increase in 2000–2002, yet AIDS cases and deaths fell dramatically during the mid-to-late-1990s. Duesberg's claim that recreational drug use, rather than HIV, was the cause of AIDS has been specifically examined and found to be false. Cohort studies have found that only HIV-positive drug users develop opportunistic infections; HIV-negative drug users do not develop such infections, indicating that HIV rather than drug use is the cause of AIDS. Duesberg has also argued that nitrite inhalants were the cause of the epidemic of Kaposi sarcoma (KS) in gay men. However, this argument has been described as an example of the fallacy of a statistical confounding effect; it is now known that a herpesvirus, potentiated by HIV, is responsible for AIDS-associated KS. Moreover, in addition to recreational drugs, Duesberg argues that anti-HIV drugs such as zidovudine (AZT) can cause AIDS. Duesberg's claim that antiviral medication causes AIDS is regarded as disproven within the scientific community. Placebo-controlled studies have found that AZT as a single agent produces modest and short-lived improvements in survival and delays the development of opportunistic infections; it certainly did not cause AIDS, which develops in both treated and untreated study patients. With the subsequent development of protease inhibitors and highly active antiretroviral therapy, numerous studies have documented the fact that anti-HIV drugs prevent the development of AIDS and substantially prolong survival, further disproving the claim that these drugs "cause" AIDS. Scientific study and rejection of Duesberg's risk-AIDS hypothesis Several studies have specifically addressed Duesberg's claim that recreational drug abuse or sexual promiscuity were responsible for the manifestations of AIDS. An early study of his claims, published in Nature in 1993, found Duesberg's drug abuse-AIDS hypothesis to have "no basis in fact." A large prospective study followed a group of 715 homosexual men in the Vancouver, Canada, area; approximately half were HIV-seropositive or became so during the follow-up period, and the remainder were HIV-seronegative. 
After more than eight years of follow-up, despite similar rates of drug use, sexual contact, and other supposed risk factors in both groups, only the HIV-positive group suffered from opportunistic infections. Similarly, CD4 counts dropped in the patients who were HIV-infected, but remained stable in the HIV-negative patients, despite similar rates of risk behavior. The authors concluded that "the risk-AIDS hypothesis ... is clearly rejected by our data," and that "the evidence supports the hypothesis that HIV-1 has an integral role in the CD4 depletion and progressive immune dysfunction that characterise AIDS." Similarly, the Multicenter AIDS Cohort Study (MACS) and the Women's Interagency HIV Study (WIHS)—which between them observed more than 8,000 Americans—demonstrated that "the presence of HIV infection is the only factor that is strongly and consistently associated with the conditions that define AIDS." A 2008 study found that recreational drug use (including cannabis, cocaine, poppers, and amphetamines) had no effect on CD4 or CD8 T-cell counts, providing further evidence against a role of recreational drugs as a cause of AIDS. Current AIDS definitions Duesberg argued in 1989 that a significant number of AIDS victims had died without proof of HIV infection. However, with the use of modern culture techniques and polymerase chain reaction testing, HIV can be demonstrated in virtually all patients with AIDS. Since AIDS is now defined partially by the presence of HIV, Duesberg claims it is impossible by definition to offer evidence that AIDS does not require HIV. However, the first definitions of AIDS mentioned no cause and the first AIDS diagnoses were made before HIV was discovered. The addition of HIV positivity to surveillance criteria as an absolutely necessary condition for case reporting occurred only in 1993, after a scientific consensus was established that HIV caused AIDS. AIDS in Africa According to the Duesberg hypothesis, AIDS is not found in Africa. What Duesberg calls "the myth of an African AIDS epidemic," among people" exists for several reasons, including: The need, according to Duesberg, of the CDC, the WHO, and other health organizations to justify their existences, resulting in their "manufacturing contagious plagues out of noninfectious medical conditions." Media sensationalism, with stories that "helped shape the Western impression of an AIDS problem out of control," resulting in high levels of funding. Willing participation in deception by local doctors who wish to take advantage of this aid money: "African doctors themselves participate in building the myth of the AIDS pandemic." Confusion or incompetence on the part of African doctors: "Many common Third World diseases are confused with AIDS even if they are not part of its official definition." Duesberg states that African AIDS cases are "a collection of long-established, indigenous diseases, such as chronic fevers, weight loss, alias "slim disease," diarrhea, and tuberculosis" that result from malnutrition and poor sanitation. African AIDS cases, though, have increased in the last three decades as HIV's prevalence has increased but as malnutrition percentages and poor sanitation have declined in many African regions. In addition, while HIV and AIDS are more prevalent in urban than in rural settings in Africa, malnutrition and poor sanitation are found more commonly in rural than in urban settings. 
According to Duesberg, common diseases are easily misdiagnosed as AIDS in Africa because "the diagnosis of African AIDS is arbitrary" and does not include HIV testing. A definition of AIDS agreed upon in 1985 by the World Health Organization in Bangui did not require a positive HIV test, but since 1985, many African countries have added positive HIV tests to the Bangui criteria for AIDS or changed their definitions to match those of the U.S. Centers for Disease Control. One of the reasons for using more HIV tests despite their expense is that, rather than overestimating AIDS as Duesberg suggests, the Bangui definition alone excluded nearly half of African AIDS patients." Duesberg notes that diseases associated with AIDS differ between African and Western populations, concluding that the causes of immunodeficiency must be different. Tuberculosis is much more commonly diagnosed among AIDS patients in Africa than in Western countries, while PCP conforms to the opposite pattern. Tuberculosis, though, had higher prevalence in Africa than in the West before the spread of HIV. In Africa and the United States, HIV has spurred a similar percentage increase in tuberculosis cases. PCP may be underestimated in Africa: since machinery "required for accurate testing is relatively rare in many resource-poor areas, including large parts of Africa, PCP is likely to be underdiagnosed in Africa. Consistent with this hypothesis, studies that report the highest rates of PCP in Africa are those that use the most advanced diagnostic methods" Duesberg also claims that Kaposi's sarcoma is "exclusively diagnosed in male homosexual risk groups using nitrite inhalants and other psychoactive drugs as aphrodisiacs", but the cancer is fairly common among heterosexuals in some parts of Africa, and is found in heterosexuals in the United States as well. Because reported AIDS cases in Africa and other parts of the developing world include a larger proportion of people who do not belong to Duesberg's preferred risk groups of drug addicts and male homosexuals, Duesberg writes on his website that "There are no risk groups in Africa, like drug addicts and homosexuals." However, many studies have addressed the issue of risk groups in Africa and concluded that the risk of AIDS is not equally distributed. In addition, AIDS in Africa largely kills sexually active working-age adults. South African president Thabo Mbeki accepted Duesberg's hypothesis and, through the mid-2000s, rejected offers of medical assistance to fight HIV infection, a policy of inaction that cost over 300,000 lives. Duesberg claims that retroviruses like HIV must be harmless to survive Duesberg argues that retroviruses like HIV must be harmless to survive: they do not kill cells and they do not cause cancer, he maintains. Duesberg writes, "retroviruses do not kill cells because they depend on viable cells for the replication of their RNA from viral DNA integrated into cellular DNA." Duesberg elsewhere states that "the typical virus reproduces by entering a living cell and commandeering the cell's resources in order to make new virus particles, a process that ends with the disintegration of the dead cell." Duesberg also rejects the involvement of retroviruses and other viruses in cancer. To him, virus-associated cancers are "freak accidents of nature" that do not warrant research programs such as the war on cancer. 
Duesberg rejects a role in cancer for numerous viruses, including leukemia viruses, Epstein–Barr virus, human papilloma virus, hepatitis B, feline leukemia virus, and human T-lymphotropic virus. Duesberg claims that the supposedly innocuous nature of all retroviruses is supported by what he considers to be their normal mode of proliferation: infection from mother to child in utero. Duesberg does not suggest that HIV is an endogenous retrovirus, a virus integrated into the germline and genetically heritable: Scientific response to the Duesberg hypothesis The consensus in the scientific community is that the Duesberg hypothesis has been refuted by a large and growing mass of evidence showing that HIV causes AIDS, that the amount of virus in the blood correlates with disease progression, that a plausible mechanism for HIV's action has been proposed, and that anti-HIV medication decreases mortality and opportunistic infection in people with AIDS. In issue of Science (Vol. 266, No. 5191), Duesberg's methods and claims were evaluated in a group of articles. The authors concluded that It is abundantly evident that HIV causes disease and death in hemophiliacs, a group generally lacking Duesberg's proposed risk factors. HIV fulfills Koch's postulates, which are one set of criteria for demonstrating a causal relationship between a microbe and a disease. (Subsequently, additional data further demonstrated the fulfillment of Koch's postulates.) the AIDS epidemic in Thailand cited by Duesberg as confirmation of his hypothesis is in fact evidence of the role of HIV in AIDS. According to researchers who conducted large-scale studies of AZT, the drug does not cause AIDS. Furthermore, researchers acknowledged that recreational drugs do cause immune abnormalities, though not the type of immunodeficiency seen in AIDS. Effectiveness of antiretroviral medication The vast majority of people with AIDS have never received antiretroviral drugs, including those in developed countries prior to the licensure of AZT (zidovudine) in 1987, and people in developing countries today where very few individuals have access to these medications. The NIAID reports that Opponents claim that nearly all HIV-positive people will develop AIDS Duesberg claims as support for his idea that many drug-free HIV-positive people have not yet developed AIDS; HIV/AIDS scientists note that many drug-free HIV-positive people have developed AIDS, and that, in the absence of medical treatment or rare genetic factors postulated to delay disease progression, it is very likely that nearly all HIV-positive people will eventually develop AIDS. Scientists also note that HIV-negative drug users do not suffer from immune system collapse. See also HIV/AIDS denialism Inventing the AIDS Virus References External links Peter Duesberg's website The Evidence That HIV Causes AIDS : from the National Institute of Allergy and Infectious Diseases How HIV Causes AIDS: National Institutes of Health fact sheet. Koch's Postulates and the Etiology of AIDS: An Historical Perspective . AIDS origin hypotheses HIV/AIDS denialism Alternative diagnoses
Duesberg hypothesis
[ "Biology" ]
2,745
[ "Biological hypotheses", "AIDS origin hypotheses" ]
8,315
https://en.wikipedia.org/wiki/Diamagnetism
Diamagnetism is the property of materials that are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, μ0. In most materials, diamagnetism is a weak effect which can be detected only by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it entirely expels any magnetic field from its interior (the Meissner effect). Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as diamagnetic (the prefix dia- meaning through or across), then later changed it to diamagnetism. A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic. Materials Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances where the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those generally thought of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants (named after the chemist Paul Pascal). Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as $\chi_v = \mu_r - 1$, where $\mu_r$ is the relative permeability. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the volume magnetic susceptibility of diamagnets such as water is about $-9 \times 10^{-6}$. The most strongly diamagnetic material is bismuth, with $\chi_v \approx -1.7 \times 10^{-4}$, although pyrolytic carbon may have a susceptibility of about $-4 \times 10^{-4}$ in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value. In rare cases, the diamagnetic contribution can be stronger than the paramagnetic contribution.
This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution. Superconductors Superconductors may be considered perfect diamagnets (), because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect. Demonstrations Curving water surfaces If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by a reflection in its surface. Levitation Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space. A thin slice of pyrolytic graphite, which is an unusually strongly diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective and relatively convenient demonstration of diamagnetism. The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated. In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass. Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity. A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet. Theory The electrons in a material generally settle in orbitals, with effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetism effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism, but typically respond very little to the applied field. The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below. 
Langevin diamagnetism Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity $B$, applied to an electron with charge $e$ and mass $m_e$, gives rise to Larmor precession with frequency $\omega = eB / (2 m_e)$. The number of revolutions per unit time is $\omega / (2\pi)$, so the current for an atom with $Z$ electrons is (in SI units) $I = -\frac{Z e^2 B}{4\pi m_e}$. The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the $z$ axis. The average loop area can be given as $\pi \langle \rho^2 \rangle$, where $\langle \rho^2 \rangle$ is the mean square distance of the electrons perpendicular to the $z$ axis. The magnetic moment is therefore $\mu = -\frac{Z e^2 B}{4 m_e} \langle \rho^2 \rangle$. If the distribution of charge is spherically symmetric, we can suppose that the distributions of the $x$, $y$ and $z$ coordinates are independent and identically distributed. Then $\langle x^2 \rangle = \langle y^2 \rangle = \langle z^2 \rangle = \tfrac{1}{3} \langle r^2 \rangle$, where $\langle r^2 \rangle$ is the mean square distance of the electrons from the nucleus. Therefore, $\langle \rho^2 \rangle = \langle x^2 \rangle + \langle y^2 \rangle = \tfrac{2}{3} \langle r^2 \rangle$. If $n$ is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is $\chi = \frac{\mu_0 n \mu}{B} = -\frac{\mu_0 n Z e^2 \langle r^2 \rangle}{6 m_e}$. In atoms, Langevin susceptibility is of the same order of magnitude as Van Vleck paramagnetic susceptibility. In metals The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is $\chi = -\mu_0 \frac{e^2}{12 \pi^2 m_e \hbar} \sqrt{2 m_e E_F}$, where $E_F$ is the Fermi energy. This is equivalent to $-\tfrac{1}{3} \mu_0 \mu_B^2 g(E_F)$, exactly $-1/3$ times the Pauli paramagnetic susceptibility, where $\mu_B$ is the Bohr magneton and $g(E_F)$ is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin-1/2 electrons). In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the De Haas–Van Alphen effect, also first described theoretically by Landau. See also Antiferromagnetism Magnetochemistry Moses effect References External links The Feynman Lectures on Physics Vol. II Ch. 34: The Magnetism of Matter Electric and magnetic fields in matter Magnetic levitation Magnetism
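As a rough numerical illustration of the Langevin formula above, the short sketch below evaluates $\chi = -\mu_0 n Z e^2 \langle r^2 \rangle / (6 m_e)$ for assumed, order-of-magnitude inputs (the atom density, electron count per atom and mean square radius are illustrative guesses, not data for any particular material); it lands on the $10^{-5}$ scale quoted earlier for ordinary diamagnets.

import math

mu0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
e   = 1.602e-19                 # elementary charge, C
m_e = 9.109e-31                 # electron mass, kg

n       = 3e28                  # assumed number of atoms per cubic metre
Z       = 10                    # assumed electrons per atom
r2_mean = (1.0e-10) ** 2        # assumed mean square electron distance, m^2

chi = -mu0 * n * Z * e**2 * r2_mean / (6 * m_e)
print(f"chi_v ~ {chi:.1e}")     # about -1.8e-05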
Diamagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,155
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
8,324
https://en.wikipedia.org/wiki/Difference%20engine
A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s, and was first created by Charles Babbage. The name difference engine is derived from the method of finite differences, a way to interpolate or tabulate functions by using a small set of polynomial co-efficients. Some of the most common mathematical functions used in engineering, science and navigation are built from logarithmic and trigonometric functions, which can be approximated by polynomials, so a difference engine can compute many useful tables. History The notion of a mechanical calculator for mathematical functions can be traced back to the Antikythera mechanism of the 2nd century BC, while early modern examples are attributed to Pascal and Leibniz in the 17th century. In 1784 J. H. Müller, an engineer in the Hessian army, devised and built an adding machine and described the basic principles of a difference machine in a book published in 1786 (the first written reference to a difference machine is dated to 1784), but he was unable to obtain funding to progress with the idea. Charles Babbage's difference engines Charles Babbage began to construct a small difference engine in and had completed it by 1822 (Difference Engine 0). He announced his invention on 14 June 1822, in a paper to the Royal Astronomical Society, entitled "Note on the application of machinery to the computation of astronomical and mathematical tables". This machine used the decimal number system and was powered by cranking a handle. The British government was interested, since producing tables was time-consuming and expensive and they hoped the difference engine would make the task more economical. In 1823, the British government gave Babbage £1700 to start work on the project. Although Babbage's design was feasible, the metalworking techniques of the era could not economically make parts in the precision and quantity required. Thus the implementation proved to be much more expensive and doubtful of success than the government's initial estimate. According to the 1830 design for Difference Engine No. 1, it would have about 25,000 parts, weigh 4 tons, and operate on 20-digit numbers by sixth-order differences. In 1832, Babbage and Joseph Clement produced a small working model (one-seventh of the plan), which operated on 6-digit numbers by second-order differences. Lady Byron described seeing the working prototype in 1833: "We both went to see the thinking machine (or so it seems) last Monday. It raised several Nos. to the 2nd and 3rd powers, and extracted the root of a Quadratic equation." Work on the larger engine was suspended in 1833. By the time the government abandoned the project in 1842, Babbage had received and spent over £17,000 on development, which still fell short of achieving a working engine. The government valued only the machine's output (economically produced tables), not the development (at unpredictable cost) of the machine itself. Babbage refused to recognize that predicament. Meanwhile, Babbage's attention had moved on to developing an analytical engine, further undermining the government's confidence in the eventual success of the difference engine. By improving the concept as an analytical engine, Babbage had made the difference engine concept obsolete, and the project to implement it an utter failure in the view of the government. The incomplete Difference Engine No. 
1 was put on display to the public at the 1862 International Exhibition in South Kensington, London. Babbage went on to design his much more general analytical engine, but later designed an improved "Difference Engine No. 2" design (31-digit numbers and seventh-order differences), between 1846 and 1849. Babbage was able to take advantage of ideas developed for the analytical engine to make the new difference engine calculate more quickly while using fewer parts. Scheutzian calculation engine Inspired by Babbage's difference engine in 1834, the Swedish inventor Per Georg Scheutz built several experimental models. In 1837 his son Edward proposed to construct a working model in metal, and in 1840 finished the calculating part, capable of calculating series with 5-digit numbers and first-order differences, which was later extended to third-order (1842). In 1843, after adding the printing part, the model was completed. In 1851, funded by the government, construction of the larger and improved (15-digit numbers and fourth-order differences) machine began, and finished in 1853. The machine was demonstrated at the World's Fair in Paris, 1855 and then sold in 1856 to the Dudley Observatory in Albany, New York. Delivered in 1857, it was the first printing calculator sold. In 1857 the British government ordered the next Scheutz's difference machine, which was built in 1859. It had the same basic construction as the previous one, weighing about . Others Martin Wiberg improved Scheutz's construction (, his machine has the same capacity as Scheutz's: 30-digit and sixth-order) but used his device only for producing and publishing printed tables (interest tables in 1860, and logarithmic tables in 1875). Alfred Deacon of London in produced a small difference engine (20-digit numbers and third-order differences). American George B. Grant started working on his calculating machine in 1869, unaware of the works of Babbage and Scheutz (Schentz). One year later (1870) he learned about difference engines and proceeded to design one himself, describing his construction in 1871. In 1874 the Boston Thursday Club raised a subscription for the construction of a large-scale model, which was built in 1876. It could be expanded to enhance precision and weighed about . Christel Hamann built one machine (16-digit numbers and second-order differences) in 1909 for the "Tables of Bauschinger and Peters" ("Logarithmic-Trigonometrical Tables with eight decimal places"), which was first published in Leipzig in 1910. It weighed about . Burroughs Corporation in about 1912 built a machine for the Nautical Almanac Office which was used as a difference engine of second-order. It was later replaced in 1929 by a Burroughs Class 11 (13-digit numbers and second-order differences, or 11-digit numbers and [at least up to] fifth-order differences). Alexander John Thompson about 1927 built integrating and differencing machine (13-digit numbers and fifth-order differences) for his table of logarithms "Logarithmetica britannica". This machine was composed of four modified Triumphator calculators. Leslie Comrie in 1928 described how to use the Brunsviga-Dupla calculating machine as a difference engine of second-order (15-digit numbers). He also noted in 1931 that National Accounting Machine Class 3000 could be used as a difference engine of sixth-order. Construction of two working No. 2 difference engines During the 1980s, Allan G. 
Bromley, an associate professor at the University of Sydney, Australia, studied Babbage's original drawings for the Difference and Analytical Engines at the Science Museum library in London. This work led the Science Museum to construct a working calculating section of difference engine No. 2 from 1985 to 1991, under Doron Swade, the then Curator of Computing. This was to celebrate the 200th anniversary of Babbage's birth in 1991. In 2002, the printer which Babbage originally designed for the difference engine was also completed. The conversion of the original design drawings into drawings suitable for engineering manufacturers' use revealed some minor errors in Babbage's design (possibly introduced as a protection in case the plans were stolen), which had to be corrected. The difference engine and printer were constructed to tolerances achievable with 19th-century technology, resolving a long-standing debate as to whether Babbage's design could have worked using Georgian-era engineering methods. The machine contains 8,000 parts and weighs about 5 tons. The printer's primary purpose is to produce stereotype plates for use in printing presses, which it does by pressing type into soft plaster to create a flong. Babbage intended that the Engine's results be conveyed directly to mass printing, having recognized that many errors in previous tables were not the result of human calculating mistakes but from slips in the manual typesetting process. The printer's paper output is mainly a means of checking the engine's performance. In addition to funding the construction of the output mechanism for the Science Museum's difference engine, Nathan Myhrvold commissioned the construction of a second complete Difference Engine No. 2, which was on exhibit at the Computer History Museum in Mountain View, California, from May 2008 to January 2016. It has since been transferred to Intellectual Ventures in Seattle where it is on display just outside the main lobby. Operation The difference engine consists of a number of columns, numbered from 1 to N. The machine is able to store one decimal number in each column. The machine can only add the value of a column n + 1 to column n to produce the new value of n. Column N can only store a constant, column 1 displays (and possibly prints) the value of the calculation on the current iteration. The engine is programmed by setting initial values to the columns. Column 1 is set to the value of the polynomial at the start of computation. Column 2 is set to a value derived from the first and higher derivatives of the polynomial at the same value of X. Each of the columns from 3 to N is set to a value derived from the first and higher derivatives of the polynomial. Timing In the Babbage design, one iteration (i.e. one full set of addition and carry operations) happens for each rotation of the main shaft. Odd and even columns alternately perform an addition in one cycle. The sequence of operations for column is thus: Count up, receiving the value from column (Addition step) Perform carry propagation on the counted up value Count down to zero, adding to column Reset the counted-down value to its original value Steps 1,2,3,4 occur for every odd column, while steps 3,4,1,2 occur for every even column. While Babbage's original design placed the crank directly on the main shaft, it was later realized that the force required to crank the machine would have been too great for a human to handle comfortably. 
Therefore, the two models that were built incorporate a 4:1 reduction gear at the crank, and four revolutions of the crank are required to perform one full cycle. Steps Each iteration creates a new result, and is accomplished in four steps corresponding to four complete turns of the handle shown at the far right in the picture below. The four steps are: All even numbered columns (2,4,6,8) are added to all odd numbered columns (1,3,5,7) simultaneously. An interior sweep arm turns each even column to cause whatever number is on each wheel to count down to zero. As a wheel turns to zero, it transfers its value to a sector gear located between the odd/even columns. These values are transferred to the odd column causing them to count up. Any odd column value that passes from "9" to "0" activates a carry lever. This is like Step 1, except it is odd columns (3,5,7) added to even columns (2,4,6), and column one has its values transferred by a sector gear to the print mechanism on the left end of the engine. Any even column value that passes from "9" to "0" activates a carry lever. The column 1 value, the result for the polynomial, is sent to the attached printer mechanism. This is like Step 2, but for doing carries on even columns, and returning odd columns to their original values. Subtraction The engine represents negative numbers as ten's complements. Subtraction amounts to addition of a negative number. This works in the same manner that modern computers perform subtraction, known as two's complement. Method of differences The principle of a difference engine is Newton's method of divided differences. If the initial value of a polynomial (and of its finite differences) is calculated by some means for some value of X, the difference engine can calculate any number of nearby values, using the method generally known as the method of finite differences. For example, consider the quadratic polynomial with the goal of tabulating the values p(0), p(1), p(2), p(3), p(4), and so forth. The table below is constructed as follows: the second column contains the values of the polynomial, the third column contains the differences of the two left neighbors in the second column, and the fourth column contains the differences of the two neighbors in the third column: The numbers in the third values-column are constant. In fact, by starting with any polynomial of degree n, the column number n + 1 will always be constant. This is the crucial fact behind the success of the method. This table was built from left to right, but it is possible to continue building it from right to left down a diagonal in order to compute more values. To calculate p(5) use the values from the lowest diagonal. Start with the fourth column constant value of 4 and copy it down the column. Then continue the third column by adding 4 to 11 to get 15. Next continue the second column by taking its previous value, 22 and adding the 15 from the third column. Thus p(5) is 22 + 15 = 37. In order to compute p(6), we iterate the same algorithm on the p(5) values: take 4 from the fourth column, add that to the third column's value 15 to get 19, then add that to the second column's value 37 to get 56, which is p(6). This process may be continued ad infinitum. The values of the polynomial are produced without ever having to multiply. A difference engine only needs to be able to add. From one loop to the next, it needs to store 2 numbers—in this example (the last elements in the first and second columns). 
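The additions described above are easy to simulate in software. The sketch below advances the "lowest diagonal" of the difference table exactly as in the worked example, updating the higher-order differences before the value column; seeding it with 22, 11 and 4 (the diagonal through p(4)) reproduces 37 and 56. The polynomial named in the comment is an assumption: it is one quadratic consistent with the quoted values.

```python
# Advance a difference engine's "lowest diagonal" using only additions.
# cols[0] is the tabulated value, cols[1] the first difference, cols[2] the
# constant second difference (for a quadratic).
def next_values(cols, steps):
    cols = list(cols)
    out = []
    for _ in range(steps):
        # Update the higher-order differences first, then the value column,
        # exactly as in the worked example (15 = 11 + 4, then 37 = 22 + 15).
        for i in range(len(cols) - 2, -1, -1):
            cols[i] += cols[i + 1]
        out.append(cols[0])
    return out

# Seed with the diagonal through p(4): value 22, first difference 11, second 4.
# p(x) = 2*x**2 - 3*x + 2 is one polynomial consistent with these numbers.
print(next_values([22, 11, 4], 3))   # -> [37, 56, 79]
```

Only additions are performed, which is the whole point of the mechanism: multiplication never enters the tabulation loop.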
To tabulate polynomials of degree n, one needs sufficient storage to hold n numbers. Babbage's difference engine No. 2, finally built in 1991, can hold 8 numbers of 31 decimal digits each and can thus tabulate 7th degree polynomials to that precision. The best machines from Scheutz could store 4 numbers with 15 digits each. Initial values The initial values of columns can be calculated by first manually calculating N consecutive values of the function and by backtracking (i.e. calculating the required differences). Col gets the value of the function at the start of computation . Col is the difference between and ... If the function to be calculated is a polynomial function, expressed as the initial values can be calculated directly from the constant coefficients a0, a1,a2, ..., an without calculating any data points. The initial values are thus: Col = a0 Col = a1 + a2 + a3 + a4 + ... + an Col = 2a2 + 6a3 + 14a4 + 30a5 + ... Col = 6a3 + 36a4 + 150a5 + ... Col = 24a4 + 240a5 + ... Col = 120a5 + ... Use of derivatives Many commonly used functions are analytic functions, which can be expressed as power series, for example as a Taylor series. The initial values can be calculated to any degree of accuracy; if done correctly the engine will give exact results for first N steps. After that, the engine will only give an approximation of the function. The Taylor series expresses the function as a sum obtained from its derivatives at one point. For many functions the higher derivatives are trivial to obtain; for instance, the sine function at 0 has values of 0 or for all derivatives. Setting 0 as the start of computation we get the simplified Maclaurin series The same method of calculating the initial values from the coefficients can be used as for polynomial functions. The polynomial constant coefficients will now have the value Curve fitting The problem with the methods described above is that errors will accumulate and the series will tend to diverge from the true function. A solution which guarantees a constant maximum error is to use curve fitting. A minimum of N values are calculated evenly spaced along the range of the desired calculations. Using a curve fitting technique like Gaussian reduction an N−1th degree polynomial interpolation of the function is found. With the optimized polynomial, the initial values can be calculated as above. See also Allan G. Bromley Johann Helfrich von Müller Martin Wiberg Pinwheel calculator References Further reading External links The Computer History Museum exhibition on Babbage and the difference engine Meccano Difference Engine #1 Meccano Difference Engine #2 Babbage's First Difference Engine – How it was intended to work Analysis of Expenditure on Babbage's Difference Engine No. 1 Difference engine workings with animations Difference Engine No1 specimen piece at the Powerhouse Museum, Sydney Gigapixel Image of the Difference Engine No2 Scheutz Difference Engine in action video. Purchased by the Dudley Observatory's first director, Benjamin Apthorp Gould, in 1856. Gould was an acquaintance of Babbage. The Difference Engine performed astronomical calculations for the Observatory for many years, and is now part of the national collection at the Smithsonian. Links to videos about Babbage DE 2 and its construction: 1822 introductions Addition Articles containing video clips Charles Babbage Collection of the Science Museum, London Computer-related introductions in the 19th century English inventions Mechanical calculators Replicas Subtraction
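The initial column values discussed earlier (Col 1 = a0, Col 2 = a1 + a2 + ... + an, Col 3 = 2a2 + 6a3 + ..., and so on) can be generated mechanically by evaluating the polynomial at x = 0, 1, ..., n and taking forward differences, the "backtracking" the article describes. A small sketch with an assumed, purely illustrative cubic:

```python
# Compute a difference engine's initial column values from polynomial
# coefficients by evaluating the polynomial and taking forward differences.
def initial_columns(coeffs):
    """coeffs = [a0, a1, ..., an]; returns [p(0), delta, delta^2, ..., delta^n]."""
    n = len(coeffs) - 1
    values = [sum(a * x**k for k, a in enumerate(coeffs)) for x in range(n + 1)]
    cols = []
    while values:
        cols.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return cols

# Assumed cubic with a0=5, a1=-2, a2=3, a3=1 (illustrative coefficients only):
print(initial_columns([5, -2, 3, 1]))
# -> [5, 2, 12, 6], matching a0, a1+a2+a3, 2*a2+6*a3, 6*a3
```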
Difference engine
[ "Mathematics" ]
3,651
[ "Sign (mathematics)", "Subtraction" ]
8,328
https://en.wikipedia.org/wiki/Divergence
In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value. Physical interpretation of divergence In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source or a sink at a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence. The divergence of a vector field is often illustrated using the simple example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction at each point, which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore, the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the net flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal. If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface not enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore, the divergence at any other point is zero. 
Definition The divergence of a vector field F(x) at a point x0 is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x0 to the volume of V, as V shrinks to zero: div F(x0) = lim V→0 (1/|V|) ∮_{S(V)} F · n̂ dS, where |V| is the volume of V, S(V) is the boundary of V, and n̂ is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x0 and approach zero volume. The result, div F, is a scalar function of x. Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However the above definition is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use. A vector field with zero divergence everywhere is called solenoidal – in which case any closed surface has no net flux across it. Definition in coordinates Cartesian coordinates In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field F = Fx i + Fy j + Fz k is defined as the scalar-valued function: div F = ∇ · F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z. Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an n-dimensional vector field in n-dimensional space is invariant under any invertible linear transformation. The common notation for the divergence ∇ · F is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the ∇ operator (see del), apply them to the corresponding components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation. Cylindrical coordinates For a vector expressed in local unit cylindrical coordinates as F = eρ Fρ + eφ Fφ + ez Fz, where ea is the unit vector in direction a, the divergence is div F = (1/ρ) ∂(ρ Fρ)/∂ρ + (1/ρ) ∂Fφ/∂φ + ∂Fz/∂z. The use of local coordinates is vital for the validity of the expression. If we consider the position vector r and the functions ρ(r), φ(r), and z(r), which assign the corresponding global cylindrical coordinate to a vector, in general ρ(F(r)) ≠ Fρ(r) and φ(F(r)) ≠ Fφ(r). In particular, if we consider the identity function F(r) = r, we find that: φ(F(r)) = φ ≠ Fφ(r) = 0. Spherical coordinates In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is div F = (1/r²) ∂(r² Fr)/∂r + (1/(r sin θ)) ∂(Fθ sin θ)/∂θ + (1/(r sin θ)) ∂Fφ/∂φ. Tensor field Let A be a continuously differentiable second-order tensor field defined as follows: A = A_ij ei⊗ej; the divergence in cartesian coordinate system is a first-order tensor field and can be defined in two ways: div(A)_i = ∂A_ik/∂x_k and (∇ · A)_i = ∂A_ki/∂x_k. We have div(Aᵀ) = ∇ · A. If tensor A is symmetric, A_ij = A_ji, then div(A) = ∇ · A. Because of this, often in the literature the two definitions (and symbols div and ∇ ·) are used interchangeably (especially in mechanics equations where tensor symmetry is assumed). Expressions of ∇ · A in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates. General coordinates Using Einstein notation we can consider the divergence in general coordinates, which we write as x¹, ..., xⁿ, where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so x² refers to the second component, and not the quantity squared. The index variable i is used to refer to an arbitrary component, such as xⁱ. 
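A quick symbolic check of the Cartesian formula div F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z, using the SymPy library; the particular vector field is an arbitrary illustrative choice, not one taken from the article.

```python
# Symbolically compute div F = dFx/dx + dFy/dy + dFz/dz for an example field.
from sympy import symbols, diff, simplify

x, y, z = symbols('x y z')
Fx, Fy, Fz = x**2, x*y, z          # illustrative field F = (x^2, x*y, z)

div_F = diff(Fx, x) + diff(Fy, y) + diff(Fz, z)
print(simplify(div_F))             # -> 3*x + 1
```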
The divergence can then be written via the Voss-Weyl formula, as: where is the local coefficient of the volume element and are the components of with respect to the local unnormalized covariant basis (sometimes written as The Einstein notation implies summation over , since it appears as both an upper and lower index. The volume coefficient is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have , and , respectively. The volume can also be expressed as , where is the metric tensor. The determinant appears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writing The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for gives Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write for the normalized basis, and for the components of with respect to it, we have that using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element we can conclude that . After substituting, the formula becomes: See for further discussion. Properties The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e., for all vector fields and and all real numbers and . There is a product rule of the following type: if is a scalar-valued function and is a vector field, then or in more suggestive notation Another product rule for the cross product of two vector fields and in three dimensions involves the curl and reads as follows: or The Laplacian of a scalar field is the divergence of the field's gradient: The divergence of the curl of any vector field (in three dimensions) is equal to zero: If a vector field with zero divergence is defined on a ball in , then there exists some vector field on the ball with . For regions in more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex serves as a nice quantification of the complicatedness of the underlying region . These are the beginnings and main motivations of de Rham cohomology. Decomposition theorem It can be shown that any stationary flux that is twice continuously differentiable in and vanishes sufficiently fast for can be decomposed uniquely into an irrotational part and a source-free part . Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl): For the irrotational part one has with The source-free part, , can be similarly written: one only has to replace the scalar potential by a vector potential and the terms by , and the source density by the circulation density . This "decomposition theorem" is a by-product of the stationary case of electrodynamics. 
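The identity that the divergence of a curl vanishes, listed among the properties above, can also be verified symbolically for generic smooth components; the check below relies only on the equality of mixed partial derivatives and is an illustrative sketch.

```python
# Verify that the divergence of a curl vanishes, using symbolic functions.
from sympy import symbols, Function, diff, simplify

x, y, z = symbols('x y z')
P = Function('P')(x, y, z)   # generic smooth components of F = (P, Q, R)
Q = Function('Q')(x, y, z)
R = Function('R')(x, y, z)

curl = (diff(R, y) - diff(Q, z),
        diff(P, z) - diff(R, x),
        diff(Q, x) - diff(P, y))
div_curl = diff(curl[0], x) + diff(curl[1], y) + diff(curl[2], z)
print(simplify(div_curl))    # -> 0, because mixed partial derivatives commute
```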
It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well. In arbitrary finite dimensions The divergence of a vector field can be defined in any finite number of dimensions. If in a Euclidean coordinate system with coordinates , define In the 1D case, reduces to a regular function, and the divergence reduces to the derivative. For any , the divergence is a linear operator, and it satisfies the "product rule" for any scalar-valued function . Relation to the exterior derivative One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in . Define the current two-form as It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density moving with local velocity . Its exterior derivative is then given by where is the wedge product. Thus, the divergence of the vector field can be expressed as: Here the superscript is one of the two musical isomorphisms, and is the Hodge star operator. When the divergence is written in this way, the operator is referred to as the codifferential. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system. In curvilinear coordinates The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension that has a volume form (or density) , e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on , on such a manifold a vector field defines an -form obtained by contracting with . The divergence is then the function defined by The divergence can be defined in terms of the Lie derivative as This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field. On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection : where the second expression is the contraction of the vector field valued 1-form with itself and the last expression is the traditional coordinate expression from Ricci calculus. An equivalent expression without using a connection is where is the metric and denotes the partial derivative with respect to coordinate . The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice, here, once, so that the can be transformed into "flat space" (where coordinates are actually orthonormal), and once again so that is also transformed into "flat space", so that finally, the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space (i.e. unit volume, i.e. one, i.e. not written down). The square-root appears in the denominator, because the derivative transforms in the opposite way (contravariantly) to the vector (which is covariant). This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. 
That is, the divergence corresponds to the expression with the differential and the Hodge star. The Hodge star, by its construction, causes the volume form to appear in all of the right places. The divergence of tensors Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector is given by where denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there. Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism : if is a -tensor ( for the contravariant vector and for the covariant one), then we define the divergence of to be the -tensor that is, we take the trace over the first two covariant indices of the covariant derivative. The symbol refers to the musical isomorphism. See also Curl Del in cylindrical and spherical coordinates Divergence theorem Gradient Notes Citations References External links The idea of divergence of a vector field Khan Academy: Divergence video lesson Differential operators Linear operators in calculus Vector calculus
Divergence
[ "Mathematics" ]
3,105
[ "Mathematical analysis", "Differential operators" ]
8,336
https://en.wikipedia.org/wiki/Decision%20problem
In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question based on the given input values. An example of a decision problem is deciding with the help of an algorithm whether a given natural number is prime. Another example is the problem, "given two numbers x and y, does x evenly divide y?" A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable. Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution. Definition A decision problem is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is yes. These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages. Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, the algorithm of a decision problem is to compute the characteristic function of a subset of the natural numbers. Examples A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability. Decidability A decision problem is decidable or effectively solvable if the set of inputs (or natural numbers) for which the answer is yes is a recursive set. A problem is partially decidable, semidecidable, solvable, or provable if the set of inputs (or natural numbers) for which the answer is yes is a recursively enumerable set. Problems that are not decidable are undecidable. For those it is not possible to create an algorithm, efficient or otherwise, that solves them. The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems. Complete problems Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. 
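The two examples in this article, divisibility and primality, make the notion of a decision procedure concrete: an algorithm that halts on every input with a yes or no answer. A minimal sketch follows; the trial-division primality test mirrors the "test every possible nontrivial factor" method mentioned above and is not intended as an efficient algorithm.

```python
# Decision procedures for two decidable problems from the text.
def divides(x: int, y: int) -> bool:
    """Decide 'does x evenly divide y?' via division with remainder."""
    return y % x == 0

def is_prime(n: int) -> bool:
    """Decide membership in the set of primes by testing every nontrivial factor."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

print(divides(3, 12), divides(5, 12))             # True False
print([n for n in range(2, 20) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```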
A decision problem P is said to be complete for a set of decision problems S if P is a member of S and every problem in S can be reduced to P. Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility. Function problems Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers x and y, what is x divided by y?". A function problem consists of a partial function f; the informal "problem" is to compute the values of f on the inputs for which it is defined. Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a function f is the set of pairs (x,y) such that f(x) = y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable in polynomial time (in which case running time is computed as a function of x alone). The function f(x) = 2x has this property. Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation. Optimization problems Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the best answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming. Function and optimization problems are often transformed into decision problems by considering the question of whether the output is equal to or less than or equal to a given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each N, to decide whether the graph has any tour with weight less than N. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour. Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research. 
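The thresholding construction described above, turning an optimization problem into a family of yes/no questions, can be shown on a toy tour-finding instance. The distance matrix and the brute-force search below are illustrative assumptions, not material from the article.

```python
# Reduce a tiny optimization problem (lightest tour) to decision questions:
# "is there a tour of weight < N?", then scan over N to recover the optimum.
from itertools import permutations

# Assumed symmetric distance matrix for 4 cities (illustrative data).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_weight(order):
    return sum(D[a][b] for a, b in zip(order, order[1:] + order[:1]))

def exists_tour_lighter_than(n):
    """Decision problem: is there any tour with weight strictly less than n?"""
    return any(tour_weight([0] + list(p)) < n for p in permutations(range(1, 4)))

# Repeatedly answering the decision problem recovers the optimal weight.
best = min(tour_weight([0] + list(p)) for p in permutations(range(1, 4)))
assert exists_tour_lighter_than(best + 1) and not exists_tour_lighter_than(best)
print("optimal tour weight:", best)   # 18 for the assumed matrix above
```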
See also ALL (complexity) Computational problem Decidability (logic) – for the problem of deciding whether a formula is a consequence of a logical theory. Search problem Counting problem (complexity) Word problem (mathematics) References Computational problems Computability theory
Decision problem
[ "Mathematics" ]
1,421
[ "Computability theory", "Mathematical logic", "Mathematical problems", "Computational problems" ]
8,361
https://en.wikipedia.org/wiki/Definable%20real%20number
Informally, a definable real number is a real number that can be uniquely specified by its description. The description may be expressed as a construction or as a formula of a formal language. For example, the positive square root of 2, , can be defined as the unique positive solution to the equation , and it can be constructed with a compass and straightedge. Different choices of a formal language or its interpretation give rise to different notions of definability. Specific varieties of definable numbers include the constructible numbers of geometry, the algebraic numbers, and the computable numbers. Because formal languages can have only countably many formulas, every notion of definable numbers has at most countably many definable real numbers. However, by Cantor's diagonal argument, there are uncountably many real numbers, so almost every real number is undefinable. Constructible numbers One way of specifying a real number uses geometric techniques. A real number is a constructible number if there is a method to construct a line segment of length using a compass and straightedge, beginning with a fixed line segment of length 1. Each positive integer, and each positive rational number, is constructible. The positive square root of 2 is constructible. However, the cube root of 2 is not constructible; this is related to the impossibility of doubling the cube. Real algebraic numbers A real number is called a real algebraic number if there is a polynomial , with only integer coefficients, so that is a root of , that is, . Each real algebraic number can be defined individually using the order relation on the reals. For example, if a polynomial has 5 real roots, the third one can be defined as the unique such that and such that there are two distinct numbers less than at which is zero. All rational numbers are constructible, and all constructible numbers are algebraic. There are numbers such as the cube root of 2 which are algebraic but not constructible. The real algebraic numbers form a subfield of the real numbers. This means that 0 and 1 are algebraic numbers and, moreover, if and are algebraic numbers, then so are , , and, if is nonzero, . The real algebraic numbers also have the property, which goes beyond being a subfield of the reals, that for each positive integer and each real algebraic number , all of the th roots of that are real numbers are also algebraic. There are only countably many algebraic numbers, but there are uncountably many real numbers, so in the sense of cardinality most real numbers are not algebraic. This nonconstructive proof that not all real numbers are algebraic was first published by Georg Cantor in his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers". Non-algebraic numbers are called transcendental numbers. The best known transcendental numbers are and . Computable real numbers A real number is a computable number if there is an algorithm that, given a natural number , produces a decimal expansion for the number accurate to decimal places. This notion was introduced by Alan Turing in 1936. The computable numbers include the algebraic numbers along with many transcendental numbers including Like the algebraic numbers, the computable numbers also form a subfield of the real numbers, and the positive computable numbers are closed under taking th roots for each Not all real numbers are computable. 
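Computability in Turing's sense can be made concrete with a short program: given n, produce the decimal expansion of √2 accurate to n places. The integer-square-root approach below is one simple way to do this and is an illustrative sketch rather than Turing's own construction.

```python
# Produce the decimal expansion of sqrt(2) to n places, illustrating that
# sqrt(2) is computable: one algorithm serves every requested precision n.
from math import isqrt   # exact integer square root (floor)

def sqrt2_digits(n: int) -> str:
    # floor(sqrt(2) * 10**n), computed exactly; the result is within 10**-n of sqrt(2)
    scaled = isqrt(2 * 10 ** (2 * n))
    s = str(scaled)
    return s[0] + "." + s[1:]

print(sqrt2_digits(30))   # 1.414213562373095048801688724209
```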
Specific examples of noncomputable real numbers include the limits of Specker sequences, and algorithmically random real numbers such as Chaitin's Ω numbers. Definability in arithmetic Another notion of definability comes from the formal theories of arithmetic, such as Peano arithmetic. The language of arithmetic has symbols for 0, 1, the successor operation, addition, and multiplication, intended to be interpreted in the usual way over the natural numbers. Because no variables of this language range over the real numbers, a different sort of definability is needed to refer to real numbers. A real number is definable in the language of arithmetic (or arithmetical) if its Dedekind cut can be defined as a predicate in that language; that is, if there is a first-order formula in the language of arithmetic, with three free variables, such that Here m, n, and p range over nonnegative integers. The second-order language of arithmetic is the same as the first-order language, except that variables and quantifiers are allowed to range over sets of naturals. A real that is second-order definable in the language of arithmetic is called analytical. Every computable real number is arithmetical, and the arithmetical numbers form a subfield of the reals, as do the analytical numbers. Every arithmetical number is analytical, but not every analytical number is arithmetical. Because there are only countably many analytical numbers, most real numbers are not analytical, and thus also not arithmetical. Every computable number is arithmetical, but not every arithmetical number is computable. For example, the limit of a Specker sequence is an arithmetical number that is not computable. The definitions of arithmetical and analytical reals can be stratified into the arithmetical hierarchy and analytical hierarchy. In general, a real is computable if and only if its Dedekind cut is at level of the arithmetical hierarchy, one of the lowest levels. Similarly, the reals with arithmetical Dedekind cuts form the lowest level of the analytical hierarchy. Definability in models of ZFC A real number is first-order definable in the language of set theory, without parameters, if there is a formula in the language of set theory, with one free variable, such that is the unique real number such that holds. This notion cannot be expressed as a formula in the language of set theory. All analytical numbers, and in particular all computable numbers, are definable in the language of set theory. Thus the real numbers definable in the language of set theory include all familiar real numbers such as 0, 1, , , et cetera, along with all algebraic numbers. Assuming that they form a set in the model, the real numbers definable in the language of set theory over a particular model of ZFC form a field. Each set model of ZFC set theory that contains uncountably many real numbers must contain real numbers that are not definable within (without parameters). This follows from the fact that there are only countably many formulas, and so only countably many elements of can be definable over . Thus, if has uncountably many real numbers, one can prove from "outside" that not every real number of is definable over . This argument becomes more problematic if it is applied to class models of ZFC, such as the von Neumann universe. The assertion "the real number is definable over the class model " cannot be expressed as a formula of ZFC. 
Similarly, the question of whether the von Neumann universe contains real numbers that it cannot define cannot be expressed as a sentence in the language of ZFC. Moreover, there are countable models of ZFC in which all real numbers, all sets of real numbers, functions on the reals, etc. are definable. See also Berry's paradox Constructible universe Entscheidungsproblem Ordinal definable set Richard's paradox Tarski's undefinability theorem References Set theory
Definable real number
[ "Mathematics" ]
1,540
[ "Mathematical logic", "Set theory" ]
8,363
https://en.wikipedia.org/wiki/Divinity
Divinity or the divine are things that are either related to, devoted to, or proceeding from a deity. What is or is not divine may be loosely defined, as it is used by different belief systems. Under monotheism and polytheism this is clearly delineated. However, in pantheism and animism this becomes synonymous with concepts of sacredness and transcendence. Etymology The root of the word divinity is the Latin meaning of or belonging to a God (deus). The word entered English from Medieval Latin in the 14th century. Usages Divinity as a quality has two distinct usages: Divine force or power  – Powers or forces that are universal, or transcend human capacities Divinity applied to mortals – Qualities of individuals who are considered to have some special access or relationship to the divine. Overlap occurs between these usages because deities or godly entities are often identical with or identified by the powers and forces that are credited to them—in many cases, a deity is merely a power or force personified—and these powers and forces may then be extended or granted to mortal individuals. For instance, Jehovah is closely associated with storms and thunder throughout much of the Old Testament. He is said to speak in thunder, and thunder is seen as a token of his anger. This power was then extended to prophets like Moses and Samuel, who caused thunderous storms to rain down on their enemies. Divinity always carries connotations of goodness, beauty, beneficence, justice, and other positive, pro-social attributes. In monotheistic faiths there is an equivalent cohort of malefic supernatural beings and powers, such as demons, devils, afreet, etc., which are not conventionally referred to as divine; demonic is often used instead. Polytheistic and animistic systems of belief make no such distinction; gods and other beings of transcendent power often have complex, ignoble, or even incomprehensible motivations for their acts. Note that while the terms demon and demonic are used in monotheistic faiths as antonyms to divine, they are in fact derived from the Greek word daimón (δαίμων), which itself translates as divinity. Uses in religious discourse There are three distinct usages of divinity and divine in religious discourse: Entity In monotheistic faiths, the word divinity is often used to refer to the singular God central to that faith. Often the word takes the definite article and is capitalized—"the Divinity"—as though it were a proper name or definitive honorific. Divine—capitalized—may be used as an adjective to refer to the manifestations of such a Divinity or its powers: e.g. "basking in the Divine presence..." The terms divinity and divine—uncapitalized, and lacking the definite article—are sometimes used to denote 'god(s) or certain other beings and entities which fall short of absolute Godhood but lie outside the human realm. Divine force or power As previously noted, divinities are closely related to the transcendent force(s) or power(s) credited to them, so much so that in some cases the powers or forces may themselves be invoked independently. This leads to the second usage of the word divine (and less common usage of divinity): to refer to the operation of transcendent power in the world. In its most direct form, the operation of transcendent power implies some form of divine intervention. For monotheistic and polytheistic faiths this usually implies the direct action of one god or another on the course of human events. 
In Greek legend, for instance, it was Poseidon (god of the sea) who raised the storms that blew Odysseus's craft off course on his return journey, and Japanese tradition holds that a god-sent wind saved them from Mongol invasion. Prayers or propitiations are often offered to specific gods to garner favorable interventions in particular enterprises: e.g. safe journeys, success in war, or a season of bountiful crops. Many faiths around the world—from Japanese Shinto and Chinese traditional religion, to certain African practices and the faiths derived from those in the Caribbean, to Native American beliefs—hold that ancestral or household deities offer daily protection and blessings. In monotheistic religions, divine intervention may take very direct forms: miracles, visions, or intercessions by blessed figures. Transcendent force or power may also operate through more subtle and indirect paths. Monotheistic faiths generally support some version of divine providence, which acknowledges that the divinity of the faith has a profound but unknowable plan always unfolding in the world. Unforeseeable, overwhelming, or seemingly unjust events are often thrown on 'the will of the Divine', in deferences like the Muslim inshallah ('as God wills it') and Christian 'God works in mysterious ways'. Often such faiths hold out the possibility of divine retribution as well, where the divinity will unexpectedly bring evil-doers to justice through the conventional workings of the world; from the subtle redressing of minor personal wrongs to such large-scale havoc as the destruction of Sodom and Gomorrah or the biblical Great Flood. Other faiths are even more subtle: the doctrine of karma shared by Buddhism and Hinduism is a divine law similar to divine retribution but without the connotation of punishment: our acts, good or bad, intentional or unintentional, reflect back on us as part of the natural working of the universe. Philosophical Taoism also proposes a transcendent operant principle—transliterated in English as tao or dao, meaning 'the way'—which is neither an entity nor a being per se, but reflects the natural ongoing process of the world. Modern western mysticism and new age philosophy often use the term 'the Divine' as a noun in this latter sense: a non-specific principle or being that gives rise to the world, and acts as the source or wellspring of life. In these latter cases, the faiths do not promote deference, as happens in monotheisms; rather each suggests a path of action that will bring the practitioner into conformance with the divine law: ahimsa—'no harm'—for Buddhist and Hindu faiths; de or te—'virtuous action'—in Taoism; and any of numerous practices of peace and love in new age thinking. Mortal In the third usage, extensions of divinity and divine power are credited to living, mortal individuals. Political leaders are known to have claimed actual divinity in certain early societies—the ancient Egyptian Pharaohs being the premier case—taking a role as objects of worship and being credited with superhuman status and powers. More commonly, and more pertinent to recent history, leaders merely claim some form of divine mandate, suggesting that their rule is in accordance with the will of God. The doctrine of the divine right of kings was introduced as late as the 17th century, proposing that kings rule by divine decree; Japanese Emperors ruled by divine mandate until the inception of the Japanese constitution after World War II. 
Less politically, most faiths have any number of people that are believed to have been touched by divine forces: saints, prophets, heroes, oracles, martyrs, and enlightened beings, among others. Saint Francis of Assisi, in Catholicism, is said to have received instruction directly from God and it is believed that he grants plenary indulgence to all who confess their sins and visit his chapel on the appropriate day. In Greek mythology, Achilles' mother bathed him in the river Styx to give him immortality, and Hercules—as the son of Zeus—inherited near-godly powers. In religious Taoism, Laozi is venerated as a saint with his own powers. Various individuals in the Buddhist faith, beginning with Siddhartha, are considered to be enlightened, and in religious forms of Buddhism they are credited with divine powers. Christ in the Bible is said to be God's Son and is said to have performed divine miracles. In general, mortals with divine qualities are carefully distinguished from the deity or deities in their religion's main pantheon. Even the Christian faith, which generally holds Christ to be identical to God, distinguishes between God the Father and Christ the begotten Son. There are, however, certain esoteric and mystical schools of thought, present in many faiths—Sufis in Islam, Gnostics in Christianity, Advaitan Hindus, Zen Buddhists, as well as several non-specific perspectives developed in new age philosophy—which hold that all humans are in essence divine, or unified with the Divine in a non-trivial way. Such divinity, in these faiths, would express itself naturally if it were not obscured by the social and physical worlds we live in; it needs to be brought to the fore through appropriate spiritual practices. In religions Christianity In the New Testament the Greek word θεῖον (theion) in the Douay Version, is translated as "divinity". Examples are below: Acts 17:29 "Being therefore the offspring of God, we must not suppose the divinity to be like unto gold, or silver, or stone, the graving of art, and device of man." Romans 1:20 "For the invisible things of him, from the creation of the world, are clearly seen, being understood by the things that are made; his eternal power also, and divinity: so that they are inexcusable." Revelation 5:12 "Saying with a loud voice: The Lamb that was slain is worthy to receive power, and divinity, and wisdom, and strength, and honour, and glory, and benediction." The word translated as either "deity", "Godhead", or "divinity" in the Greek New Testament is also the Greek word θεότητος (theotētos), and the one verse that contains it is this: Colossians 2:9 "Quia in ipso inhabitat omnis plenitudo divinitatis [divinity] corporaliter." (Vulgate) "For in him dwelleth all the fulness of the Godhead bodily." (KJV) "Because it is in him that all the fullness of the divine quality dwells bodily." (NWT) "For in him all the fullness of deity lives in bodily form." (NET) "For the full content of divine nature lives in Christ." (TEV) The word "divine" in the New Testament is the Greek word θείας (theias), and is the adjective form of "divinity". Biblical examples from the King James Bible are below: 2 Peter 1:3 "According as his divine power hath given unto us all things that pertain unto life and godliness, through the knowledge of him that hath called us to glory and virtue." 
2 Peter 1:4 "Whereby are given unto us exceeding great and precious promises: that by these ye might be partakers of the divine nature, having escaped the corruption that is in the world through lust."
Latter-day Saints
The most prominent conception of divine entities in the Church of Jesus Christ of Latter-day Saints (LDS Church) is the Godhead, a divine council of three distinct beings: Elohim (the Father), Jehovah (the Son, or Jesus), and the Holy Spirit. Joseph Smith described a nontrinitarian Godhead, with God the Father and Jesus Christ each having individual physical bodies, and the Holy Spirit as a distinct personage with a spirit body. Smith also introduced the existence of a Heavenly Mother in the King Follett Discourse, but very little is acknowledged or known beyond her existence. Mormons hold a belief in the divine potential of humanity; Smith taught a form of divinization in which mortal men and women can become like God through salvation and exaltation. Lorenzo Snow succinctly summarized this in a couplet that is often repeated within the LDS Church: "As man now is, God once was: As God now is, man may be."
Wicca
Wiccan views of divinity are generally theistic and revolve around a Goddess and a Horned God, and are thus generally duotheistic. In traditional Wicca, as expressed in the writings of Gerald Gardner and Doreen Valiente, the emphasis is on the theme of divine gender polarity, and the God and Goddess are regarded as equal and opposite divine cosmic forces. In some newer forms of Wicca, such as feminist or Dianic Wicca, the Goddess is given primacy or even exclusivity. In some forms of traditional witchcraft that share a similar duotheistic theology, the Horned God is given precedence over the Goddess.
See also
Apotheosis
Christology
Deity
Divinization (Christian)
Ho'oponopono (Morrnah section)
List of deities
Sacred
Divinity
[ "Biology" ]
2,687
[ "Behavior", "Human behavior", "Spirituality" ]
8,376
https://en.wikipedia.org/wiki/Day
A day is the time period of a full rotation of the Earth with respect to the Sun. On average, this is 24 hours (86,400 seconds). As a day passes at a given location it experiences morning, noon, afternoon, evening, and night. This daily cycle drives circadian rhythms in many organisms, which are vital to many life processes.
A collection of sequential days is organized into calendars as dates, almost always into weeks, months and years. A solar calendar organizes dates based on the Sun's annual cycle, giving consistent start dates for the four seasons from year to year. A lunar calendar organizes dates based on the Moon's lunar phase.
In common usage, a day starts at midnight, written as 00:00 or 12:00 am in 24- or 12-hour clocks, respectively. Because the time of midnight varies between locations, time zones are set up to facilitate the use of a uniform standard time. Other conventions are sometimes used; for example, the Jewish religious calendar counts days from sunset to sunset, so the Jewish Sabbath begins at sundown on Friday. In astronomy, a day begins at noon so that observations throughout a single night are recorded as happening on the same day.
In specific applications, the definition of a day is slightly modified, such as in the SI day (exactly 86,400 seconds) used for computers and standards keeping, local mean time accounting for the Earth's natural fluctuation of a solar day, and the stellar day and sidereal day (defined using the celestial sphere) used for astronomy. In some countries outside of the tropics, daylight saving time is practiced, and each year there will be one 23-hour civil day and one 25-hour civil day. Due to slight variations in the rotation of the Earth, there are rare occasions when a leap second is inserted at the end of a UTC day, so while almost all days have a duration of 86,400 seconds, there are exceptional days of 86,401 seconds (in the half-century from 1972 through 2022, a total of 27 leap seconds were inserted, roughly one every other year).
Etymology
The term comes from the Old English term dæġ, with cognates such as dagur in Icelandic, Tag in German, and dag in Norwegian, Danish, Swedish and Dutch – all stemming from the Proto-Germanic root *dagaz.
Definitions
Apparent and mean solar day
Several definitions of this universal human concept are used according to context, need, and convenience. Besides the day of 24 hours (86,400 seconds), the word day is used for several different spans of time based on the rotation of the Earth around its axis. An important one is the solar day, the time it takes for the Sun to return to its culmination point (its highest point in the sky). Because Earth's orbit is eccentric, the Sun occupies one of the orbit's foci rather than its centre. Consequently, by Kepler's second law, the planet travels at different speeds at different positions in its orbit, and thus a solar day is not the same length of time throughout the orbital year. Because the Earth moves along an eccentric orbit around the Sun while spinning on an inclined axis, this period can be up to 7.9 seconds more than (or less than) 24 hours. In recent decades, the average length of a solar day on Earth has been about 86,400.002 seconds (24.0000006 hours). There are currently about 365.2421875 solar days in one mean tropical year.
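As a rough illustration of how that roughly 2-millisecond daily excess relates to leap seconds, the following Python sketch uses only the figures quoted above; treating the excess as constant is a simplification, since in reality it varies from year to year.

# Illustrative only: how a ~2 ms daily excess over 86,400 SI seconds
# would accumulate toward a full leap second if it were constant.
MEAN_SOLAR_DAY = 86_400.002   # seconds, recent average quoted above
SI_DAY = 86_400.0             # seconds

excess = MEAN_SOLAR_DAY - SI_DAY            # ~0.002 s per day
days_per_leap_second = 1.0 / excess         # ~500 days
print(f"daily excess: {excess * 1000:.1f} ms")
print(f"one leap second roughly every {days_per_leap_second:.0f} days "
      f"(~{days_per_leap_second / 365.25:.1f} years)")

At that rate a leap second would be needed about every year and a half; the 27 leap seconds actually inserted between 1972 and 2022 average out to roughly one every other year, because the excess is not constant.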
Ancient custom has a new day starting at either the rising or the setting of the Sun on the local horizon (Italian reckoning, for example, being 24 hours from sunset, old style). The exact moment of, and the interval between, two sunrises or sunsets depends on the geographical position (longitude and latitude, as well as altitude) and the time of year (as indicated by ancient hemispherical sundials).
A more constant day can be defined by the Sun passing through the local meridian, which happens at local noon (upper culmination) or midnight (lower culmination). The exact moment depends on the geographical longitude, and to a lesser extent on the time of year. The length of such a day is nearly constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.
A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial equator; the speed is the same as the average speed of the real Sun, but this removes the variation over a year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).
In terms of Earth's rotation, the average day corresponds to a rotation of about 360.9856°. A day requires more than 360° of rotation because of the Earth's revolution around the Sun: since a full year contains slightly more than 360 days, the Earth moves slightly less than 1° along its orbit each day, so the solar day requires slightly less than 361° of rotation. Elsewhere in the Solar System or in other parts of the universe, a day is a full rotation of another large astronomical object with respect to its star.
Civil day
For civil purposes, a common clock time is typically defined for an entire region based on the local mean solar time at a central meridian. Such time zones began to be adopted about the middle of the 19th century, when railroads with regularly occurring schedules came into use, and most major countries had adopted them by 1929. As of 2015, 40 such zones are in use worldwide; the central zone, from which all others are defined as offsets, is known as UTC+00 and uses Coordinated Universal Time (UTC).
The most common convention starts the civil day at midnight: this is near the time of the lower culmination of the Sun on the central meridian of the time zone. Such a day may be called a calendar day. A day is commonly divided into 24 hours, with each hour made up of 60 minutes and each minute composed of 60 seconds.
Sidereal day
A sidereal day or stellar day is the span of time it takes for the Earth to make one entire rotation with respect to the celestial background or a distant star (assumed to be fixed). Measuring a day in this way is used in astronomy. A sidereal day is about 4 minutes shorter than the 24-hour solar day: 23 hours 56 minutes and 4.09 seconds, or 0.99726968 of a solar day. There are about 366.2422 stellar days in one mean tropical year (one stellar day more than the number of solar days). Besides the stellar day on Earth, other bodies in the Solar System have days of their own, whose durations differ from Earth's.
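These figures can be cross-checked with a short Python sketch; it simply multiplies out the ratios quoted in this section, so the constants (0.99726968 and 365.2422) are taken directly from the text rather than computed from first principles.

# Cross-check of the sidereal-day figures quoted above.
SOLAR_DAY_S = 86_400.0                       # mean solar day, in seconds
sidereal_day_s = 0.99726968 * SOLAR_DAY_S    # about 86,164.1 s

hours, rem = divmod(sidereal_day_s, 3600)
minutes, seconds = divmod(rem, 60)
print(f"sidereal day ~ {int(hours)} h {int(minutes)} min {seconds:.2f} s")  # ~23 h 56 min 4.1 s

# One tropical year holds one more sidereal rotation than it holds solar days:
solar_days_per_year = 365.2422
print(f"sidereal days per year ~ {solar_days_per_year * SOLAR_DAY_S / sidereal_day_s:.4f}")

# Rotation per solar day: 360 degrees plus the extra turn needed to face the Sun again.
print(f"rotation per solar day ~ {360.0 * SOLAR_DAY_S / sidereal_day_s:.4f} degrees")  # ~360.9856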
In the International System of Units
In the International System of Units (SI), the day is not an official unit, but it is accepted for use with SI. A day, with symbol d, is defined in terms of SI units as 86,400 seconds; the second is the SI base unit of time. In 1967–68, the 13th General Conference on Weights and Measures (CGPM), in Resolution 1, redefined the second as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium-133 atom". This makes the SI-based day last exactly 794,243,384,928,000 of those periods.
In decimal and metric time
Various decimal and metric time proposals have been made, but they do not redefine the day; instead they use the day or the sidereal day as a base unit. Metric time uses metric prefixes to keep time, with the day as the base unit and smaller units as fractions of a day: a metric hour (deci) is 1/10 of a day, a metric minute (milli) is 1/1000 of a day, and so on. Similarly, in decimal time the length of the day is the same as in standard time, but the day is split into 10 hours, 10 days comprise a décade – the equivalent of a week – and 3 décades make a month. Several decimal time proposals likewise kept the day: Henri de Sarrauton's proposal retained days and subdivided hours into 100 minutes; in Mendizábal y Tamborel's proposal, the sidereal day was the basic unit, with subdivisions made upon it; and Rey-Pailhade's proposal divided the day into 100 cés.
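The definitions above lend themselves to a couple of quick numerical checks. The Python sketch below multiplies out the SI day in caesium periods and converts an ordinary clock time into the fraction-of-a-day units just described; the helper function and the unit labels in the comments are illustrative, not standard definitions.

# Quick numerical checks of the definitions above.

# SI day expressed in caesium-133 transition periods:
CAESIUM_PERIODS_PER_SECOND = 9_192_631_770
print(86_400 * CAESIUM_PERIODS_PER_SECOND)   # 794243384928000

# Fraction-of-a-day units, as in metric time (deci = 1/10, milli = 1/1000 of a day):
def day_fraction(hours, minutes, seconds):
    """Elapsed fraction of a standard 24-hour day (illustrative helper)."""
    return (hours * 3600 + minutes * 60 + seconds) / 86_400

f = day_fraction(18, 0, 0)     # 6:00 pm on a standard clock
print(f * 10)                  # 7.5   -> "metric hours" elapsed (tenths of a day)
print(f * 1000)                # 750.0 -> "metric minutes" elapsed (thousandths of a day)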
Other definitions
The word day refers to various similarly defined ideas, such as:
Full day
24 hours (exactly), a nychthemeron
A day-counting approximation, for example "See you in three days." or "the following day"
The full day covering both the dark and light periods, beginning from the start of the dark period or from a point near the middle of the dark period
A full dark and light period, sometimes called a nychthemeron in English, from the Greek for night-day, or more colloquially "24 hours"; other languages often use an equivalent expression, and some have a separate word for a full day
Part of a date: the day of the year (doy) in ordinal dates, the day of the month (dom) in calendar dates, or the day of the week (dow) in week dates
Time regularly spent at paid work on a single work day; cf. man-day and workweek
Daytime
The period of light when the Sun is above the local horizon (that is, the time period from sunrise to sunset)
The time period from 06:00–18:00 (6:00 am – 6:00 pm) or 21:00 (9:00 pm), or another fixed clock period overlapping or offset from other time periods such as "morning", "afternoon", or "evening"
The time period from first light ("dawn") to last light ("dusk")
Other
A specific period of the day, which may vary by context, such as "the school day" or "the work day"
Variations in length
Mainly due to tidal deceleration – the Moon's gravitational pull slowing down the Earth's rotation – the Earth's rotational period is gradually lengthening. Because of the way the second is defined, the mean length of a solar day is now about 86,400.002 seconds and is increasing by about 2 milliseconds per century. Since the rotation rate of the Earth is slowing, the SI second has fallen out of sync with a second derived from the rotational period. This created the need for leap seconds, which insert extra seconds into Coordinated Universal Time (UTC). Although a civil day is typically 86,400 seconds long, it can be either 86,401 or 86,399 SI seconds long on a day when a leap second is applied. Beyond the slow variation from tidal deceleration, other factors minutely affect the day's length, which makes the placement of leap seconds irregular. Leap seconds are announced in advance by the International Earth Rotation and Reference Systems Service (IERS), which measures the Earth's rotation and determines whether a leap second is necessary.
Geological day lengths
Using a method pioneered by paleontologist John W. Wells, the day lengths of geological periods have been estimated by measuring sedimentation rings in coral fossils, since some biological systems are affected by the tide. The length of a day at the Earth's formation is estimated at 6 hours. Arbab I. Arbab plotted day lengths over time and found a curved trend, which he attributed to changes in the volume of water present affecting the Earth's rotation.
Boundaries
For most diurnal animals, the day naturally begins at dawn and ends at sunset. Humans, with their cultural norms and scientific knowledge, have employed several different conceptions of the day's boundaries. In the Hebrew Bible, Genesis 1:5 defines a day in terms of "evening" and "morning" before recounting the creation of the Sun to illuminate it: "And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day." The Jewish day begins at either sunset or nightfall (when three second-magnitude stars appear). Medieval Europe also followed this tradition, known as Florentine reckoning: in this system, a reference like "two hours into the day" meant two hours after sunset, so times during the evening must be shifted back one calendar day in modern reckoning. Days such as Christmas Eve, Halloween ("All Hallows' Eve"), and the Eve of Saint Agnes are remnants of the older pattern in which holidays began during the prior evening.
The common convention among the ancient Romans, the ancient Chinese, and in modern times is for the civil day to begin at midnight, i.e. 00:00, and to last a full 24 hours until 24:00, i.e. 00:00 of the next day. The International Meridian Conference of 1884 resolved "That the Conference expresses the hope that as soon as may be practicable the astronomical and nautical days will be arranged everywhere to begin at midnight." In ancient Egypt the day was reckoned from sunrise to sunrise. Prior to 1926, Turkey had two time systems: Turkish, counting the hours from sunset, and French, counting the hours from midnight.
Parts
Humans have divided the day into rough periods, which can have cultural implications as well as effects on human biological processes. The parts of the day do not have set times; they can vary by lifestyle or by the hours of daylight in a given place.
Daytime
Daytime is the part of the day during which sunlight directly reaches the ground, assuming there are no obstacles. The length of daytime averages slightly more than half of the 24-hour day. Two effects make daytime on average longer than night. The Sun is not a point but has an apparent size of about 32 minutes of arc, so its upper edge is about 16 minutes of arc above its centre. Additionally, the atmosphere refracts sunlight so that some of it reaches the ground even when the Sun is below the horizon by about 34 minutes of arc. Combining the two effects, first light reaches the ground when the centre of the Sun is still about 50 minutes of arc below the horizon. Thus, daytime is on average around 7 minutes longer than 12 hours.
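A rough Python sketch can reproduce that 7-minute figure. It uses the standard sunrise hour-angle formula (a common approximation, not something stated in this article) with the Sun's centre taken to be 50 minutes of arc below the horizon at first and last light, evaluated at the equinox, when the solar declination is about 0°; the function name and parameters are illustrative.

import math

# Approximate length of daytime, in hours, treating sunrise/sunset as the
# moments when the Sun's centre is 50 arcminutes below the horizon.
def daytime_hours(latitude_deg, declination_deg=0.0, depression_arcmin=50.0):
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    h0 = math.radians(-depression_arcmin / 60.0)   # altitude of Sun's centre at first/last light
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    cos_h = max(-1.0, min(1.0, cos_h))             # clamp for polar day / polar night
    hour_angle_deg = math.degrees(math.acos(cos_h))
    return 2.0 * hour_angle_deg / 15.0             # 15 degrees of rotation per hour

print(f"{daytime_hours(0.0):.2f} h")   # ~12.11 h at the equator: about 7 minutes over 12 hours

At higher latitudes the equinox excess is somewhat larger, because the Sun crosses the horizon at a shallower angle and takes longer to traverse the 50-arcminute band.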
Daytime is further divided into morning, afternoon, and evening. Morning occurs between sunrise and noon. Afternoon occurs between noon and sunset, or between noon and the start of evening; this period sees humans' highest body temperature, an increase in traffic collisions, and a decrease in productivity. Evening begins around 5 or 6 pm, or when the Sun sets, and ends when one goes to bed.
Twilight
Twilight is the period before sunrise and after sunset in which there is natural light but no direct sunlight. The morning twilight begins at dawn and ends at sunrise, while the evening twilight begins at sunset and ends at dusk. Both periods of twilight can be divided into civil, nautical, and astronomical twilight: civil twilight is when the Sun is up to 6 degrees below the horizon, nautical twilight when it is up to 12 degrees below, and astronomical twilight when it is up to 18 degrees below.
Night
Night is the period in which the sky is dark: the period between dusk and dawn when no light from the Sun is visible. Light pollution during the night can affect human and animal life, for example by disrupting sleep.
See also
Determination of the day of the week
Holiday
ISO 8601
Season, for a discussion of daylight and darkness at various latitudes
Synodic day
World Meteorological Day
Day
[ "Physics", "Mathematics" ]
3,247
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]