Dataset schema:
  id             int64    39 – 79M
  url            string   31 – 227 chars
  text           string   6 – 334k chars
  source         string   1 – 150 chars
  categories     list     1 – 6 items
  token_count    int64    3 – 71.8k
  subcategories  list     0 – 30 items
8,665,986
https://en.wikipedia.org/wiki/Verilog%20Procedural%20Interface
The Verilog Procedural Interface (VPI), originally known as PLI 2.0, is an interface primarily intended for the C programming language. It allows behavioral Verilog code to invoke C functions, and C functions to invoke standard Verilog system tasks. The Verilog Procedural Interface is part of the IEEE 1364 Programming Language Interface standard; the most recent edition of the standard is from 2005. VPI is sometimes also referred to as PLI 2, since it replaces the deprecated Program Language Interface (PLI). Although PLI 1 was deprecated in favor of VPI (also known as PLI 2), PLI 1 is still commonly used over VPI because its tf_put and tf_get function interface is much more widely documented in Verilog reference books.

Use of C++

C++ can be used with both VPI (PLI 2.0) and PLI 1.0 by giving the callback functions C linkage with the extern "C" declaration built into C++ compilers.

Example

As an example, consider the following Verilog code fragment:

  val = 41;
  $increment(val);
  $display("After $increment, val=%d", val);

Suppose the increment system task increments its first parameter by one. Using C and the VPI mechanism, the increment task can be implemented as follows:

  // Implements the increment system task
  static int increment(char *userdata) {
      vpiHandle systfref, args_iter, argh;
      struct t_vpi_value argval;
      int value;

      // Obtain a handle to the argument list
      systfref = vpi_handle(vpiSysTfCall, NULL);
      args_iter = vpi_iterate(vpiArgument, systfref);

      // Grab the value of the first argument
      argh = vpi_scan(args_iter);
      argval.format = vpiIntVal;
      vpi_get_value(argh, &argval);
      value = argval.value.integer;
      vpi_printf("VPI routine received %d\n", value);

      // Increment the value and put it back as first argument
      argval.value.integer = value + 1;
      vpi_put_value(argh, &argval, NULL, vpiNoDelay);

      // Cleanup and return
      vpi_free_object(args_iter);
      return 0;
  }

Also, a function that registers this system task is necessary. This function is invoked prior to elaboration or resolution of references when it is placed in the externally visible vlog_startup_routines[] array.

  // Registers the increment system task
  void register_increment() {
      s_vpi_systf_data data = {vpiSysTask, 0, "$increment", increment, 0, 0, 0};
      vpi_register_systf(&data);
  }

  // Contains a zero-terminated list of functions that have to be called at startup
  void (*vlog_startup_routines[])() = { register_increment, 0 };

The C code is compiled into a shared object that is then loaded by the Verilog simulator. Simulating the Verilog fragment above now produces the following output:

  VPI routine received 41
  After $increment, val=42

See also

SystemVerilog DPI

Sources

IEEE Xplore

Sources for Verilog VPI interface:
Teal, for C++
JOVE, for Java
Ruby-VPI, for Ruby
ScriptEDA, for Perl, Python, Tcl
Cocotb, for Python
OrigenSim, for Ruby

External links

Verilog PLI primer
Verilog VPI tutorial

Categories: IEEE standards
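As a sketch of the C++ route described above: the VPI callbacks can be defined inside an extern "C" block so the simulator, which expects C linkage, can invoke them. This is illustrative only — it assumes the simulator's vpi_user.h header and uses a hypothetical $hello task rather than the $increment example:

  // Hypothetical $hello system task implemented in C++ (a sketch).
  // extern "C" gives the callbacks and the startup array C linkage.
  #include <vpi_user.h>

  extern "C" {

  // calltf routine: runs whenever $hello is executed in Verilog code
  static PLI_INT32 hello(PLI_BYTE8 *userdata) {
      (void)userdata;  // unused
      vpi_printf((char *)"Hello from C++\n");
      return 0;
  }

  // Registers the $hello system task, mirroring register_increment above
  static void register_hello(void) {
      s_vpi_systf_data data = {vpiSysTask, 0, (char *)"$hello",
                               hello, 0, 0, 0};
      vpi_register_systf(&data);
  }

  // The startup array must also have C linkage
  void (*vlog_startup_routines[])(void) = { register_hello, 0 };

  }  // extern "C"

The C++ body can use classes, templates, and the standard library freely; only the entry points the simulator calls need C linkage.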
Verilog Procedural Interface
[ "Technology" ]
857
[ "Computer standards", "IEEE standards" ]
8,666,134
https://en.wikipedia.org/wiki/Physcomitrella%20patens
Physcomitrella patens is a synonym of Physcomitrium patens, the spreading earthmoss. It is a moss, a bryophyte used as a model organism for studies on plant evolution, development, and physiology.

Distribution and ecology

Physcomitrella patens is an early colonist of exposed mud and earth around the edges of pools of water. P. patens has a disjunct distribution in temperate parts of the world, with the exception of South America. The standard laboratory strain is the "Gransden" isolate, collected by H. Whitehouse from Gransden Wood in Cambridgeshire in 1962.

Model organism

Mosses share fundamental genetic and physiological processes with vascular plants, although the two lineages diverged early in land-plant evolution. A comparative study between modern representatives of the two lines may give insight into the evolution of mechanisms that contribute to the complexity of modern plants. In this context, P. patens is used as a model organism.

P. patens is one of a few known multicellular organisms with highly efficient homologous recombination, meaning that an exogenous DNA sequence can be targeted to a specific genomic position (a technique called gene targeting) to create knockout mosses. This approach is called reverse genetics; it is a powerful and sensitive tool to study the function of genes and, when combined with studies in higher plants such as Arabidopsis thaliana, can be used to study molecular plant evolution.

The targeted deletion or alteration of moss genes relies on the integration of a short DNA strand at a defined position in the genome of the host cell. Both ends of this DNA strand are engineered to be identical to this specific gene locus. The DNA construct is then incubated with moss protoplasts in the presence of polyethylene glycol. As mosses are haploid organisms, the regenerating moss filaments (protonemata) can be directly assayed for gene targeting within six weeks using PCR methods. The first study using knockout moss appeared in 1998 and functionally identified ftsZ as a pivotal gene for the division of an organelle in a eukaryote.

In addition, P. patens is increasingly used in biotechnology. Examples are the identification of moss genes with implications for crop improvement or human health, and the safe production of complex biopharmaceuticals in moss bioreactors. By multiple gene knockout, Physcomitrella plants were engineered that lack plant-specific post-translational protein glycosylation. These knockout mosses are used to produce complex biopharmaceuticals in a process called molecular farming.

The genome of P. patens, with about 500 megabase pairs organized into 27 chromosomes, was completely sequenced in 2008. Physcomitrella ecotypes, mutants, and transgenics are stored and made freely available to the scientific community by the International Moss Stock Center (IMSC). The accession numbers given by the IMSC can be used in publications to ensure safe deposit of newly described moss materials.

Lifecycle

Like all mosses, the lifecycle of P. patens is characterized by an alternation of two generations: a haploid gametophyte that produces gametes, and a diploid sporophyte in which haploid spores are produced. A spore develops into a filamentous structure called protonema, composed of two types of cells: chloronema, with large and numerous chloroplasts, and caulonema, with very fast growth. Protonema filaments grow exclusively by tip growth of their apical cells and can originate side branches from subapical cells.
Some side-branch initial cells can differentiate into buds rather than side branches. These buds give rise to gametophores (0.5–5.0 mm), more complex structures bearing leaf-like structures, rhizoids, and the sexual organs: female archegonia and male antheridia. P. patens is monoicous, meaning that male and female organs are produced in the same plant. If water is available, flagellate sperm cells can swim from the antheridia to an archegonium and fertilize the egg within. The resulting diploid zygote develops into a sporophyte composed of a foot, seta, and capsule, where thousands of haploid spores are produced by meiosis.

DNA repair and homologous recombination

P. patens is an excellent model in which to analyze repair of DNA damage in plants by the homologous recombination pathway. Failure to repair double-strand breaks and other DNA damage in somatic cells by homologous recombination can lead to cell dysfunction or death, and when failure occurs during meiosis it can cause loss of gametes. The genome sequence of P. patens has revealed the presence of numerous genes that encode proteins necessary for repair of DNA damage by homologous recombination and by other pathways.

PpRAD51, a protein at the core of the homologous recombination repair reaction, is required to preserve genome integrity in P. patens. Loss of PpRAD51 causes marked hypersensitivity to the double-strand break-inducing agent bleomycin, indicating that homologous recombination is used for repair of somatic cell DNA damage. PpRAD51 is also essential for resistance to ionizing radiation.

The DNA mismatch repair protein PpMSH2 is a central component of the P. patens mismatch repair pathway, which targets base pair mismatches arising during homologous recombination. The PpMsh2 gene is necessary in P. patens to preserve genome integrity. Genes Ppmre11 and Pprad50 of P. patens encode components of the MRN complex, the principal sensor of DNA double-strand breaks. These genes are necessary for accurate homologous recombinational repair of DNA damage in P. patens. Mutant plants defective in either Ppmre11 or Pprad50 exhibit severely restricted growth and development (possibly reflecting accelerated senescence), and enhanced sensitivity to UV-B and bleomycin-induced DNA damage compared to wild-type plants.

Taxonomy

P. patens was first described by Johann Hedwig in his 1801 work Species Muscorum Frondosorum, under the name Phascum patens. Physcomitrella is sometimes treated as a synonym of the genus Aphanorrhegma, in which case P. patens is known as Aphanorrhegma patens. The generic name Physcomitrella implies a resemblance to Physcomitrium, which is named for its large calyptra, unlike that of Physcomitrella. In 2019 it was proposed that the correct name for this moss is Physcomitrium patens.

External links

cosmoss.org – moss transcriptome and genome resource, including a genome browser
The Japanese Physcomitrella transcriptome resource (Physcobase)
The NCBI Physcomitrella patens genome project page
JGI genome browser
The moss Physcomitrella patens gives insights into RNA interference in plants
A small moss turns professional
Physcomitrella patens facts, developmental stages, organs at GeoChemBio

Categories: Plant models, Funariales, Plants described in 1801, Taxa named by Philipp Bruch
Physcomitrella patens
[ "Biology" ]
1,527
[ "Model organisms", "Plant models" ]
8,666,685
https://en.wikipedia.org/wiki/Molecular%20Koch%27s%20postulates
Molecular Koch's postulates are a set of experimental criteria that must be satisfied to show that a gene found in a pathogenic microorganism encodes a product that contributes to the disease caused by the pathogen. Genes that satisfy molecular Koch's postulates are often referred to as virulence factors. The postulates were formulated by the microbiologist Stanley Falkow in 1988 and are based on Koch's postulates.

Postulates

As per Falkow's original descriptions, the three postulates are:

1. "The phenotype or property under investigation should be associated with pathogenic members of a genus or pathogenic strains of a species.
2. Specific inactivation of the gene(s) associated with the suspected virulence trait should lead to a measurable loss in pathogenicity or virulence.
3. Reversion or allelic replacement of the mutated gene should lead to restoration of pathogenicity."

To apply the molecular Koch's postulates to human diseases, researchers must identify which microbial genes are potentially responsible for symptoms of pathogenicity, often by sequencing the full genome to compare which nucleotides are homologous to the protein-coding genes of other species. Alternatively, scientists can identify which mRNA transcripts are at elevated levels in the diseased organs of infected hosts. Additionally, the tester must identify and verify methods for inactivating and reactivating the gene being studied.

In 1996, Fredricks and Relman proposed seven molecular guidelines for establishing microbial disease causation:

1. "A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased (i.e., with anatomic, histologic, chemical, or clinical evidence of pathology) and not in those organs that lack pathology.
2. Fewer, or no, copy numbers of pathogen-associated nucleic acid sequences should occur in hosts or tissues without disease.
3. With resolution of disease (for example, with clinically effective treatment), the copy number of pathogen-associated nucleic acid sequences should decrease or become undetectable. With clinical relapse, the opposite should occur.
4. When sequence detection predates disease, or sequence copy number correlates with severity of disease or pathology, the sequence-disease association is more likely to be a causal relationship.
5. The nature of the microorganism inferred from the available sequence should be consistent with the known biological characteristics of that group of organisms. When phenotypes (e.g., pathology, microbial morphology, and clinical features) are predicted by sequence-based phylogenetic relationships, the meaningfulness of the sequence is enhanced.
6. Tissue-sequence correlates should be sought at the cellular level: efforts should be made to demonstrate specific in-situ hybridization of microbial sequence to areas of tissue pathology and to visible microorganisms or to areas where microorganisms are presumed to be located.
7. These sequence-based forms of evidence for microbial causation should be reproducible."

Categories: Epidemiology, Microbiology, Diseases and disorders, Cause (medicine)
Molecular Koch's postulates
[ "Chemistry", "Biology", "Environmental_science" ]
658
[ "Epidemiology", "Microbiology", "Environmental social science", "Microscopy" ]
8,666,821
https://en.wikipedia.org/wiki/Missing%20letter%20effect
In cognitive psychology, the missing letter effect refers to the finding that, when people are asked to consciously detect target letters while reading text, they miss more letters in frequent function words (e.g. the letter "h" in "the") than in less frequent content words. Understanding how, why and where this effect arises is useful in explaining the range of cognitive processes associated with reading text. The missing letter effect has also been referred to as the reverse word superiority effect, since it describes a phenomenon where letters in more frequent words fail to be identified, instead of letter identification benefitting from increased word frequency.

The method researchers use to measure this effect is termed a letter detection task. This is a paper-and-pencil procedure in which readers are asked to circle a target letter, such as "t", every time they come across it while reading a prose passage or text. Researchers measure the number of letter detection errors, i.e. missed target letters, in the texts. The missing letter effect is more likely to appear when reading words that are part of a normal sequence than when words are embedded in a mixed-up sequence (e.g. readers asked to read backwards). Although the missing letter effect is a common phenomenon, several factors influence its magnitude, among them age (development), language proficiency, and the position of target letters in words.

Function vs content words

When testing for the missing letter effect, prose passages are used that mix common function words and rare content words. Common function words are words that are used and seen very frequently in everyday texts. They connect content words and consist of pronouns, articles, prepositions, conjunctions, and auxiliary verbs. Common examples of function words include "the", "and", "on", "of" and "for", and the majority of these words are short, usually around 1–4 letters long. Because of their frequency and commonness, these words are seldom paid attention to or consciously observed. Content words usually consist of nouns and regular verbs and are rarer than frequent function words. These word types are usually given more attention. The word "ant" is an example of a rare content word in comparison to a structurally similar-looking frequent function word like "and".

Letter detection tasks

Letter detection tasks are used to demonstrate and measure the missing letter effect. Participants are given prose passages or continuous texts to read and are told to circle every occurrence of a target letter. The missing letter effect is observed when target letters are missed or not circled, and these omissions or letter detection errors occur more often in frequent function words than in rare content words. Saint-Aubin and Poirier reported from their experiment that letter detection omissions for the same word are higher when the word is presented as a definite article than when it is a pronoun.

Hypotheses

Early

Two primary hypotheses tried to explain the missing letter effect: Healy (1994) emphasized identification processes playing a crucial role, almost entirely focusing on word frequency. This hypothesis is primarily referred to as the unitization model and relates to familiar visual configuration.
In this model, once readers have finished processing the text at a higher level (units like words), they move on and continue reading a different section of text, which interferes with the completion of processing of lower-level units (like letters). Common words are processed and "read in terms of units larger than the letter (e.g. syllables or whole words) whereas rare words tend to be read in smaller units (e.g. letters)". The result is more letter detection errors arising from insufficient processing of the lower-level units.

The processing time hypothesis, also proposed by Healy, provides another explanation for the missing letter effect. The amount of time readers take to process a word dictates the occurrence of letter detection errors: as processing time increases, letter detection errors decrease, and processing time decreases as word familiarity (or word frequency) increases. The missing letter effect occurs due to faster processing of common function words at the higher level than rare content words, a result of "the higher familiarity of their visual patterns".

Koriat and Greenberg (1994) give another explanation for the missing letter effect, viewing the structural role of the word within a sentence (i.e. function words vs. content words) as crucial. This is termed the "alternative structural hypothesis". Within this hypothesis, rather than focusing on familiarity as a determinant of the effect, it is "the word's role in syntactic structure of a sentence" which leads common function words to "recede into the background ... to allow more meaningful content words to be brought into the foreground". In this sense, the structural organization of texts overrules the perceptual organization (as in the unitization model) in producing the missing letter effect. In the early stages of processing a text, readers speculatively formulate its structural frame, constructed from a fast but shallow processing of function words and punctuation. The missing letter effect unfolds because it is more difficult to detect target letters within function words, which are "pushed into the background" following structural analysis, than in content words, which "stand in the foreground" and carry the meaning of the text.

Both accounts were thoroughly investigated, but neither could completely explain the effect.

Contemporary

A newer model called the guidance-organization (GO) model was proposed to explain the missing letter effect. It combines the two earlier models proposed by Healy and by Koriat and Greenberg, and holds that word frequency and word function together influence the rate of letter detection errors and omissions. Both the unitization and structural processes occur, but not concurrently. During reading, unitization processing takes place before structural processing and assists "lexical identification", particularly of common function words, which establish the basis of the phrase or sentence's structural organisation. The organisation of sentence structure then "guides attention" to the higher-level units and less frequent content words to extract meaning. As Greenberg et al. explain: "The time spent processing high-frequency function words at the whole-word level is relatively short, thereby enabling the fast and early use of these words to build a tentative structural frame."
In short, the GO model "is an account of how readers coordinate text elements to achieve on-line integration" and analysis of the meaning of the text. Although this hypothesis models the missing letter effect, its limitation is that it is difficult to relate to models of reading.

Klein and Saint-Aubin proposed the attentional-disengagement (AD) model, which similarly includes aspects of the two earlier models but emphasizes the role of attention in reading and comprehension. In this model, letter detection errors increase, and the magnitude of the missing letter effect grows, when there is rapid attentional disengagement from a word in which a target letter is embedded. Saint-Aubin et al. propose that the likelihood of identifying a target letter within a word and/or text is contingent upon how much information about the possible presence of the target letter is available at that time. The timing of attentional disengagement from a "target-containing" word essentially produces the missing letter effect: attention disengages faster from function words than from content words.

Influential factors

Age (development)

Developmental change, grade level, and reading skills generally increase with age, and all of these factors have some influence on the missing letter effect. The number of letter detection errors and the size of the missing letter effect increase with age. When testing primary and elementary school children from grades one to four, the missing letter effect is larger for children who have better reading skills: they tend to make more letter detection errors on function words than on rare content words. When comparing primary school children (second graders) with college students, the same effect is found: the older students miss target letters in function words more frequently than younger students do. When comparing adults and second graders, the missing letter effect gets larger with age, but only for differences in letter detection errors in function words, not in word frequency. The researchers Greenberg, Koriat and Vellutino explain these findings, writing that the "missing letter effect arises very early in reading, by the first or second grade, and that its magnitude increases with grade level". As reading skills and ability improve through developmental change, younger readers gain a greater ability to process and understand texts and their structure. Because the configuration of texts and words influences the missing letter effect and letter detection errors, younger readers who have not yet developed enough awareness of text structure are not affected as greatly as older readers by structural function words when analysing passages.

The conclusion that the magnitude of the missing letter effect increases with age, development, and grade level is consistent with both the GO model and the AD model. These hypotheses assume that more developed, better readers, who are generally older, display more responsiveness to word frequency, in that they omit more target letters in more frequent words than younger, less developed readers do. The missing letter effect also depends on word function, more so for older, more developed and better readers, as they are better at "using information about the probable location of function words in a sentence".

Language proficiency

The missing letter effect is influenced by language proficiency when proficiency differs across two or more languages for one person.
An experiment by Bovee and Raney recruited people who speak English proficiently and have a low proficiency level in Spanish to take part in a letter detection task with comprehension questions to follow. The results show that more letter detection errors are made when readers read passages in their proficient language than when the passages are written in the language in which they have low proficiency. Both function words and content words are presented in the texts, and more letter omissions occur in function words than in content words when people read in their proficient language. When people read texts in their less proficient language, they omit more target letters in content words than in function words.

Both the GO and AD models are effective in explaining and predicting how the missing letter effect is greater for readers reading in their proficient language. Word familiarity, and a greater knowledge of word frequency for function words and their structural functions, allows readers (who read text in their first language) to process words in a "top-down" manner, and increases target letter omissions. For readers reading in their less proficient language, word familiarity and knowledge of word frequency and function are much more limited. Because of this, they process text more thoroughly and pay more attention to individual words and "letter by letter word identification", which results in fewer omissions of target letters and a smaller missing letter effect.

Letter position in words

The position of letters in words and the position of suffix morphemes influence word identification, letter detection, and the missing letter effect in texts. The letters at the start and end of words, i.e. the first and last letter of a word, contribute to how people read and recognize words. When readers take part in the letter detection task and are given a connected text to read, there are fewer letter detection errors for a target letter (for example "t") when it is the initial letter of a word (e.g. tree) than when it is embedded within a word (e.g. path). Drewnowski and Healy account for this by noting that the initial letter of a word is "more separable from the rest of the word" and is "easier to detect because it can be processed individually". When letters are transposed in words within a text, the last letter of these words is important in assisting target letter detection. The pace of reading is reduced when letters are transposed in words, which allows for more comprehensive processing and provides a reason why the last letter of a word can be identified more easily.

Drewnowski and Healy's (1980) experiments exhibit additional findings: significantly fewer letter detection errors occur when a letter sequence containing the target letter is embedded into other words than when the letter sequence appears as a "separate function word". For example, more detection errors for the target letter "t" are made when the function word "the" appears on its own in a text than when "the" is embedded in the content word "thesis".

See also

Word superiority effect
Visual perception
Alice F. Healy
Cognitive psychology
Reading

Categories: Cognitive psychology, Experimental psychology, Perception, Reading (process)
Missing letter effect
[ "Biology" ]
2,627
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
8,668,103
https://en.wikipedia.org/wiki/Animal%20transporter
Animal transporters are used to transport livestock or non-livestock animals over long distances. They may be specially modified vehicles, trailers, ships or aircraft containers. While some animal transporters, such as horse trailers, only carry a few animals, modern ships engaged in live export can carry tens of thousands. The Animal Transportation Association campaigns for humane transporting of animals, as do many other animal welfare organisations.

See also

Animal-powered transport
Drover's caboose
Horse box
Horse trailer
Livestock carrier (maritime)
Livestock transportation
Road transport
Stock car (rail)

Categories: Road transport, Livestock transportation vehicles, Intensive farming, Human–animal interaction
Animal transporter
[ "Chemistry", "Biology" ]
121
[ "Animals", "Eutrophication", "Intensive farming", "Human–animal interaction", "Humans and other species" ]
8,668,185
https://en.wikipedia.org/wiki/BBC%20Sky%20at%20Night
BBC Sky at Night is a British monthly magazine about astronomy, aimed at amateur astronomers and published by Immediate Media Company. Its title is taken from the television programme produced by the BBC, The Sky at Night. Compared with the TV series, the magazine includes more technical and scientific information. Until 2015, it also included a bonus CD-ROM with software programs, the latest astronomical photographs, written materials and "classic" episodes of The Sky at Night from the BBC archives (from 2015, the monthly content was moved online).

History

BBC Sky at Night was launched in 2005. The first issue, which featured Patrick Moore on the cover and included a copy of Moore's Moon map as a free gift, sold out, and back issues are no longer available. Copies of Issue 1 have since sold for over £100 on eBay. In April 2007, the magazine celebrated the 50th anniversary of The Sky at Night on BBC TV with a specially themed issue, which was produced with two different covers.

Patrick Moore was an editorial advisor, serving as Editor Emeritus, with Chris Lintott serving as Contributing Editor. As of 2023, Chris Bramley is the magazine's editor, with Ezzy Pearson as Features Editor, Jess Wilder as Production Editor, and Iain Todd as Content Editor.

External links

BBC Sky at Night magazine
BBC Sky at Night magazine forum
BBC Magazines Bristol
The Sky at Night TV programme homepage

Categories: 2005 establishments in the United Kingdom, Astronomy in the United Kingdom, Astronomy magazines, Sky at Night, Monthly magazines published in the United Kingdom, Science and technology magazines published in the United Kingdom, Magazines established in 2005
BBC Sky at Night
[ "Astronomy" ]
323
[ "Astronomy magazines", "Works about astronomy" ]
6,999,722
https://en.wikipedia.org/wiki/Traffic%20barrier
Traffic barriers (known in North America as guardrails or guard rails, in Britain as crash barriers, and in auto racing as Armco barriers) keep vehicles within their roadway and prevent them from colliding with dangerous obstacles such as boulders, sign supports, trees, bridge abutments, buildings, walls, and large storm drains, or from traversing steep (non-recoverable) slopes or entering deep water. They are also installed within medians of divided highways to prevent errant vehicles from entering the opposing carriageway of traffic and help to reduce head-on collisions. Some of these barriers, designed to be struck from either side, are called median barriers. Traffic barriers can also be used to protect vulnerable areas like school yards, pedestrian zones, and fuel tanks from errant vehicles. In pedestrian zones, such as school yards, they also prevent children or other pedestrians from running onto the road.

While barriers are normally designed to minimize injury to vehicle occupants, injuries do occur in collisions with traffic barriers. They should only be installed where a collision with the barrier is likely to be less severe than a collision with the hazard behind it. Where possible, it is preferable to remove, relocate or modify a hazard rather than shield it with a barrier. To make sure they are safe and effective, traffic barriers undergo extensive simulated and full-scale crash testing before they are approved for general use. While crash testing cannot replicate every potential manner of impact, testing programs are designed to determine the performance limits of traffic barriers and provide an adequate level of protection to road users.

Need and placement

Roadside hazards must be assessed for the danger they pose to traveling motorists based on size, shape, rigidity, and distance from the edge of travelway. For instance, small roadside signs and some large signs (ground-mounted breakaway post) often do not merit roadside protection, as the barrier itself may pose a greater threat to the general health and well-being of the public than the obstacle it is intended to protect.

In many regions of the world, the concept of a clear zone is taken into account when examining the distance of an obstacle or hazard from the edge of travelway. The clear zone, also known as the clear recovery area or horizontal clearance, is defined (through study) as a lateral distance within which a motorist on a recoverable slope may travel outside of the travelway and return their vehicle safely to the roadway. This distance is commonly determined as the 85th percentile in a study comparable to the method of determining speed limits on roadways through speed studies, and varies based on the classification of a roadway. In order to provide adequate safety in roadside conditions, hazardous elements such as fixed obstacles or steep slopes can be placed outside of the clear zone to reduce or eliminate the need for roadside protection.

Common sites for installation of traffic barriers:

Bridge ends
Near steep slopes from roadway limits
At drainage crossings or culverts where steep or vertical drops are present
Near large signs, illumination poles or other roadside elements which may pose hazards

When a barrier is needed, careful calculations are completed to determine the length of need, as illustrated in the sketch below. The calculations take into account the speed and volume of traffic using the road, the distance from the edge of travelway to the hazard, and the distance or offset from the edge of travelway to the barrier.
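As a rough illustration of that length-of-need geometry, the sketch below uses the simplest similar-triangles form: a vehicle is assumed to leave the road up to a runout length upstream of the hazard, and the barrier must begin far enough upstream to intercept that departure path. This is a simplified sketch only — it omits the flare rate and the other adjustments in the AASHTO Roadside Design Guide, and all input values are hypothetical.

  /* Simplified length-of-need sketch (no flare), after the runout-length
   * geometry used in roadside design. All input values are hypothetical. */
  #include <stdio.h>

  /* la: lateral extent of the hazard from the edge of travelway (m)
   * l2: lateral offset of the barrier face from the edge of travelway (m)
   * lr: runout length for the design speed and traffic volume (m)       */
  static double length_of_need(double la, double l2, double lr) {
      /* Similar triangles: the departure path runs from a point lr
       * upstream to the back corner of the hazard (la out); the barrier
       * at offset l2 must start where that path crosses its line.       */
      return lr * (la - l2) / la;
  }

  int main(void) {
      /* Hypothetical inputs: hazard 6 m out, barrier face 2 m out,
       * 100 m runout length for the design speed.                       */
      double x = length_of_need(6.0, 2.0, 100.0);
      printf("Barrier should begin %.1f m upstream of the hazard\n", x);
      return 0;
  }

With these hypothetical numbers the barrier would begin roughly 66.7 m upstream of the hazard; a real design would then adjust this using the flare rate and the tabulated runout lengths for the road's speed and traffic volume.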
U.S. NRC, 10 CFR 73.55(e)(10) Vehicle Barriers

In accordance with U.S. regulations for nuclear power plants, the U.S. Nuclear Regulatory Commission (NRC) addresses vehicle barriers under 10 CFR Part 73, specifically in 10 CFR 73.55(e)(10) Vehicle Barriers. This section requires licensees to "use physical barriers and security strategies [via strategic planning] to protect against land vehicle borne explosive devices". Here, the focus is on safeguarding the protected area and vital areas of nuclear facilities from unauthorized vehicle access, emphasizing the need for effective barrier systems against potential vehicular threats. The regulation highlights the importance of designing and implementing barriers that are robust enough to withstand various threat scenarios, including different types of vehicles and potential explosive devices. The integration of these barriers with other security measures, such as surveillance, access control, and intrusion detection systems, forms a critical component of comprehensive security planning at nuclear facilities. The NRC's detailed guidelines on vehicle barriers demonstrate its commitment to maintaining high standards of safety and security at U.S. nuclear sites. Adherence to these regulations is crucial for mitigating risks associated with vehicle-based threats.

Types and performance

Traffic barriers are categorized in two ways: by the function they serve, and by how much they deflect when a vehicle crashes into them.

Functions

Roadside barriers are used to protect traffic from roadside obstacles or hazards, such as slopes steep enough to cause rollover crashes, fixed objects like bridge piers, and bodies of water. Roadside barriers can also be used with medians, to prevent vehicles from colliding with hazards within the median.

Median barriers are used to prevent vehicles from crossing over a median and striking an oncoming vehicle in a head-on crash. Unlike roadside barriers, they must be designed to be struck from either side.

Bridge barriers are designed to restrain vehicles from crashing off the side of a bridge and falling onto the roadway, river or railroad below. A bridge barrier is usually higher than a roadside barrier, to prevent trucks, buses, pedestrians and cyclists from vaulting or rolling over the barrier and falling over the side of the structure. Bridge rails are usually multi-rail tubular steel barriers or reinforced concrete parapets and barriers.

Work zone barriers are used to protect traffic from hazards in work zones. Their distinguishing feature is that they can be relocated as conditions change in the road works. Two common types are used: temporary concrete barrier and water-filled barrier. The latter is composed of steel-reinforced plastic boxes that are put in place where needed, linked together to form a longitudinal barrier, then ballasted with water. These have an advantage in that they can be assembled without heavy lifting equipment, but they cannot be used in freezing weather.

Road blockers are used to enhance security by preventing unauthorized or hostile vehicles from entering sensitive or protected locations, such as government buildings, military installations, airports, embassies, and high-security facilities. They act as a formidable deterrent against potential threats, including vehicle-borne attacks and unauthorized access. Road blockers are equipped with mechanisms that allow for quick deployment and retraction when needed, providing a flexible and effective means of traffic control and security management.
Barriers that are in effect platform screen doors (PSDs) without the doors are used when PSDs are not feasible due to cost, technological compatibility or other factors.

Stiffness

Barriers are divided into three groups, based on the amount they deflect when struck by a vehicle and the mechanism the barrier uses to resist the impact forces. In the United States, traffic barriers are tested and classified according to the AASHTO Manual for Assessing Safety Hardware (MASH) standards, which recently superseded Federal Highway Administration NCHRP Report 350. The barrier deflections listed below are results from crash tests with a pickup truck traveling at about 100 km/h (62 mph), colliding with the rail at a 25-degree angle.

Flexible barriers include cable barriers and weak-post corrugated guide rail systems. These are referred to as flexible barriers because they will deflect when struck by a typical passenger car or light truck. Impact energy is dissipated through tension in the rail elements; deformation of the rail elements, posts, soil and vehicle bodywork; and friction between the rail and vehicle.

Semi-rigid barriers include box beam guide rail, heavy-post blocked-out corrugated guide rail and thrie-beam guide rail. Thrie-beam is similar to corrugated rail, but it has three ridges instead of two. They deflect more than rigid barriers, but less than flexible barriers. Impact energy is dissipated through deformation of the rail elements, posts, soil and vehicle bodywork, and friction between the rail and vehicle. Box beam systems also spread the impact force over a number of posts due to the stiffness of the steel tube.

Rigid barriers are usually constructed of reinforced concrete. A permanent concrete barrier will only deflect a negligible amount when struck by a vehicle. Instead, the shape of a concrete barrier is designed to redirect a vehicle into a path parallel to the barrier. This means they can be used to protect traffic from hazards very close behind the barrier, and generally require very little maintenance. Impact energy is dissipated through redirection and deformation of the vehicle itself. Jersey barriers and F-shape barriers also lift the vehicle as the tires ride up on the angled lower section. For low-speed or low-angle impacts on these barriers, that may be sufficient to redirect the vehicle without damaging the bodywork. The disadvantage is that there is a higher likelihood of rollover with a small car than with the single-slope or step barriers. Impact forces are resisted by a combination of the rigidity and mass of the barrier; deflection is usually negligible.

An early concrete barrier design was developed by the New Jersey State Highway Department. This led to the term Jersey barrier being used as a generic term, although technically it applies to a specific shape of concrete barrier. Other types include constant-slope barriers, concrete step barriers, and F-shape barriers. Concrete barriers usually have smooth finishes. At some impact angles, coarse finishes allow the drive wheel of front-wheel-drive vehicles to climb the barrier, potentially causing the vehicle to roll over. However, along parkways and other areas where aesthetics are considered important, reinforced concrete walls with stone veneers or faux stone finishes are sometimes used. These barrier walls usually have vertical faces to prevent vehicles from climbing the barrier.
Barrier end treatments

For several decades after the invention of motor vehicles, designers of early traffic barriers paid little attention to barrier ends, so that the barriers either ended abruptly in blunt ends or sometimes featured some flaring of the edges away from the side of the barrier facing traffic. Vehicles that struck blunt ends at the wrong angle could stop too suddenly or suffer penetration of the passenger compartment by steel rail sections, resulting in severe injuries or fatalities. Traffic engineers learned through such gruesome real-world experience that the ends of barriers are just as important as the barriers themselves; the American Association of State Highway and Transportation Officials devotes an entire chapter to the topic of barrier "end treatments" in its Roadside Design Guide.

In response, a new style of barrier terminal was developed in the 1960s, in which installers were directed to twist the guardrail 90 degrees and bring its end down so that it would lie flat at ground level (so-called "turned-down" terminals or "ramped ends"). While this innovation prevented the rail from penetrating the vehicle, it could also vault a vehicle into the air or cause it to roll over, since the rising and twisting guardrail formed a ramp. These crashes often led to vehicles vaulting, rolling, or vaulting and rolling at high speed into the very objects which guardrails or barriers were supposed to protect them from in the first place. Such wild crashes caused the United States to ban ramped ends in 1990 on high-speed, high-volume highways, and to extend the ban in 1998 to the entire National Highway System.

To address the vaulting and rollover crashes, a new type of terminal was developed. The first generation of these terminals, in the 1970s, were breakaway cable terminals, in which the rail curves back on itself and is connected to a cable that runs between the first and second posts (which are often breakaway posts). These barrier terminals were sometimes able to spear through small cars that hit them at exactly the wrong angle, and were deprecated in 1993.

The second generation of these terminals, called energy-absorbing terminals, was developed in the 1990s and 2000s. The goal was to develop a kinetic-energy-dissipating system soft enough for small vehicles to decelerate without the guardrail spearing through them, but firm enough to stop larger vehicles. The energy dissipation could be done through bending, kinking, crushing, or deforming guardrail elements. The first family of energy-absorbing terminal products was the extruding terminal type. It features a large steel impact head that engages the frame or bumper of the vehicle in head-on collisions. The impact head is driven back along the guide rail, dissipating the vehicle's kinetic energy by bending or tearing the steel in the guide rail sections away to the side to prevent spearing. When the terminals are hit at an angle, they dissipate much of the energy, but the "gating" feature allows the vehicle to pass through the rail as it bends.

If space allows, a guide rail may also be terminated by gradually curving it back to the point that the terminal is unlikely to be hit end-on, or, if possible, by embedding the end in a hillside or cut slope. An alternative to energy-absorbing barrier terminals are impact attenuators. These are used for wider hazards that cannot be effectively protected with a one-sided traffic barrier.
Recycled tyres had been proposed for highway crash barriers by 2012, but many governments prefer sand-filled crash barriers because they have excellent energy-absorption characteristics and are easier to erect and dismantle. A Fitch barrier is an energy-absorbing type of impact attenuator consisting of a group of sand-filled plastic barrels, usually yellow in color with a black lid. Fitch barriers are often found in a triangular arrangement at the end of a guard rail between a highway and an exit lane (the area known as the gore), along the most probable line of impact. The barrels in front contain the least sand, with each successive barrel containing more. When a vehicle collides with the barrels, the vehicle's kinetic energy is dissipated by the shattering of the barrels and the scattering of the sand inside, and the vehicle decelerates over a longer period of time instead of the sudden, more violent deceleration of striking a solid obstruction. In turn, the risk of injury to the vehicle occupants is greatly reduced. Fitch barriers are widely popular due to their effectiveness, low cost, and ease of setup, repair and replacement.

Types of end treatments:

Bull nose
ET Plus
Water- and sand-filled barrier buffers
Rubber end caps
QuadGuard crash cushion
Pennsylvania Guardrail End Terminal
Traffic barrier energy attenuator
W-beam double buffer

See also

SAFER barrier
Safety barrier
Hostile vehicle mitigation
Traffic cone
Traffic guard
Trinity Industries#Guardrail controversies

Categories: Road infrastructure, Road transport, Protective barriers, Street furniture, Transportation engineering
Traffic barrier
[ "Engineering" ]
2,930
[ "Transportation engineering", "Civil engineering", "Industrial engineering" ]
7,000,446
https://en.wikipedia.org/wiki/Out-of-band%20management
In systems management, out-of-band management (OOB; also lights-out management or LOM) is a process for accessing and managing devices and infrastructure at remote locations through a separate management plane from the production network. OOB allows a system administrator to monitor and manage servers and other network-attached equipment by remote control, regardless of whether the machine is powered on or whether an OS is installed or functional. It contrasts with in-band management, which requires the managed systems to be powered on and available over their operating system's networking facilities. OOB can use dedicated management interfaces, serial ports, or cellular 4G and 5G networks for connectivity. Out-of-band management is now considered an essential network component to ensure business continuity, and many manufacturers offer it as a product.

Out-of-band versus in-band

By contrast, in-band management through VNC or SSH is based on in-band connectivity (the usual network channel). It typically requires software that must be installed on the remote system being managed and only works after the operating system has been booted and networking is brought up. It does not allow management of remote network components independently of the current status of other network components. A classic example of this limitation is when a sysadmin attempts to reconfigure the network on a remote machine only to find themselves locked out and unable to fix the problem without physically going to the machine. Despite these limitations, in-band solutions are still common because they are simpler and much lower-cost.

Design

A complete remote management system allows remote reboot, shutdown, and powering on; hardware sensor monitoring (fan speed, power voltages, chassis intrusion, etc.); broadcasting of video output to remote terminals and receiving of input from a remote keyboard and mouse (KVM over IP). It can also access local media like a DVD drive, or disk images, from the remote machine. If necessary, this allows one to perform a remote installation of the operating system. Remote management can be used to adjust BIOS settings that may not be accessible after the operating system has already booted. Settings for hardware RAID or RAM timings can also be adjusted, as the management card needs no hard drives or main memory to operate.

As management via serial port has traditionally been important on servers, a complete remote management system also allows interfacing with the server through a serial-over-LAN connection. As sending monitor output through the network is bandwidth-intensive, cards like AMI's MegaRAC use built-in video compression (versions of VNC are often used in implementing this). Devices like Dell's DRAC also have a slot for a memory card where an administrator may keep server-related information independently from the main hard drive.

The remote system can be accessed through an SSH command-line interface, specialized client software, or various web-browser-based solutions. Client software is usually optimized to manage multiple systems easily. There are also various scaled-down versions, down to devices that only allow remote reboot by power-cycling the server. This helps if the operating system hangs and only needs a reboot to recover.
An older form of out-of-band management is a layout involving a separate network that gives network administrators command-line interface access over the console ports of network equipment, even when those devices are not forwarding any payload traffic. If a location has several network devices, a terminal server can provide access to the different console ports for direct CLI access. If there is only one or just a few network devices, some of them provide AUX ports, making it possible to connect a dial-in modem for direct CLI access. The terminal server itself can often be reached via a separate network that does not use managed switches and routers for its connection to the central site, or via a modem connected for dial-in access through POTS or ISDN.

Implementation

Remote management can be enabled on many computers (not necessarily only servers) by adding a remote management card (although some cards only support a limited list of motherboards). Newer server motherboards often have built-in remote management and need no separate management card.

Internally, Ethernet-based out-of-band management can either use a dedicated separate Ethernet connection, or some kind of traffic multiplexing can be performed on the system's regular Ethernet connection. In the latter case, a common Ethernet connection is shared between the computer's operating system and the integrated baseboard management controller (BMC), usually by configuring the network interface controller (NIC) to perform Remote Management Control Protocol (RMCP) port filtering, to use a separate MAC address, or to use a virtual LAN (VLAN). Thus, the out-of-band nature of the management traffic is ensured in a shared-connection scenario, as the system configures the NIC to extract the management traffic from the incoming traffic flow at the hardware level and route it to the BMC before it reaches the host and its operating system.

Both in-band and out-of-band management are usually done through a network connection, but an out-of-band management card can use a physically separate network connector if preferred. A remote management card usually has an at least partially independent power supply and can switch the main machine on and off through the network. Because a special device is required for each machine, out-of-band management can be much more expensive.

Serial consoles are an in-between case: they are technically OOB, as they do not require the primary network to be functioning for remote administration. However, without special hardware, a serial console cannot configure the UEFI (or BIOS) settings, reinstall the operating system remotely, or fix problems that prevent the system from booting.

See also

Cisco IMC – out-of-band management platform by Cisco

Categories: System administration
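To make the shared-NIC filtering above concrete: RMCP management traffic arrives on UDP port 623, which is the traffic a sideband-capable NIC diverts to the BMC instead of the host. The sketch below sends an RMCP/ASF "presence ping" — a standard way to check whether something is listening on that port — to a hypothetical BMC address. This is an illustrative sketch only; not every BMC answers ASF pings.

  /* RMCP/ASF presence ping over UDP 623 (POSIX sockets).
   * 192.0.2.10 is a hypothetical documentation address. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      /* RMCP header: version 6, reserved, seq 0xFF (no ACK), class 6 (ASF);
       * ASF body: IANA 4542, type 0x80 (presence ping), tag, rsvd, len 0 */
      unsigned char ping[12] = {0x06, 0x00, 0xFF, 0x06,
                                0x00, 0x00, 0x11, 0xBE,
                                0x80, 0x01, 0x00, 0x00};
      struct sockaddr_in bmc = {0};
      int s = socket(AF_INET, SOCK_DGRAM, 0);
      if (s < 0) { perror("socket"); return 1; }

      bmc.sin_family = AF_INET;
      bmc.sin_port = htons(623);                        /* RMCP port */
      inet_pton(AF_INET, "192.0.2.10", &bmc.sin_addr);  /* hypothetical BMC */

      if (sendto(s, ping, sizeof ping, 0,
                 (struct sockaddr *)&bmc, sizeof bmc) < 0) {
          perror("sendto");
          close(s);
          return 1;
      }
      printf("ASF presence ping sent to UDP 623\n");
      close(s);
      return 0;
  }

A BMC that implements ASF replies with a presence pong from the same port; on a shared NIC, both the ping and the pong bypass the host operating system entirely.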
Out-of-band management
[ "Technology" ]
1,202
[ "Information systems", "System administration" ]
7,000,543
https://en.wikipedia.org/wiki/The%20Last%20Dragon%20%282004%20film%29
The Last Dragon, known as Dragons: A Fantasy Made Real in the United States and as Dragon's World in other countries, is a 2004 British docufiction film made by Darlow Smithson Productions for Channel Four and broadcast on both Channel Four and Animal Planet. It posits a speculative evolution of dragons from the Cretaceous period up to the 15th century, and supposes what dragon life and behaviour might have been like if they had existed and evolved. Its premise is that the ubiquity of dragons in world mythology suggests that dragons could have existed. They are depicted as a scientifically feasible species of reptile that could have evolved, somewhat similar to the depiction of dragons in the Dragonology series of books. The dragons featured in the show were designed by John Sibbick.

The programme switches between two stories. The first uses CGI to show the dragons in their natural habitat throughout history. The second tells the story of a modern-day scientist at a museum, Dr. Jack Tanner, who believes in dragons. When the frozen remains of an unknown creature are discovered in the Carpathian Mountains, Tanner and two colleagues from the museum undertake the task of examining the specimen to try to save his reputation. Once there, they discover that the creature is a dragon. Tanner and his colleagues set about working out how it lived and died.

Plot summary

The docufiction features two interwoven stories. Jack Tanner, an American paleontologist working for the Natural History Museum in London, suggests the theory that a carbonised Tyrannosaurus rex skeleton on display was killed by a prehistoric dragon, leading him to believe that the legends were more than myth. This ruins Tanner's reputation. As viewed in a flashback, Tanner's theory is proven true: the Tyrannosaurus battles a female dragon in the Cretaceous but is mortally wounded. The female, with two legs and two wings, dies from her wounds, forcing her son to survive on his own, escaping an aggressive male dragon by learning how to fly for the first time. This is aided by bacteria that can produce hydrogen, aiding buoyancy. A later vignette shows the dragon, now an adult, trying to mate, and successfully challenging a dominant male in a sky duel.

The museum is contacted by Romanian authorities, who discovered the alleged corpse of a dragon in the Carpathian Mountains, along with many carbonised human bodies from the 15th century. Tanner and two colleagues are sent to examine the bodies, which have been moved to a warehouse. The scientists are baffled by the corpse, discovering that despite its size, it was capable of both flight and breathing fire by storing bacteria and hydrogen inside its body, like the prehistoric dragon.

The prehistoric dragon was a victim of the K-T event, but he had a cousin, the marine dragon, which was protected by living in the ocean. It eventually evolved into other species, such as the Chinese forest dragon, able to glide with her smaller wings and capable of camouflaging herself in the dappled forest light. The forest dragon hunts the wild boar and the South China tiger, but the arrival of humans in the forest challenges her survival. Another descendant is the mountain dragon, which has four legs and fully functional wings, and inhabits the Carpathian and Atlas mountains. By analyzing the dead dragon's reproductive system, Tanner concludes the corpse is actually that of a baby, killed by the humans.
The scientists travel back to the mountains to explore the caves where the corpses were found. A flashback shows that in 1475, a lone female dragon is living on the verge of extinction within the Carpathian Mountains, looking for a mate. A male arrives from the Atlas Mountains and they perform an airborne courtship ritual. They grasp each other's talons and free-fall from the sky at high speed. Just before touchdown they break free and fly off together, breathing fire and leaving scorch marks on the rocks below. While scouring the cave system, Tanner discovers a preserved dragon egg. It is surmised that the male dragon guards the nest, made from a cluster of rocks, and that the eggs are kept warm for preservation. However, the male is negligent, letting one of the eggs die, and is chased away by the female.

Some time later, the female dragon has had a lone daughter, hunting sheep from the local shepherds, which leads to dragon slayers being hired to kill any dragons that get too close to the livestock. The lord and his squire attack, slaying the young female, but are in turn killed by the mother. Tanner discovers more human corpses and then that of the mother dragon, twice the size of the baby. In a final flashback, a larger group of dragon slayers approaches the cave, leading to the deaths of all involved. Tanner and his team take the dragons to the museum, reuniting mother and daughter. A year later, Tanner receives information about another discovery and sets off to investigate.

Reception

The Scotsman opined that The Last Dragon's computer graphics made it "awesome", but that ultimately the show conveyed the message "Do not believe this slice of old hokum" to the viewer. According to The New York Times, "it's easy to forget that [the film] isn't a serious documentary" after the fiction disclaimer at the beginning, judging the computer graphics to be well made, sometimes beautiful, but not impressive "to the point of wonder".

See also

List of films featuring dinosaurs
Mermaids: The Body Found (2012), a similar programme airing on Animal Planet, also with Charlie Foley's involvement, which attempted to describe mermaids in a scientific manner
The Flight of Dragons (1979 book)

External links

Animal Planet's official site

Categories: 2004 films, 2004 television films, Animal Planet original programming, British docufiction films, Channel 4 television films, Films about dragons, Speculative evolution, Films about dinosaurs, Films about tigers, Films set in prehistory, 2000s English-language films, 2000s British films, Films set in Europe, Films set in Africa, Films set in China
The Last Dragon (2004 film)
[ "Biology" ]
1,234
[ "Biological hypotheses", "Speculative evolution", "Hypothetical life forms" ]
7,000,901
https://en.wikipedia.org/wiki/Hydrogen-powered%20aircraft
A hydrogen-powered aircraft is an aeroplane that uses hydrogen fuel as a power source. Hydrogen can either be burned in a jet engine or another kind of internal combustion engine, or can be used to power a fuel cell to generate electricity to power an electric propulsor. It cannot be stored in a traditional wet wing, and hydrogen tanks have to be housed in the fuselage or be supported by the wing. Hydrogen, which can be produced from low-carbon power and can produce zero emissions, can reduce the environmental impact of aviation. Boeing acknowledges the technology potential and Airbus plans to launch a first commercial hydrogen-powered aircraft by 2035. McKinsey & Company forecast hydrogen aircraft entering the market in the late 2030s and scaling up through 2050, when they could account for a third of aviation's energy demand. Hydrogen properties Hydrogen has a specific energy of 119.9 MJ/kg, compared to ~ MJ/kg for usual liquid fuels, times higher. However, it has an energy density of 10.05 kJ/L at normal atmospheric pressure and temperature, compared to ~ kJ/L for liquid fuels, times lower. When pressurised to , it reaches 4,500 kJ/L, still times lower than liquid fuels. Cooled at , liquid hydrogen has an energy density of 8,491 kJ/L, times lower than liquid fuels. A worked numerical comparison with a typical jet fuel appears at the end of this article. Aircraft design The low volumetric energy density of hydrogen poses challenges when designing an aircraft, where weight and exposed surface area are critical. To reduce the size of the tanks, liquid hydrogen will be used, requiring cryogenic fuel tanks. Cylindrical tanks minimise surface for minimal thermal insulation weight, leading towards tanks in the fuselage rather than wet wings in conventional aircraft. Airplane volume and drag will be increased somewhat by larger fuel tanks. A larger fuselage adds more skin friction drag due to the extra wetted area. The extra tank weight is offset by dramatically lower liquid hydrogen fuel weight. Gaseous hydrogen may be used for short-haul aircraft. Liquid hydrogen might be needed for long-haul aircraft. Hydrogen's high specific energy means it would need less fuel weight for the same range, ignoring the repercussions of added volume and tank weight. As airliners have a fuel fraction of the maximum takeoff weight (MTOW) of between 26% for medium-haul and 45% for long-haul, maximum fuel weight could be reduced to % to % of the MTOW. Fuel cells make sense for general aviation and regional aircraft but their efficiency is less than that of large gas turbines. They are more efficient than modern 7 to 90-passenger turboprop airliners such as the Dash 8. The efficiency of a hydrogen-fueled aircraft is a trade-off of the larger wetted area, lower fuel weight, and added tank weight, varying with the aircraft size. Hydrogen is suited for short-range airliners, while longer-range aircraft need new designs. Liquid hydrogen is one of the best coolants used in engineering, and precooled jet engines have been proposed to use this property for cooling the intake air of hypersonic aircraft, or even for cooling the aircraft's skin itself, particularly for scramjet-powered aircraft. A study in the UK, NAPKIN (New Aviation, Propulsion Knowledge and Innovation Network), with collaboration from Heathrow Airport, Rolls-Royce, GKN Aerospace, and Cranfield Aerospace Solutions, has investigated the potential of new hydrogen-powered aircraft designs to reduce the environmental impact of aviation. 
The aircraft designers have proposed a range of hydrogen-fuelled aircraft concepts, ranging from 7 to 90 seats, exploring the use of hydrogen with fuel cells and gas turbines to replace conventional aircraft engines powered by fossil fuels. The findings suggest that in the UK hydrogen-powered aircraft could be commercially viable for short-haul and regional flights by the second half of the 2020s with airlines potentially able to replace the entire UK regional fleet with hydrogen aircraft by 2040. However, the report highlighted that national supply, and the price of green liquid hydrogen relative to fossil kerosene are critical factors in determining uptake of hydrogen aircraft by airline operators. Modeling showed that, if hydrogen prices approach $1/kg, hydrogen aircraft uptake could cover almost 100% of the UK domestic market. Emissions and environmental impact Hydrogen aircraft using a fuel cell design are zero emission in operation, whereas aircraft using hydrogen as a fuel for a jet engine or an internal combustion engine are zero emission for (a greenhouse gas which contributes to global climate change) but not for (a local air pollutant). The burning of hydrogen in air leads to the production of , i.e., the + ½ → reaction in a nitrogen-rich environment also causes the production of . However, hydrogen combustion produces up to 90% less nitrogen oxides than kerosene fuel, and it eliminates the formation of particulate matter. If hydrogen is available in quantity from low-carbon power such as wind or nuclear, its use in aircraft will produce fewer greenhouse gases than current aircraft: water vapor and a small amount of nitrogen oxide. However, as of 2021, less than 5% of all hydrogen produced is emissions free, and the majority comes from fossil fuels. A 2020 study by the EU Clean Sky 2 and Fuel Cells and Hydrogen 2 Joint Undertakings found that hydrogen could power aircraft by 2035 for short-range aircraft. A short-range aircraft (< ) with hybrid Fuel cell/Turbines could reduce climate impact by 70–80% for a 20–30% additional cost, a medium-range airliner with H2 turbines could have a 50–60% reduced climate impact for a 30–40% overcost, and a long-range aircraft (> ) also with H2 turbines could reduce climate impact by 40–50% for a 40–50% additional cost. Research and development would be required, in aircraft technology and into hydrogen infrastructure, regulations and certification standards. Water vapor is a greenhouse gas – in fact, most of the total greenhouse effect on earth is due to water vapor. However, in the troposphere the content of water vapor is not dominated by anthropogenic emissions but rather the natural water cycle as water does not long remain static in that layer of the atmosphere. This is different in the stratosphere which – absent human action – would be almost totally dry and still remains relatively devoid of water. If hydrogen is burned and the resulting water vapor is released at stratospheric heights (the cruising altitude of some commercial flights is within the stratosphere – supersonic flight takes place almost entirely at stratospheric altitude), the content of water vapor in the stratosphere is increased. Due to the long residence time of water vapor at those heights, the long term effects over years or even decades cannot be entirely discounted. History Demonstrations In February 1957, a Martin B-57B of the NACA flew on hydrogen for 20 min for one of its two Wright J65 engines rather than jet fuel. 
On 15 April 1988, the Tu-155 first flew as the first hydrogen-powered experimental aircraft, an adapted Tu-154 airliner. Boeing converted a two-seat Diamond DA20 to run on a fuel cell designed and built by Intelligent Energy. It first flew on April 3, 2008. The Antares DLR-H2 is a hydrogen-powered aeroplane from Lange Aviation and the German Aerospace Center (DLR). In July 2010, Boeing unveiled its hydrogen-powered Phantom Eye UAV, which uses two converted Ford Motor Company piston engines. In 2010, the Rapid 200FC concluded six flight tests fueled by gaseous hydrogen. The aircraft and the electric and energy system were developed within the European Union's project coordinated by the Politecnico di Torino. Hydrogen gas is stored at 350 bar, feeding a fuel cell powering an electric motor alongside a lithium polymer battery pack. On January 11, 2011, an AeroVironment Global Observer unmanned aircraft completed its first flight powered by a hydrogen-fueled propulsion system. Developed by Germany's DLR Institute of Engineering Thermodynamics, the DLR HY4 four-seater was powered by a hydrogen fuel cell; its first flight took place on September 29, 2016. It has the possibility to store of hydrogen, 4x11 kW fuel cells and 2x10 kWh batteries. On 19 January 2023, ZeroAvia flew its Dornier 228 testbed with one turboprop replaced by a prototype hydrogen-electric powertrain in the cabin, consisting of two fuel cells and a lithium-ion battery for peak power. The aim is to have a certifiable system by 2025 to power airframes carrying up to 19 passengers over . On 2 March 2023, Universal Hydrogen flew a Dash 8 40-passenger testbed with one engine powered by their hydrogen-electric powertrain. The company has received an order from Connect Airlines to convert 75 ATR 72-600s with its hydrogen powertrains. On 8 November 2023, Airbus flew a modified Schempp-Hirth Arcus-M glider, dubbed the Blue Condor, equipped with a hydrogen combustion engine for the first time, using hydrogen as its sole source of fuel. On 24 June 2024, Joby Aviation's S4 eVTOL demonstrator, refitted with a hydrogen-electric powertrain in May, completed a record 523-mile non-stop flight, more than triple the range of the battery-powered version. It landed with 10% liquid hydrogen fuel remaining in its cryogenic fuel tank, and the only in-flight emission was water vapor. A hydrogen fuel cell system provided the power for the six electric rotors of the eVTOL during its flight, and a small battery provided added takeoff and landing power. Aircraft projects In 1975, Lockheed prepared a study of liquid hydrogen fueled subsonic transport aircraft for NASA Langley, exploring airliners carrying 130 passengers over 2,780 km (1500 nmi); 200 passengers over 5,560 km (3,000 nmi); and 400 passengers over 9,265 km (5,000 nmi). Between April 2000 and May 2002, the European Commission funded half of the Airbus-led Cryoplane Study, assessing the configurations, systems, engines, infrastructure, safety, environmental compatibility and transition scenarios. Multiple configurations were envisioned: a 12-passenger business jet with a range, regional airliner for 44 passengers over and 70 passengers over , a medium-range narrowbody aircraft for 185 passengers over and long-range widebody aircraft for 380 to 550 passengers over . In September 2020, Airbus presented three ZEROe hydrogen-fuelled concepts aiming for commercial service by 2035: a 100-passenger turboprop, a 200-passenger turbofan, and a futuristic design based around a blended wing body. 
The aircraft are powered by gas turbines rather than fuel cells. In December 2021, the UK Aerospace Technology Institute (ATI) presented its FlyZero study of cryogenic liquid hydrogen used in gas turbines for a 279-passenger design with of range. ATI is supported by Airbus, Rolls-Royce, GKN, Spirit, General Electric, Reaction Engines, Easyjet, NATS, Belcan, Eaton, Mott MacDonald and the MTC. In August 2021 the UK Government claimed it was the first to have a Hydrogen Strategy. This report included a suggested strategy for hydrogen powered aircraft along with other transport modes. In March 2022, FlyZero detailed its three concept aircraft: the 75-seat FZR-1E regional airliner has six electric propulsors powered by fuel cells, a size comparable to the ATR 72 with a larger fuselage diameter at compared to to accommodate hydrogen storage, for a 325 kn (601 km/h) cruise and an 800 nmi (1,480 km) range; its FZN-1E narrowbody has rear-mounted hydrogen-burning turbofans, a T-tail and nose-mounted canards, a longer fuselage than the Airbus A320neo becoming up to wider at the rear to accommodate two cryogenic fuel tanks, and a larger wingspan requiring folding wing-tips for a range with a cruise; the small widebody FZM-1G is comparable to the Boeing 767-200ER, flying 279 passengers over , with a wide fuselage diameter closer to the A350 or 777X, a wingspan within airport gate limits, underwing engines and tanks in front of the wing. Propulsion projects In March 2021, Cranfield Aerospace Solutions announced the Project Fresson switched from batteries to hydrogen for the nine-passenger Britten-Norman Islander retrofit for a September 2022 demonstration. Project Fresson is supported by the Aerospace Technology Institute in partnership with the UK Department for Business, Energy & Industrial Strategy and Innovate UK. Pratt & Whitney wants to associate its geared turbofan architecture with its Hydrogen Steam Injected, Inter‐Cooled Turbine Engine (HySIITE) project, to avoid carbon dioxide emissions, reduce NOx emissions by 80%, and reduce fuel consumption by 35% compared with the current jet-fuel PW1100G, for a service entry by 2035 with a compatible airframe. On 21 February 2022, the US Department of Energy through the OPEN21 scheme run by its Advanced Research Projects Agency-Energy (ARPA-E) awarded P&W $3.8 million for a two-year early stage research initiative, to develop the combustor and the heat exchanger used to recover water vapour in the exhaust stream, injected into the combustor to increase its power, and into the compressor as an intercooler, and into the turbine as a coolant. In February 2022, Airbus announced a demonstration of a liquid hydrogen-fueled turbofan, with CFM International modifying the combustor, fuel system and control system of a GE Passport, mounted on a fuselage pylon on an A380 prototype, for a first flight expected within five years. Proposed aircraft and prototypes Historical Lockheed CL-400 Suntan, 1950's concept, dropped for the SR-71 National Aerospace Plane, 1986–1993 concept with a scramjet, cancelled during development Tupolev Tu-155, 1988 modified Tupolev Tu-154 testbed, flew over 100 flights AeroVironment Global Observer, 2010-2011 fuel-cell powered drone demonstrator, performed 9 flights before crashing Boeing Phantom Eye, 2012-2016 piston engine powered drone demonstrator, flew 9 times with flights lasting up to 9 hours Projects AeroDelft, a student team from Delft University of Technology creating a gaseous and liquid hydrogen fuelled drone and Sling 4. 
Airbus ZEROe, presented in late 2020, it aims to create four concept aircraft and launch the first commercial zero-emission aircraft, entering service by 2035 Cellsius H2-Sling, a student project at ETH Zürich building a modified Sling HW with a hydrogen fuel cell propulsion system. DLR Smartfish, two seat experimental lifting body; based on the previous Hyfish model. DLR HY4, operated by DLR spinoff H2Fly, completed the world's first piloted electric flights powered by liquid hydrogen in 2023 Project Fresson, a Britten-Norman Islander retrofit. Reaction Engines Skylon, orbital hydrogen fuelled spaceplane. Reaction Engines A2, antipodal hypersonic jet airliner. Taifun 17H2, a student project retrofitting a Valentin Taifun 17E and 17EII with a gaseous hydrogen fuel cell electric propulsion system. Universal Hydrogen (fuel cell powered Dash 8-300) the largest aircraft ever to cruise mainly on hydrogen power ZeroAvia HyFlyer (fuel-cell powered Piper PA-46 demonstrator) ZeroAvia (fuel-cell powered Dornier 228x) See also Electric aircraft Aviation fuel#Emerging aviation fuels References External links Aircraft configurations Aviation and the environment
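To make the ratios in the "Hydrogen properties" section above concrete, the following is a minimal back-of-the-envelope sketch in Python. The kerosene figures (roughly 43 MJ/kg and 35 MJ/L for a Jet A-type fuel) are assumed typical values and are not taken from this article; the hydrogen figures are those quoted above.

```python
# Rough comparison of hydrogen and a typical kerosene-type jet fuel.
# Hydrogen figures are taken from the article text; the kerosene figures
# (~43 MJ/kg, ~35 MJ/L) are assumed typical values, not from this article.

H2_SPECIFIC_ENERGY_MJ_PER_KG = 119.9        # gravimetric, from the article
H2_LIQUID_DENSITY_KJ_PER_L = 8_491          # liquid hydrogen, from the article
H2_PRESSURISED_DENSITY_KJ_PER_L = 4_500     # pressurised gas figure from the article

KEROSENE_SPECIFIC_ENERGY_MJ_PER_KG = 43.0   # assumed
KEROSENE_DENSITY_KJ_PER_L = 35_000          # assumed (~35 MJ/L)

mass_ratio = H2_SPECIFIC_ENERGY_MJ_PER_KG / KEROSENE_SPECIFIC_ENERGY_MJ_PER_KG
vol_ratio_liquid = KEROSENE_DENSITY_KJ_PER_L / H2_LIQUID_DENSITY_KJ_PER_L
vol_ratio_pressurised = KEROSENE_DENSITY_KJ_PER_L / H2_PRESSURISED_DENSITY_KJ_PER_L

print(f"Per kilogram, hydrogen carries ~{mass_ratio:.1f}x the energy of kerosene")
print(f"Per litre, liquid hydrogen carries ~{vol_ratio_liquid:.1f}x less energy")
print(f"Per litre, pressurised hydrogen carries ~{vol_ratio_pressurised:.1f}x less energy")
```

Under these assumed kerosene values, the sketch illustrates the basic trade-off described above: a large gravimetric advantage for hydrogen offset by a severalfold volumetric penalty, even in liquid form.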
Hydrogen-powered aircraft
[ "Engineering" ]
3,228
[ "Aircraft configurations", "Aerospace engineering" ]
7,000,956
https://en.wikipedia.org/wiki/Nomarski%20prism
A Nomarski prism is a modification of the Wollaston prism that is used in differential interference contrast microscopy. It is named after its inventor, Polish and naturalized-French physicist Georges Nomarski. Like the Wollaston prism, the Nomarski prism consists of two birefringent crystal wedges (e.g. quartz or calcite) cemented together at the hypotenuse (e.g. with Canada balsam). One of the wedges is identical to a conventional Wollaston wedge and has the optical axis oriented parallel to the surface of the prism. The second wedge of the prism is modified by cutting the crystal so that the optical axis is oriented obliquely with respect to the flat surface of the prism. The Nomarski modification causes the light rays to come to a focal point outside the body of the prism, and allows greater flexibility so that when setting up the microscope the prism can be actively focused. See also Glan–Foucault prism Glan–Thompson prism Nicol prism Prism (optics) Rochon prism Sénarmont prism References External links Nomarski Prism Action in Polarized Light Wavefront Shear in Wollaston and Nomarski Prisms Polarization (waves) Prisms (optics) Microscopy
Nomarski prism
[ "Physics", "Chemistry" ]
261
[ "Polarization (waves)", "Astrophysics", "Microscopy" ]
7,001,234
https://en.wikipedia.org/wiki/Sediment%20trap
Sediment traps are instruments used in oceanography and limnology to measure the quantity of sinking particulate organic (and inorganic) material in aquatic systems, usually oceans, lakes, or reservoirs. This flux of material is the product of biological and ecological processes typically within the surface euphotic zone, and is of interest to scientists studying the role of the biological pump in the carbon cycle. Sediment traps normally consist of an upward-facing funnel that directs sinking particulate matter (e.g. marine snow) towards a mechanism for collection and preservation. Typically, traps operate over an extended period of time (weeks to months) and their collection mechanisms may consist of a series of sampling vessels that are cycled through to allow the trap to record the changes in sinking flux with time (for instance, across a seasonal cycle). Preservation of collected material is necessary because of these long deployments, and prevents sample decomposition and its consumption by zooplankton "swimmers". Traps are often moored at a specific depth in the water column (usually below the euphotic zone or mixed layer) in a particular location, but some are so-called Lagrangian traps that drift with the surrounding ocean currents (though they may remain at a fixed depth). These latter traps travel with the biological systems that they study, while moored traps are subject to variability introduced by different systems (or states of systems) "passing by". However, because of their fixed location, moored traps are straightforward to recover for analysis of their measurements. Lagrangian traps must surface at a predetermined time, and report their position (usually via satellite) in order to be recovered. See also Biological pump f-ratio Marine snow Mooring (oceanography) Primary production References Oceanographic instrumentation Limnology
Sediment trap
[ "Technology", "Engineering" ]
362
[ "Oceanographic instrumentation", "Measuring instruments" ]
7,001,511
https://en.wikipedia.org/wiki/Netgear%20DG834%20%28series%29
The DG834 series are popular ADSL modem router products from Netgear. The devices can be directly connected to a phone line and establish an ADSL broadband Internet connection to an internet service provider (ISP) and share it among several computers via 802.3 Ethernet and (on many models) 802.11b/g wireless data links. These devices are popular among ISPs as they provide an all-in-one solution (ADSL modem/router/firewall/switch), which is ideal for home broadband users. The Netgear UK website claims the DG834G is the most popular wireless router in the UK and lists five awards that it has received. The DG834G is perhaps the most popular product of the series, and has been produced in five versions. All versions have Wi-Fi. The DG834 (without the G suffix) is the same product but without Wi-Fi. Wi-Fi can be added later by plugging in a wireless access point, although this then occupies one of the RJ45 ports. The DG834GT is a similar product - it looks like a DG834G v2 or v3, but has a Broadcom chipset like a DG834G v4 and supports Atheros Super G, which can achieve a 108 Mbit/s signaling rate (double that of standard 802.11g). In the United Kingdom, many DG834GT routers were supplied by Sky Broadband and are branded with a Sky logo. Sky later supplied a DG934G router, which is a DG834G v3 router in a black case. The DG834GB is similar to the DG834GT and has a Broadcom chipset, but supports only 54 Mbit/s Wi-Fi. It has modifications to support Annex-B ADSL. The DG834PN model has Wi-Fi but no external antenna. It has six internal antennas, and is easily recognised by the blue dome on the top of its case. The DG834GSP model is locked to a particular ISP. Firmware Netgear's stock firmware on all products in the series runs Linux. This has led to popularity among computer enthusiasts as it provides a cheaper alternative to a Linux router. Much of the Netgear firmware is built from open-source software, and Netgear provides this source code and the build system to enable users to reassemble a new firmware image. As a result, various individuals and projects have produced modified firmware which extends the capabilities of the built-in firmware. It is also possible to completely replace the built-in firmware for TI-AR7 and Broadcom chipsets with firmware from other projects, such as OpenWrt. All products except the DG834(G) v5 run on a MIPS architecture CPU; the DG834(G) v5 runs on an ARM architecture CPU. Security issue Any person who can access the router using a web browser can enable "debug" mode using a specific URL and then connect via Telnet directly to the router's embedded Linux system as 'root', which gives unfettered access to the router's operating system via its Busybox functionality. Additionally, a 'hidden' URL also allows unfettered access (on a v5 model a username and password are requested). There is no user option provided to disable this. On default Netgear firmware, Telnet access lacks a password or other control; on ISP-modified versions (such as Sky's), a Telnet password exists based on the MAC address, which can be found via online websites. 
Default settings IP address: 192.168.0.1 (alternate login URL http://www.routerlogin.net/) Username: admin (Virgin-branded units have a default user of virgin) Password: password (Sky-branded units have a default password of sky) Function set to Router + Modem Specifications 4-port 10/100 Mbit/s Ethernet switch Wireless Access Point (802.11b+g) (not on DG834 models) ADSL/ADSL2/ADSL2+ modem Firewall (as of DG834G v5 restricted to 20 rules) Router Differences between revisions/versions DG834(G) DG834(G) v1: first release, known as v1 in retrospect. Grey case, larger than subsequent models. 15 V AC power supply. TI-AR7 chipset (MIPS32 CPU), 16 MB of SDRAM, 4 MB of flash memory. G versions have a black removable antenna at the rear left, using an RP-SMA connector. DG834(G) v2: new smaller design in a white case. Different power supply requirements from v1, otherwise almost identical electrically (uses same firmware as v1). G versions have a white removable antenna. DG834(G) v3: RoHS-compliant construction. Expands the wireless encryption options to include: WPA2-PSK (Wi-Fi Protected Access 2 with Pre-Shared Key), WPA-PSK+WPA2-PSK, WPA2-802.1x, and WPA-802.1x+WPA2-802.1x. Adds an "Advanced Wireless Settings" page to enable various wireless Bridge and Repeater modes. Removes the Parental control and Trend Micro Security Services functionality. Adds a PPPoE relay mode. Improves sync speeds on good/short lines. White removable antenna. DG834(G) v4: Fixed antenna on G versions. Now uses the Broadcom BCM6348 V0.7 chipset; the Ethernet switch is now a Broadcom BCM5325 (previously it was Marvell) and the Wi-Fi module is branded Broadcom (previously it was Texas Instruments). Front connection LED split in two: one for "carrier wave" connection, another for the PPP link. The 4 Ethernet LEDs are now on the left of the front panel - all previous models had them to the right. DG834(G) v5 (aka DG834GNA for North America): G versions have the fixed antenna on the right (when viewed from the front) - all previous models had the antenna on the left. The antenna is attached to the board with a u-FL connector, so an upgrade would be possible by somebody willing to void their warranty. Comes with additional buttons for power, Push 'n' Connect using Wi-Fi Protected Setup, and for switching the wireless radio on and off. Reset is achieved by holding in both side buttons simultaneously for about 6 seconds until the power light flashes. The chipset used is a Conexant CX94610, which has an ARM CPU (all previous models used a MIPS CPU). Quality of service was also added as a fully configurable feature with firmware version 6.00.25, but with the newer firmware version 1.6.01.34 the quality of service and wireless distribution system features are not available. References External links Netgear DG834Gv5 Product Information DG834 Hardware routers Linux-based devices
Netgear DG834 (series)
[ "Technology" ]
1,646
[ "Netgear", "Wireless networking" ]
7,001,745
https://en.wikipedia.org/wiki/Impedance%20of%20free%20space
In electromagnetism, the impedance of free space, , is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is, where is the electric field strength, and is the magnetic field strength. Its presently accepted value is , where Ω is the ohm, the SI unit of electrical resistance. The impedance of free space (that is, the wave impedance of a plane wave in free space) is equal to the product of the vacuum permeability and the speed of light in vacuum . Before 2019, the values of both these constants were taken to be exact (they were given in the definitions of the ampere and the metre respectively), and the value of the impedance of free space was therefore likewise taken to be exact. However, with the revision of the SI that came into force on 20 May 2019, the impedance of free space as expressed with an SI unit is subject to experimental measurement because only the speed of light in vacuum retains an exactly defined value. Terminology The analogous quantity for a plane wave travelling through a dielectric medium is called the intrinsic impedance of the medium and designated (eta). Hence is sometimes referred to as the intrinsic impedance of free space, and given the symbol . It has numerous other synonyms, including: wave impedance of free space, the vacuum impedance, intrinsic impedance of vacuum, characteristic impedance of vacuum, wave resistance of free space. Relation to other constants From the above definition, and the plane wave solution to Maxwell's equations, where H/m is the magnetic constant, also known as the permeability of free space, F/m is the electric constant, also known as the permittivity of free space, is the speed of light in free space, The reciprocal of is sometimes referred to as the admittance of free space and represented by the symbol . Historical exact value Between 1948 and 2019, the SI unit the ampere was defined by choosing the numerical value of to be exactly . Similarly, since 1983 the SI metre has been defined relative to the second by choosing the value of to be . Consequently, until the 2019 revision, exactly, or exactly, or This chain of dependencies changed when the ampere was redefined on 20 May 2019. Approximation as 120π ohms It is very common in textbooks and papers written before about 1990 to substitute the approximate value 120 ohms for . This is equivalent to taking the speed of light to be precisely in conjunction with the then-current definition of as . For example, Cheng 1989 states that the radiation resistance of a Hertzian dipole is (result in ohms; not exact). This practice may be recognized from the resulting discrepancy in the units of the given formula. Consideration of the units, or more formally dimensional analysis, may be used to restore the formula to a more exact form, in this case to See also Electromagnetic wave equation Mathematical descriptions of the electromagnetic field Near and far field Sinusoidal plane-wave solutions of the electromagnetic wave equation Space cloth Vacuum Wave impedance References and notes Further reading Electromagnetism Physical constants
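The defining relation and its connection to the vacuum constants can be written compactly. The following is a sketch of the standard relations described above; the numerical values are the commonly quoted CODATA figure and the conventional approximation, stated here as assumptions rather than recovered from this text.

```latex
% Standard relations for the impedance of free space (sketch; the
% numerical value is the commonly quoted CODATA figure, not taken
% from this text).
\[
  Z_0 \;=\; \frac{|\mathbf{E}|}{|\mathbf{H}|}
      \;=\; \mu_0 c
      \;=\; \frac{1}{\varepsilon_0 c}
      \;=\; \sqrt{\frac{\mu_0}{\varepsilon_0}}
      \;\approx\; 376.730\ \Omega ,
  \qquad
  Y_0 \;=\; \frac{1}{Z_0}.
\]

% The older textbook approximation discussed above, obtained by taking
% c = 3 x 10^8 m/s together with mu_0 = 4*pi x 10^-7 H/m:
\[
  Z_0 \;\approx\; \bigl(4\pi\times 10^{-7}\,\mathrm{H/m}\bigr)
                  \bigl(3\times 10^{8}\,\mathrm{m/s}\bigr)
      \;=\; 120\pi\ \Omega
      \;\approx\; 376.99\ \Omega .
\]
```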
Impedance of free space
[ "Physics", "Mathematics" ]
642
[ "Electromagnetism", "Physical phenomena", "Physical quantities", "Quantity", "Physical constants", "Fundamental interactions" ]
7,002,202
https://en.wikipedia.org/wiki/Claudia%20Mitchell
Claudia Mitchell (born 1980) is a former United States Marine whose left arm was amputated near the shoulder following a motorcycle crash in 2004. She became the first woman to be outfitted with a bionic arm. The arm is controlled through muscles in her chest and side, which in turn are controlled by the nerves that had previously controlled her real arm. The nerves were rerouted to these muscles in a process of targeted reinnervation. Her prosthesis, a prototype developed by the Rehabilitation Institute of Chicago, was one of the most advanced prosthetic arms developed to date. References External links New Yorker article about Mitchell and the prosthetic procedure Video of Mitchell demonstrating the prosthetic on youtube from New Scientist magazine 1980 births American amputees Cyborgs Living people United States Marines Female United States Marine Corps personnel Place of birth missing (living people) 21st-century American women
Claudia Mitchell
[ "Biology" ]
181
[ "Cyborgs" ]
7,002,304
https://en.wikipedia.org/wiki/Indiplon
Indiplon (INN and USAN) is a nonbenzodiazepine, hypnotic sedative that was developed in two formulations—an immediate-release formulation for sleep onset, and a modified-release (also called controlled-release or extended-release) version for sleep maintenance. Pharmacology Pharmacodynamics Indiplon works by enhancing the action of the inhibitory neurotransmitter GABA, like most other nonbenzodiazepine sedatives. It primarily binds to the α1 subunits of the GABAA receptors in the brain. Pharmacokinetics Indiplon has a short elimination half-life of 1.5 to 1.8 hours in young and elderly subjects, respectively. History Indiplon was discovered at Lederle Laboratories (which was later acquired by Wyeth) in the 1980s and was called CL 285,489. In 1998 Lederle licensed it, along with other early stage drug candidates, to DOV Pharmaceutical, a startup formed by former Lederle employees, and Dov exclusively sublicensed its rights in the drug to Neurocrine Biosciences in that same year. In 2002, Neurocrine entered into an agreement with Pfizer to develop the drug. Indiplon was originally scheduled for release in 2007, when Sanofi-Aventis' popular hypnotic zolpidem lost its patent rights in the United States and thus became available as a much less expensive generic. In 2002, Neurocrine Biosciences had entered into an agreement with Pfizer to co-market indiplon in the US, in a deal worth a potential $400mn. However, following the issuing of a non-approvable letter for the modified-release 15 mg formulation and an approvable letter with stipulations for the 5 mg and 10 mg immediate-release version by the FDA in May 2006, Pfizer ended its relationship with Neurocrine. Neurocrine's stock price dropped 60% on the news. Following a resubmission, the FDA in December 2007 deemed Neurocrine's new drug application (NDA) 'approvable' in the 5 and 10 mg formulations, but requested new studies as a prerequisite to approval, including a clinical trial in the elderly, a safety study comparing adverse effects to those of similarly marketed drugs, and a preclinical study examining indiplon's safety in the third trimester of pregnancy. Following the 2007 FDA letter, Neurocrine decided to discontinue all clinical and marketing development of Indiplon in the United States. References External links 2004 press release announcing Neurocrine's new product, Indiplon GenomeNet Entry: D02640 Hypnotics Pyrazolopyrimidines Sedatives Ketones Thiophenes Acetanilides GABAA receptor positive allosteric modulators
Indiplon
[ "Chemistry", "Biology" ]
619
[ "Hypnotics", "Behavior", "Ketones", "Functional groups", "Sleep" ]
7,002,935
https://en.wikipedia.org/wiki/Herman%20Sp%C3%B6ring%20Jr.
Herman Diedrich Spöring Jr. (1733–1771) was a Finnish explorer, draughtsman, botanist and naturalist. Early life Herman Spöring Jr. was born in 1733 in the town of Turku, at that time a major Finnish city and administrative center of the Swedish Empire. He was the son of an amateur naturalist and professor of medicine at the Academy of Åbo, Herman Spöring Sr. (1701–1747), in Turku, Finland. Spöring Jr. attended the Academy as a youth, studying medicine under his father. Sometime around 1755, at the age of 22, he moved to London, where he began working at a watchmaker's. During this time, he became acquainted with the Swedish naturalist Daniel Solander, who employed him as his personal clerk for a time. In 1768, Spöring Jr. was enlisted as a clerk, assistant naturalist and personal secretary in the entourage of Joseph Banks, a wealthy young botanist who was preparing for an expedition to the Pacific Ocean, sponsored by the British Royal Society. This expedition had as one of its principal goals the observation of the transit of Venus. However, it was also intended to make scientific studies of the flora and fauna of any new lands encountered in the course of the voyage. Indeed, the confidential purpose of the voyage - from the point of view of the British Admiralty, in particular - was to seek out the hypothetical "unknown southern continent", or Terra Australis (Incognita). The other noted naturalist on the voyage was Daniel Solander, Spöring's former employer who had recommended Spöring for the post when he himself signed up. Solander was a former student and protégé of the noted Swedish botanist and founder of modern taxonomy, Carl Linnaeus. Spöring was also a skilled instrument and clock maker, and in addition to his cataloging duties was assigned the maintenance and upkeep of the ship's scientific equipment during the voyage. Voyage to the Pacific The expedition left England in 1768, aboard HM Bark Endeavour under the command of R.N. Lt. James Cook, bound for the Society Islands (present-day Tahiti). They arrived there in 1769, where the observations of Venus were taken during the transit on 3 June. Spöring had to repair the astronomical quadrant after it had become damaged when it was taken by the local Polynesian inhabitants. Leaving the Society Islands, the expedition sailed southwards, reaching New Zealand, where Spöring Jr. and the other naturalists became the first Europeans to have landed there. The ensuing months were spent gathering and documenting specimens of native plant and animal life there. At a bay now known as Tolaga Bay (not far from the modern township of Gisborne), Cook bestowed the name Spöring Island on a landmark, after the botanist. Today, the island is best known by its original Māori name, Pourewa. The expedition continued westwards, and in 1770 the Endeavour encountered the southeastern coastline of the Australian continent, and became the first European vessel to have navigated the eastern side of the continent. The expedition made first landfall at a site Cook named Botany Bay, very near the site at which 18 years later the colony of Sydney would be established. Banks, Solander and Spöring collected further unique specimens from this site. This collection would be greatly augmented later when the Endeavour was laid up for several weeks for repairs, after having run aground on a section of the Great Barrier Reef much further to the north. 
The naturalists availed themselves of the opportunity whilst repairs were being made to continue their compilation of new species. The first meeting between the Aboriginal people and the British explorers occurred on 29 April 1770 when Lieutenant James Cook landed at Botany Bay (Kamay) and encountered the Gweagal clan. Two Gweagal men opposed the landing party and in the confrontation one of them was shot and wounded. At some point in his life, Spöring Jr. created an art piece featuring the now rare Heva Tūpāpāʻu funeral costume from Tahiti, of which very few examples still exist. Once the repairs were completed, the expedition continued northwards to the East Indies port of Batavia (Jakarta). Until this point in the voyage, no crewmember or passenger had been lost to disease; however, the unhealthy conditions of the port and their new provisions would soon result in quite a few deaths, including that of Spöring himself. In 1771 on the return leg, Spöring died of dysentery complications related to food poisoning. He was buried at sea on 24 January 1771. Achievements and commemorations He has a commemorative statue dedicated to him in Sydney, Australia. In 1990, a rock taken from Pourewa (Spöring) Island was transported to Spöring's birthplace of Åbo, Finland, to be placed in a monument set up to commemorate his achievements and ties with New Zealand, as the first Finn to have landed there. Amongst his achievements are the discovery and illustration of a number of hitherto-unknown Australian species. His colleagues and successors who studied his materials have recognised the accuracy and form of his drawings and annotations. His efforts, along with those of others on the voyage provided critical new materials for study, which allowed further advances in the historical development of the theory of evolution to be made. References 18th-century Finnish botanists Swedish explorers Expatriates in Australia People who died at sea Burials at sea Finnish explorers Draughtsmen Botanists active in Australia Explorers of Australia Finnish explorers of the Pacific Scientists from Turku 1733 births 1771 deaths Botanists active in New Zealand 18th-century Swedish botanists James Cook Expatriates in the United Kingdom Finnish expatriates in Australia Finnish expatriates in England
Herman Spöring Jr.
[ "Engineering" ]
1,178
[ "Design engineering", "Draughtsmen" ]
7,003,710
https://en.wikipedia.org/wiki/Klotz%20Digital
Klotz Digital AG was a manufacturer of audio media products based in Munich, Germany; it was founded in 1990 and acquired by United Screens Media AG in 2009. The company was active in the two business segments Public Address and Radio & TV Broadcast. Its products include systems for radio broadcast, television broadcast, live sound, public address, and commercial sound. History Klotz Digital was founded in 1990 by Thomas Klotz. The company's products were first used in live sound installations and later in the 1990s found their way into broadcast facilities. In 2002 the company entered into the public address market with a digital public address product line named Varizone. The live sound, broadcast, and public address markets were the main markets for the company. At the end of 2009, Klotz Digital AG was acquired by United Screens Media AG. Thomas Klotz resigned from his position, and Dr. Andreas Gruettner, known as CEO of United Screens Media AG, was appointed Klotz Digital’s new CEO. The company was then renamed to QPhonics AG and finally after its insolvency in 2013 turned into a company named Qphonics GmbH which went into insolvency in 2015. Klotz Communications GmbH, the new company from Thomas Klotz and his partner Andre Sauer, has purchased the assets of the former Qphonics GmbH from the company's insolvency lawyers. Klotz Communications is now the sole owner of all intellectual property, including hardware and software, and controls all licensing, maintenance and upgrades. Products The broadcast products range from stand-alone on-air mixing consoles for radio and TV stations to a suite of products to enable efficient workflows in large broadcast facilities and production studios. References External links Company website Audio equipment manufacturers of Germany Radio electronics Radio technology Audio mixing console manufacturers Manufacturing companies based in Munich
Klotz Digital
[ "Technology", "Engineering" ]
379
[ "Information and communications technology", "Radio electronics", "Telecommunications engineering", "Radio technology" ]
7,004,401
https://en.wikipedia.org/wiki/Pierre%20Rosenstiehl
Pierre Rosenstiehl (5 December 1933 – 28 October 2020) was a French mathematician recognized for his work in graph theory, planar graphs, and graph drawing. The Fraysseix-Rosenstiehl's planarity criterion is at the origin of the left-right planarity algorithm implemented in Pigale software, which is considered the fastest implemented planarity testing algorithm. Rosenstiehl was directeur d’études at the École des Hautes Études en Sciences Sociales in Paris, before his retirement. He was a founding co-editor in chief of the European Journal of Combinatorics. Rosenstiehl, Giuseppe Di Battista, Peter Eades and Roberto Tamassia organized in 1992 at Marino (Italy) a meeting devoted to graph drawing which initiated a long series of international conferences, the International Symposia on Graph Drawing. He has been a member of the French literary group Oulipo since 1992. He married the French author and illustrator Agnès Rosenstiehl. References 1933 births 2020 deaths Oulipo members 20th-century French mathematicians 21st-century French mathematicians Graph theorists Graph drawing people Researchers in geometric algorithms Academic journal editors Academic staff of the School for Advanced Studies in the Social Sciences Academic staff of HEC Paris
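A planarity check in the same family as the Fraysseix–Rosenstiehl left-right criterion mentioned above is available in the networkx Python library, whose check_planarity routine is documented as a left-right planarity test. The following is a minimal sketch, assuming networkx is installed; the example graphs are illustrative and not taken from this article.

```python
# Minimal planarity-testing sketch using networkx, whose check_planarity
# routine is documented as a left-right planarity test (the family of
# algorithms associated with the Fraysseix-Rosenstiehl criterion).
import networkx as nx

K4 = nx.complete_graph(4)   # planar
K5 = nx.complete_graph(5)   # smallest non-planar complete graph

for name, graph in [("K4", K4), ("K5", K5)]:
    is_planar, embedding = nx.check_planarity(graph)
    # embedding is a combinatorial planar embedding when is_planar is True,
    # and None otherwise (with the default counterexample=False).
    print(f"{name}: planar = {is_planar}")
```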
Pierre Rosenstiehl
[ "Mathematics" ]
256
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
7,004,443
https://en.wikipedia.org/wiki/Arctic%20haze
Arctic haze is the phenomenon of a visible reddish-brown springtime haze in the atmosphere at high latitudes in the Arctic due to anthropogenic air pollution. A major distinguishing factor of Arctic haze is the ability of its chemical ingredients to persist in the atmosphere for significantly longer than other pollutants. Due to limited amounts of snow, rain, or turbulent air to displace pollutants from the polar air mass in spring, Arctic haze can linger for more than a month in the northern atmosphere. History Arctic haze was first noticed in 1750 when the Industrial Revolution began. Explorers and whalers could not figure out where the foggy layer was coming from. "Poo-jok" was the term the Inuit used for it. Another hint towards clarifying this issue was relayed in notes approximately a century ago by Norwegian explorer Fridtjof Nansen. After trekking through the Arctic he found dark stains on the ice. The term "Arctic haze" was coined in 1956 by J. Murray Mitchell, a US Air Force officer stationed in Alaska, to describe an unusual reduction in visibility observed by North American weather reconnaissance planes. From his investigations, Mitchell thought the haze had come from industrial areas in Europe and China. He went on to become an eminent climatologist. The haze is seasonal, reaching a peak in late winter and spring. When an aircraft is within a layer of Arctic haze, pilots report that horizontal visibility can drop to one tenth that of normally clear sky. At this time it was unknown whether the haze was natural or was formed by pollutants. In 1972, Glenn Edmond Shaw attributed this smog to transboundary anthropogenic pollution, whereby the Arctic is the recipient of contaminants whose sources are thousands of miles away. Further research continues with the aim of understanding the impact of this pollution on global warming. Origin of pollutants Coal-burning in northern mid-latitudes contributes aerosols containing about 90% sulfur and the remainder carbon, which makes the haze reddish in color. This pollution is helping the Arctic warm up faster than any other region, although increases in greenhouse gases are the main driver of this climatic change. Sulfur aerosols in the atmosphere affect cloud formation, leading to localized cooling effects over industrialized regions due to increased reflection of sunlight, which masks the opposite effect of trapped warmth beneath the cloud cover. During the Arctic winter, however, there is no sunlight to reflect. In the absence of this cooling effect, the dominant effect of changes to Arctic clouds is an increased trapping of infrared radiation from the surface. Ship emissions, mercury, aluminium, vanadium, manganese, and aerosol and ozone pollutants are among the many examples of the pollution affecting this atmosphere, but the smoke from forest fires is not a significant contributor. Some of those pollutants figure among environmental effects of coal burning. Due to low deposition rates, these pollutants are not yet having adverse effects on people or animals. Different pollutants actually represent different colors of haze. Dr. Shaw discovered in 1976 that the yellowish haze is from dust storms in China and Mongolia. The particles were carried polewards by unusual air currents. When he took a sample the next year, the trapped particles were dark gray, caused by a heavy load of industrial pollutants. 
A 2013 study found that at least 40% of the black carbon deposited in the Arctic originated from gas flares, predominately from oil extraction activities throughout the northern latitudes. The black carbon is short-lived, but such routine flaring also emits vast quantities of sulphur. Home fires in India also contribute. Recent studies According to Tim Garrett, an assistant professor of meteorology at the University of Utah involved in the study of Arctic haze at the university, mid-latitude cities contribute pollution to the Arctic, and it mixes with thin clouds, allowing them to trap heat more easily. Garrett's study found that during the dark Arctic winter, when there is no precipitation to wash out pollution, the effects are strongest, because pollutants can warm the environment up to three degrees Fahrenheit. Scientific predictions European climatologists predicted in 2009 that by the end of the 21st century, the temperature of the Arctic region is expected to rise 3° Celsius on an average day. In that same article, National Geographic quoted the co-author of the study, Andreas Stohl, of the Norwegian Institute for Air Research, "Previous climate models have suggested that the Arctic's summer sea ice may completely disappear by 2040 if warming continues unabated." See also Bioamplification Convention on Long-Range Transboundary Air Pollution Global distillation Kyoto Protocol Montreal Protocol Ozone depletion Stockholm Convention on Persistent Organic Pollutants Footnotes References Connelly, Joel. Pictures of Arctic are Hard to Argue With. 13 November 2006. Seattle Post-Intelligencer. Rozell, Ned. Arctic Haze: An Uninvited Spring Guest. 2 April 1996. Geographical Institute, University of Alaska Fairbanks. 1 May 2007 Study: The Haze is Heating Up the Arctic. 10 May 2006. United Press International. Garrett, Tim. Pollutant Haze is Heating up the Arctic. 10 May 2006. Earth Observatory. Contaminating the Arctic. 1 January 1999. Scholastic. Gorrie, Peter. Grim prognosis for Earth. 3 January 2007. Toronto Star. External links What is Arctic Haze? Air pollution Fog Smog Visibility Geography of the Arctic Environment of the Arctic
Arctic haze
[ "Physics", "Mathematics" ]
1,119
[ "Visibility", "Fog", "Physical quantities", "Quantity", "Smog", "Wikipedia categories named after physical quantities" ]
7,004,848
https://en.wikipedia.org/wiki/F%C3%A1inne
(; pl. Fáinní but often Fáinnes in English) is the name of a pin badge worn to show fluency in, or a willingness to speak, the Irish language. The three modern versions of the pin as relaunched in 2014 by Conradh na Gaeilge are the Fáinne Óir (gold circle), Seanfháinne (old fáinne/circle) and Fáinne Airgid (silver circle). In other contexts, fáinne simply means "ring" or "circle" and is also used to give such terms as fáinne pósta (wedding ring), fáinne an lae (daybreak), Tiarna na bhFáinní (The Lord of the Rings), and fáinne cluaise (earring). An Fáinne Úr An Fáinne Úr ('úr' meaning 'new') is the modernised rendition of the Fáinne, having been updated in 2014 by Conradh na Gaeilge. There are three versions presently available, none requiring test or certification: Fáinne Óir (Gold Fáinne) – for fluent speakers; Fáinne Mór Óir (literally, "Large Gold Fáinne") – traditional larger, old style solid 9ct Gold (Colour), the style worn by Liam Neeson in his film portrayal of Michael Collins; Fáinne Airgid (Silver Fáinne) – for speakers with a basic working knowledge of the language. An Fáinne (The Original Organisation) Two Irish language organisations, An Fáinne (est. 1916) ("The Ring" or "The Circle" in Irish) and the Society of Gaelic Writers (est. 1911), were founded by Piaras Béaslaí (1881–1965). They were intended to work together to a certain extent, the former promoting the language and awarding those fluent in its speaking with a Fáinne Óir (Gold Ring) lapel pin, and the latter would promote and create a pool of quality literary works in the language. All the personnel actively involved in promoting the concept of An Fáinne were associated with Conradh na Gaeilge, and from an early time, An Fáinne used the Dublin postal address of 25 Cearnóg Pharnell / Parnell Square, the then HQ of Conradh na Gaeilge though the organisations were officially separate, at least at first. The effectiveness of the organisation was acknowledged in the Dáil Éireann on 6 August 1920, when Richard Mulcahy, the Sinn Féin Teachta Dála for Clontarf suggested that a league on the model of the Fáinne for the support of Irish manufactures might be established. The Fáinne lapel pins were, at first, a limited success. They appealed mainly to Nationalists and Republicans, for whom the language was generally learnt as adults as a second language. The appeal to people for whom Irish was the native tongue was limited. They spoke Irish, as did everyone from their village, so there was no point whatsoever wearing a pin to prove it, even if they could have afforded one, or for that matter, even known they existed. In the early 1920s, many people who earned their Fáinne did so in prison, the majority of these being anti-treaty Irish Republican Army (IRA) Volunteers during the Irish Civil War. History According to Piaras Béaslaí's own article in the magazine Iris An Fháinne in 1922, he states that in the winter of 1915 the language movement was at a low ebb due to lack of funds and a large portion of the best Gaels being so involved in the work of the volunteers that they were forgetting about speaking Irish. He says he wrote an article in The Leader proposing that Gaels establish an association of those who would take a solemn oath to only speak Irish at certain events and to other Gaeilgeoirí and that they should wear a clear symbol. The article got many letters in favour and against, but two men, Tadhg Ó Scanaill and Colm Ó Murchadha, came to him asking him to organise a meeting towards setting up a council. 
He says that it was they who set the whole thing up. He says that he went to speak to Cú Uladh (Peadar Mac Fhionnlaíoch 1856–1942), then vice president of Conradh na Gaeilge, and he highly praised the idea. The meeting was organised for some time in the spring of 1916 in Craobh an Chéitinnigh (the Keating Branch). They went to a 'seanchus' prior to their own meeting in the Ard Chraobh (High Branch) and presented their idea to all those present. They were so taken with the idea that they all came with them to their own meeting in Craobh an Chéitinnigh. Cú Uladh was there before them, and at this meeting they decided they would (1) form the association and (2) name it "An Fáinne" instead of "An Fáinne Gaedhalach", which was proposed by Colm Ó Murchadha, and three officers were elected to conduct the work of the association. Piaras supposes that Tadhg Ó Sganaill first thought of the Fáinne (ring) as the symbol. It was an inspired idea, he says, because no one had even thought of this symbol when the name was first proposed. He states at the end of the article that they had only begun the work of the committee when Easter Week arrived and some of the small number that were involved were snatched away, but, he says, the work continued and the world knows how well they got on since then. Recognition The consistently high standard required to qualify for the Fáinne at this time made them quite prestigious, and there are many reports of people being recruited as night-school teachers of Irish based purely on the fact they wore the pin. The President of the Executive Council of the Irish Free State, W. T. Cosgrave, acknowledged the Fáinne on 8 February 1924 as an indicator of Irish-language proficiency. Demise The fact that the underlying reason many Fáinne wearers had studied Irish was political meant that, with the semi-independence of the Irish Free State, the later complete independence of the Irish Republic, and a period of relative peace in the new province of Northern Ireland, they had, to some extent, achieved their aim. Twenty years or so later, a Fáinne would be a very rare sight. Due to lack of demand they were no longer manufactured, and the organisation had fizzled out. 'An Fáinne Nua' Conradh na Gaeilge and other Irish-language bodies attempted a revival, circa 1965, of the Fáinne, which, for a short time at least, became successful: An Fáinne Nua ('The New Fáinne') was marketed with the slogan Is duitse an Fáinne Nua! – meaning "The New Fáinne is for you!". It came in three varieties: An Fáinne Nua Óir (The new Gold Fáinne), An Fáinne Nua Airgid (The new Silver Fáinne), An Fáinne Nua Daite (The new coloured Fáinne). The Gold Fáinne was manufactured from 9ct Gold, whilst the other two were sterling silver. The Coloured Fáinne also had an enamel blue ring separating two concentric silver circles. The prices for the Gold, Silver and Coloured varieties in 1968 were twelve shillings and sixpence, four shillings and five shillings respectively. They were popular in Ireland during the 1960s–1970s, but fell into relative disuse shortly afterwards. Included among reasons commonly given for this were that the change in fashion made it impractical to wear a lapel pin, and the resumption of hostilities in Northern Ireland, which made people either unwilling to show publicly a "love for things Irish" for fear of intimidation or, among the more radical elements, inclined to place "Irishness" second to "freedom". Non-Fáinne variations Cúpla Focal badge Cúpla focal means "a couple of words". 
The Conradh na Gaeilge website notes that this badge is "Suitable for anyone who has a few words of Irish." Béal na nGael The Béal na nGael (Mouth of the Irish) is a different pin badge that shows a face with spiked hair and an open mouth. It was developed by the students of the Gaelcholáiste Reachrann gaelscoil and marketed primarily to youth in the Dublin Area. "The aim of the badge is to let the world know that the user is both willing and able to speak Irish, and the students say that what they are promoting is 'a practical product to stimulate more peer-to-peer communication through Irish.'" "The badge won't threaten the place of the Fáinne, they say, because their target market is an age group which is not wearing the Fáinne and which, their market research suggests, is in many cases not even aware that the Fáinne exists. They hope this target market will latch on to the badge and wear it as an invitation to others to speak to them in Irish." References External links Official website Culture of Ireland Irish words and phrases Types of jewellery Symbols Rings (jewellery)
Fáinne
[ "Mathematics" ]
1,888
[ "Symbols" ]
7,005,062
https://en.wikipedia.org/wiki/Energy%20conversion%20efficiency
Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output, may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1. Overview Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a device that performs an energy transformation; a light bulb, for example, is an energy converter. Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal- or mission-oriented terms include effectiveness and efficacy. Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, which would result in a perpetual motion machine, which is impossible. However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. It is not called efficiency, but the coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5. When talking about the efficiency of heat engines and power stations the convention should be stated, i.e., HHV (a.k.a. Gross Heating Value) or LCV (a.k.a. Net Heating Value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate but both must be stated. Failure to do so causes endless confusion. Related, more specific terms include Electrical efficiency, useful power output per electrical power consumed; Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work); Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed; 'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed. Same as the thermal efficiency. Luminous efficiency, that portion of the emitted electromagnetic radiation that is usable for human vision. Chemical conversion efficiency The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature. 
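The quantities in the paragraph above are tied together by standard thermodynamic relations, sketched below. These are textbook relations assumed as background here (n denotes the number of electrons transferred per mole and F the Faraday constant, symbols not stated explicitly in this text); they are the arithmetic behind the fuel-cell and electrolysis figures quoted in the next paragraphs.

```latex
% Sketch of the standard relations behind the figures quoted below
% (n = electrons transferred per mole, F = Faraday constant; textbook
% relations assumed here, not stated explicitly in the text).
\[
  \Delta G = \Delta H - T\,\Delta S ,
  \qquad
  Q_{\text{exchanged}} = \Delta H - \Delta G = T\,\Delta S ,
\]
\[
  E_{\text{rev}} = \frac{|\Delta G|}{nF},
  \qquad
  E_{\text{thermoneutral}} = \frac{|\Delta H|}{nF},
  \qquad
  \eta_{\text{ideal electrolysis}} = \frac{\Delta G}{\Delta H}.
\]
```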
A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature. An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V. For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent to the enthalpy (heat) of reaction or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times the theoretical minimum, so the energy efficiency is 0.83 compared to the ideal cell. A water electrolysis unit operating with a higher voltage than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature and the energy efficiency would be less than 0.83. The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction. Fuel heating values and efficiency In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded. Wall-plug efficiency, luminous efficiency, and efficacy In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative-energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses. 
The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy. Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%. Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. 
Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than xenon; able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen-output from the fixture per the lamp output. With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps. Example of energy conversion efficiency See also Cost of electricity by source Energy efficiency (disambiguation) EROEI Exergy efficiency Figure of merit Heat of combustion International Electrotechnical Commission Perpetual motion Sensitivity (electronics) Solar cell efficiency Coefficient of performance References External links Does it make sense to switch to LED? Building engineering Dimensionless numbers of thermodynamics Energy conservation Energy conversion Energy efficiency
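The worked figures above (the ideal-versus-thermoneutral electrolysis comparison and the luminous-efficiency examples) can be reproduced with a short Python sketch. It is illustrative only and uses just the constants quoted in this article:

# Illustrative sketch reproducing figures quoted in this article.

def conversion_efficiency(useful_output, total_input):
    """Generic energy conversion efficiency: useful output divided by total input."""
    return useful_output / total_input

# Chemical conversion: ideal electrolysis of water at 25 degrees C.
DELTA_G = 237.129   # kJ per gram mol, theoretical minimum electrical input (Gibbs energy)
DELTA_H = 285.830   # kJ per gram mol, electrical input when no external heat is supplied
print(f"electrical input / theoretical minimum: {DELTA_H / DELTA_G:.2f}")                       # about 1.2
print(f"efficiency relative to the ideal cell: {conversion_efficiency(DELTA_G, DELTA_H):.2f}")  # about 0.83

# Luminous efficiency: efficacy divided by the theoretical maximum efficacy at the
# relevant wavelength (683 lm/W at 555 nm; 525 lm/W at 589 nm, as quoted above).
sodium_lamp = conversion_efficiency(200.0, 525.0)   # low-pressure sodium lamp, 589 nm
ideal_green = conversion_efficiency(683.0, 683.0)   # monochromatic 555 nm source
print(f"low-pressure sodium luminous efficiency: {sodium_lamp:.1%}")   # about 38.1%
print(f"ideal 555 nm source: {ideal_green:.0%}")                       # 100%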
Energy conversion efficiency
[ "Physics", "Chemistry", "Engineering" ]
2,821
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Building engineering", "Civil engineering", "Architecture" ]
7,005,392
https://en.wikipedia.org/wiki/Business%20Roundtable
The Business Roundtable (BRT) is a nonprofit lobbyist association based in Washington, D.C. whose members are chief executive officers of major U.S. companies. Unlike the United States Chamber of Commerce, whose members are entire businesses, BRT members are exclusively CEOs. The BRT lobbies for public policy that is favorable to business interests, such as lowering corporate taxes in the U.S. and internationally, as well as international trade policy like the North American Free Trade Agreement. In 2019, the BRT redefined its definition of the purpose of a corporation as participating in stakeholder capitalism, putting the interests of employees, customers, suppliers, and communities on par with shareholders. The BRT's board members include, as of 2024, chair Chuck Robbins of Cisco (CEO); former White House chief of staff Joshua Bolten; Mary Barra of General Motors; Tim Cook of Apple; and Jamie Dimon of JPMorgan Chase. History On October 13, 1972, the March Group, co-founded by Alcoa chairman John D. Harper and General Electric CEO Fred Borch, the Construction Users Anti-Inflation Roundtable, founded by retired United States Steel CEO Roger Blough, and the Labor Law Study Group (LLSG) merged to form the Business Roundtable. The March Group consisted of chief executive officers who met informally to consider public policy issues; the Construction Users Anti-Inflation Roundtable was devoted to containing construction costs; and the Labor Law Study Committee was largely made up of labor relations executives of major companies. Harper was the newly founded group's first president, followed by Thomas Murphy of General Motors, Irving Shapiro of DuPont, then Clifford Garvin of Exxon. In 2010, The Washington Post characterized the group as President Barack Obama's "closest ally in the business community." On August 19, 2019, the BRT redefined its decades-old definition of the purpose of a corporation, replacing its bedrock principle that shareholder interests must be placed above all else, as defined in 1970 by conservative economist and Nobel economics laureate Milton Friedman and promoted during the 1980s in the teachings and writings of economist Alfred Rappaport; the shareholder value theory was widely adopted in 20th century North American boardrooms. The BRT statement, signed by nearly 200 chief executive officers from major U.S. corporations in 2019, makes a "fundamental commitment to all of our stakeholders", including customers, employees, suppliers and local communities. Activities The Business Roundtable played a key role in defeating an anti-trust bill in 1975 and a Ralph Nader plan for a consumer protection agency in 1977. It also helped dilute the Humphrey-Hawkins Full Employment Act. But the Roundtable's most significant victory was in blocking labor law reform that sought to strengthen labor law to make it more difficult for companies to intimidate workers who wanted to form unions. The AFL-CIO produced a bill in 1977 that passed the House. But the Roundtable voted to oppose the bill, and through its aggressive lobbying, it prevented the bill's Senate supporters from rounding up the 60 votes in the Senate necessary to withstand a filibuster. In fiscal policy, the Roundtable was responsible for broadening the 1985 tax cuts signed into law by Ronald Reagan, lobbying successfully for sharp reductions in corporate taxes. In trade policy, it argued for opening foreign markets to American trade and investment. 
In 1990, the Roundtable urged George Bush to initiate a free trade agreement with Mexico. In 1993, the Roundtable lobbied for NAFTA and against any strong side agreements on labor and the environment. The Roundtable also supported the new NAFTA deal in 2019. The Roundtable also successfully opposed changes in corporate governance that would have made boards of directors and CEOs more accountable to stockholders. In 1986, the Roundtable convinced the Securities and Exchange Commission to forgo new rules on merger and acquisitions, and in 1993 convinced President Clinton to water down his plan to impose penalties on excessive executive salaries. Citicorp CEO, John Reed, chairperson of the Roundtables Accounting Task Force, argued that Clinton's plan would have had negative effects on U.S. competitiveness. The Roundtable's Health, Welfare, and Retirement Income Task Force, chaired by Prudential Insurance CEO Robert C. Winters, cheered President Bush's plan, which consisted mainly of subsidies to the health care industry. The nation's health care system works well for the majority of Americans, the Roundtable announced in a June 1991 statement. "We believe the solutions lie not in tearing down the present system, but in building upon it." It has issued press releases, submitted editorials, given congressional testimony, and distributed position advertisements. After the No Child Left Behind Act of 2001 was signed into law in January 2002, the Roundtable issued a press release stating that it had "strongly supported passage of the legislation" and was "actively working with states on implementation." The Business Roundtable also acts as a major lobby that aims to extend or maintain administrators' rights/power in large companies. For example, the U.S. Securities and Exchange Commission adopted the so-called "shareholders’ access to proxy" rule, which aimed to empower shareholders in the proposition and nomination of administrators of big corporations. The Business Roundtable was strongly against that rule, as its president John Castellani reported to The Washington Post about removing this rule: "this is our highest priority [...] Literally all of our members have called about this". And they got the upper hand: the SEC rule was finally dropped after intense lobbying and lawsuits. In June 2018, Business Roundtable issued a statement urging the White House "Administration to end immediately the policy of separating accompanied minors from their parents," and condemned the practice as "cruel and contrary to American values." Authored by the organization's Immigration Committee chairman, Chuck Robbins, the statement also commended bipartisan lawmakers for working together to reform immigration policies, and was widely supported by the Business Roundtable chair and membership. In April 2024, the Business Roundtable filed suit against the Federal Trade Commission (FTC) after the FTC issued a ban on noncompete agreements which the FTC cited as "widespread and often exploitative practice imposing contractual conditions that prevent workers from taking a new job or starting a new business." Legislation The Business Roundtable wrote a letter to members of the House strongly endorsing the Customer Protection and End User Relief Act (H.R. 4413; 113th Congress). 
According to the Business Roundtable letter, a survey of chief financial officers and corporate treasurers "underscores the urgent need for the end-user provisions" in this bill because "eighty-six percent of respondents indicated the fully collateralizing over-the-counter (OTC) derivatives would adversely impact business investment, acquisitions, research and development, and job creation." The letter concluded that the Business Roundtable "supports efforts to increase transparency in the derivatives markets and enhance financial stability for the U.S. economy through thoughtful new regulation while avoiding needless costs." Together with the U.S. Chamber of Commerce and the National Association of Manufacturers, in 2021, the BRT lobbied House and Senate Democrats "against raising taxes on corporations, high-income earners and small businesses" to finance the Build Back Better initiative proposed by President Joe Biden. 2019 corporation pledge On August 19, 2019, the Business Roundtable released a new "Statement on the Purpose of a Corporation." Signed by nearly 200 chief executive officers, including Amazon's Jeff Bezos, Apple's Tim Cook, General Motors' Mary Barra and Oracle's Safra Catz, the group seeks to "move away from shareholder primacy", a concept that had existed in the group's principles since 1997, and move to "include commitment to all stakeholders." It notes that "businesses play a vital role in the economy" because of jobs, fostering innovation and providing essential services. But it places shareholder interests on the same level as those of customers, employees, suppliers and communities. "Each of our stakeholders is essential", the statement says. "We commit to deliver value to all of them, for the future success of our companies, our communities and our country." Criticism In September 2019, Bezos was cited as the "first CEO to break his pledge" by the Los Angeles Times. He no longer appeared on the BRT membership roster in 2021. In July 2021, prior to stepping down as CEO, Bezos nonetheless added "Strive to be Earth's Best Employer" to Amazon's set of leadership principles. Former U.S. secretary of labor and professor of public policy at Berkeley University, Robert Reich, accused both corporate social responsibility, and the Business Roundtable's commitment to it, of being a "con". Citing BRT members Jeff Bezos, Mary Barra and Dennis Muilenburg, Reich criticized their respective companies' recent decisions: Whole Foods, an Amazon subsidiary, announced the intention to cut medical benefits for its entire part-time workforce; Mary Barra, despite GM's hefty profits and large tax breaks, rejected worker's demands that GM raise their wages and stop outsourcing their jobs; and Muilenburg, who, as Reich predicted, would depart Boeing with $62 million in compensation and pension benefits, despite the Boeing 737 MAX groundings. U.S. senator Elizabeth Warren, in September 2020, addressed the BRT in correspondence. The "withering, 11-page letter to past and present leaders of the Business Roundtable (BRT)" states that the BRT violates its August 2019 pledge to prioritize stakeholder value, calling the mandate an "empty gesture". In August 2021, Harvard Law School's Program on Corporate Governance found that the 2019 "Statement on the Purpose of a Corporation" represented no meaningful commitment by the BRT membership, citing the pledge made as "mostly for show". 
BRT board of directors As of 2024, corporate CEO members of BRT's board of directors are: President John Engler, 2010–2017 Joshua Bolten, 2017– References Lobbying organizations based in Washington, D.C. Political advocacy groups in the United States Business organizations based in the United States Organizations established in 1972 1972 establishments in Washington, D.C. 501(c)(6) nonprofit organizations Conservative organizations in the United States Life sciences industry Corporate executive associations
Business Roundtable
[ "Biology" ]
2,135
[ "Life sciences industry" ]
7,005,552
https://en.wikipedia.org/wiki/Aleksandr%20Lebedev%20%28biochemist%29
Alexander Nikolayevich Lebedev (1869–1937) was a biochemist in the Russian Empire and the Soviet Union. He is known for his early experiments on the biochemical basis of behavior. Lebedev apprenticed as a student with physiologist and psychologist Ivan Pavlov, becoming familiar with various techniques used in behavioral psychology. Lebedev developed a theory that behavior in general, and specifically conditioned behavior, had a biochemical rather than psychological basis. He began his studies in biochemistry at Moscow State University, obtaining a doctorate in 1898. He then proceeded to publish widely on the topic of "biochemistry of the mind" and is considered by some to have pioneered the field of neuropharmacology. Sources Cooper, D. M. Russian Science Reader, Oxford, Pergamon Press; NY, Macmillan (1964). Biochemists from the Russian Empire Soviet scientists Moscow State University alumni 1869 births 1937 deaths
Aleksandr Lebedev (biochemist)
[ "Chemistry" ]
189
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
7,005,786
https://en.wikipedia.org/wiki/Hengzhi%20chip
The Hengzhi chip is a microcontroller that can store secured information, designed by the People's Republic of China government and manufactured in China. Its functionalities should be similar to those offered by a Trusted Platform Module but, unlike the TPM, it does not follow Trusted Computing Group specifications. Lenovo is selling PCs installed with Hengzhi security chips. The chip could be a development of the IBM ESS (Embedded security subsystem) chip, which was a public key smart card placed directly on the motherboard's system management bus. As of September 2006, no public specifications about the chip are available. The Hengzhi chip has caused issues with the installation of Windows 11, as it does not follow the TPM standards and foreign TPMs are banned in China. See also Trusted Computing Trusted Platform Module References External links Lenovo releases China's first security chip Does China Own Your Box? Cryptographic hardware Trusted computing Science and technology in the People's Republic of China
Hengzhi chip
[ "Engineering" ]
217
[ "Cybersecurity engineering", "Trusted computing" ]
7,005,992
https://en.wikipedia.org/wiki/NBBC
NBBC (the National Broadband Company) was a marketplace for digital video syndication. It connected owners of digital video content (content licensors) with owners of websites that wanted video content (distributors). The marketplace generated revenue by selling advertising against content and sharing the income amongst content licensors and distributors. A joint venture of NBC Universal and NBC's broadcast affiliates, NBBC's charter launch partners included NBC owned-and-operated stations and affiliates, other NBC Universal properties such as USA Network and Bravo, and external partners such as A&E Networks, HowStuffWorks.com, The Washington Post and the Post's Newsweek magazine. On July 3, 2007, NBC shut down NBBC in order to concentrate its web activities on Hulu, its joint venture with News Corporation. References External links Brian Buchwald's Keynote Address at Streaming Media West – 11.1.06 >nbbc Press Release - 09.13.06 New York Times - NBC and Its Stations Venture Into Online Video Market - 09.13.06 Forbes - NBC Joins Online Video Race - 09.13.06 NBC
NBBC
[ "Technology" ]
233
[ "Computing stubs", "World Wide Web stubs" ]
7,005,997
https://en.wikipedia.org/wiki/Central%20Salt%20and%20Marine%20Chemicals%20Research%20Institute
Central Salt and Marine Chemicals Research Institute (formerly Central Salt Research Institute) is a constituent laboratory of the Council of Scientific and Industrial Research (CSIR), India. The institute was inaugurated by Jawahar Lal Nehru on 10 April 1954 at Bhavnagar, in Gujarat. Technology developed Preparations of nutrient-rich salt of plant origins Electrodialysis domestic desalination system Preparation of novel iodizing agent "Clean Write" writing chalk Preparation of low sodium salt of botanic origin Plastic Chip Electrodes Research activity Molecular sensors for selective recognition of cations/anions Recognition of analytes and neutral molecules in physiological condition Supramolecular metal complexes to study photo-induced energy/electron transfer processes Nanocrystalline dye-sensitized solar cells (DSSC) Smart Materials Tailored and modified electrodes. Green Chemistry Polymer Chemistry and development of novel drug delivery system Pharmaceutical Biotechnology and Natural products Recovery of precious metal ions from natural sources Crystal engineering Computational Study Electrochemical/chemical value addition processes Development of polyethylene based inter polymer membranes and design of electrodialysis units References External links 1. About CSMCRI Research institutes in Gujarat Council of Scientific and Industrial Research Education in Bhavnagar Salt industry in India Research institutes established in 1954 1954 establishments in Bombay State
Central Salt and Marine Chemicals Research Institute
[ "Chemistry" ]
258
[ "Chemistry organization stubs" ]
7,006,101
https://en.wikipedia.org/wiki/Leftover%20hash%20lemma
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. Given a secret key X that has n uniform random bits, of which an adversary was able to learn the values of some t < n bits of that key, the leftover hash lemma states that it is possible to produce a key of about n − t bits, over which the adversary has almost no knowledge, without knowing which t bits are known to the adversary. Since the adversary knows all but n − t bits, this is almost optimal. More precisely, the leftover hash lemma states that it is possible to extract a length asymptotic to H∞(X) (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. This is also known as privacy amplification (see privacy amplification section in the article Quantum key distribution). Randomness extractors achieve the same result, but use (normally) less randomness. Let X be a random variable over a set 𝒳 and let m > 0. Let h : 𝒮 × 𝒳 → {0, 1}^m be a 2-universal hash function. If m ≤ H∞(X) − 2 log₂(1/ε), then for S uniform over 𝒮 and independent of X, we have: δ[(h(S, X), S), (U, S)] ≤ ε, where U is uniform over {0, 1}^m and independent of S. Here H∞(X) = −log₂ max_x Pr[X = x] is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that max_x Pr[X = x] is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X. δ(A, B) = ½ Σ_v |Pr[A = v] − Pr[B = v]| is a statistical distance between A and B. See also Universal hashing Min-entropy Rényi entropy Information-theoretic security References C. H. Bennett, G. Brassard, and J. M. Robert. Privacy amplification by public discussion. SIAM Journal on Computing, 17(2):210-229, 1988. C. Bennett, G. Brassard, C. Crepeau, and U. Maurer. Generalized privacy amplification. IEEE Transactions on Information Theory, 41, 1995. J. Håstad, R. Impagliazzo, L. A. Levin and M. Luby. A Pseudorandom Generator from any One-way Function. SIAM Journal on Computing, v28 n4, pp. 1364-1396, 1999. Theory of cryptography Probability theorems
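A minimal Python sketch of the extraction step described above. It assumes a random-binary-matrix hash as the 2-universal family; the parameter choices and names are illustrative, not part of the lemma itself:

import secrets

# Illustrative sketch of privacy amplification with a 2-universal hash.
# The family h_A(x) = A.x over GF(2), with A a uniformly random m x n bit matrix,
# is 2-universal: for x != x', a collision happens with probability exactly 2^-m.

def random_matrix(m, n):
    """Seed of the extractor: a uniformly random m x n binary matrix."""
    return [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]

def extract(matrix, bits):
    """Multiply the matrix by the bit vector over GF(2) to get m nearly uniform bits."""
    return [sum(a * b for a, b in zip(row, bits)) % 2 for row in matrix]

n = 128           # length of the partially leaked secret X
t = 40            # bits the adversary may have learned
epsilon = 2**-16  # target statistical distance
# Lemma guideline: m <= H_min(X) - 2*log2(1/epsilon); as a simplification, take H_min(X) >= n - t.
m = (n - t) - 2 * 16

secret_bits = [secrets.randbelow(2) for _ in range(n)]  # stand-in for the secret X
seed = random_matrix(m, n)
key = extract(seed, secret_bits)
print(len(key), "extracted bits")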
Leftover hash lemma
[ "Mathematics" ]
487
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
7,006,166
https://en.wikipedia.org/wiki/Green%E2%80%93Tao%20theorem
In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number k, there exist arithmetic progressions of primes with k terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770. Statement Let π(N) denote the number of primes less than or equal to N. If A is a subset of the prime numbers such that lim sup_{N→∞} |A ∩ [1, N]| / π(N) > 0, then for all positive integers k, the set A contains infinitely many arithmetic progressions of length k. In particular, the entire set of prime numbers contains arbitrarily long arithmetic progressions. In their later work on the generalized Hardy–Littlewood conjecture, Green and Tao stated and conditionally proved an asymptotic formula for the number of k-tuples of primes in arithmetic progression; the leading constant in the formula is a product of local densities (a singular series). The result was made unconditional by Green–Tao and Green–Tao–Ziegler. Overview of the proof Green and Tao's proof has three main components: Szemerédi's theorem, which asserts that subsets of the integers with positive upper density have arbitrarily long arithmetic progressions. It does not a priori apply to the primes because the primes have density zero in the integers. A transference principle that extends Szemerédi's theorem to subsets of the integers which are pseudorandom in a suitable sense. Such a result is now called a relative Szemerédi theorem. A pseudorandom subset of the integers containing the primes as a dense subset. To construct this set, Green and Tao used ideas from Goldston, Pintz, and Yıldırım's work on prime gaps. Once the pseudorandomness of the set is established, the transference principle may be applied, completing the proof. Numerous simplifications to the argument in the original paper have been found; Conlon, Fox, and Zhao provide a modern exposition of the proof. Numerical work The proof of the Green–Tao theorem does not show how to find the arithmetic progressions of primes; it merely proves they exist. There has been separate computational work to find large arithmetic progressions in the primes. The Green–Tao paper states 'At the time of writing the longest known arithmetic progression of primes is of length 23, and was found in 2004 by Markus Frind, Paul Underwood, and Paul Jobling: 56211383760397 + 44546738095860 · k; k = 0, 1, ..., 22.' On January 18, 2007, Jarosław Wróblewski found the first known case of 24 primes in arithmetic progression: 468,395,662,504,823 + 205,619 · 223,092,870 · n, for n = 0 to 23. The constant 223,092,870 here is the product of the prime numbers up to 23, more compactly written 23# in primorial notation. On May 17, 2008, Wróblewski and Raanan Chermoni found the first known case of 25 primes: 6,171,054,912,832,631 + 366,384 · 23# · n, for n = 0 to 24. On April 12, 2010, Benoît Perichon with software by Wróblewski and Geoff Reynolds in a distributed PrimeGrid project found the first known case of 26 primes: 43,142,746,595,714,191 + 23,681,770 · 23# · n, for n = 0 to 25. In September 2019 Rob Gahan and PrimeGrid found the first known case of 27 primes: 224,584,605,939,537,911 + 81,292,139 · 23# · n, for n = 0 to 26. Extensions and generalizations Many of the extensions of Szemerédi's theorem hold for the primes as well. Independently, Tao and Ziegler and Cook, Magyar, and Titichetrakun derived a multidimensional generalization of the Green–Tao theorem. The Tao–Ziegler proof was also simplified by Fox and Zhao. 
In 2006, Tao and Ziegler extended the Green–Tao theorem to cover polynomial progressions. More precisely, given any integer-valued polynomials P1, ..., Pk in one unknown m, all with constant term 0, there are infinitely many integers x and m such that x + P1(m), ..., x + Pk(m) are simultaneously prime. The special case when the polynomials are m, 2m, ..., km implies the previous result that there are arithmetic progressions of primes of length k. Tao proved an analogue of the Green–Tao theorem for the Gaussian primes. See also Erdős conjecture on arithmetic progressions Dirichlet's theorem on arithmetic progressions Arithmetic combinatorics References Further reading Ramsey theory Additive combinatorics Additive number theory Theorems about prime numbers
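The record progressions in the Numerical work section above were found with specialised sieving software, but the underlying search can be illustrated with a small brute-force Python sketch (illustrative only; it only finds short progressions among small primes):

# Illustrative brute-force search for k-term arithmetic progressions of primes.
# This is not how the record progressions above were found; it only works for small bounds.
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_progressions(k, limit):
    """Yield (start, difference) for k-term progressions of primes below the limit."""
    prime_set = set(primes_up_to(limit))
    for a in sorted(prime_set):
        for d in range(2, (limit - a) // (k - 1) + 1):
            if all(a + i * d in prime_set for i in range(k)):
                yield a, d

# Example: 5-term progressions below 100, such as 5, 11, 17, 23, 29.
for start, diff in prime_progressions(5, 100):
    print([start + i * diff for i in range(5)])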
Green–Tao theorem
[ "Mathematics" ]
1,033
[ "Additive combinatorics", "Theorems about prime numbers", "Combinatorics", "Theorems in number theory", "Ramsey theory" ]
7,006,587
https://en.wikipedia.org/wiki/Internet%20sex%20addiction
Internet sex addiction, also known as cybersex addiction, has been proposed as a sexual addiction characterized by virtual Internet sexual activity that causes serious negative consequences to one's physical, mental, social, and/or financial well-being. It may also be considered a subset of the theorized Internet addiction disorder. Internet sex addiction manifests various behaviours: reading erotic stories; viewing, downloading or trading online pornography; online activity in adult fantasy chat rooms; cybersex relationships; masturbation while engaged in online activity that contributes to one's sexual arousal; the search for offline sexual partners and information about sexual activity. Internet sex addiction can have several causes according to the American Association for Sex Addiction Therapy. The first cause is the neural physiological attachment that occurs during orgasms - reinforcing and attaching the images or scenarios to the addictive behavior concurrently. Secondly, psychological defects like abandonment, unimportance or lack of genuine attachment are sometimes medicated by the instances of sex addiction behavior. Thirdly, the internet sex addict may be using the addiction to balance a legitimate chemical imbalance due to major depression, a bipolar disorder or a manic depressive disorder. The cybersex addict may also struggle with intimacy anorexia since the cyber world feels safer than real relationships. General Cybersex addiction is a form of sexual addiction and Internet addiction disorder. As a form of a compulsive behavior, it can be identified by three criteria: the failure of making a decision about engagement in the behavior, obsession with the behavior, and the inability to stop the behavior despite negative consequences. Adults with this type of addiction engage in at least one of the relevant behaviors. The majority of reasons why individuals experiment with such forms of sexual expression are diverse, and can be associated with an individual's psychological disorders or issues. Individuals who suffer from low self-esteem, severely distorted body image, untreated sexual dysfunction, social isolation, depression, or are in recovery from a prior sexual addiction are more vulnerable to cybersexual addictions. Other psychological issues that may arise with this addiction include struggles for intimacy, self-worth, self-identity, self-understanding. The impact of cybersex addiction may also impact the spouse, partner or others in relationships with the addict. The resulting effects on others may include depression, weight gain and lower self-esteem. If cyber sex addicts have children, their actions may also impact those children (whether they are grown adult children or younger dependents). DSM classification There is an ongoing debate in the medical community concerning the insufficient studies, and of those, their quality, or lack thereof, and the resulting analysis and conclusions drawn from them, such as they are. So far, without repeatable, meaningful, measurable, and quantifiable analysis, no medical community wide acceptably reasonable standards, a definition, have been drawn yet. Hence, internet sex addiction, just like its umbrella sexual addiction, is still not listed in the DSM-5, which is commonly used by psychiatrists in the United States for diagnosing patients problems in a standard uniform way. 
See also Behavioral modernity Evolutionary mismatch Internet pornography Pornography addiction References Further reading External links WikiSaurus:libidinist Cybersexual Addiction Quiz Internet Pornography and Sex Addiction Help through Supplementation Internet Pornography and Masturbation Addiction Help Using Exposure and Response Prevention (ERP) to break away from pornography and masturbation addiction Digital media use and mental health Sexual addiction Sexuality and computing Research on the effects of pornography Behavioral addiction
Internet sex addiction
[ "Technology" ]
724
[ "Computing and society", "Sexuality and computing" ]
7,006,917
https://en.wikipedia.org/wiki/Prefix%20order
In mathematics, especially order theory, a prefix ordered set generalizes the intuitive concept of a tree by introducing the possibility of continuous progress and continuous branching. Natural prefix orders often occur when considering dynamical systems as a set of functions from time (a totally-ordered set) to some phase space. In this case, the elements of the set are usually referred to as executions of the system. The name prefix order stems from the prefix order on words, which is a special kind of substring relation and, because of its discrete character, a tree. Formal definition A prefix order is a binary relation "≤" over a set P which is antisymmetric, transitive, reflexive, and downward total, i.e., for all a, b, and c in P, we have that: a ≤ a (reflexivity); if a ≤ b and b ≤ a then a = b (antisymmetry); if a ≤ b and b ≤ c then a ≤ c (transitivity); if a ≤ c and b ≤ c then a ≤ b or b ≤ a (downward totality). Functions between prefix orders While between partial orders it is usual to consider order-preserving functions, the most important type of functions between prefix orders are so-called history preserving functions. Given a prefix ordered set P, a history of a point p∈P is the (by definition totally ordered) set p− = {q | q ≤ p}. A function f: P → Q between prefix orders P and Q is then history preserving if and only if for every p∈P we find f(p−) = f(p)−. Similarly, a future of a point p∈P is the (prefix ordered) set p+ = {q | p ≤ q} and f is future preserving if for all p∈P we find f(p+) = f(p)+. Every history preserving function and every future preserving function is also order preserving, but not vice versa. In the theory of dynamical systems, history preserving maps capture the intuition that the behavior in one system is a refinement of the behavior in another. Furthermore, functions that are history and future preserving surjections capture the notion of bisimulation between systems, and thus the intuition that a given refinement is correct with respect to a specification. The range of a history preserving function is always a prefix closed subset, where a subset S ⊆ P is prefix closed if for all s,t ∈ P with t∈S and s≤t we find s∈S. Product and union Taking history preserving maps as morphisms in the category of prefix orders leads to a notion of product that is not the Cartesian product of the two orders since the Cartesian product is not always a prefix order. Instead, it leads to an arbitrary interleaving of the original prefix orders. The union of two prefix orders is the disjoint union, as it is with partial orders. Isomorphism Any bijective history preserving function is an order isomorphism. Furthermore, if for a given prefix ordered set P we construct the set P- ≜ { p- | p∈ P} we find that this set is prefix ordered by the subset relation ⊆, and furthermore, that the function max: P- → P is an isomorphism, where max(S) returns for each set S∈P- the maximum element in terms of the order on P (i.e. max(p-) ≜ p). References Dynamical systems Order theory Trees (data structures)
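As a concrete illustration of the definitions above, here is a small Python sketch (illustrative only) of the prefix order on words, the history p−, and a check that a given map preserves histories:

# Illustrative sketch: the prefix order on words and history-preserving maps.
def prefix_le(a, b):
    """a <= b in the prefix order on words iff a is a prefix of b."""
    return b.startswith(a)

def history(p, universe):
    """p- : every element of the universe that is <= p."""
    return {q for q in universe if prefix_le(q, p)}

# A small prefix-closed universe of executions (words over {a, b}).
P = {"", "a", "ab", "abb", "aba"}

# A candidate map into another prefix-ordered set: word length, with
# Q the natural numbers under the usual order (a totally ordered prefix order).
Q = {0, 1, 2, 3}
f = len

def history_preserving(f, P, Q):
    """Check f(p-) = f(p)- for every p, using the orders on P and Q."""
    for p in P:
        image_of_history = {f(q) for q in history(p, P)}
        history_of_image = {m for m in Q if m <= f(p)}
        if image_of_history != history_of_image:
            return False
    return True

print(history_preserving(f, P, Q))  # True

In this toy example the length map is history preserving because every prefix of a word in P is again in P, so every length below the length of p is realised within p−.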
Prefix order
[ "Physics", "Mathematics" ]
731
[ "Order theory", "Mechanics", "Dynamical systems" ]
7,007,010
https://en.wikipedia.org/wiki/Inverted%20bell
The inverted bell is a metaphorical name for a geometric shape that resembles a bell upside-down. By context In architecture, the term is applied to describe the shape of the capitals of Corinthian columns. The inverted bell is used in shape classification in pottery, often featured in archaeology as well as in modern times. In statistics, a bimodal distribution is sometimes called an inverted bell curve. References Geometric shapes
Inverted bell
[ "Mathematics" ]
86
[ "Geometric shapes", "Mathematical objects", "Geometric objects" ]
7,007,246
https://en.wikipedia.org/wiki/Roadheader
A roadheader, also called a boom-type roadheader, road header machine, road header or just header machine, is a piece of excavating equipment consisting of a boom-mounted cutting head, a loading device usually involving a conveyor, and a crawler travelling track to move the entire machine forward into the rock face. The cutting head can be a general purpose rotating drum mounted in line or perpendicular to the boom, or can be a special-function head such as jackhammer-like spikes, compression fracture micro-wheel heads like those on larger tunnel boring machines, a slicer head like a gigantic chain saw for dicing up rock, or simple jaw-like buckets of traditional excavators. History The first roadheader patent was applied for by Dr. Z. Ajtay in Hungary, in 1949. It was invented as a remote-operated miner for exploitation of small seam, close walled deposits, typically in wet conditions. Types Cutting Heads: Transverse - rotates parallel to the cutter boom axis Longitudinal - rotates perpendicular to boom axis Uses Roadheaders were initially used in coal mines. The first use in a civil engineering project was the construction of the City Loop (then called the Melbourne Underground Rail Loop) in the 1970s, where the machines enabled around 80% of the excavation to be performed mechanically. They are now widely used in applications such as tunneling, both for mining and municipal government projects, building wine caves, and building cave homes such as those in Coober Pedy, Australia. On February 21, 2014, Waller Street, just south of Laurier Avenue, collapsed into an 8m-wide and 12m-deep sink-hole where a roadheader was excavating the eastern entrance to Ottawa's LRT O-Train tunnel. A similar incident occurred in June 2016, when a sink-hole opened up in Rideau Street during further construction of the tunnel, and filled with water up to a depth of three metres. The CBC reported that one of Rideau Transit Group’s 135-tonne roadheaders was in a part of the tunnel where the flooding was the deepest. Three roadheaders were used in the construction of the O-Train. Projects utilizing roadheaders Boston's Big Dig Ground Zero Cleanup Addison Airport Toll Tunnel Fourth bore of Caldecott Tunnel Malmö City Tunnel Confederation Line, Ottawa References External links An article on underground home design and construction, with a section on use of roadheader machines. Ripping head roadheader Video Engineering vehicles Mining equipment Excavating equipment
Roadheader
[ "Engineering" ]
515
[ "Engineering vehicles", "Excavating equipment", "Mining equipment" ]
7,008,701
https://en.wikipedia.org/wiki/Decrement%20table
Decrement tables, also called life table methods, are used to calculate the probability of certain events. Birth control Life table methods are often used to study birth control effectiveness. In this role, they are an alternative to the Pearl Index. As used in birth control studies, a decrement table calculates a separate effectiveness rate for each month of the study, as well as for a standard period of time (usually 12 months). Use of life table methods eliminates time-related biases (i.e. the most fertile couples getting pregnant and dropping out of the study early, and couples becoming more skilled at using the method as time goes on), and in this way is superior to the Pearl Index. Two kinds of decrement tables are used to evaluate birth control methods. Multiple-decrement (or competing) tables report net effectiveness rates. These are useful for comparing competing reasons for couples dropping out of a study. Single-decrement (or noncompeting) tables report gross effectiveness rates, which can be used to accurately compare one study to another. See also Survival analysis Footnotes Birth control Actuarial science
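A minimal Python sketch of the single-decrement (life-table) calculation described above, using hypothetical monthly counts of pregnancies and woman-months of exposure (the numbers are invented for illustration):

# Illustrative life-table (single-decrement) calculation for contraceptive effectiveness.
# Each month contributes a failure probability q_i = pregnancies_i / woman_months_i;
# the 12-month failure rate is 1 minus the product of the monthly survival probabilities.
monthly_data = [
    # (pregnancies, woman-months of exposure) -- hypothetical numbers
    (4, 1000), (3, 950), (2, 900), (2, 860), (1, 820), (1, 790),
    (1, 760), (0, 730), (1, 700), (0, 680), (1, 660), (0, 640),
]

survival = 1.0
for pregnancies, exposure in monthly_data:
    q = pregnancies / exposure   # probability of failure during this month
    survival *= (1 - q)          # probability of still not being pregnant

failure_rate_12m = 1 - survival
print(f"12-month life-table failure rate: {failure_rate_12m:.3%}")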
Decrement table
[ "Mathematics" ]
234
[ "Applied mathematics", "Actuarial science" ]
7,008,837
https://en.wikipedia.org/wiki/Jig%20borer
The jig borer is a type of machine tool invented at the end of World War I to enable the quick and precise location of hole centers. It was invented independently in Switzerland and the United States. It resembles a specialized kind of milling machine that provides tool and die makers with a higher degree of positioning precision (repeatability) and accuracy than those provided by general machines. Although capable of light milling, a jig borer is more suited to highly accurate drilling, boring, and reaming, where the quill or headstock does not see the significant side loading that it would with mill work. The result is a machine designed more for location accuracy than heavy material removal. A typical jig borer has a work table of around which can be moved using large handwheels (with micrometer-style readouts and verniers) on particularly carefully made shafts with a strong degree of gearing; this allows positions to be set on the two axes to an accuracy of . It is generally used to enlarge to a precise size smaller holes drilled with less accurate machinery in approximately the correct place (that is, with the small hole strictly within the area to be bored out for the large hole). Jig borers are limited to working materials that are still soft enough to be bored. Often a jig is hardened; for a jig borer this requires the material to be bored first and then hardened, which may introduce distortion. Consequently, the jig grinder was developed as a machine with the precision of the jig borer, but capable of working materials in their hardened state. History Before the jig borer was developed, hole center location had been accomplished either with layout (either quickly-but-imprecisely or painstakingly-and-precisely) or with drill jigs (themselves made with painstaking-and-precise layout). The jig borer was invented to expedite the making of drill jigs, but it helped to eliminate the need for drill jigs entirely by making quick precision directly available for the parts that the jigs would have been created for. The revolutionary underlying principle was that advances in machine tool control that expedited the making of jigs were fundamentally a way to expedite the cutting process itself, for which the jig was just a means to an end. Thus, the jig borer's development helped advance machine tool technology toward later NC and CNC development. The jig borer was a logical extension of manual machine tool technology that began to incorporate some then-novel concepts that would become routine with NC and CNC control, such as: coordinate dimensioning (dimensioning of all locations on the part from a single reference point); working routinely in "tenths" (ten-thousandths of an inch, 0.0001 inch) as a fast, everyday machine capability (whereas it had been the exclusive domain of special, time-consuming, craftsman-dependent manual skills); and circumventing jigs altogether. Franklin D. Jones, in his textbook Machine Shop Training Course (5th ed), noted: "In many cases, a jig borer is a 'jig eliminator.' In other words, such a machine may be used instead of a jig either when the quantity of work is not large enough to warrant making a jig or when there is insufficient time for jig making." Several innovations in the development of the jig borer were the work of the Moore Special Tool Company, such as the adoption of hardened and accurate leadscrews, formed by grinding, rather than a soft leadscrew with a compensating nut. 
The technological advances that led to the jig borer and NC were about to usher in the age of CNC and CAD/CAM, radically changing the way people manufacture many of their goods. References Hole making Machine tools
Jig borer
[ "Engineering" ]
795
[ "Machine tools", "Industrial machinery" ]
10,818,503
https://en.wikipedia.org/wiki/Feller%27s%20coin-tossing%20constants
Feller's coin-tossing constants are a set of numerical constants which describe asymptotic probabilities that in n independent tosses of a fair coin, no run of k consecutive heads (or, equally, tails) appears. William Feller showed that if this probability is written as p(n,k) then lim_{n→∞} p(n,k)·α_k^(n+1) = β_k, where α_k is the smallest positive real root of x^(k+1) = 2^(k+1)·(x − 1) and β_k = (2 − α_k) / (k + 1 − k·α_k). Values of the constants For k = 2 the constants are related to the golden ratio, φ, and Fibonacci numbers; the constants are α₂ = √5 − 1 = 2/φ ≈ 1.23606797... and β₂ = 1 + 1/√5 ≈ 1.44721356.... The exact probability p(n,2) can be calculated either by using Fibonacci numbers, p(n,2) = F(n+2)/2^n, or by solving a direct recurrence relation leading to the same result. For higher values of k, the constants are related to generalizations of Fibonacci numbers such as the tribonacci and tetranacci numbers. The corresponding exact probabilities can be calculated as p(n,k) = F^(k)(n+2)/2^n, where F^(k) denotes the k-step Fibonacci sequence. Example If we toss a fair coin ten times then the exact probability that no pair of heads come up in succession (i.e. n = 10 and k = 2) is p(10,2) = F(12)/2^10 = 144/1024 = 0.140625. The approximation gives 1.44721356... × 1.23606797...^(−11) = 0.1406263... References External links Steve Finch's constants at Mathsoft Mathematical constants Gambling mathematics Probability theorems
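A short Python sketch (illustrative) that reproduces the k = 2 numbers above, computing the exact probability by the recurrence and the asymptotic approximation from α_k and β_k:

# Illustrative check of Feller's constants: exact p(n, k) versus the asymptotic formula.
def exact_no_run(n, k):
    """Probability that n fair-coin tosses contain no run of k consecutive heads."""
    # counts[i] = number of length-i strings with no run of k heads
    counts = [1]
    for i in range(1, n + 1):
        if i < k:
            counts.append(2 ** i)
        else:
            counts.append(sum(counts[i - j] for j in range(1, k + 1)))
    return counts[n] / 2 ** n

def feller_constants(k):
    """Solve x^(k+1) = 2^(k+1)(x - 1) for the smallest root in (1, 2) by bisection (k >= 2)."""
    f = lambda x: x ** (k + 1) - 2 ** (k + 1) * (x - 1)
    lo, hi = 1.0 + 1e-9, 2.0 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    alpha = (lo + hi) / 2
    beta = (2 - alpha) / (k + 1 - k * alpha)
    return alpha, beta

alpha, beta = feller_constants(2)
print(exact_no_run(10, 2))     # 0.140625
print(beta / alpha ** 11)      # about 0.1406263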
Feller's coin-tossing constants
[ "Mathematics" ]
314
[ "Mathematical objects", "Theorems in probability theory", "nan", "Mathematical problems", "Mathematical theorems", "Mathematical constants", "Numbers" ]
10,818,976
https://en.wikipedia.org/wiki/Alusil
Alusil as a hypereutectic aluminium-silicon alloy (EN AC-AlSi17Cu4Mg / EN AC-48100 or A390) contains approximately 78% aluminium and 17% silicon. This alloy was theoretically conceived in 1927 by Schweizer & Fehrenbach, of Badener Metall-Waren-Fabrik, but practically created only by Lancia in the same year, for its car engines. It was further developed by Reynolds, now Rheinmetall Automotive. In the United States, Chevrolet was the first to use Reynolds A390 in the Chevrolet Vega. The Alusil aluminium alloy is commonly used to make linerless aluminium alloy engine blocks. There is no coating applied to the cylinder bore and blocks are not honed conventionally. During the manufacturing process, a chemical or mechanical process is used to remove aluminum from the surface of the cylinder bore, exposing a very hard silicon precipitate. These exposed silicon particles, which under a microscope look like small islands, allow for oil to collect in the area surrounding them, thus forming the required tribofilm that supports piston and ring travel. The pistons used in an Alusil engine block typically have an iron-clad plating or similar coating on the piston skirts to prevent galling of the aluminum pistons when run against the uncoated aluminum cylinder bore. Examples of this coating include Mahle Ferrostan (I & II), FerroTec, or Ferroprint. BMW switched from Nikasil-coated cylinder walls to Alusil in 1996 to eliminate the corrosion problems caused through the use of petrol/gasoline containing sulfur. Although similar, Alusil is not to be mistaken with Lokasil which was used by Porsche in the Boxster, Cayman, and 911 models from 1997 through 2008. Lokasil blocks use a freeze cast cylinder sleeve pre-form which is inserted into the casting mold. This preform contains silicon particles suspended in a resin binder. During the casting process, the molten aluminum is injected into the mold and burns off the resin, leaving an area of localized hypereutectic aluminum only in the area of the cylinder bore. The silicon particles are then mechanically exposed in a similar process to an Alusil block resulting in a cylinder block that functions in the same way as one cast out of Alusil. Although successfully used by many European manufacturers, there are potentially issues associated with engines that use Alusil blocks, namely cylinder bore scoring which occurs when there is a breakdown of the exposed silicon particles in the cylinder bore, resulting in increased oil consumption and excessive piston noise. Vehicles / Engines using Alusil include: Audi 2.4 V6 Audi 4.2 MPI V8 Audi 3.2 FSI V6 Audi 4.2 FSI V8 Audi 5.2 FSI V10 Audi/Volkswagen 6.0 W12 BMW N52 I6 BMW M62 V8 BMW S62 V8 BMW N62 V8 BMW N63 V8 BMW M70/M73 V12 BMW N74 V12 BMW S65 & S85 M Engines Mercedes-Benz M112 engine V6 Mercedes-Benz M113 engine V8 Mercedes 560 SEL M117 V8 Mercedes M119 V8 Mercedes M120 V12 Porsche 928 V8 Porsche 924S I4 Porsche 944 I4 Porsche 968 I4 Porsche Cayenne V6 (excluding models with VW VR6 engine which has a cast iron block) Porsche Cayenne V8 (excluding 4.5 V8 Naturally Aspirated which uses Lokasil) Porsche Panamera V6 Porsche Panamera V8 Porsche MA1 H6 Porsche Macan V6 See also Hypereutectic piston References External links Kolbenschmidt Pierburg - official website of Alusil trademark holder Aluminium alloys 1927 introductions Aluminium–silicon alloys
Alusil
[ "Chemistry" ]
780
[ "Alloys", "Aluminium alloys" ]
10,820,517
https://en.wikipedia.org/wiki/1963%20United%20States%20Tri-Service%20rocket%20and%20guided%20missile%20designation%20system
In 1963, the U.S. Department of Defense established a designation system for rockets and guided missiles jointly used by all the United States armed services. It superseded the separate designation systems the Air Force and Navy had for designating US guided missiles and drones, but also a short-lived interim USAF system for guided missiles and rockets. History On 11 December 1962, the U.S. Department of Defense issued Directive 4000.20 “Designating, Redesignating, and Naming Military Rockets and Guided Missiles” which called for a joint designation system for rockets and missiles which was to be used by all armed forces services. The directive was implemented via Air Force Regulation (AFR) 66-20, Army Regulation (AR) 705-36, Bureau of Weapons Instruction (BUWEPSINST) 8800.2 on 27 June 1963. A subsequent directive, DoD Directive 4120.15 "Designating and Naming Military Aircraft, Rockets, and Guided Missiles", was issued on 24 November 1971 and implemented via Air Force Regulation (AFR) 82-1/Army Regulation (AR) 70-50/Naval Material Command Instruction (NAVMATINST) 8800.4A on 27 March 1974. Within AFR 82-1/AR 70-50/NAVMATINST 8800.4A, the 1963 rocket and guided missile designation system was presented alongside the 1962 United States Tri-Service aircraft designation system and the two systems have been concurrently presented and maintained in joint publications since. The current version of the rocket and missile designation system was mandated by Joint Regulation 4120.15E Designating and Naming Military Aerospace Vehicles and was implemented via Air Force Instruction (AFI) 16-401, Army Regulation (AR) 70-50, Naval Air Systems Command Instruction (NAVAIRINST) 13100.16 on 3 November 2020. The list of military rockets and guided missiles was maintained via 4120.15-L Model Designation of Military Aerospace Vehicles until its transition to data.af.mil on 31 August 2018. Explanation The basic designation of every rocket and guided missile is based in a set of letters called the Mission Design Sequence. The sequence indicates the following: An optional status prefix The environment from which the weapon is launched The primary mission of the weapon The type of weapon Examples of guided missile designators are as follows: AGM – (A) Air-launched (G) Surface-attack (M) Guided missile AIM – (A) Air-launched (I) Intercept-aerial (M) Guided missile ATM – (A) Air-launched (T) Training (M) Guided missile RIM – (R) Ship-launched (I) Intercept-aerial (M) Guided missile LGM – (L) Silo-launched (G) Surface-attack (M) Guided missile The design or project number follows the basic designator. In turn, the number may be followed by consecutive letters, representing modifications. Example: RGM-84D means: R – The weapon is ship-launched; G – The weapon is designed to surface-attack; M – The weapon is a guided missile; 84 – eighty-fourth missile design; D – fourth modification; In addition, most guided missiles have names, such as Harpoon, Tomahawk, Sea Sparrow, etc. These names are retained regardless of subsequent modifications to the missile. Code Prefixes Additionally, a prefix may be added to the designation indicating a non-standard configuration. For example: YAIM-54A XAIM-174B See also List of missiles 1962 United States Tri-Service aircraft designation system United States military aircraft designation systems Notes References External links Guided missiles Weapons of the United States Naming conventions Rocketry
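A small Python sketch of how the letter coding described above can be decoded (illustrative only; the lookup tables contain just the symbols mentioned in this article, not the full official tables):

import re

# Illustrative decoder for 1963 Tri-Service missile designations such as "RGM-84D".
# Only the letters used in the examples above are included; the real tables are larger.
LAUNCH_ENV = {"A": "air-launched", "R": "ship-launched", "L": "silo-launched"}
MISSION = {"G": "surface-attack", "I": "intercept-aerial", "T": "training"}
VEHICLE = {"M": "guided missile"}

def decode(designation):
    match = re.fullmatch(r"([A-Z]?)([A-Z])([A-Z])([A-Z])-(\d+)([A-Z]?)", designation)
    if not match:
        raise ValueError("unrecognised designation: " + designation)
    prefix, env, mission, vehicle, number, series = match.groups()
    return {
        "status prefix": prefix or None,          # e.g. Y or X in YAIM-54A
        "launch environment": LAUNCH_ENV.get(env, env),
        "mission": MISSION.get(mission, mission),
        "vehicle type": VEHICLE.get(vehicle, vehicle),
        "design number": int(number),
        "series": series or None,                 # consecutive modification letter
    }

print(decode("RGM-84D"))
print(decode("YAIM-54A"))

For example, decode("RGM-84D") reports a ship-launched, surface-attack guided missile with design number 84 and series letter D, matching the breakdown given above.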
1963 United States Tri-Service rocket and guided missile designation system
[ "Engineering" ]
747
[ "Rocketry", "Aerospace engineering" ]
10,821,496
https://en.wikipedia.org/wiki/Dullard%20protein
In cell biology, Dullard is a protein encoded by a protein-coding gene involved in neural development. It is a member of the DXDX(T/V) phosphatase family and is a potential regulator of neural tube development in Xenopus. The gene promotes neural development by inhibiting Bone Morphogenetic Proteins (BMPs). Dullard is also known as CTDnep1, which stands for CTD nuclear envelope phosphatase 1. The encoded protein is relatively small, containing only 244 amino acids. Description Dullard is also known as CTDnep1, which stands for CTD nuclear envelope phosphatase 1. It is a protein-coding gene whose functions include phosphatase activity and protein serine/threonine phosphatase activity. The encoded protein is relatively small, containing only 244 amino acids. Dullard, or CTDnep1, encodes a protein serine/threonine phosphatase that dephosphorylates LPIN1 and LPIN2. LPIN1 and LPIN2 catalyze the conversion of phosphatidic acid to diacylglycerol, a reaction that can alter the lipid composition of the endoplasmic reticulum and the nucleus. Dullard and BMP signaling Neural development takes place in the dorsal ectoderm. In Xenopus, overexpression of Dullard induces apoptosis in early development. Dullard promotes ubiquitin-mediated proteasomal degradation of BMP receptors. Dullard mRNA is maternally derived and is localized within the animal hemisphere. Functioning as a negative regulator of Bone Morphogenetic Proteins (BMPs), Dullard conserves the C-terminal region of NLI-IF, which is dominant in its cellular functions. Dullard is essential for inhibiting BMP receptor activation during Xenopus neuralization. Human Dullard Studies of human Dullard have shown that the protein has two membrane-spanning regions. The N-terminal end helps localize the protein to the nuclear envelope. Dullard dephosphorylates the mammalian phosphatidic acid phosphatase, lipin. Dullard participates in a unique phosphatase cascade regulating nuclear membrane biogenesis, a cascade that is conserved from yeast to mammals. Dullard is believed to have other targets that are not associated only with the nuclear envelope. Recent studies indicate that Dullard interacts with BMP type I receptors to inhibit BMP-dependent phosphorylation, suggesting it is a potential regulator of the level of BMP signaling and can affect germ cell specification. References Proteins
Dullard protein
[ "Chemistry" ]
552
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
10,821,683
https://en.wikipedia.org/wiki/Bryan%20Simonaire
Bryan Warner Simonaire (born September 6, 1963) is an American politician who serves as a Maryland state senator representing District 31, which encompasses much of northern Anne Arundel County's Baltimore suburbs. A member of the Republican Party, he served as the minority leader of the Maryland Senate from 2020 to 2023. Background Simonaire was born in Baltimore. He graduated from Bob Jones University in 1985, receiving a Bachelor of Science degree in computer science, and from Loyola College, where he earned a Master of Science degree in engineering in 2005. He is a member of Upsilon Pi Epsilon. After graduating from Bob Jones, Simonaire worked as a computer systems engineer for Westinghouse Electronic Systems (part of Northrop Grumman since its acquisition in 1995). In 2002, he founded Heroes-at-Home, a web-based program that helps the needy. Simonaire became involved in politics in 2005, when he joined the North County Republican Club's board of directors. He entered the race for state Senate in District 31 later that year, seeking to succeed retiring Democratic state senator Philip C. Jimeno and running on a "common sense, conservative" platform that included opposition to same-sex marriage. The district was targeted by the Maryland Republican Party, which saw the election as an opportunity to make legislative gains. Simonaire won the Republican primary in September 2006, and later won the general election on November 7, 2006, defeating Democratic state delegate Walter J. Shandrowsky by 659 votes, or a margin of 1.72 percent. It was the closest election in the 2006 Maryland Senate elections. In the legislature Simonaire was sworn into the Maryland Senate on January 10, 2007. He was initially a member of the Judicial Proceedings Committee from 2007 to 2010, afterwards serving on the Education, Health, and Environmental Affairs Committee until 2022. Since 2023, he has served on the Education, Energy, and the Environment Committee. Simonaire endorsed Mitt Romney in the 2012 Republican Party presidential primaries and later served as a Romney delegate to the 2012 Republican National Convention. In 2014, Simonaire proposed a constitutional amendment to remove legislative leaders' ability to remove voting powers from any member of the Maryland General Assembly. The amendment was introduced after state Delegate Don H. Dwyer Jr. was stripped of his voting powers and committee assignments after being sentenced to 30 weekends in jail for driving under the influence. In 2016, Simonaire introduced the "Dwyer amendment", which would have prevented Senate president Thomas V. Miller Jr. from removing a member's voting powers. The proposed rule change was rejected in an 11–31 vote. In October 2020, Simonaire was elected as the minority leader of the Maryland Senate, which was seen by the media as the Senate Republican caucus becoming more conservative in order to push back against the perceived leftward shift of the Maryland Democratic Party following the election of Bill Ferguson as Senate president. In this capacity, Simonaire sought to allow his party to make its own committee assignments and oversaw the party's state Senate campaign in 2022, in which the party lost two seats in the Maryland Senate. Following the 2022 elections, Senate Republicans opted to elect Stephen S. Hershey Jr. as minority leader. Simonaire endorsed Maryland Secretary of Commerce Kelly M. Schulz in the 2022 Maryland gubernatorial election. 
After she was defeated by far-right state delegate Dan Cox in the Republican primary, Simonaire declined to endorse or campaign with Cox, instead focusing on competitive Senate elections. Political positions Crime and justice In 2009, Simonaire said he would vote to repeal the death penalty if legislators passed a constitutional amendment to ban same-sex marriage in Maryland. He later voted for an amendment to the death penalty repeal bill to limit the death penalty's use rather than fully repeal it, which passed 25–21. During the 2013 legislative session, Simonaire voted against repealing the death penalty. During the 2022 legislative session, Simonaire implored legislators to pass a tough-on-crime bill introduced by Governor Larry Hogan. He also expressed willingness to work with Democrats to pass a bipartisan judicial transparency bill. Education Simonaire opposes the Blueprint for Maryland's Future, calling for its repeal during the 2021 legislative session and comparing it to the Bridge to Excellence education reforms of 2002. He supports legislation requiring the Maryland State Board of Education to prepare a problem gambling curriculum in schools. During the 2011 legislative session, Simonaire said he opposed Maryland's Dream Act, a bill that extended in-state tuition for undocumented immigrants. During the 2022 legislative session, Simonaire introduced a bill that would force the county Board of Education to vote on certain curriculum items if a petition received the signatures of at least three percent of parents. Electoral reform During the 2015 legislative session, Simonaire testified against a bill to restore voting rights for ex-felons. In May 2020, Simonaire asked Governor Larry Hogan to call a special session to pass election integrity bills, expressing concern that the use of mail-in ballots in the 2020 elections would lead to voter fraud. During the 2021 legislative session, Simonaire introduced a package of election reform bills, including voter ID laws and signature verification on mail-in ballots, citing what he called "major deficiencies" in the 2020 United States presidential election. The package failed to move out of committee, and many bills from the package were reintroduced in 2022. He also supported a bill to shift control of local election boards to whichever party had a majority of registered voters in each jurisdiction, and sought to amend a bill to expand early voting centers to require local boards of elections to consider "geographical distance" when deciding where to locate them. Simonaire opposed the congressional maps drawn by the Legislative Redistricting Advisory Committee (LRAC), of which he was a member, instead supporting maps drawn by Governor Larry Hogan's Maryland Citizens Redistricting Commission. During the LRAC's deliberations, he pressed for a bipartisan map-drawing process and hoped legislators would produce a single map, but predicted that Democrats on the commission would pass their own map. He criticized the commission's final congressional and legislative maps as "seriously gerrymandered". After Judge Lynne A. Battaglia struck down the state's congressional maps in March 2022, Simonaire criticized Democrats for not including Republicans in the process of drafting a new map. Environment Simonaire is an environmentalist and has expressed willingness to work with legislators to pass a bipartisan climate bill. He voted in favor of bills to ban fracking and foam containers in Maryland. 
Simonaire was critical of Maryland's "Rain Tax" and introduced legislation in 2013 to offset the fee in Anne Arundel County. In 2015, he voted in favor of a bill to make the rain tax optional for Maryland's largest jurisdictions. During the 2021 legislative session, Simonaire expressed concern with the Climate Solutions Now Act, which he said would force jurisdictions to choose between planting more trees and protecting local sewage projects. After it was reintroduced in 2022, he objected to provisions that would require large buildings to become carbon neutral by 2040 and expressed that legislators should instead focus on climate solutions "starting at the regional level". Gun policy During the 2013 legislative session, Simonaire voted against the Firearms Safety Act, a bill that placed restrictions on firearm purchases and magazine capacity in semi-automatic rifles. Social issues Simonaire is a social conservative, opposing abortion rights and same-sex marriage, citing religious beliefs. Simonaire opposed the Civil Marriage Protection Act, reading King & King on the Senate floor to protest the bill and warning that "young, impressionable students" would be taught the "homosexual worldview" if the bill passed. He also unsuccessfully sought to amend the bill to allow religious adoption agencies to refuse services to same-sex couples. In 2015, he voted against a bill that would allow same-sex couples to use donor sperm for in vitro fertilization. In 2014, Simonaire said he opposed a bill to prohibit discrimination against transgender people. In 2021, he was the lone vote against a bill to allow transgender people to change their names without advertising it in newspapers. In 2015, Simonaire introduced a "right to try" bill that would allow terminally ill patients to try experimental drugs not approved by the Food and Drug Administration. In 2019, he spoke against the End-of-Life Option Act, which would have allowed terminally ill adults to obtain medication to end their lives. During the 2016 legislative session, Simonaire introduced legislation to revise a translation of the state's motto to "Strong deeds, gentle words", saying he believed the motto's current translation ("Manly deeds, womanly words") was sexist. In 2022, Simonaire downplayed the impact of the U.S. Supreme Court's decision in Dobbs v. Jackson Women's Health Organization, which overturned Roe v. Wade, calling it a "Democratic ploy" to energize voters. In 2023, during debate on a bill creating a ballot referendum to codify abortion access rights into the Constitution of Maryland, Simonaire compared abortion to the death penalty and sought to amend the bill to prohibit abortions after fetal viability, which failed by a vote of 13–33. Taxes In 2013, Simonaire said he opposed a bill to provide $450,000 in tax breaks to Lockheed Martin. In 2021, Simonaire spoke against legislation to extend the state's earned income tax credit to undocumented immigrants. He also opposed legislation to allow counties to implement progressive income taxes and to impose a tax on digital advertising, and unsuccessfully attempted to amend the tax bill to prevent large companies from increasing prices for consumers or small businesses to pay for the tax. During the 2022 legislative session, Simonaire supported a bill to cut taxes on centenarians and implored legislators to pass additional tax cuts. 
Transportation In March 2024, following the Francis Scott Key Bridge collapse, Simonaire and state senator Johnny Ray Salling introduced a bill that would allow the governor to declare a year-long state of emergency after damage to critical infrastructure, though it would eliminate the authority to seize private property for government use, as now allowed under a state of emergency. The bill was withdrawn following discussions with the Moore administration. Personal life Simonaire is married and has seven children. He lives in Pasadena, Maryland, and attends nondenominational Christian churches. During the 2018 legislative session, Simonaire spoke against a bill to ban conversion therapy on minors, arguing that it would dissuade teens from seeking counseling. His daughter, Meagan, a member of the Maryland House of Delegates, spoke in support of the bill and accused her father of seeking conversion therapy for her after she came out as bisexual in 2015. Simonaire disputed his daughter's story in interviews with the media, saying that he had recommended Christian counseling to her after she approached him for advice about her depression and anxiety, but added that he disagreed with her "lifestyle". Electoral history References External links 1963 births Bob Jones University alumni Christians from Maryland Computer systems engineers Living people Loyola University Maryland alumni People from Pasadena, Maryland Politicians from Baltimore Republican Party Maryland state senators 21st-century members of the Maryland General Assembly
Bryan Simonaire
[ "Technology" ]
2,270
[ "Computer systems engineers", "Computer systems" ]
10,821,878
https://en.wikipedia.org/wiki/Pharmaceutical%20ink
Pharmaceutical ink is an ingestible form of water-based ink used on most medicine pills to indicate which drug a pill is and/or how many milligrams it contains. History The first U.S. patent for pharmaceutical inks was granted on 28 June 1966; its method involved ethyl alcohol, shellac, titanium dioxide and propylene glycol. Most pharmaceutical inks since the early 1990s eliminate ethyl alcohol in favour of faster ink drying times, and may include methyl alcohol and isopropanol in addition to the traditional ingredients titanium dioxide and propylene glycol. External links U.S. Patent no. 3,258,347 - original pharmaceutical ink patent U.S. Patent no. 5,006,362, issued 9 April 1991 - eliminates ethanol in favor of faster ink drying times See also Pharmaceutical glaze Inks Pharmacy
Pharmaceutical ink
[ "Chemistry" ]
184
[ "Pharmacology", "Pharmacy" ]
10,822,281
https://en.wikipedia.org/wiki/Michael%20Barr%20%28software%20engineer%29
Michael Barr is a software engineer specializing in software design for medical devices and other embedded systems. He is a past editor-in-chief of Embedded Systems Design magazine and author of three books and more than seventy articles about embedded software. Barr has often worked as an expert witness, including testifying in the Toyota Sudden Unintended Acceleration litigation. In October 2013, after reviewing Toyota's source code as part of a team of seven engineers, he testified in a jury trial in Oklahoma that led to a "guilty by software defects" finding against Toyota. Several technical articles discuss the various electronic throttle control defects that he testified were linked to unintended acceleration that caused deaths in Toyota Camry vehicles. Earlier in his career, Barr testified as an expert witness in the DirecTV anti-piracy end user litigation, which involved over 25,000 end users. He has also worked as a testifying expert witness in other high-profile litigation involving software, such as SmartPhone Technologies v. Apple and a copyright dispute about EA's early Madden Football video game source code. Barr began his career working as an embedded programmer at Hughes Network Systems, where he wrote software for products including the first-generation Hughes-branded DirecTV receiver, which sold in the millions of units. He subsequently wrote embedded software at TSI TelSys, PropHead Development, and Netrino. His three books are Programming Embedded Systems in C with GNU Development Tools, Embedded Systems Dictionary (co-authored by Jack Ganssle), and Embedded C Coding Standard. Barr studied electrical engineering at the University of Maryland in College Park, from which he earned a Bachelor of Science degree in 1994 and a Master of Science degree in 1997. From 2000 to 2002, he taught ENEE 447 Operating Systems Theory as an adjunct professor in the same Department of Electrical and Computer Engineering. References External links Barr Code blog Barr Group website Embedded Systems Design magazine (formerly Embedded Systems Programming) Living people Year of birth missing (living people) University of Maryland, College Park alumni
Michael Barr (software engineer)
[ "Technology" ]
409
[ "Computing stubs", "Computer specialist stubs" ]
10,822,681
https://en.wikipedia.org/wiki/Baseline%20%28magazine%29
Baseline magazine is a magazine devoted to typography, book arts and graphic design (distinct from the information technology magazine of the same name published by QuinStreet). History Since Baseline 19, which appeared in 1995, Baseline has been published by Bradbourne Publishing, co-edited by Mike Daines and Hans Dieter Reichert and art-directed by HDR Visual Communication. It is characterized by its large format, sumptuous art and double cover. It has won several major international design awards in the USA, Europe and Japan. The magazine is featured in several academic publications (e.g. Philip Meggs's History of Graphic Design and Idea magazine). Before issue 19, publishers, editors, magazine dimensions and quality varied as the magazine evolved from a small format booklet that first appeared in 1979. Early editors included Mike Daines (Baselines 1–3), Tony Bisley (Baseline 4), Geoffrey Lawrence (Baseline 5) and Erik Spiekermann (Baselines 6, 7). The first full-color Baseline appeared as issue 8. Baseline 10 expanded the dimensions of the magazine from 8¼ x 11¾ inches to 10½ x 14¼. Baseline assumed its current size of 9¾ x 13¾ with Baseline 14. The first four issues of Baseline were published by TSI (Typographic Systems International Ltd.). Following TSI, issues 5–18 were published by Letraset, a graphics product company, but as the magazine flourished Letraset faced difficult times. Mike Daines, Jenny Daines, Hans Dieter Reichert and Veronika Reichert formed Bradbourne Publishing Ltd. and bought the magazine from Letraset in 1994. See also Communication Arts Graphis Inc. Print (magazine) Visible Language References External links Baseline magazine website Design team of Baseline see HDR Visual Communication Visual arts magazines published in the United Kingdom Magazines established in 1979 Typography Biannual magazines published in the United Kingdom Design magazines
Baseline (magazine)
[ "Engineering" ]
400
[ "Design magazines", "Design" ]
10,822,838
https://en.wikipedia.org/wiki/Rail%20profile
The rail profile is the cross sectional shape of a railway rail, perpendicular to its length. Early rails were made of wood, cast iron or wrought iron. All modern rails are hot rolled steel with a cross section (profile) approximating an I-beam, but asymmetric about a horizontal axis (however see grooved rail below). The head is profiled to resist wear and to give a good ride, and the foot profiled to suit the fixing system. Unlike some other uses of iron and steel, railway rails are subject to very high stresses and are made of very high quality steel. It took many decades to improve the quality of the materials, including the change from iron to steel. Minor flaws in the steel that may pose no problems in other applications can lead to broken rails and dangerous derailments when used on railway tracks. By and large, the heavier the rails and the rest of the track work, the heavier and faster the trains these tracks can carry. Rails represent a substantial fraction of the cost of a railway line. Only a small number of rail sizes are made by steelworks at one time, so a railway must choose the nearest suitable size. Worn, heavy rail from a mainline is often reclaimed and downgraded for re-use on a branch line, siding or yard. History The earliest rails used on horse-drawn wagonways were wooden. In the 1760s strap-iron rails were introduced, with thin strips of cast iron fixed onto the top of the wooden rails. This increased the durability of the rails. Both wooden and strap-iron rails were relatively inexpensive, but could only carry a limited weight. The metal strips of strap-iron rails sometimes separated from the wooden base and speared into the floor of the carriages above, creating what was referred to as a "snake head". The long-term maintenance expense involved outweighed the initial savings in construction costs. Cast-iron rails with vertical flanges were introduced by Benjamin Outram of B. Outram & Co., which later became the Butterley Company in Ripley. The wagons that ran on these plateway rails had wheels with a flat profile. Outram's partner William Jessop preferred the use of "edge rails", where the wheels were flanged and the rail heads were flat - this configuration proved superior to plateways. Jessop's first (fish-bellied) edge rails were cast by the Butterley Company. The earliest of these in general use were the so-called cast iron fishbelly rails, named from their shape. Rails made from cast iron were brittle and broke easily. They could only be made in short lengths which would soon become uneven. John Birkinshaw's 1820 patent, as rolling techniques improved, introduced wrought-iron rails in longer lengths; these replaced cast iron and contributed significantly to the explosive growth of railroads in the period 1825–40. The cross-section varied widely from one line to another, but rails were of three basic types as shown in the diagram. The parallel cross-section which developed in later years was referred to as bullhead. Meanwhile, in May 1831, the first flanged T rail (also called T-section) arrived in America from Britain and was laid by the Camden and Amboy Railroad (later part of the Pennsylvania Railroad). They were also used by Charles Vignoles in Britain. The first steel rails were made in 1857 by Robert Forester Mushet, who laid them at Derby station in England. Steel is a much stronger material, which steadily replaced iron for use on railway rail and allowed much longer lengths of rails to be rolled. 
The American Railway Engineering Association (AREA) and the American Society for Testing Materials (ASTM) specified carbon, manganese, silicon and phosphorus content for steel rails. Tensile strength increases with carbon content, while ductility decreases. AREA and ASTM specified 0.55 to 0.77 percent carbon in rail, 0.67 to 0.80 percent in rail weights from , and 0.69 to 0.82 percent for heavier rails. Manganese increases strength and resistance to abrasion. AREA and ASTM specified 0.6 to 0.9 percent manganese in 70 to 90 pound rail and 0.7 to 1 percent in heavier rails. Silicon is preferentially oxidised by oxygen and is added to reduce the formation of weakening metal oxides in the rail rolling and casting procedures. AREA and ASTM specified 0.1 to 0.23 percent silicon. Phosphorus and sulfur are impurities causing brittle rail with reduced impact-resistance. AREA and ASTM specified a maximum phosphorus concentration of 0.04 percent. The use of welded rather than jointed track began in around the 1940s and had become widespread by the 1960s. Types Strap rail The earliest rails were simply lengths of timber. To resist wear, a thin iron strap was laid on top of the timber rail. This saved money as wood was cheaper than metal. The system had the flaw that every so often the passage of the wheels on the train would cause the strap to break away from the timber. The problem was first reported by Richard Trevithick in 1802. The use of strap rails in the United States (for instance on the Albany and Schenectady Railroad in 1837) led to passengers being threatened by "snake-heads" when the straps curled up and penetrated the carriages. T rail T-rail was a development of strap rail which had a 'T' cross-section formed by widening the top of the strap into a head. This form of rail was generally short-lived, being phased out in America by 1855. Plate rail Plate rail was an early type of rail and had an 'L' cross-section in which the flange kept an unflanged wheel on the track. The flanged rail saw a minor revival in the 1950s, as guide bars, on the Paris Métro (rubber-tyred metro, or French Métro sur pneus) and more recently on the Guided bus. In the Cambridgeshire Guided Busway the rail is a thick concrete beam with a lip to form the flange. The buses run on normal road wheels with side-mounted guidewheels to run against the flanges. Buses are steered normally when off the busway, analogous to the 18th-century wagons which could be manoeuvred around pitheads before joining the track for the longer haul. Bridge rail Bridge rail is a rail with an inverted-U profile. Its simple shape is easy to manufacture, and it was widely used before more sophisticated profiles became cheap enough to make in bulk. It was notably used on the Great Western Railway's gauge baulk road, designed by Isambard Kingdom Brunel. Barlow rail Barlow rail was invented by William Henry Barlow in 1849. It was designed to be laid straight onto the ballast, but the lack of sleepers (ties) meant that it was difficult to keep it in gauge. Flat bottomed rail Flat bottomed rail is the dominant rail profile in worldwide use. Flanged T rail Flanged T rail (also called T-section) is the name for flat bottomed rail used in North America. Iron-strapped wooden rails were used on all American railways until 1831. Col. Robert L. Stevens, the President of the Camden and Amboy Railroad, conceived the idea that an all-iron rail would be better suited for building a railroad. 
There were no steel mills in America capable of rolling long lengths, so he sailed to the United Kingdom, which was the only place where his flanged T rail (also called T-section) could be rolled. Railways in the UK had been using rolled rail of other cross-sections which the ironmasters had produced. In May 1831, the first 500 rails, each long and weighing , reached Philadelphia and were placed in the track, marking the first use of the flanged T rail. Afterwards, the flanged T rail came to be employed by all railroads in the United States. Col. Stevens also invented the hooked spike for attaching the rail to the crosstie (or sleeper). In 1860, the screw spike was introduced in France, where it was widely used. Screw spikes are the most common form of spike in use worldwide in the 21st century. Flat-bottom or Vignoles rail Vignoles rail is the popular name for flat-bottomed rail, recognising engineer Charles Vignoles, who introduced it to Britain. Charles Vignoles observed that wear was occurring with wrought iron rails and cast iron chairs on stone blocks, the most common system at that time. In 1836 he recommended flat-bottomed rail to the London and Croydon Railway, for which he was consulting engineer. His original rail had a smaller cross-section than the Stevens rail, with a wider base than modern rail, fastened with screws through the base. Other lines which adopted it were the Hull and Selby, the Newcastle and North Shields, and the Manchester, Bolton and Bury Canal Navigation and Railway Company. When it became possible to preserve wooden sleepers with mercuric chloride (a process called Kyanising) and creosote, they gave a much quieter ride than stone blocks and it was possible to fasten the rails directly using clips or rail spikes. Their use, and Vignoles's name, spread worldwide. The joint where the ends of two rails are connected to each other is the weakest part of a rail line. The earliest iron rails were joined by a simple fishplate or bar of metal bolted through the web of the rail. Stronger methods of joining two rails together have been developed. When sufficient metal is put into the rail joint, the joint is almost as strong as the rest of the rail length. The noise generated by trains passing over the rail joints, described as "the clickety clack of the railroad track", can be eliminated by welding the rail sections together. Continuously welded rail has a uniform top profile even at the joints. Double-headed rail In the late 1830s, Britain's railways used a range of different rail patterns. The London and Birmingham Railway, which had offered a prize for the best design, was one of the earliest lines to use double-headed rail, in which the head and foot of the rail had the same profile. These rails were supported by chairs fastened to the sleepers. The advantage of double-headed rails was that, when the rail head became worn, they could be turned over and re-used. In 1835 Peter Barlow of the London and Birmingham Railway expressed concern that this would not be successful because the supporting chair would cause indentations in the lower surface of the rail, making it unsuitable as the running surface. Although the Great Northern Railway did experience this problem, double-headed rails were successfully used and turned by the London and South Western Railway, the North Eastern Railway, the London, Brighton and South Coast Railway and the South Eastern Railway. Double-headed rails continued in widespread use in Britain until the First World War. 
Bullhead rail Bullhead rail was developed from double-headed rail. The profile of the head of the rail is not the same as the foot. Because the profile is not symmetrical, it was not possible to turn bullhead rail over and use the foot as the head. It was an expensive method of laying track as heavy cast iron chairs were needed to support the rail, which was secured in the chairs by wooden (later steel) wedges or "keys", which required regular attention. Bullhead rail was the standard for the British railway system from the mid-19th until the mid-20th century. In 1954, bullhead rail was used on of new track and flat-bottom rail on . One of the first British Standards, BS 9, was for bullhead rail - it was originally published in 1905, and revised in 1924. Rails manufactured to the 1905 standard were referred to as "O.B.S." (Original), and those manufactured to the 1924 standard as "R.B.S." (Revised). Bullhead rail has been almost completely replaced by flat-bottom rail on the British rail system, although it survives on some branch lines and sidings. It can also be found on heritage railways, due both to the desire to maintain an historic appearance, and the use of old track components salvaged from main lines. The London Underground continued to use bullhead rail after it had been phased out elsewhere in Britain but, in the last few years, there has been a concerted effort to replace it with flat-bottom rail. However, the process of replacing track in tunnels is a slow one, due to the difficulty of using heavy plant and machinery. Grooved rail Where a rail is laid in a road surface (pavement) or within grassed surfaces, there has to be accommodation for the flange. This is provided by a slot called the flangeway. The rail is then known as grooved rail, groove rail, or girder rail. The flangeway has the railhead on one side and the guard on the other. The guard carries no weight, but may act as a checkrail. Grooved rail was invented in 1852 by Alphonse Loubat, a French inventor who developed improvements in tram and rail equipment, and helped develop tram lines in New York City and Paris. The invention of grooved rail enabled tramways to be laid without causing a nuisance to other road users, except unsuspecting cyclists, who could get their wheels caught in the groove. The grooves may become filled with gravel and dirt (particularly if infrequently used or after a period of idleness) and need clearing from time to time, this being done by a "scrubber" vehicle (either a specialised tram, or a maintenance road-rail vehicle). Failure to clear the grooves can lead to a bumpy ride for the passengers, damage to either wheel or rail and possibly derailment. Girder guard rail The traditional form of grooved rail is the girder guard section. This rail is a modified form of flanged rail and requires a special mounting for weight transfer and gauge stabilisation. If the weight is carried by the roadway subsurface, steel ties are needed at regular intervals to maintain the gauge. Installing these means that the whole surface needs to be excavated and reinstated. Block rail Block rail is a lower profile form of girder guard rail with the web eliminated. In profile it is more like a solid form of bridge rail, with a flangeway and guard added. Simply removing the web and combining the head section directly with the foot section would result in a weak rail, so additional thickness is required in the combined section. 
A modern block rail with a further reduction in mass is the LR55 rail, which is polyurethane grouted into a prefabricated concrete beam. It can be set in trench grooves cut into an existing asphalt road bed for Light Rail (trams). Rail weights and sizes The weight of a rail per length is an important factor in determining rail strength and hence axleloads and speeds. Weights are measured in pounds per yard (in Canada, the United Kingdom and the United States) and kilograms per metre (in mainland Europe and Australia). Commonly, in rail terminology, pound is a metonym for the expression pounds per yard, and hence a 132-pound rail means a rail of 132 pounds per yard. Europe Rails are made in a large number of different sizes. Some common European rail sizes include: In the countries of the former USSR, rails and rails (not thermally hardened) are common. Thermally hardened rails also have been used on heavy-duty railroads like the Baikal–Amur Mainline, but have proven themselves deficient in operation and were mainly rejected in favor of rails. North America The American Society of Civil Engineers (or ASCE) specified rail profiles in 1893 for increments from . Height of rail equaled width of foot for each ASCE tee-rail weight, and the profiles specified a fixed proportion of weight in head, web and foot of 42%, 21% and 37%, respectively. The ASCE profile was adequate, but heavier weights were less satisfactory. In 1909, the American Railway Association (or ARA) specified standard profiles for increments from . The American Railway Engineering Association (or AREA) specified standard profiles for , and rails in 1919, for and rails in 1920, and for rails in 1924. The trend was to increase the rail height/foot-width ratio and strengthen the web. Disadvantages of the narrower foot were overcome through use of tie plates. AREA recommendations reduced the relative weight of rail head down to 36%, while alternative profiles reduced head weight to 33% in heavier weight rails. Attention was also focused on improved fillet radii to reduce stress concentration at the web junction with the head. AREA recommended the ARA profile. Old ASCE rails of lighter weight remained in use, and satisfied the limited demand for light rail for a few decades. AREA merged into the American Railway Engineering and Maintenance-of-Way Association in 1997. By the mid-20th century, most rail production was medium heavy () and heavy (). Sizes under rail are usually for lighter duty freight, low use trackage, or light rail. Track using rail is for lower speed freight branch lines or rapid transit; for example, most of the New York City Subway system track is constructed with rail. Main line track is usually built with rail or heavier. Some common North American rail sizes include: Crane rails Some common North American crane rail sizes include: Australia Some common Australian rail sizes include: rails are used on the heavy-haul iron ore railways in the north-west of the state of Western Australia. 50 kg/m and 60 kg/m are the current standard on mainlines elsewhere, although some other sizes are still manufactured. Rail lengths Advances in rail lengths produced by rolling mills include the following: 1825: Stockton and Darlington Railway 1830: Liverpool and Manchester Railway fish-belly rails 1850: United States, to fit gondola cars 1895: London and North Western Railway (UK) – four times and twice 2003: (Railtrack (UK) rail delivery train) 2010: Bhilai Steel Plant, India – four times and twice 2016: at Bhilai Steel Plant. 
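As a worked illustration of the two weight conventions, the sketch below converts between pounds per yard and kilograms per metre; the helper names are invented here for clarity and are not part of any standard. Since 1 lb = 0.45359237 kg and 1 yd = 0.9144 m, 1 lb/yd ≈ 0.4961 kg/m.

# Hypothetical unit helpers for rail weights (lb/yd <-> kg/m).
LB_PER_YD_TO_KG_PER_M = 0.45359237 / 0.9144  # ~0.496055

def lb_yd_to_kg_m(pounds_per_yard):
    return pounds_per_yard * LB_PER_YD_TO_KG_PER_M

def kg_m_to_lb_yd(kilograms_per_metre):
    return kilograms_per_metre / LB_PER_YD_TO_KG_PER_M

print(round(lb_yd_to_kg_m(132), 1))  # a "132-pound rail" is about 65.5 kg/m
print(round(kg_m_to_lb_yd(60)))      # a 60 kg/m rail is about a 121-pound rail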
Welding of rails into longer lengths was first introduced around 1893. Welding can be done in a central depot or in the field. 1895 Hans Goldschmidt, Thermit welding 1935 Charles Cadwell, non-ferrous Thermit welding. Conical or cylindrical wheels It has long been recognised that conical wheels and rails that are sloped by the same amount follow curves better than cylindrical wheels and vertical rails. A few railways, such as Queensland Railways, for a long time had cylindrical wheels until much heavier traffic required a change. Cylindrical wheel treads have to "skid" on track curves, which increases both drag and rail and wheel wear. On very straight track a cylindrical wheel tread rolls more freely and does not "hunt". The gauge is narrowed slightly and the flange fillets keep the flanges from rubbing the rails. United States practice is a 1 in 20 cone when new. As the tread wears it approaches an uneven, roughly cylindrical profile, at which time the wheel is trued on a wheel lathe or replaced. Manufacturers ArcelorMittal Steelton, United States ArcelorMittal Ostrava, Czech Republic ArcelorMittal Gijón, Spain ArcelorMittal Huta Katowice, Poland ArcelorMittal Rodange, Luxembourg British Steel, UK Evraz, Pueblo, Colorado, United States Evraz, Russia JFE Steel, Japan Kardemir, Turkey Liberty Steel Group, Whyalla, Australia – formerly OneSteel and Arrium Metinvest, Ukraine Nippon Steel and Sumitomo Metal, Japan Steel Authority of India, India Steel Dynamics, United States Voestalpine, Austria Defunct manufacturers Algoma Steel Company, Canada Australian Iron & Steel, Australia Barrow Steel Works, England Bethlehem Steel, United States Călărași steel works, Romania Dowlais Ironworks, Wales Lackawanna Steel Company, United States Sydney Steel Corporation, Canada Standards EN 13674-1 - Railway applications - Track - Rail - Part 1: Vignole railway rails 46 kg/m and above EN 13674-4 - Railway applications - Track - Rail - Part 4: Vignole railway rails from 27 kg/m to, but excluding 46 kg/m See also Common structural shapes Difference between train and tram rails Hunting oscillation History of rail transport Iron rails Permanent way (history) Plateway Rail lengths Rail squeal Rail tracks Railway guide rail Structural steel Tramway track References External links British Steel rail, Vignoles rail British Steel crane rail, Crane rails Table of North American tee rail (flat bottom) sections ArcelorMittal Crane Rails Track components and materials Wirth Girder Rail MRT Track & Services Co., Inc / Krupp, T and girder rails Railroad Facts… Construction, Safety and More. Permanent way Rail technologies Structural steel
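The self-steering effect of coned treads can be sketched with a little geometry. This is an illustrative back-of-envelope calculation, not from the article, and the dimensions below are assumed, typical standard-gauge values. Displacing a wheelset laterally by y makes the outer wheel roll on radius r0 + k·y and the inner wheel on r0 − k·y, where k is the taper (1 in 20 gives k = 0.05). Rolling around a curve of radius R without slipping requires (r0 + k·y)/(r0 − k·y) = (R + s)/(R − s), where s is half the rail-contact spacing, which solves to y = r0·s/(k·R).

# Hypothetical sketch: lateral shift (m) at which a 1-in-20 coned wheelset
# rolls around a curve without slipping. All dimensions are assumed values.
def lateral_shift_m(r0=0.46, s=0.75, taper=1.0 / 20.0, curve_radius=500.0):
    # r0: wheel radius (m); s: half the rail-contact spacing (m)
    return r0 * s / (taper * curve_radius)

for R in (500, 1000, 2000):
    print(R, "m curve ->", round(lateral_shift_m(curve_radius=R) * 1000, 1), "mm")
# 500 m -> ~13.8 mm, 1000 m -> ~6.9 mm, 2000 m -> ~3.5 mm. On tight curves the
# required shift exceeds typical flange clearance, so the flange takes over; a
# cylindrical tread (taper 0) can never satisfy the condition and must skid.

On gentle curves the wheelset finds the equilibrium offset on its own, which is the behaviour attributed above to conical wheels; the same relationship also underlies hunting oscillation on straight track.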
Rail profile
[ "Engineering" ]
4,286
[ "Structural engineering", "Structural steel" ]
10,823,418
https://en.wikipedia.org/wiki/Journal%20of%20Water%20Resources%20Planning%20and%20Management
The Journal of Water Resources Planning and Management is a monthly scientific journal of engineering published by the American Society of Civil Engineers since 1943. The journal covers the development of methods, theories, and applications to current administrative, economic, engineering, planning, and social issues as they apply to water resources management. It publishes papers on analytical, experimental, and numerical methods with regard to the investigation of physical or conceptual models related to these issues. It also publishes technical notes, book reviews, and forum discussions. The journal requires the use of the metric (SI) system, but allows authors to include other systems of measure alongside SI units. The current editor-in-chief is David W. Watkins Jr. (Michigan Technological University). External links ASCE Library Journal homepage Hydraulic engineering Engineering journals Bimonthly journals Academic journals established in 1993 English-language journals Environmental planning Urban studies and planning journals American Society of Civil Engineers academic journals
Journal of Water Resources Planning and Management
[ "Physics", "Engineering", "Environmental_science" ]
189
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
10,824,215
https://en.wikipedia.org/wiki/Heckler%20v.%20Chaney
Heckler v. Chaney, 470 U.S. 821 (1985), is a decision of the Supreme Court of the United States which held that a federal agency's decision not to take an enforcement action is presumptively unreviewable by the courts under section 701(a)(2) of the Administrative Procedure Act (APA). The case arose out of a group of death row inmates' petition to the Food and Drug Administration (FDA), seeking to have the agency thwart the state governments' plans to execute the inmates by lethal injection. The FDA declined to interfere, a decision the inmates appealed unsuccessfully to the District Court for the District of Columbia. On further review, the D.C. Circuit Court of Appeals held that the FDA's action was reviewable and that its denial was "arbitrary and capricious". The Supreme Court unanimously reversed the appeals court and declared in an 8–1 decision that agency nonenforcement decisions were presumptively unreviewable. The D.C. Circuit Court of Appeals reacted to Overton Park by holding that practical considerations should be used in determining whether to grant review, rather than looking at the laws relevant to the agency in question – in Chaney, they did precisely this in overturning the district court. The Supreme Court overturned the appeals court's decision and upheld Overton Park's emphasis on statutory considerations, but the presumption of unreviewability it created in this case was largely based on practical factors rather than statutory factors. It reasoned that, in general, an agency's decision not to enforce does not easily lend itself to manageable standards of judicial review, likening such a decision to one a prosecutor might make. It highlighted, however, that the presumption of unreviewability can be rebutted where the plaintiffs provide a relevant statute ("law to apply") that limits the discretion of the agency. Justice William J. Brennan Jr. concurred with the majority and emphasized that the court was not closing off all avenues of review for nonenforcement decisions. Justice Thurgood Marshall concurred in the judgment only, criticizing the majority's decision to create a presumption of unreviewability and instead arguing that the FDA's decision should have been held to be reviewable and upheld on the merits. Lower courts largely accepted the ruling, albeit with varying interpretations of scope; the wider legal community criticized the majority's rationale for a presumption of unreviewability while agreeing with the result immediately concerning the inmates. Background Case Prior to the 1970s, U.S. states primarily executed prisoners with either the electric chair or the gas chamber. Supporters of lethal injection said it was more dignified and less painful than electrocution. In 1977, Oklahoma became the first U.S. state to pass a law authorizing execution via lethal injection. A day after Oklahoma passed its statute, Texas passed its own version. By 1984, fifteen states had adopted lethal injection as a method of execution. The NAACP Legal Defense Fund and two people sentenced under these statutes petitioned the FDA, asserting that the use of barbiturates and derivatives of curare for executions by untrained personnel "may actually result in agonizingly slow and painful deaths". These petitioners were Larry Leon Chaney of Jenks, Oklahoma, who was convicted of the 1977 murder of Kendal Ashmore, and Doyle Skillern, who was convicted of the 1974 murder of Patrick Randel. 
Chaney was the second person in the state to be sentenced to death by lethal injection; his protracted legal battle in state and federal courts was met with little initial luck, including the U.S. Supreme Court thrice declining to review Chaney's case. Per the petitioners, their states were planning to use drugs for lethal injection that had not been approved by the FDA for that purpose, in violation of two provisions of the Federal Food, Drug, and Cosmetic Act (FDCA). First, they contended, their states had violated the FDCA by distributing a "new drug" by way of interstate commerce. While the drugs were FDA-approved, the petitioners argued they were "new drugs" under the statutory requirements because they were not approved by the FDA as "safe and effective" for lethal injections. Second, they said that their states' use of approved drugs for unapproved purposes violated the "misbranding" provisions of the act. They requested that the FDA affix warning labels stating that the drugs were not approved for human execution, notify state corrections officials that the drugs should not be used, seize prison stockpiles and recommend the prosecution of those who knowingly continued to sell the drugs for use in executions. That July, in a letter to the inmates' lawyer, the FDA declined. The head of the FDA wrote that the FDA did not have clear jurisdiction to interfere with state criminal justice systems and was authorized by its "inherent discretion to decline to pursue certain enforcement matters" even if the requested actions were within the scope of the agency's jurisdiction. The inmates appealed the FDA's refusal to the United States District Court for the District of Columbia in Chaney v. Schweiker. By this time, the number of petitioners had increased to eight. Five of them were from Oklahoma: Chaney, Alton C. Franks, Carl Morgan, Charles William Davis, and Robyn Leroy Parks; three were from Texas: Skillern, Jerry Joe Bird, and Henry Martinez Porter. Administrative Procedure Act The Court has held since Abbott Laboratories v. Gardner (1967) that the Administrative Procedure Act (APA) (codified as 5 USC §§ 701-706) provides a "basic presumption of judicial review" that would need "clear and convincing evidence" of legislative intent for an exemption. Despite the presumption of reviewability, actions "committed to agency discretion by law" are unreviewable under §701(a)(2) of the APA. The exception is a narrow one; courts are authorized by §706 of the Act to set aside actions of federal agencies that are "arbitrary, capricious, [or] an abuse of discretion". If all legitimately conferred agency discretion were unreviewable, § 706(2)(A) would have no effect. The court first directly ruled on § 701(a)(2) in Citizens to Preserve Overton Park v. Volpe (1971), ruling that it only included cases where "statutes are drawn in such broad terms that in a given case there is no law to apply". This standard provoked criticism from legal scholars who pointed out that § 706 required judicial review. Cass Sunstein noted that when "an agency has taken constitutionally impermissible factors into account, there is always 'law to apply'—no matter what the governing statute may say." Some lower courts followed this standard to its letter, but others skirted or flouted its terms. 
Courts in the District of Columbia Circuit in particular argued that the "law to apply" standard allowed weighing policy considerations that were not expressly stated in the statute or legislative history. Some commentators said that the D.C. Circuit was completely disregarding the Supreme Court's decision under the guise of following it. The Supreme Court previously ruled that agency inaction was reviewable in Dunlop v. Bachowski (1975) and that "§§ 702 and 704 subject the Secretary's decision to judicial review under the standard specified in § 706 (2)(A)". The Court dismissed the argument that the secretary's decision was an unreviewable exercise of prosecutorial discretion, reasoning that enforcement discretion was limited in the civil context to cases which "like criminal prosecutions, involve the vindication of societal or governmental interests, rather than the protection of individual rights." The Court concluded that "[a]lthough the Secretary's decision to bring suit bears some similarity to the decision to commence a criminal prosecution, the principle of absolute prosecutorial discretion is not applicable to the facts of this case". Justice William Rehnquist dissented from the decision, arguing that the secretary's action was an exercise of agency discretion under § 701(a)(2). Court proceedings Lower courts On August 30, 1982, the District Court for the District of Columbia issued a summary judgment holding that nonenforcement decisions are "essentially unreviewable by the courts". The D.C. Circuit Court of Appeals, in a divided opinion, reversed the district court. The majority opinion, written by Judge J. Skelly Wright, invoked the "strong presumption of reviewability" set out in earlier cases, applying Overton Park's narrow "no law to apply" test to decide if the agency decision was "committed to agency discretion by law". Judge Wright cited an FDA policy statement in which the agency committed itself to "investigate...thoroughly" unapproved uses of new approved drugs that are "widespread or [endanger] the public health". Having asserted that the agency's nonenforcement decision was reviewable, Wright ruled the agency had abused its discretion. Judge Antonin Scalia dissented, arguing that the majority cited Dunlop, Overton Park, and Gardner erroneously and that they did not apply to the case at hand. While he agreed that there is a strong presumption of reviewability towards agency action in general, he argued that agency enforcement decisions carried a strong presumption of nonreviewability, and that the three precedential cases were not intended to address section 701(a)(2) and this case. He also noted that the FDA policy statement was attached to a proposed rule that was never adopted. In a subsequent opinion, Scalia also criticized the court's regular reliance on pragmatic considerations. The Supreme Court's reversal was largely based on the dissent. Supreme Court The Supreme Court, ruling unanimously against Chaney, held that agency nonenforcement decisions are presumptively unreviewable by the courts. Justice Thurgood Marshall concurred in judgment only, and did not join the majority opinion. Writing for the majority, William Rehnquist said that enforcement decisions were presumed unreviewable under the § 701(a)(2) "committed to agency discretion" exception to the general presumption of reviewability. The presumption of unreviewability was based on the well-established common law doctrine of prosecutorial discretion. 
Justice Rehnquist said the decision to bring an enforcement action "has traditionally been 'committed to agency discretion' and we believe that Congress enacting the APA did not intend to alter that tradition". The Court presented four policy considerations to support the presumption of unreviewability: An agency's ordering of enforcement priorities involves weighing of complex factors within the agency's expertise. An agency's decision to bring an enforcement action is analogous to prosecutorial discretion, which "has long been regarded as the special province of the Executive Branch". There is no action to provide a focus for judicial review. While action may involve the use of an agency's "coercive power", inaction does not infringe upon a person's rights. The presumption of unreviewability of enforcement decisions is not absolute, and can be overcome if the petitioner can find "law to apply" in the governing statute. Applying Overton Park, the Court said agency actions were within the § 701(a)(2) exception "if the statute is drawn so that a court would have no meaningful standard against which to judge the agency's exercise of discretion." The court concluded that judicial review of nonenforcement decisions was permitted where "the substantive statute has provided guidelines for the agency to follow in exercising its...powers." However, the Court said there was often "no law to apply" to review nonenforcement decisions. Speaking to the presumption of reviewability affirmed in Dunlop that was relied on by the D.C. Circuit Court of Appeals, Justice Rehnquist remarked that "our textual references to the 'strong presumption' of reviewability in ... [Bachowski] were addressed only to the § (a)(1) exception; we were content to rely on the [Third Circuit] Court of Appeals' opinion to hold that the § (a)(2) exception did not apply." Justice Rehnquist said the presumption of unreviewability would have been overcome in Dunlop because the Labor Management Reporting and Disclosure Act "quite clearly [withdrew] discretion from the agency and [provided guidelines] for the exercise of its enforcement power." The Court said the FDCA contained no such constraints because the misbranding and new drug provisions did not limit agency discretion, and distinguished Dunlop on this ground. The court dismissed the FDA policy statement on the same grounds Scalia did: the proposed rule that statement was attached to was never adopted by the agency, and the court additionally found the language to be ambiguous on the matter. The court also refused to interpret a clause of the FDCA exempting the Secretary from exercising authority for minor violations as implying a requirement for action against major violations, reasoning that it could only apply where the agency had already established that a major violation took place, which does not happen if the agency declines to investigate. It also left open the possibility that there could be judicial review under several other circumstances, including where an agency refuses to make a rule, resolves to fully ignore its statutory obligations, or makes a nonenforcement decision that is based solely on jurisdictional grounds or violates a plaintiff's constitutional rights. Concurring opinions Justice William J. Brennan Jr. 
wrote a short concurrence contending that section 701(a)(2) was not intended to allow agencies to disregard "clear jurisdictional, regulatory, statutory, or constitutional commands", and instead was meant to shield what he asserted were hundreds of routine daily nonenforcement decisions that would otherwise be open to lawsuit. He argued that judicial review should still be available in the areas the majority left open or where an agency's decision stood on "illegitimate reasons", but still concurred with the presumption of unreviewability put forward in this case. Justice Marshall, on the other hand, concurred in the judgment only. He agreed that the FDA was within its discretion to direct its resources elsewhere, and that it was therefore acceptable for it to decline the petition. He disagreed, however, with the majority's creation of a new presumption of unreviewability, calling it inconsistent with Abbott Laboratories v. Gardner's strong general presumption to the contrary. He criticized the majority's use of precedent as unsupported by the case law, and took issue with their comparison to prosecutorial discretion, asserting that there are limits on its reach and that enforcement decisions, unlike prosecutorial decisions, deal much more often with situations in which nonenforcement denies someone a benefit or relief written into statute by Congress. He also criticized the prosecutorial discretion analogy on the grounds that the APA was designed to open agencies to judicial review, not shield them from it. Marshall contended that the majority's allowed exceptions to the presumption of unreviewability were too narrow, and that since an unreviewability test still involves examining an agency's rationale, it looks too similar to a deferential test on the merits in any case. "Easy cases", he remarked, "at times produce bad law". Marshall expressed the hope that over time, the majority's opinion would be interpreted as an expression of deference to agency expertise, rather than a full denial of the courts' role in agency action. Reaction, analysis, and impact Judge Patricia M. Wald wrote: Chaney is a precedent which, on its face, applies only to enforcement choices. Yet the broad language of the Court about why enforcement choices should not be reviewed, why deference should be given to agency expertise and to the agency's decision on how to deploy its limited resources, apply as well to other kinds of agency policymaking. Judges, who think the federal courts are reviewing too many decisions, read Chaney broadly as a signal to move forcefully to cut off review where Congress' directions to the agency are arguably vague or general. On the other hand, those judges more hospitable to judicial review of agency action register concern that taken too far, Chaney not only will cut off review of substantive legal issues and policies that inevitably take resource allocations into consideration, but will also permit agencies to insulate pure statutory interpretations about what a law means by dressing them up in the guise of enforcement decisions. In our circuit right now, judges are walking a tightrope between these polar views of Chaney. Some scholars criticized the majority's rationale in creating a presumption of unreviewability. American legal scholar Bernard Schwartz was adamant that there is always "law to apply" in a system where "all discretionary power should be reviewable to determine that the discretion conferred has not been abused". Ronald M. 
Levin wrote that Chaney upheld the part of Overton Park that was strongly criticized by continuing to bar nonstatutory claims of abuse of discretion. William W. Templeton, writing for the Catholic University Law Review, noted that prosecutorial discretion is limited by abuse of discretion. He also argued that prosecutorial decisions have "fundamental differences" from agency enforcement decisions, arguing that since prosecutors seek to punish violations of the law while agencies usually seek to prevent them, a court's refusal to review an agency nonenforcement decision has more potential for harm. Cass Sunstein wrote that "it would probably be a mistake to read Chaney as establishing a general rule of nonreviewability for enforcement decisions", pointing out that the court carved out a substantial number of exceptions as questions to be answered by a later court. Lower courts largely accepted the Chaney decision without complaint in its immediate aftermath, applying the ruling to a swath of enforcement questions arising after it. Some expanded its reasoning beyond nonenforcement decisions, such as the Seventh Circuit's decision in Bethlehem Steel Corp. v. Environmental Protection Agency that the EPA's refusal to make a rule concerning the operation of coke ovens was unreviewable; others have refused to make that inference, such as the Eighth Circuit's decision in Iowa ex rel. Miller v. Block that the Department of Agriculture's refusal to implement an entire payment program was reviewable. Chaney's sentence had already been overturned the year before by the United States Court of Appeals for the Tenth Circuit for an unrelated reason; a state court subsequently converted it to life imprisonment. Had he been executed, he would have been the first person executed in Oklahoma by lethal injection. Doyle Skillern was executed by lethal injection on January 16, 1985, in the Huntsville Walls Unit. Notes References Citations Works cited Academic sources Court cases Other sources Senate Report No. 752, 79th Congress, 1st Session, 26 (1945). 5 USC Ch. 7 Further reading United States Supreme Court cases United States administrative case law 1985 in United States case law United States Supreme Court cases of the Burger Court Lethal injection
Heckler v. Chaney
[ "Environmental_science" ]
3,983
[ "Toxicology", "Lethal injection" ]
10,824,237
https://en.wikipedia.org/wiki/Industrial%20Union%20Department%20v.%20American%20Petroleum%20Institute
Industrial Union Department v. American Petroleum Institute (also known as the Benzene Case), 448 U.S. 607 (1980), was a case decided by the Supreme Court of the United States. This case represented a challenge to the OSHA practice of regulating carcinogens by setting the exposure limit "at the lowest technologically feasible level that will not impair the viability of the industries regulated." OSHA selected that standard because it believed that (1) it could not determine a safe exposure level and that (2) the authorizing statute did not require it to quantify such a level. The Industrial Union Department of the AFL-CIO served as the petitioner; the American Petroleum Institute was the respondent. A plurality on the Court, led by Justice Stevens, wrote that the authorizing statute did indeed require OSHA to demonstrate a significant risk of harm (albeit not with mathematical certainty) in order to justify setting a particular exposure level. Perhaps more important than the specific holding of the case, the Court noted in dicta that if the government's interpretation of the authorizing statute had been correct, it might violate the nondelegation doctrine. This line of reasoning may represent the "high-water mark" of recent attempts to revive the doctrine. Background The Occupational Safety and Health Act of 1970 delegated broad authority to the Secretary of Labor to promulgate standards to ensure safe and healthful working conditions for the Nation's workers (the Occupational Safety and Health Administration (OSHA) being the agency responsible for carrying out this authority). According to Section 3(8), standards created by the Secretary must be “reasonably necessary or appropriate to provide safe or healthful employment and places of employment.” Section 6(b)(5) of the statute sets the principle for creating the safety regulations, directing the Secretary to “set the standard which most adequately assures, to the extent feasible, on the basis of the best available evidence, that no employee will suffer material impairment of health or functional capacity…”. At issue in the case is the Secretary's interpretation of "extent feasible" to mean that if a material is unsafe he must “set an exposure limit at the lowest technologically feasible level that will not impair the viability of the industries regulated.” Opinion of the Court The Court held the Secretary applied the act inappropriately. To comply with the statute, the Secretary must 1) determine that a health risk from a substance exists at a particular threshold and 2) decide whether to issue the most protective standard, or issue a standard that weighs the costs and benefits. Here, the Secretary had failed to first determine that a health risk existed for the chemical benzene when workers were exposed at 1 part per million. Data only suggested the chemical was unsafe at 10 parts per million. Thus, the Secretary had failed the first step of interpreting the statute, that is, finding that the substance posed a risk at that level. In its reasoning, the Court noted it would be unreasonable to conclude that Congress intended to give the Secretary “unprecedented power over American industry.” Such a delegation of power would likely be unconstitutional. The Court also cited the legislative history of the act, which suggested that Congress meant to address major workplace hazards, not hazards with low statistical likelihoods. 
Concurring opinion In a famous concurrence, Justice Rehnquist argued that section 6(b)(5) of the statute, which set forth the "extent feasible" principle, should be struck down on the basis of the non-delegation doctrine. The non-delegation doctrine, which has been recognized by the Supreme Court since the era of Chief Justice Marshall, holds that Congress cannot delegate law-making authority to other branches of government. Rehnquist offered three rationales for the application of the non-delegation doctrine. First, it ensures that Congress, not the agencies, makes important social policy choices; delegation should only be used when the policy is highly technical or the ground to be covered too large. Second, an agency exercising delegated authority requires an “intelligible principle” to guide its discretion, which was lacking in this case. Third, the intelligible principle must provide judges with a measuring stick for judicial review. Subsequent developments Some scholars have said that the interpretation of the statute ignored a foundational principle of statutory interpretation, generalia specialibus non derogant ("the general does not derogate from the specific"). Generally, specific language governs general language. In this case, the court read the more general provision of Section 3(8) as governing the specific process specified in Section 6(b)(5). The case also marks the current state of affairs for the non-delegation doctrine. When the court is faced with a provision that appears to be an impermissible delegation of authority, it will use tools of statutory interpretation to try to narrow the delegation of power. References External links United States Supreme Court cases United States Supreme Court cases of the Burger Court United States administrative case law Chemical safety 1980 in the environment 1980 in United States case law Occupational Safety and Health Administration American Petroleum Institute AFL-CIO United States labor case law United States nondelegation doctrine case law
Industrial Union Department v. American Petroleum Institute
[ "Chemistry" ]
1,048
[ "Chemical safety", "Chemical accident", "nan" ]
10,824,968
https://en.wikipedia.org/wiki/Orion%20%28space%20telescope%29
The Orion space telescopes were a series of two instruments flown aboard Soviet spacecraft during the 1970s to conduct ultraviolet spectroscopy of stars. Orion 1 The Orion 1 space astrophysical observatory was installed in the orbital station Salyut 1. It was designed by Grigor Gurzadyan of Byurakan Observatory in Armenia, USSR. It was operated in June 1971 by crew member Viktor Patsayev, who thus became the first man to operate a telescope outside the Earth's atmosphere. Spectrograms of stars Vega and Beta Centauri between wavelengths 2000 and 3800 Å were obtained. Specifications Ultraviolet telescope Optical system: Mersenne Spectrograph: Wadsworth Diameter of primary mirror: 280 mm Focal length: 1400 mm Spectral range: 2000–3800 Å Spectral resolution at wavelength 2600 Å: 5 Å Film: UFSh 4, width 16 mm, range of sensitivity: 4000–2500 Å, resolution better 130 lines/mm Cartridge capacity: 12m Stabilization: two-stage, inertial First stage: three-axis inertial stabilization of station Salyut 1; Fine guidance: via a star with accuracy 15 arcsec on each axis. Star sensor: of semi-disk (diameter of input: 70 mm; focal length: 450 mm), limiting stellar magnitude 5m. Mass: 170 kg Orion 2 Orion 2 was installed onboard Soyuz 13 in December 1973, a spacecraft modified to become the first manned space observatory. The observatory was operated by crew member Valentin Lebedev. The designer of the observatory was Grigor Gurzadyan, then at Garni Space Astronomy Laboratory in Armenia. Ultraviolet spectrograms of thousands of stars to as faint as 13th stellar magnitude were obtained by a wide-angle meniscus telescope. The first satellite UV spectrogram of a planetary nebula (IC 2149) was obtained, revealing spectral lines of aluminum and titanium - elements not previously observed in planetary nebula. Two-photon emission in that planetary nebula and a remarkable star cluster in Auriga were also discovered. Specifications Telescope: meniscus, Cassegrain (-Maksutov) system with an objective prism Primary mirror: 300 mm Focal length: 1000 mm Field of view: 5° Registration of spectrograms: film KODAK 103UV, diameter: 110 mm Spectral resolution: 8-29 Å at 2000-3000 Å Two star sensor sets: each containing a two-coordinate star sensor coaxial to telescope and one-coordinate one, in 45° to telescope axis. Two additional sidereal spectrographs. Three-axes guidance system accuracy: better than 5 arcsec on two cross-sectional axes of telescope (via star А), and better than 30 arcsec at optical axis (star B) Star sensors: input apertures: 80 and 60 mm; focal lengths: 500 and 240 mm; limiting stellar magnitudes: 3.5 and 3.0 m. Mass: 240 kg (telescope: 205 kg) Mass returned to Earth (cartridges): 4.3 kg References Orion 1 bibliography Gurzadyan, G.A., 1972 On One Principle of Operation of Orbital Observatory by a Cosmonaut, Commun. Byurakan Obs, vol.XLV, p. 5. Gurzadyan, G.A., Harutyunian, E.A., 1972 Orbital Astrophysical Observatory Orion, Commun. Byurakan Obs., vol.XLV, p. 12. Orion 2 bibliography Gurzadyan, G. A., Ultraviolet observations of planetary nebulae, Planetary nebulae. Observations and theory, Proc. IAU Symp.76, Ed.Y.Terzian, p. 79, Dordrecht, D.Reidel Publ., 1978. Ambartsumian V.A. (ed) Gurzadyan, G.A.; Raushenbach, B.V.; Feoktistov, K.P.; Klimuk, P.I.; Lebedev, V.V.; Maksimenko, A.P.; Gorshkov, K.A.; Savchenko, S.A.; Baryshnikov, G.K.; Pachomov, A.I.; Antonov, V.V.; Kashin, A.L.; Loretsian, G.M.; Gasparyan, O.N.; Chabrov, G.I.; Ohanesian, J.B.; Tsybin, S.P.; Rustambekova, S.S.; Epremian, R.A. 
Observatory in space "SOYUZ-13"-"ORION-2" . "Mashinostroenie" Publ, Moscow, 1984 (monograph, in Russian). Gurzadyan G.A, Ohanesyan, J.B., Rustambekova, S.S. & Epremian, R.A., Catalogue of 900 Faint Star Ultraviolet Spectra, Publ. Armenian Acad. Sci, Yerevan, 1985. Furniss, T., Manned Spaceflight Log, Jane's, London, 1986. Davies, J. K., Astronomy from Space, PRAXIS Publishing, Chichester, 2002. External links Garni Space museum 1971 in spaceflight 1973 in spaceflight Soviet space observatories Ultraviolet telescopes Crewed space observatories
Orion (space telescope)
[ "Astronomy" ]
1,124
[ "Space telescopes", "Crewed space observatories", "Soviet space observatories" ]
10,825,303
https://en.wikipedia.org/wiki/Leslie%20Kay%20%28engineer%29
Leslie Kay (14 January 1922 – 2 June 2020) was a British–New Zealand electrical engineer, particularly known for the development of ultrasonic devices to assist the blind. Early life and family Kay was born in Chester-le-Street, County Durham, England, on 14 January 1922, the son of a colliery manager. He left school at the age of 14, and accepted an electrical apprenticeship at the local colliery managed by his father, and took night classes in electrical engineering. In 1940, Kay joined the Royal Air Force, training as a pilot, but was later posted as an aircraft engineer because of his engineering background. His role included modifying aircraft, recalibrating their instruments, and test-flying the planes to ensure their airworthiness. In 1944, Kay married Nora Waters, and the couple went on to have three children. Career England After World War II, Kay studied for a Bachelor of Engineering degree at the Newcastle campus of Durham University, graduating in 1948. Later he joined the Admiralty as a civilian scientist based at the Isle of Portland. He was involved in the development of transmitting underwater sonar for the identification of submarines, mines and torpedoes, undertaking research both on land and at sea. He also took part in naval operations, and was in a submarine off Port Said during the Suez Crisis. After discovering that details of the technology that he helped to develop had been passed to the Soviet Union, Kay chose to move to an academic post at the University of Birmingham, where he established the Department of Electrical Engineering. At Birmingham, Kay initially continued researching underwater ultrasonic technology, but was inspired to investigate air sonar to assist blind people to navigate after watching blind children learning to swim. This led to his study of the way that bats navigate, and the development of devices for blind people. Kay was awarded a PhD by Birmingham on the basis of eight papers on echolocation by humans and animals that he either authored or co-authored. New Zealand In 1965, Kay and his family migrated to New Zealand, where he took up a post at the University of Canterbury in Christchurch. He was appointed to a personal chair in 1982. Kay served as a member of the University Grants Committee, head of the Department of Electrical and Electronic Engineering, and dean of the School of Engineering at Canterbury. At Canterbury, Kay continued his work improving devices for the blind, as well as applying ultrasonic technology to applications in medicine, robotics, diving and fishing. He developed an international reputation for his work, particularly for the sonic torch, allowing blind people to avoid obstacles, sonic spectacles, and the Trisensor Aid, allowing blind children to be trained in spatial awareness. Kay became a naturalised New Zealander in 1979. When he retired from the University of Canterbury in 1986, Kay was conferred the title of professor emeritus. He continued his research independently, establishing Bay Advanced Technologies, a research business in Russell, to further refine devices for the blind. In 1999, he received the Saatchi and Saatchi Prize for innovation, recognising his lifetime's contribution to the field. Honours and awards In 1971, Kay was elected a Fellow of the Royal Society of New Zealand, the first engineer to be so honoured. 
He was also awarded fellowships of the New Zealand Institution of Engineers (now Engineering New Zealand), the Institution of Electrical Engineers and the Institution of Electronic and Radio Engineers (both now part of the Institution of Engineering and Technology). In the 1988 New Year Honours, Kay was appointed an Officer of the Order of the British Empire, for services to electrical and electronic engineering. Later life and death Kay's wife, Nora, died in 2004, and he retired from active research in 2006. In retirement, Kay lived with his daughter in Southland, where he died on 2 June 2020. References 1922 births 2020 deaths People from Chester-le-Street Royal Air Force personnel of World War II English electrical engineers Civil servants in the Admiralty People of the Cold War Alumni of the University of Birmingham Academics of the University of Birmingham British emigrants to New Zealand New Zealand electrical engineers Naturalised citizens of New Zealand Academic staff of the University of Canterbury Fellows of the Royal Society of New Zealand Fellows of the Institution of Engineering and Technology New Zealand Officers of the Order of the British Empire Alumni of King's College, Newcastle
Leslie Kay (engineer)
[ "Engineering" ]
866
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
10,825,731
https://en.wikipedia.org/wiki/Macro%20key
A macro key is a keyboard key that can be configured to perform custom, user-defined behavior. Many keyboards do not have a macro key, but some have one or more. Some users consider a macro key to enhance productivity, since it lets them perform, with a single key press, operations that would otherwise require slower or multiple UI actions. Custom behavior typically involves one or more user interface (UI) operations such as keystrokes and mouse actions. For example, a macro key might be configured to launch a program. A gamer might configure it for rapid-fire. Some early PC keyboards had a single key located on the lowest row of keys, either to the left of the Z key or to the right of the right control key. Sometimes it was treated as a backslash, but its behavior varied. It generated a special scan code so that a program could associate unique behavior with it. Around 2010, some mice had a macro button with a similar utility. References Computer keys
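A minimal sketch of how such a key-to-action mapping might look in software, using only the Python standard library; the scan code and the launched program are hypothetical examples, and the platform-specific keyboard hook that would call handle_key() is omitted:

import subprocess

MACRO_KEY_SCAN_CODE = 0x68  # hypothetical scan code reported for the macro key

# User-defined behaviour: one action per key; here the macro launches a program.
macro_actions = {
    MACRO_KEY_SCAN_CODE: lambda: subprocess.Popen(["xterm"]),
}

def handle_key(scan_code):
    # Called by a platform-specific keyboard hook (not shown) for each key press.
    action = macro_actions.get(scan_code)
    if action is not None:
        action()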
Macro key
[ "Technology" ]
197
[ "Computing stubs" ]
10,826,922
https://en.wikipedia.org/wiki/Paul%20W.%20Taylor
Paul W. Taylor (November 19, 1923 – October 14, 2015) was an American philosopher best known for his work in the field of environmental ethics. Biography Taylor's theory of biocentric egalitarianism, related to but not identical with deep ecology, was expounded in his 1986 book Respect for Nature: A Theory of Environmental Ethics, and has been taught in university courses on environmental ethics. Taylor taught philosophy for four decades at Brooklyn College, City University of New York and was professor emeritus there at the time of his death. Respect for Nature Taylor's Respect for Nature is widely considered one of the fullest and most sophisticated defences of a life-centered (biocentric) approach to nature. In this work, Taylor agrees with biocentrists that all living things, both plants and animals, have inherent value and deserve moral concern and consideration. More radically, he denies human superiority and argues that all living things have equal inherent value. Recognizing that human interests inevitably conflict with the interests of plants and animals, Taylor carefully lays out and defends a variety of priority principles for the fair resolution of such conflicts. Taylor's new theory of environmentalism based on the "biocentric outlook" was used in opposition to speciesism. His theory advocated four beliefs: that humans are equal members of the earth's community of life, that humans and members of other species are interdependent, that "all organisms are teleological centres of life in the sense that each is a unique individual pursuing its own good in its own way" and that "humans are not inherently superior to other living things." Taylor's biocentric outlook emphasizes "species impartiality" and because of this it is said to provide the justification for the respect for nature including the recognition that wild animals and plants have "inherent worth" and thus deserve moral respect, so they should not "be harmed or interfered with in nature, other things being equal". Taylor argued that humans should behave towards nonhuman organisms by four guided rules: the rule of nonmaleficence, the rule of non-interference, the rule of fidelity and the rule of restitutive justice. The four rules prohibit humans from harming any living entity in the natural environment without good reason. Taylor admitted that none of the four rules are absolute and offered "priority principles" for handling conflicts. For example his principle of self-defense allows humans to protect themselves against life-threatening organisms by destroying them and his principle of minimum wrong permits humans to further their nonbasic interests over the basic interests of animals and plants only under the condition of minimizing wrongs done to nonhumans. His principle of restitutive justice requires that animals and plants receive a form of compensation for any harm done to them. Taylor was a critic of animal rights and he held the view that only humans have moral rights. He argued that animals and plants cannot have rights because they lack certain capacities for exercising them. Despite this, his biocentric outlook asserted that humans are not superior to wild animals or plants and they all have inherent worth. A 25th anniversary edition of the book was published in 2011 with a new foreword by Dale Jamieson. 
Reception Kristin Shrader-Frechette wrote that Taylor broke new ground in environmental ethics with his concepts of biocentric outlook and inherent worth and suggested that he developed "the most philosophically sophisticated theory of environmental ethics that has yet appeared". However, she noted various flaws with his theory. Shrader-Frechette said that a problem with Taylor's biocentric outlook is that giving "inherent worth" to all animals, humans and plants requires compensation for every control or intrusion affecting their lives. She commented that "if everyone has duties of compensation to virtually every other living entity, as indeed we must in Taylor's scheme, then applying Taylor's ethics is complex, cumbersome and unworkable. We would each have hundreds of conflicting duties of compensation alone". Shrader-Frechette also noted a problem of incoherence in Taylor's claim that only humans have moral rights, because he also argued that the interests of humans and nonhumans "must equally be taken into consideration" and that humans are not superior; this is incoherent because he held the view that human interests are protected by rights but nonhuman interests are not. Philosopher Louis G. Lombardi also noted Taylor's odd position on rights considering he denied human superiority over animals and plants but restricted moral rights to humans. Selected publications Normative Discourse (Prentice-Hall, 1961; Greenwood Press, 1973, 1976) Principles of Ethics: An Introduction (Dickenson, 1975; Wadsworth, 1980) In Defense of Biocentrism (Environmental Ethics, 1983) Are Humans Superior to Animals and Plants? (Environmental Ethics, 1984) Respect for Nature: A Theory of Environmental Ethics (Princeton University Press, 1986) Inherent Value and Moral Rights (The Monist, 1987) See also American philosophy List of American philosophers References External links An outline of Paul Taylor's "Respect for Nature" Full bibliography Obituary Notice 2015 deaths 1923 births 20th-century American philosophers 21st-century American philosophers American environmentalists American non-fiction environmental writers Brooklyn College faculty Critics of animal rights Environmental ethicists Environmental philosophers
Paul W. Taylor
[ "Environmental_science" ]
1,071
[ "Environmental ethicists", "Environmental ethics" ]
10,828,016
https://en.wikipedia.org/wiki/Subparhelic%20circle
The subparhelic circle is a rare halo, an optical phenomenon, located below the horizon. It passes through both the subsun (below the Sun) and the antisolar point (opposite to the Sun). The subparhelic circle is the subhorizon counterpart to the parhelic circle, located above the horizon. Located on the subparhelic circle are several relatively rare optical phenomena: the subsun, the subparhelia, the 120° subparhelia, Liljequist subparhelia, the diffuse arcs, and the Parry antisolar arcs. On the accompanying photo centred at the antisolar point, the subparhelic circle appears as a gently curved horizontal line intercepted by anthelic arcs. See also 120° parhelion Anthelion References External links Another photo from a plane Antisolar Region Arcs Atmospheric optical phenomena
Subparhelic circle
[ "Physics" ]
197
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,828,129
https://en.wikipedia.org/wiki/Childbirth%20positions
Childbirth positions (or maternal birthing positions) are the physical postures that the pregnant mother may assume during the process of childbirth. They may also be referred to as delivery positions or labor positions. In addition to the lithotomy position (on back with feet pulled up), still commonly used by many obstetricians, other positions are successfully used by midwives and traditional birth-attendants around the world. Engelmann's seminal 1882 work "Labor among primitive peoples" publicised the childbirth positions amongst primitive cultures to the Western world. They frequently use squatting, standing, kneeling and all fours positions, often in a sequence. They are referred to as upright birth positions. Understanding the physical effects of each birthing position on the mother and baby is important. However, the psychological effects are crucial as well. Knowledge about birthing positions can help mothers choose the option they are most comfortable with. Having the agency and self-control to change positions in labor positively influences the mother's comfort and birthing experience, which increases the birthing outcome and her satisfaction with labor. Lithotomy position In the lithotomy position, the birthing person is lying on their back with their legs up in stirrups and their buttocks close to the edge of the table. This position is convenient for the caregiver because it permits them more access to the perineum. The position has been largely popular in the US and other Western countries over the last two centuries, though cross-culturally and historically, it is very rare (about 18%) for people to assume a prone or dorsal position during childbirth. Reclining positions became common in France during the 17th century, as obstetrics became a more respected field and ideas of birth being an affliction rather than natural became widespread. The standard of women lying flat on their back originated in the early 19th century in the US, and was recommended by male obstetricians on the basis of claims that the position was both more convenient for attending medical staff and that women would be more comfortable lying on their backs. However, the lithotomy is not a comfortable position for most patients, considering the pressure on the vaginal walls because the baby's head is uneven and the labor process is working against gravity. Research suggests that maternal bodily positions have a notable influence on pain experienced during labor and delivery, and that positions such as squatting show significantly reduced pain compared to the lithotomy position. Upright birth positions in general Various people have promoted the adoption of upright birthing positions, particularly squatting, for Western countries, such as Grantly Dick-Read, Janet Balaskas, Moysés Paciornik and Hugo Sabatino. The adoption of the non-lithotomy positions is also promoted by the natural childbirth movement. Being upright during labour and birth can increase the available space within the pelvis by 28–30% giving more room to the baby for rotation and descent. There is also a 54% decreased incidence of foetal heart rate abnormalities when the mother is upright. These birthing positions can also reduce the duration of the second stage of labour as well as reduce the risk for emergency caesarian sections by 29%. They are also associated with the lower need for epidural. Different positions may be associated with different rates of perineal injury. 
Squatting position The squatting position gives a greater increase of pressure in the pelvic cavity with minimal muscular effort. The birth canal will open 20 to 30% more in a squat than in any other position. It is recommended for the second stage of childbirth. As most Western adults find it difficult to squat with heels down, compromises are often made such as putting a support under the elevated heels or another person supporting the squatter. In ancient Egypt, women delivered babies while squatting on a pair of bricks, known as birth bricks. All-fours Some mothers may choose the all-fours position instinctively. It can help the baby turn around in the case of a malpresentation of the head. Since this position uses gravity, it decreases back pain, as the mother is able to tilt her hips. Side lying Side lying may help slow the baby's descent down the birth canal, thereby giving the perineum more time to naturally stretch. To assume this position, the mother lies on her side with her knees bent. To push, a slight rolling movement is needed, such that the mother is propped up on one elbow while one leg is held up. This position does not use gravity but still holds an advantage over the lithotomy position, as it does not place the venae cavae under the uterus, a position that decreases blood flow to mother and child. See also Defecation postures Sexual positions Female urination device, enabling an atypical standing position for female urination References Childbirth Human positions Midwifery Squatting position
Childbirth positions
[ "Biology" ]
989
[ "Behavior", "Human positions", "Human behavior" ]
10,828,381
https://en.wikipedia.org/wiki/Atomic%20Clock%20Ensemble%20in%20Space
Atomic Clock Ensemble in Space (ACES) is a project led by the European Space Agency which will place ultra-stable atomic clocks on the International Space Station. Operation in the microgravity environment of the ISS will provide a stable and accurate time base for different areas of research, including general relativity and string theory tests, time and frequency metrology, and very long baseline interferometry. The payload actually contains two clocks: a laser-cooled caesium atomic clock (PHARAO), developed by CNES, France, for long-term stability, and an active hydrogen maser (SHM), developed by Spectratime, Switzerland, for short-term stability. The onboard frequency comparison between PHARAO and SHM will be a key element for the evaluation of the accuracy and the short/medium-term stability of the PHARAO clock. Further, it will make it possible to identify the optimal operating conditions for PHARAO and to select a compromise between frequency accuracy and stability. The mission will also be a test-bed for the space qualification of the active hydrogen maser SHM. After optimisation, performance in the 10⁻¹⁶ range for both frequency instability and inaccuracy is intended. This corresponds to a time error of about 1 second over 300 million (3×10⁸) years. After earlier plans for launch readiness in 2012, the clock ensemble was expected to travel to the space station aboard a SpaceX Falcon 9 in 2021. Major delays due to difficulties in the development and test of the active hydrogen maser and the time transfer microwave system have extended the launch to 2025. The ACES module will be externally mounted to ESA's Columbus Laboratory with an 18-30 month expected operations phase. See also Scientific research on the ISS European contribution to the International Space Station References External links ACES factsheet by the ESA (PDF) International Space Station experiments Columbus (ISS module) Space science Atomic clocks European Space Agency
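As a rough, back-of-the-envelope check of the figures quoted above (a sketch added here, not taken from the mission documentation), a constant fractional frequency offset of about 10⁻¹⁶ does accumulate to roughly one second over some 300 million years:

# Time error accumulated by a clock with a constant fractional frequency offset.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 3.156e7 s
fractional_offset = 1e-16                  # order of magnitude quoted for ACES
years = 300e6                              # 3e8 years

error_seconds = fractional_offset * years * SECONDS_PER_YEAR
print(error_seconds)                       # about 0.95 s, i.e. roughly 1 second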
Atomic Clock Ensemble in Space
[ "Astronomy" ]
381
[ "Space science", "Outer space" ]
10,828,674
https://en.wikipedia.org/wiki/Phenadoxone
Phenadoxone (trade names Heptalgin, Morphidone, and Heptazone) is an opioid analgesic of the open chain class (methadone and relatives) invented in Germany by Hoechst in 1947. It is one of a handful of useful synthetic analgesics which were used in the United States for various lengths of time in the 20 or so years after the end of the Second World War but which were withdrawn from the market for various or no known reason and which now are mostly in Schedule I of the United States' Controlled Substances Act of 1970, or (like phenazocine and bezitramide) in Schedule II but not produced or marketed in the US. Others on this list are ketobemidone (Ketogin), dextromoramide (Dimorlin, Palfium and others), phenazocine (Narphen and Prinadol), dipipanone (Diconal, Pipadone and Wellconal), piminodine (Alvodine), propiram (Algeril), anileridine (Leritine) and alphaprodine (Nisentil). Phenadoxone has a US DEA ACSCN of 9637 and has had a zero annual manufacturing quota under the Controlled Substances Act 1970. Its withdrawal from US distribution prior to the promulgation of said act is a large part of its Schedule I designation; it is, however, used as a legitimate medication in other countries and consumption is increasing worldwide as indicated below. Phenadoxone is generally considered to be a strong opioid analgesic and is regulated in much the same way as morphine where it is used. The usual starting dose is 10–20 mg and it has a duration of analgesic effect of 1 to 4 hours. Phenadoxone is not used at this time for purposes other than pain relief. Like its drug subcategory prototype methadone, phenadoxone can be used as the opioid analgesic in Brompton cocktail. Phenadoxone is most used at the current time in Denmark and various countries in eastern Europe. References 4-Morpholinyl compounds Ketones Mu-opioid receptor agonists Synthetic opioids Benzhydryl compounds
Phenadoxone
[ "Chemistry" ]
489
[ "Ketones", "Functional groups" ]
10,829,007
https://en.wikipedia.org/wiki/Slope%20car
A slope car is a small automated monorail, or a fusion between monorail, people mover, inclined elevator and rack railway. It is a brand name of Kaho Manufacturing. Since this mode of transportation is relatively unknown, it lacks a widely accepted generic name, other than the simple "monorail". The system is different from normal modern monorails in many ways. It is a development from industrial monorails used in 1960s orchards. Slope cars are installed in more than 80 places in Japan and South Korea. Overview The system is introduced generally when there are steep slopes or stairs between entrance gates and buildings. Slope cars generally function as amenities that provide accessibility for elderly or handicapped people visiting particular places, such as parks, golf courses, or hotels. As most lines move fairly slowly, people without disabilities often find it faster to walk the same routes on foot, rather than to use slope cars. However, there are also places where slope cars climb very steep slopes which people without disabilities can not climb unless there are stairs. In Japan, slope cars are not legally considered railways. System There is a type with a capacity of 4 to 8 passengers, and another, longer type with a capacity of around 30 passengers. Some slope cars are "trainsets" consisting of two cars. Most slope cars are straddle-beam monorails, but there are suspended monorail slope cars as well. Normal monorails generally use rubber tyres running on a concrete beam, while slope cars use a steel beam with a rack rail on one side. As such, slope cars can climb 100% (45°) slopes at maximum speed. The system is powered by a "third rail" on the other side of the beam. The system does not require a driver. A car starts when a user pushes a button, and it automatically stops at the selected destination. History In 1966, Yoneyama Industory, an agricultural machinery maker in Matsuyama, Ehime Prefecture, invented a freight-only rack monorail system. It soon became widespread in mikan citrus orchards in the prefecture, and in other parts of Japan. Other makers also started to build similar systems. Later in 1990, a company called Chigusa developed a passenger rack monorail system. These rack monorails were first used to transport workers in construction sites or forests. However, from the 1990s, public facilities such as parks also started to use the system. Kaho Manufacturing started to sell their "slope cars" in 1990. Similar systems were designed for vineyards in Switzerland and Germany in the 1960s. These were also transporting workers from the start. The brand name Monorack is used here for the Garaventa Monorackbahn since 1976. The main difference is the type of rail being used - the Japanese systems use a different rail profile, while the European systems use square tubing. The cooperation between Nikkari in Japan and Habegger in Switzerland started in 1975, so the Monorack tractors are mostly identical. Other names As "slope car" is the brand name of Kaho Manufacturing, similar, if not the same, concepts are called differently by different manufacturers. Ansaku, Chigusa, Monorail Industry and Senyō Kōgyō each market the system under their own product names. EMTC of Korea makes the Mountain Type (which has two rails) and the monorail Inclined Type and Locomotive Type Doppelmayr Garaventa makes the Monorack for agricultural use. They say they have installed 650 systems worldwide. Slope cars are similar in some ways to personal rapid transit systems in that they offer on-demand service for individuals or small numbers of passengers. 
List of slope cars Japan South Korea See also Funicular Monorail Monorails in Japan People mover Personal rapid transit Rack railway inclined elevator References External links Kaho Manufacturing official website Ansaku official website Chigusa official website Korea Monorail official website, South Korean agency of Kaho Manufacturing. Senyō Kōgyō official website EMCT Smart Monorail official website Monorails in Japan Vertical transport devices Driverless monorails
Slope car
[ "Technology" ]
814
[ "Vertical transport devices", "Transport systems" ]
10,829,022
https://en.wikipedia.org/wiki/Liljequist%20parhelion
A Liljequist parhelion is a rare halo, an optical phenomenon in the form of a brightened spot on the parhelic circle approximately 150–160° from the sun; i.e., between the position of the 120° parhelion and the anthelion. When the sun touches the horizon, a Liljequist parhelion is located approximately 160° from the sun and is about 10° long. As the sun rises up to 30° the phenomenon gradually moves towards 150°, and as the sun reaches over 30° the optical effect vanishes. The parhelia are caused by light rays passing through oriented plate crystals. The phenomenon was first observed by Gösta Hjalmar Liljequist in 1951 at Maudheim, Antarctica during the Norwegian–British–Swedish Antarctic Expedition in 1949–1952. It was then simulated by Dr. Eberhard Tränkle (1937–1997) and Robert Greenler in 1987 and theoretically explained by Walter Tape in 1994. A theoretical and experimental investigation of the Liljequist parhelion caused by perfect hexagonal plate crystals showed that the azimuthal position of maximum intensity occurs at , where the refractive index to use for the angle of total internal reflection is Bravais' index for inclined rays, i.e. for a solar elevation . For ice at zero solar elevation this angle is . The dispersion of ice causes a variation of this angle, leading to a blueish/cyan coloring close to this azimuthal coordinate. The halo ends towards the anthelion at an angle . See also Sun dog References External links A fish eye photo by Günter Röttler, Hagen, September 1983 featuring a parhelic circle with a 120° parhelion and a Liljequist parhelion. List of observations (pick Liljequist parhelia as a halo filter.) Atmospheric optical phenomena
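The Bravais index for inclined rays mentioned above has a standard form in halo optics, n' = √(n² − sin²e)/cos e for a solar elevation e; the following sketch evaluates it for ice, on the assumption that this is the convention intended by the cited analysis:

import math

def bravais_index(n, elevation_deg):
    # Bravais' effective refractive index for a ray inclined at elevation e:
    # n' = sqrt(n**2 - sin(e)**2) / cos(e); at e = 0 it reduces to n itself.
    e = math.radians(elevation_deg)
    return math.sqrt(n * n - math.sin(e) ** 2) / math.cos(e)

n_ice = 1.31                        # approximate refractive index of ice in visible light
print(bravais_index(n_ice, 0.0))    # 1.31 at zero solar elevation
print(bravais_index(n_ice, 20.0))   # about 1.35 for a ray at 20° elevation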
Liljequist parhelion
[ "Physics" ]
385
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,830,506
https://en.wikipedia.org/wiki/Face%20diagonal
In geometry, a face diagonal of a polyhedron is a diagonal on one of the faces, in contrast to a space diagonal passing through the interior of the polyhedron. A cuboid has twelve face diagonals (two on each of the six faces), and it has four space diagonals. The cuboid's face diagonals can have up to three different lengths, since the faces come in congruent pairs and the two diagonals on any face are equal. The cuboid's space diagonals all have the same length. If the edge lengths of a cuboid are a, b, and c, then the distinct rectangular faces have edges (a, b), (a, c), and (b, c); so the respective face diagonals have lengths √(a² + b²), √(a² + c²), and √(b² + c²). Thus each face diagonal of a cube with side length a is a√2. A regular dodecahedron has 60 face diagonals (and 100 space diagonals). References Elementary geometry
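A short sketch of the arithmetic above (an illustration added here, not part of the cited reference):

import math

def face_diagonals(a, b, c):
    # The three distinct face-diagonal lengths of an a x b x c cuboid.
    return (math.hypot(a, b), math.hypot(a, c), math.hypot(b, c))

print(face_diagonals(3.0, 4.0, 12.0))  # (5.0, 12.369..., 12.649...)
print(face_diagonals(1.0, 1.0, 1.0))   # a cube: every value equals sqrt(2)

# Face-diagonal count of a regular dodecahedron: 12 pentagonal faces,
# each with 5 * (5 - 3) / 2 = 5 diagonals, giving 60 in total.
print(12 * (5 * (5 - 3) // 2))         # 60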
Face diagonal
[ "Mathematics" ]
197
[ "Elementary mathematics", "Elementary geometry" ]
10,831,433
https://en.wikipedia.org/wiki/Douglas%20sea%20scale
The Douglas sea scale is a scale which measures the height of the waves and also measures the swell of the sea. The scale is very simple to follow and is expressed in one of 10 degrees. The scale The Douglas sea scale, also called the "international sea and swell scale", was devised in 1921 by Captain H. P. Douglas, who later became vice admiral Sir Percy Douglas and hydrographer of the Royal Navy. Its purpose is to estimate the roughness of the sea for navigation. The scale has two codes: one code is for estimating the sea state, the other code is for describing the swell of the sea. State of the sea (wind sea) The Degree (D) value has an almost linear dependence on the square root of the average wave Height (H) above, i.e., D ≈ β√H. Using linear regression on the table above, a coefficient β can be calculated separately for the low Height values and for the high Height values. Then the Degree can be approximated as the average of the low and high estimates, i.e. D ≈ [½(β_low + β_high)√H], where [.] is the optional rounding to the closest integer value. Without the rounding to integer, the root mean square error of this approximation is: . Swell Wave length and height classification Wavelength Short wave 100 m – Average wave 100–200 m Long wave 201 m + Wave height Low wave 2 m – Moderate wave 2–4 m High wave 4.01 m + See also Beaufort scale Fujita scale Saffir–Simpson hurricane scale Sea state Significant wave height TORRO scale References External links EuroWEATHER–Douglas scale Hazard scales Water waves
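A sketch of the approximation described above; the two regression coefficients are left as inputs because their fitted values are not reproduced here:

import math

def douglas_degree(height_m, beta_low, beta_high, rounded=True):
    # Average of the low-range and high-range fits, each of the form D = beta * sqrt(H),
    # optionally rounded to the closest integer degree.
    d = 0.5 * (beta_low + beta_high) * math.sqrt(height_m)
    return round(d) if rounded else d

# Example usage (coefficients would come from regression on the table):
# douglas_degree(2.5, beta_low=..., beta_high=...)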
Douglas sea scale
[ "Physics", "Chemistry" ]
328
[ "Water waves", "Waves", "Physical phenomena", "Fluid dynamics" ]
10,831,486
https://en.wikipedia.org/wiki/Tony%20Hey
Anthony John Grenville Hey (born 17 August 1946) was vice-president of Microsoft Research Connections, a division of Microsoft Research, until his departure in 2014. Education Hey was educated at King Edward's School, Birmingham and the University of Oxford. He graduated with a Bachelor of Arts degree in physics in 1967, and a Doctor of Philosophy in theoretical physics in 1970 supervised by P. K. Kabir. He was a student of Worcester College, Oxford and St John's College, Oxford. Career and research From 1970 through 1972 Hey was a postdoctoral fellow at California Institute of Technology (Caltech). Moving to Pasadena, California, he worked with Richard Feynman and Murray Gell-Mann, both winners of the Nobel Prize in Physics. He then moved to Geneva, Switzerland and worked as a fellow at CERN (the European organisation for nuclear research) for two years. Hey worked about thirty years as an academic at the University of Southampton, starting in 1974 as a particle physicist. He spent 1978 as a visiting fellow at Massachusetts Institute of Technology. In 1981 he returned to Caltech as a visiting research professor. There he learned of Carver Mead's work on very-large-scale integration and became interested in applying parallel computing techniques to large-scale scientific simulations. Hey worked with British semiconductor company Inmos on the Transputer project in the 1980s. He switched to computer science in 1985, and in 1986 became professor of computation in the Department of Electronics and Computer Science at Southampton. While there, he was promoted to Head of the School of Electronics and Computer Science in 1994 and Dean of Engineering and Applied Science in 1999. Among his work was "doing research on Unix with tools like LaTeX." In 1990 he was a visiting fellow at the Thomas J. Watson Research Center of IBM Research. He then worked with Jack Dongarra, Rolf Hempel and David Walker to define the Message Passing Interface (MPI), which became a de facto open standard for parallel scientific computing. In 1998 he was a visiting research fellow at Los Alamos National Laboratory in the USA. Hey led the UK's e-Science Programme from March 2001 to June 2005. He was appointed corporate vice-president of technical computing at Microsoft on 27 June 2005. Later he became corporate vice-president of external research, and in 2011 corporate vice-president of Microsoft Research Connections until his departure in 2014. Since 2015, Hey has held the position of Chief Data Scientist at the UK's Science and Technology Facilities Council, and is a Senior Data Science Fellow at the University of Washington eScience Institute. Hey is the editor of the journal Concurrency and Computation: Practice and Experience. Among other scientific advisory boards in Europe and the United States, he is a member of the Global Grid Forum (GGF) Advisory Committee. Publications Hey has authored or co-authored a number of books including The Fourth Paradigm: Data-Intensive Scientific Discovery, The Quantum Universe, The New Quantum Universe, The Feynman Lectures on Computation and Einstein's Mirror. Hey has also authored numerous peer-reviewed journal papers. His latest book is a popular book on computer science called The Computing Universe: A Journey through a Revolution. Awards and honours Hey had an open scholarship to Worcester College, Oxford, from 1963 to 1967, won the Scott Prize for Physics in 1967, senior scholarship to St John's College, Oxford, in 1968 and was a Harkness Fellow from 1970 through 1972. 
Hey was made a Commander of the Order of the British Empire (CBE) in 2005. He was elected a Fellow of the British Computer Society (FBCS) in 1996, the Institute of Physics (FInstP) and the Institution of Electrical Engineers in 1996 and the Royal Academy of Engineering (FREng) in 2001. In 2006 he presented the prestigious IET Pinkerton Lecture. In 2007 he was awarded an honorary Doctor of Civil Law degree from Newcastle University. In 2017 he was elected a Fellow of the Association for Computing Machinery (ACM). References E-Science English physicists English science writers Living people Harkness Fellows Fellows of the British Computer Society Fellows of the Institute of Physics Fellows of the Institution of Engineering and Technology Fellows of the Royal Academy of Engineering 2016 fellows of the Association for Computing Machinery 1946 births Commanders of the Order of the British Empire Academics of the University of Southampton Alumni of Worcester College, Oxford People associated with CERN
Tony Hey
[ "Engineering" ]
886
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
10,831,708
https://en.wikipedia.org/wiki/Spinosad
Spinosad is an insecticide based on chemical compounds found in the bacterial species Saccharopolyspora spinosa. The genus Saccharopolyspora was discovered in 1985 in isolates from crushed sugarcane. The bacteria produce yellowish-pink aerial hyphae, with bead-like chains of spores enclosed in a characteristic hairy sheath. This genus is defined as aerobic, Gram-positive, nonacid-fast actinomycetes with fragmenting substrate mycelium. S. spinosa was isolated from soil collected inside a nonoperational sugar mill rum still in the Virgin Islands. Spinosad is a mixture of chemical compounds in the spinosyn family that has a generalized structure consisting of a unique tetracyclic ring system attached to an amino sugar (D-forosamine) and a neutral sugar (tri-Ο-methyl-L-rhamnose). Spinosad is relatively nonpolar and not easily dissolved in water. Spinosad is a novel mode-of-action insecticide derived from a family of natural products obtained by fermentation of S. spinosa. Spinosyns occur in over 20 natural forms, and over 200 synthetic forms (spinosoids) have been produced in the lab. Spinosad contains a mix of two spinosoids, spinosyn A, the major component, and spinosyn D (the minor component), in a roughly 17:3 ratio. Mode of action Spinosad is highly active, by both contact and ingestion, in numerous insect species. Its overall protective effect varies with insect species and life stage. It affects certain species only in the adult stage, but can affect other species at more than one life stage. The species subject to very high rates of mortality as larvae, but not as adults, may gradually be controlled through sustained larval mortality. The mode of action of spinosoid insecticides is by a neural mechanism. The spinosyns and spinosoids have a novel mode of action, primarily targeting binding sites on nicotinic acetylcholine receptors (nAChRs) of the insect nervous system that are distinct from those at which other insecticides have their activity. Spinosoid binding leads to disruption of acetylcholine neurotransmission. Spinosad also has secondary effects as a γ-amino-butyric acid (GABA) neurotransmitter agonist. It kills insects by hyperexcitation of the insect nervous system. Spinosad has proven not to cause cross-resistance to any other known insecticide. Uses Spinosad has been used around the world for the control of a variety of insect pests, including Lepidoptera, Diptera, Thysanoptera, Coleoptera, Orthoptera, and Hymenoptera, and many others. It was first registered as a pesticide in the United States for use on crops in 1997. Its labeled use rate is set at 1 ppm (1 mg a.i./kg of grain) and its maximum residue limit (MRL) or tolerance is set at 1.5 ppm. Spinosad's widespread commercial launch was deferred, awaiting final MRL or tolerance approvals in a few remaining grain-importing countries. It is considered a natural product, thus is approved for use in organic agriculture by numerous nations. Two other uses for spinosad are for pets and humans. Spinosad has been used in oral preparations (as Comfortis) to treat C. felis, the cat flea, in canines and felines; the optimal dose set for canines is reported to be 30 mg/kg. Spinosad is sold under the brand names, Comfortis, Trifexis, and Natroba. Trifexis also includes milbemycin oxime. Comfortis and Trifexis brands treat adult fleas on pets; the latter also prevents heartworm disease. Natroba is sold for treatment of human head lice. Spinosad is also commonly used to kill thrips. 
Comfortis and Trifexis were withdrawn in the European Union. Spinosyn A Spinosyn A does not appear to interact directly with known insecticidal-relevant target sites, but rather acts via a novel mechanism. Spinosyn A resembles a GABA antagonist, and its effect is comparable to that of avermectin on insect neurons. Spinosyn A is highly active against neonate larvae of the tobacco budworm, Heliothis virescens, and is slightly more biologically active than . In general, spinosyns possessing a methyl group at C6 (spinosyn D-related analogs) tend to be more active and less affected by changes in the rest of the molecule. Spinosyn A is slow to penetrate to the internal fluids of larvae; it is also poorly metabolized once it enters the insect. The apparent lack of spinosyn A metabolism may contribute to its high level of activity, and may compensate for the slow rate of penetration. Resistance Spinosad resistance has been found in Musca domestica, Plutella xylostella, Bactrocera dorsalis, Frankliniella occidentalis, and Cydia pomonella. Safety and ecotoxicology Spinosad has high efficacy, a broad insect pest spectrum, low mammalian toxicity, and a good environmental profile, a unique feature of the insecticide compared to others currently used for the protection of grain products. It is regarded as natural product-based, and approved for use in organic agriculture by numerous national and international certifications. Spinosad residues are highly stable on grains stored in bins, with protection ranging from 6 months to 2 years. Ecotoxicology parameters have been reported for spinosad, and are: in rat (Rattus norvegicus), acute oral: LD50 >5000 mg/kg (nontoxic) in rat (R. norvegicus), acute dermal: LD50 >2000 mg/kg (nontoxic) in California quail (Callipepla californica), oral toxicity: LD50 >2000 mg/kg (nontoxic) in duck (Anas platyrhynchos domestica), dietary toxicity: LC50 >5000 mg/kg (nontoxic) in rainbow trout (Oncorhynchus mykiss), LC50-96h = 30.0 mg/L (slightly toxic) in honeybee (Apis mellifera), LD50 = 0.0025 mg/bee (highly toxic if directly sprayed on and of dried residues). Chronic exposure studies failed to induce tumor formation in rats and mice; mice given up to 51 mg/kg/day for 18 months resulted in no tumor formation. Similarly, administration of 25 mg/kg/day to rats for 24 months did not result in tumor formation. References Further reading Biological pest control Biopesticides Cat medications Dog medications Insecticides Organic gardening Rhamnosides Sustainable agriculture Withdrawn drugs
Spinosad
[ "Chemistry" ]
1,463
[ "Drug safety", "Withdrawn drugs" ]
10,831,865
https://en.wikipedia.org/wiki/Swimming%20pool%20sanitation
Swimming pool sanitation is the process of ensuring healthy conditions in swimming pools. Proper sanitation is needed to maintain the visual clarity of water and to prevent the transmission of infectious waterborne diseases. Methods Two distinct and separate methods are employed in the sanitation of a swimming pool. The filtration system removes organic waste on a daily basis by using the sieve baskets inside the skimmer and circulation pump and the sand unit with a backwash facility for easy removal of organic waste from the water circulation. Disinfection - normally in the form of hypochlorous acid (HClO) - kills infectious microorganisms. Alongside these two distinct measures within the pool owner's jurisdiction, swimmer hygiene and cleanliness help reduce organic waste build-up. Guidelines The World Health Organization has published international guidelines for the safety of swimming pools and similar recreational-water environments, including standards for minimizing microbial and chemical hazards. The United States Centers for Disease Control and Prevention also provides information on pool sanitation and water related illnesses for health professionals and the public. The main organizations providing certifications for pool and spa operators and technicians are the National Swimming Pool Foundation and Association of Pool & Spa Professionals. The certifications are accepted by many state and local health departments. Contaminants and disease Swimming pool contaminants are introduced from environmental sources and swimmers. Affecting primarily outdoor swimming pools, environmental contaminants include windblown dirt and debris, incoming water from unsanitary sources, rain containing microscopic algae spores and droppings from birds possibly harboring disease-causing pathogens. Indoor pools are less susceptible to environmental contaminants. Contaminants introduced by swimmers can dramatically influence the operation of indoor and outdoor swimming pools. Contaminants include micro-organisms from infected swimmers and body oils including sweat, cosmetics, suntan lotion, urine, saliva and fecal matter; for example, it was estimated by researchers that swimming pools contain, on average, 30 to 80 mL of urine for each person that uses the pool. In addition, the interaction between disinfectants and pool water contaminants can produce a mixture of chloramines and other disinfection by-products. The journal Environmental Science & Technology reported that sweat and urine react with chlorine and produce trichloramine and cyanogen chloride, two chemicals dangerous to human health. Nitrosamines are another type of disinfection by-product that is of concern as a potential health hazard. Acesulfame potassium is widely used in the human diet and excreted by the kidneys. It has been used by researchers as a marker to estimate the degree to which swimming pools are contaminated by urine. It was estimated that a commercial-size swimming pool of 220,000 gallons would contain about 20 gallons of urine, equivalent to about 2 gallons of urine in a typical residential pool. Pathogenic contaminants are of greatest concern in swimming pools as they have been associated with numerous recreational water illnesses (RWIs). Public health pathogens can be present in swimming pools as viruses, bacteria, protozoa and fungi. 
Diarrhea is the most commonly reported illness associated with pathogenic contaminants, while other diseases associated with untreated pools are cryptosporidiosis and giardiasis. Other illnesses commonly occurring in poorly maintained swimming pools include otitis externa, commonly called swimmer's ear, skin rashes and respiratory infections. Maintenance and hygiene Contamination can be minimized by good swimmer hygiene practices such as showering before and after swimming, and not letting children with intestinal disorders swim. Effective treatments are needed to address contaminants in pool water because preventing the introduction of pool contaminants, pathogenic and non-pathogenic, into swimming pools is, in practice, impossible. A well-maintained, properly operating pool filtration and re-circulation system is the first barrier, combating the contaminants large enough to be filtered. Rapid removal of these filterable contaminants reduces the impact on the disinfection system, thereby limiting the formation of chloramines, restricting the formation of disinfection by-products and optimizing sanitation effectiveness. To kill pathogens and help prevent recreational water illnesses, pool operators must maintain proper levels of chlorine or another sanitizer. Over time, calcium from municipal water tends to accumulate, forming salt deposits on the swimming pool walls and in equipment (filters, pumps), reducing their effectiveness. Therefore, it is advised either to completely drain the pool and refill it with fresh water, or to recycle the existing pool water using reverse osmosis. The advantage of the latter method is that 90% of the water can be reused. Pool operators must also store and handle cleaning and sanitation chemicals safely. Prevention of diseases in swimming pools and spas Disease prevention should be the top priority for every water quality management program for pool and spa operators. Disinfection is critical to protect against pathogens, and is best managed through routine monitoring and maintenance of chemical feed equipment to ensure optimum chemical levels in accordance with state and local regulations. Chemical parameters include disinfectant levels according to regulated pesticide label directions. pH should be kept between 7.2 and 7.8. Human tears have a pH of about 7.4, making this an ideal set point for pool water. More often than not, it is improper pH and not the sanitizer that is responsible for irritating swimmers' skin and eyes. Total alkalinity should be 80–120 ppm and calcium hardness between 200 and 400 ppm. Good hygienic behavior at swimming pools is also important for reducing health risk factors at swimming pools and spas. Showering before swimming can reduce introduction of contaminants to the pool, and showering again after swimming will help to remove any that may have been picked up by the swimmer. Those with diarrhea or other gastroenteritis illnesses should not swim within 2 weeks of an outbreak, especially children. Cryptosporidium is chlorine-resistant. In order to minimize exposure to pathogens, swimmers should avoid getting water into their mouths, and should never swallow pool or spa water. Standards Maintaining an effective concentration of disinfectant is critically important in assuring the safety and health of swimming pool and spa users. 
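The recommended chemical parameters above (pH 7.2–7.8, total alkalinity 80–120 ppm, calcium hardness 200–400 ppm) lend themselves to a simple range check. The following Python sketch is only illustrative: the function name, the hard-coded ranges and the sample readings are assumptions taken from the figures quoted in this article, not from any pool-industry standard or library.

# Illustrative range check for the pool-water parameters quoted above.
# The ranges come from this article; real programs should use local regulations.
RANGES = {
    "ph": (7.2, 7.8),
    "total_alkalinity_ppm": (80, 120),
    "calcium_hardness_ppm": (200, 400),
}

def check_reading(name, value):
    """Return a short status string for one measured parameter."""
    low, high = RANGES[name]
    if value < low:
        return f"{name} = {value}: below recommended range ({low}-{high})"
    if value > high:
        return f"{name} = {value}: above recommended range ({low}-{high})"
    return f"{name} = {value}: within recommended range"

# Hypothetical readings from a test kit:
for name, value in [("ph", 7.5), ("total_alkalinity_ppm", 70), ("calcium_hardness_ppm", 250)]:
    print(check_reading(name, value))

A fuller version would also carry the disinfectant level itself, which depends on the product label and local rules rather than a single fixed range.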
When any of these pool chemicals are used, it is very important to keep the pH of the pool in the range 7.2 to 7.8 – according to the Langelier Saturation Index, or 7.8 to 8.2 – according to the Hamilton Index; higher pH drastically reduces the sanitizing power of the chlorine due to reduced oxidation-reduction potential (ORP), while lower pH produces more rapid loss of chlorine and causes bather discomfort, especially to the eyes. However, according to the Hamilton Index, a higher pH can reduce unnecessary chlorine consumption while still remaining effective at preventing algae and bacteria growth. To help ensure the health of bathers and protect pool equipment, it is essential to perform routine monitoring of water quality factors (or "parameters") on a regular basis. This process becomes the essence of an optimum water quality management program. Systems and disinfection methods Chlorine and bromine methods Conventional halogen-based oxidizers such as chlorine and bromine are convenient and economical primary sanitizers for swimming pools and provide a residual level of sanitizer that remains in the water. Chlorine-releasing compounds are the most popular and frequently used in swimming pools whereas bromine-releasing compounds have found heightened popularity in spas and hot tubs. Both are members of the halogen group with demonstrated ability to destroy and deactivate a wide range of potentially dangerous bacteria and viruses in swimming pools and spas. Both exhibit three essential elements as ideal first-line-of-defense sanitizers for swimming pools and spas: they are fast-acting and enduring, they are effective algaecides, and they oxidize undesired contaminants. Swimming pools can be disinfected with a variety of chlorine-releasing compounds. The most basic of these compounds is molecular chlorine (Cl2); however, its application is primarily in large commercial public swimming pools. Inorganic forms of chlorine-releasing compounds frequently used in residential and public swimming pools include sodium hypochlorite commonly known as liquid bleach or simply bleach, calcium hypochlorite and lithium hypochlorite. Chlorine residuals from Cl2 and inorganic chlorine-releasing compounds break down rapidly in sunlight. To extend their disinfectant usefulness and persistence in outdoor settings, swimming pools treated with one or more of the inorganic forms of chlorine-releasing compounds can be supplemented with cyanuric acid – a granular stabilizing agent capable of extending the active chlorine residual half-life (t½) by four to sixfold. Chlorinated isocyanurates, a family of organic chlorine-releasing compounds, are stabilized to prevent UV degradation due to the presence of cyanurate as part of their chemical backbone. These are commonly sold for general use in small summer pools, where the water is expected to be used for only a few months and is expected to be regularly topped up with fresh, due to evaporation and splash loss. It is important to change the water frequently, otherwise, levels of cyanuric acid will build up to beyond the point at which the mechanism functions. Excess cyanurates will actually work in reverse and will inhibit the chlorine. A steadily lowering pH value of the water may at first be noticed. Algal growth may become visible, even though chlorine tests show sufficient levels. Chlorine reacting with urea in urine and other nitrogen-containing wastes from bathers can produce chloramines. 
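Returning briefly to water balance: in the pool industry the Langelier Saturation Index mentioned above is normally computed as the pH plus tabulated temperature, calcium-hardness and alkalinity factors, minus a total-dissolved-solids constant. The sketch below uses one commonly published approximation (logarithmic factors and a constant of 12.1); exact factor tables vary between references, so treat it as an illustration of the idea rather than a definitive formula.

import math

def langelier_saturation_index(ph, temp_factor, calcium_hardness_ppm, alkalinity_ppm, tds_constant=12.1):
    """
    Approximate pool-water LSI.

    temp_factor is a tabulated value (roughly 0.6-0.8 for typical pool
    temperatures); calcium hardness and total alkalinity are in ppm as CaCO3.
    A result near 0 suggests balanced water, negative values corrosive water,
    positive values scale-forming water.
    """
    calcium_factor = math.log10(calcium_hardness_ppm) - 0.4
    alkalinity_factor = math.log10(alkalinity_ppm)
    return ph + temp_factor + calcium_factor + alkalinity_factor - tds_constant

# Example: pH 7.5, warm pool water (temperature factor ~0.7), 300 ppm hardness, 100 ppm alkalinity
print(round(langelier_saturation_index(7.5, 0.7, 300, 100), 2))   # about 0.18, roughly balanced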
Chloramines typically occur when an insufficient amount of chlorine is used to disinfect a contaminated pool. Chloramines are generally responsible for the noxious, irritating smell prominently occurring in indoor pool settings. A common way to remove chloramines is to "superchlorinate" (commonly called "shocking") the pool with a high dose of inorganic chlorine sufficient to deliver 10 ppm chlorine. Regular superchlorination (every two weeks in summer) helps to eliminate these unpleasant odors in the pool. Levels of chloramines and other volatile compounds in water can be minimized by reducing contaminants that lead to their formation (e.g., urea, creatinine, amino acids and personal care products) as well as by use of non-chlorine "shock oxidizers" such as potassium peroxymonosulfate. Medium-pressure UV technology is used to control the level of chloramines in indoor pools. It is also used as a secondary form of disinfection to address chlorine-tolerant pathogens. A properly sized and maintained UV system should remove the need to shock for chloramines, although shocking would still be used to address a fecal accident in the pool. UV will not replace chlorine but is used to control the level of chloramines, which are responsible for the odor, irritation, and enhanced corrosion at an indoor pool. Copper ion system Copper ion systems pass an electric current across metal bars (solid copper, or a mixture of copper and silver) to free copper ions into the flow of pool water to kill organisms such as algae in the water and provide a "residual" in the water. Alternative systems also use titanium plates to produce oxygen in the water to help degrade organic compounds. Private pool filtration Water pumps An electrically operated water pump is the prime motivator in recirculating the water from the pool. Water is forced through a filter and then returned to the pool. Using a water pump by itself is often not sufficient to completely sanitize a pool. Commercial and public pool pumps usually run 24 hours a day for the entire operating season of the pool. Residential pool pumps are typically run for 4 hours per day in winter (when the pool is not in use) and up to 24 hours in summer. To save electricity costs, most pools run water pumps for between 6 hours and 12 hours in summer with the pump being controlled by an electronic timer. Most pool pumps available today incorporate a small filter basket as a final measure to prevent leaf or hair contamination from reaching the close-tolerance impeller section of the pump. Filtration units Sand A pressure-fed sand filter is typically placed in line immediately after the water pump. The filter typically contains a medium such as graded sand (called '14/24 Filter Media' in the UK system of grading the size of sand by sifting through a fine brass-wire mesh of 14 to the inch (5.5 per centimeter) to 24 to the inch (9.5 per cm)). A pressure-fed sand filter is termed a 'High Rate' sand filter, and will generally filter out particulates in turbid water no smaller than 10 micrometers in size. These sand filters are periodically 'backwashed' as contaminants reduce water flow and increase back pressure. Indicated by a pressure gauge on the pressure side of the filter reaching into the 'red line' area, the pool owner is alerted to the need to 'backwash' the unit. The sand in the filter will typically last five to seven years before all the "rough edges" are worn off, and the more tightly packed sand no longer works as intended. 
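The 10 ppm superchlorination figure mentioned above is, at bottom, a parts-per-million calculation: 1 ppm corresponds to 1 mg of available chlorine per litre of water. The sketch below works out a dose for a hypothetical pool; the 65% available-chlorine figure is a typical label value for calcium hypochlorite and would in practice be read from the actual product, so both the pool volume and that fraction are assumptions.

def shock_dose_grams(pool_volume_litres, target_ppm=10.0, available_chlorine_fraction=0.65):
    """
    Grams of product needed to raise free chlorine by target_ppm.

    1 ppm equals 1 mg of chlorine per litre, so the required available chlorine
    is volume (L) * target_ppm (mg/L); this is converted to grams and scaled up
    by the product's available-chlorine fraction.
    """
    required_chlorine_mg = pool_volume_litres * target_ppm
    return required_chlorine_mg / 1000.0 / available_chlorine_fraction

# Hypothetical 50,000 litre pool shocked to 10 ppm with a 65% product:
print(round(shock_dose_grams(50_000), 1), "g of product")   # about 769.2 g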
Recommended filtration for public/commercial pools is 1 ton sand per 100,000 liters water (10 ounces avdp. per cubic foot of water; a cubic foot is 7.48 US or 6.23 UK gallons). Introduced in the early 1900s was another type of sand filter – the 'Rapid Sand' filter, whereby water was pumped into the top of a large volume tank (3' 0" or more cube) (1 cubic yard/200US gal/170UK gal/770 liters) containing filter-grade sand and returning to the pool through a pipe at the bottom of the tank. As there is no pressure inside this tank, they were also known as "gravity filters". These types of filters are not greatly effective, and are no longer common in home swimming pools, being replaced by the pressure-fed type filter. Diatomaceous earth Some filters use diatomaceous earth to help filter out contaminants. Commonly referred to as 'D.E.' filters, they exhibit superior filtration capabilities. Often a D.E. filter will trap waterborne contaminants as small as 1 micrometer in size. D.E. filters are banned in some states, as they must be emptied out periodically and the contaminated media flushed down the sewer, causing a problem in some districts' sewage systems. As of 2020, several companies now produce regenerative media filters, sometimes called precoat media filters, which use perlite as the filtration media rather than diatomaceous earth. As of 2021, perlite can safely be flushed down the sewer and is approved and NSF listed for use in the United States. Cartridge filters Other filter media that have been introduced to the residential swimming pool market since 1970 include sand particles and paper-type cartridge filters with a large filter area, arranged in a tightly packed 12" diameter x 24" long (300 mm x 600 mm) accordion-like circular cartridge. These units can be 'daisy-chained' together to collectively filter almost any size home pool. The cartridges are typically cleaned by removal from the filter body and hosing-off down a sewer connection. They are popular where backwashed water from a sand filter is not allowed to be discharged or go into the aquifer. Fabric filters Traditional pool filters vary in the micron particle sizes that they can capture. Fabric filters can capture particles smaller than those captured by standard swimming pool filtration systems. This type of filter connects where the water returns to the pool after passing through a standard filter. They are usually in the form of a bag. With filtration levels as small as 1 micrometer, users can attain much cleaner water than when using a sand or cartridge filter alone. These levels are equal to or better than those of a diatomaceous earth filter. Automated pool cleaners Automated pool cleaners, more commonly known as "automatic pool cleaners", and in particular electric, robotic pool cleaners, provide an extra measure of filtration, and in fact, like handheld vacuums, can microfilter a pool, which a sand filter without flocculation or coagulants is unable to accomplish. These cleaners are independent of the pool's main filter and pump system and are powered by a separate electricity source, usually in the form of a set-down transformer that is kept away from the water in the pool, often on the pool deck. They have two internal motors: one to suck in water through a self-contained filter bag and then return the filtered water at a high speed back into the pool water, and one that is a drive motor connected to tractor-like rubber or synthetic tracks and "brushes" connected by rubber or plastic bands via a metal shaft. 
The brushes, resembling paint rollers, are located on the front and back of the machine, and help to remove contaminating particles from the pool's floor, walls, and, in some designs, even the pool steps (depending on size and configuration). They also direct the particles into the internal filter bag. Other systems Saline chlorination units, electronic oxidation systems, ionization systems, microbe disinfection with ultra-violet lamp systems, and "Tri-Chlor Feeders" are other independent or auxiliary systems for swimming pool sanitation. Consecutive dilution A consecutive dilution system is arranged to remove organic waste in stages after it passes through the skimmer. Waste matter is trapped inside one or more sequential skimmer basket sieves, each having a finer mesh to further dilute contaminant size. Dilution here is defined as the action of making something weaker in force, content, or value. The first basket is placed closely after the skimmer mouth. The second is attached to the circulation pump. Here the 25% of water drawn from the main drain at the bottom of the swimming pool meets the 75% drawn from the surface. The circulation pump sieve basket is easily accessible for service and is to be emptied daily. The third sieve is the sand unit. Here smaller organic waste that has slipped through the previous sieves is trapped by sand. If not removed regularly, organic waste will continue to rot down and affect water quality. The dilution process allows organic waste to be easily removed. Ultimately the sand sieve can be backwashed to remove smaller trapped organic waste which otherwise leaches ammonia and other compounds into the recirculated water. These additional solutes eventually lead to the formation of disinfection by-products (DBP's). The sieve baskets are easily removed daily for cleaning as is the sand unit, which should be back-washed at least once a week. A perfectly maintained consecutive dilution system drastically reduces the build-up of chloramines and other DBP's. The water returned to the pool should have been cleared of all organic waste above 10 microns in size. Mineral sanitizers Mineral sanitizers for the swimming pool and spa use minerals, metals, or elements derived from the natural environment to produce water quality benefits that would otherwise be produced by harsh or synthetic chemicals. Companies are not allowed to sell a mineral sanitizer in the United States unless it has been registered with the United States Environmental Protection Agency (EPA). Currently, two mineral sanitizers are registered with the EPA: one is a silver salt with a controlled release mechanism which is applied to calcium carbonate granules that help neutralize pH; the other uses a colloidal form of silver released into water from ceramic beads. Mineral technology takes advantage of the cleansing and filtering qualities of commonly occurring substances. Silver and copper are well-known oligodynamic substances that are effective in destroying pathogens. Silver has been shown to be effective against harmful bacteria, viruses, protozoa and fungi. Copper is widely used as an algicide. Alumina, derived from aluminates, filters detrimental materials at the molecular level and can be used to control the delivery rate of desirable metals such as copper. Working through the pool or spa filtration system, mineral sanitizers use combinations of these minerals to inhibit algae growth and eliminate contaminants. 
Unlike chlorine or bromine, metals and minerals do not evaporate and do not degrade. Minerals can make the water noticeably softer, and by replacing harsh chemicals in the water they lower the potential for red-eye, dry skin and foul odors. Skimmers Coping apertures Water is typically drawn from the pool via a rectangular aperture in the wall, connected through to a device fitted into one (or more) walls of the pool. The internals of the skimmer are accessed from the pool deck through a circular or rectangular lid, about one foot in diameter. If the pool's water pump is operational, water is drawn from the pool over a floating hinged weir (operating from a vertical position to a 90-degree angle away from the pool, in order to stop leaves and debris being back-flooded into the pool by wave action), and down into a removable "skimmer basket", the purpose of which is to entrap leaves, dead insects and other larger floating debris. The aperture visible from the pool side is typically 1' 0" (300 mm) wide by 6" (150 mm) high, which intersects the water midway through the center of the aperture. Skimmers with apertures wider than this are termed "wide angle" skimmers and may be as much as 2' 0" wide (600 mm). Floating skimmers have the advantage of not being affected by the level of the water, as these are adjusted to work with the rate of pump suction and will retain optimum skimming regardless of water level, leading to a markedly reduced amount of bio-material in the water. Skimmers should always have a leaf basket or filter between them and the pump to avoid blockages in the pipes leading to the pump and filter. Prior to the mid-1970s, most skimmers were made of metal such as copper or stainless steel, in either a large round or square shape. Built-in concrete-pour skimmers were also common on concrete pools before the introduction of PVC skimmers in the late 1960s. Pool re-circulation Water returning from the consecutive dilution system is passed through return jets below the surface. These are designed to impart a turbulent flow as the water enters the pool. The force of this flow is far less than the mass of water in the pool, and the flow takes the least-pressure route upward, where eventually surface tension reforms it into a laminar flow on the surface. As the returned water disturbs the surface, it creates a capillary wave. If the return jets are positioned correctly, this wave creates a circular motion within the surface tension of the water, allowing the water on the surface to slowly circulate around the pool walls. Organic waste floating on the surface through this circulation from the capillary wave is slowly drawn past the mouth of the skimmer, where it is pulled in due to the laminar flow and surface tension over the skimmer weir. In a well-designed pool, circulation caused by the disturbed returned water aids in removing organic waste from the pool surface, directing it to be trapped inside the consecutive dilution system for easy disposal. Many return jets are equipped with a swivel nozzle. Used correctly, this induces deeper circulation, further cleaning the water. Turning the jet nozzles at an angle imparts rotation within the entire depth of pool water. Orientation to the left or right would generate clockwise or anti-clockwise rotation respectively. This has the benefit of cleaning the bottom of the pool and slowly moving sunken inorganic debris to the main drain where it is removed by the circulation pump basket sieve. 
In a correctly constructed pool, rotation of the water caused by the manner in which it is returned from the consecutive dilution system will reduce or even eliminate the need to vacuum the bottom. To gain the maximum rotation force on the main body of water, the consecutive dilution system needs to be as clean and unblocked as possible to allow maximum flow pressure from the pump. As the water rotates, it also disturbs organic waste at lower water layers, forcing it to the top. The rotational force created by the pool return jets is the most important part of cleaning the pool water and pushing organic waste across the mouth of the skimmer. With a correctly designed and operated swimming pool, this circulation is visible and, after a period of time, reaches even the deep end, inducing a low-velocity vortex above the main drain due to suction. Correct use of the return jets is the most effective way of removing disinfection by-products caused by deeper decomposing organic waste and drawing it into the consecutive dilution system for immediate disposal. Heaters Another piece of equipment that may be optioned in the recirculation system is a pool water heater. Heaters can be heat pumps, natural gas or propane gas heaters, electric heaters, wood-burning heaters, or solar hot-water panel heaters – the last increasingly used in the sustainable design of pools. Other equipment Diversions to electronic oxidation systems, ionization systems, microbe disinfection with ultra-violet lamp systems, and "Tri-Chlor Feeders" are other auxiliary systems for swimming pool sanitation - as well as solar panels - and are in most cases required to be placed after the filtration equipment, often the last items being placed before the water is returned to the pool. Other features Recreation amenities Features that are part of the water circulation system can extend treatment capacity needs for sizing calculations and can include: artificial streams and waterfalls, in-pool fountains, integrated hot tubs and spas, water slides and sluices, artificial "pebble beaches", submerged seating as bench-ledges or as "stools" at in-pool bars, plunge pools, and shallow children's wading pools. See also Automated pool cleaners Copper ion swimming pool system Fountain Reflecting pool Respiratory risks of indoor swimming pools Water purification References Swimming pools Swimming pool equipment Water filters Water treatment Water technology
Swimming pool sanitation
[ "Chemistry", "Engineering", "Environmental_science" ]
5,483
[ "Water filters", "Water treatment", "Filters", "Water pollution", "Environmental engineering", "Water technology" ]
323,328
https://en.wikipedia.org/wiki/Follicle-stimulating%20hormone
Follicle-stimulating hormone (FSH) is a gonadotropin, a glycoprotein polypeptide hormone. FSH is synthesized and secreted by the gonadotropic cells of the anterior pituitary gland and regulates the development, growth, pubertal maturation, and reproductive processes of the body. FSH and luteinizing hormone (LH) work together in the reproductive system. Structure FSH is a 35.5 kDa glycoprotein heterodimer, consisting of two polypeptide units, alpha and beta. Its structure is similar to those of luteinizing hormone (LH), thyroid-stimulating hormone (TSH), and human chorionic gonadotropin (hCG). The alpha subunits of the glycoproteins LH, FSH, TSH, and hCG are identical and consist of 96 amino acids, while the beta subunits vary. Both subunits are required for biological activity. FSH has a beta subunit of 111 amino acids (FSH β), which confers its specific biologic action, and is responsible for interaction with the follicle-stimulating hormone receptor. The sugar portion of the hormone is covalently bonded to asparagine, and is composed of N-acetylgalactosamine, mannose, N-acetylglucosamine, galactose, and sialic acid. Genes In humans, the gene for the alpha subunit is located at cytogenetic location 6q14.3. It is expressed in two cell types, most notably the basophils of the anterior pituitary. The gene for the FSH beta subunit is located on chromosome 11p13, and is expressed in gonadotropes of the pituitary cells, controlled by GnRH, inhibited by inhibin, and enhanced by activin. Activity and functions FSH regulates the development, growth, pubertal maturation and reproductive processes of the human body. In both males and females, FSH stimulates the maturation of primordial germ cells. In males, FSH induces Sertoli cells to secrete androgen-binding proteins (ABPs), regulated by inhibin's negative feedback mechanism on the anterior pituitary. Specifically, activation of Sertoli cells by FSH sustains spermatogenesis and stimulates inhibin B secretion. In females, FSH initiates follicular growth, specifically affecting granulosa cells. With the concomitant rise in inhibin B, FSH levels then decline in the late follicular phase. This seems to be critical in selecting only the most advanced follicle to proceed to ovulation. At the end of the luteal phase, there is a slight rise in FSH that seems to be of importance to start the next ovulatory cycle. Control of FSH release from the pituitary gland is unknown. Low frequency gonadotropin-releasing hormone (GnRH) pulses increase FSH mRNA levels in the rat, but is not directly correlated with an increase in circulating FSH. GnRH has been shown to play an important role in the secretion of FSH, with hypothalamic-pituitary disconnection leading to a cessation of FSH. GnRH administration leads to a return of FSH secretion. FSH is subject to oestrogen feed-back from the gonads via the hypothalamic pituitary gonadal axis. Effects in females FSH stimulates the growth and recruitment of immature ovarian follicles in the ovary. In early (small) antral follicles, FSH is the major survival factor that rescues the small antral follicles (2–5 mm in diameter for humans) from apoptosis (programmed death of the somatic cells of the follicle and oocyte). In the luteal-follicle phase transition period the serum levels of progesterone and estrogen (primarily estradiol) decrease and no longer suppress the release of FSH, consequently FSH peaks at about day three (day one is the first day of menstrual flow). 
The cohort of small antral follicles is normally sufficient in number to produce enough Inhibin B to lower FSH serum levels. In addition, there is evidence that gonadotropin surge-attenuating factor produced by small follicles during the first half of the follicle phase also exerts a negative feedback on pulsatile luteinizing hormone (LH) secretion amplitude, thus allowing a more favorable environment for follicle growth and preventing premature luteinization. As a woman nears perimenopause, the number of small antral follicles recruited in each cycle diminishes and consequently insufficient Inhibin B is produced to fully lower FSH, and the serum level of FSH begins to rise. Eventually, the FSH level becomes so high that downregulation of FSH receptors occurs, and by postmenopause any remaining small secondary follicles no longer have FSH or LH receptors. When the follicle matures and reaches 8–10 mm in diameter it starts to secrete significant amounts of estradiol. Normally in humans only one follicle becomes dominant and survives to grow to 18–30 mm in size and ovulate; the remaining follicles in the cohort undergo atresia. The sharp increase in estradiol production by the dominant follicle (possibly along with a decrease in gonadotrophin surge-attenuating factor) causes a positive effect on the hypothalamus and pituitary; rapid GnRH pulses occur and an LH surge results. The increase in serum estradiol levels causes a decrease in FSH production by inhibiting GnRH production in the hypothalamus. The decrease in serum FSH level causes the smaller follicles in the current cohort to undergo atresia, as they lack sufficient sensitivity to FSH to survive. Occasionally two follicles reach the 10 mm stage at the same time by chance, and as both are equally sensitive to FSH both survive and grow in the low FSH environment; thus two ovulations can occur in one cycle, possibly leading to non-identical (dizygotic) twins. Effects in males FSH stimulates primary spermatocytes to undergo the first division of meiosis, to form secondary spermatocytes. FSH enhances the production of androgen-binding protein by the Sertoli cells of the testes by binding to FSH receptors on their basolateral membranes, and is critical for the initiation of spermatogenesis. Measurement Follicle-stimulating hormone is typically measured in the early follicular phase of the menstrual cycle, typically day three to five, counted from the last menstruation. At this time, the levels of estradiol (E2) and progesterone are at the lowest point of the menstrual cycle. FSH levels at this time are often called basal FSH levels, to distinguish them from the increased levels seen when approaching ovulation. FSH is measured in international units (IU). For Human Urinary FSH, one IU is defined as the amount of FSH that has an activity corresponding to 0.11388 mg of pure Human Urinary FSH. For recombinant FSH, one IU corresponds to approximately 0.065 to 0.075 μg of a "fill-by-mass" product. Typical values for women before ovulation are around 3.8–8.8 IU/L. After ovulation these levels drop to between 1.8 and 5.1 IU/L. At the middle of the menstrual cycle FSH reaches its highest values, between 4.5 and 22.5 IU/L. During menopause, the values go up even more, to between 16.74 and 113.59 IU/L. For men, the mean values are around 16.74–113.59 IU/L. Disease states FSH levels are normally low during childhood and, in females, high after menopause. 
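Because FSH doses and assay results are expressed in international units, converting between IU and mass is a small but recurring calculation. The sketch below simply applies the conversion figures quoted in the Measurement section above (0.11388 mg per IU for human urinary FSH, roughly 0.065–0.075 μg per IU for recombinant FSH); the function name and the example dose are illustrative assumptions, not clinical guidance.

# Conversion figures quoted in the text above.
URINARY_FSH_MG_PER_IU = 0.11388              # mg of pure human urinary FSH per IU
RECOMBINANT_FSH_UG_PER_IU = (0.065, 0.075)   # approximate range, micrograms per IU

def recombinant_mass_range_ug(dose_iu):
    """Approximate mass range (micrograms) of a recombinant FSH dose given in IU."""
    low, high = RECOMBINANT_FSH_UG_PER_IU
    return dose_iu * low, dose_iu * high

# Example: a hypothetical 75 IU dose of recombinant FSH
low_ug, high_ug = recombinant_mass_range_ug(75)
print(f"75 IU is roughly {low_ug:.2f}-{high_ug:.2f} micrograms of protein")   # about 4.88-5.62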
High FSH levels The most common reason for high serum FSH concentration is in a female who is undergoing or has recently undergone menopause. High levels of FSH indicate that the normal restricting feedback from the gonad is absent, leading to an unrestricted pituitary FSH production. FSH may contribute to postmenopausal osteoporosis and cardiovascular disease. If high FSH levels occur during the reproductive years, it is abnormal. Conditions with high FSH levels include: Premature menopause also known as premature ovarian failure Poor ovarian reserve also known as premature ovarian aging Gonadal dysgenesis, Turner syndrome, Klinefelter syndrome Castration Swyer syndrome Certain forms of congenital adrenal hyperplasia Testicular failure Lupus Most of these conditions are associated with subfertility or infertility. Therefore, high FSH levels are an indication of subfertility or infertility. Low FSH levels Diminished secretion of FSH can result in failure of gonadal function (hypogonadism). This condition is typically manifested in males as failure in production of normal numbers of sperm. In females, cessation of reproductive cycles is commonly observed. Conditions with very low FSH secretions are: Polycystic Ovarian Syndrome Polycystic ovarian syndrome + obesity + hirsutism + infertility Kallmann syndrome Aromatase excess syndrome Hypothalamic suppression Hypopituitarism Hyperprolactinemia Gonadotropin deficiency Gonadal suppression therapy GnRH antagonist GnRH agonist (downregulation). Isolated FSH deficiency due to mutations in the gene for β-subunit of FSH is rare with 13 cases reported in the literature up to 2019. Use as therapy FSH is used commonly in infertility therapy, mainly for ovarian hyperstimulation as part of IVF. In some cases, it is used in ovulation induction for reversal of anovulation as well. FSH is available mixed with LH activity in various menotropins including more purified forms of urinary gonadotropins such as Menopur, as well as without LH activity as recombinant FSH (Gonapure, Gonal F, Follistim, Follitropin alpha). Potential role in vascularization of solid tumors Elevated FSH receptor levels have been detected in the endothelia of tumor vasculature in a very wide range of solid tumors. FSH binding is thought to upregulate neovascularization via at least two mechanisms – one in the VEGF pathway, and the other VEGF independent – related to the development of umbilical vasculature when physiological. This presents possible use of FSH and FSH-receptor antagonists as an anti-tumor angiogenesis therapy (cf. avastin for current anti-VEGF approaches). See also EFSH, a follicle-stimulating hormone obtained from equine species References External links FSH - Lab Tests Online Recombinant proteins Glycoproteins Gynaecological endocrinology Peptide hormones Sex hormones Human hormones In vitro fertilisation Hormones of the hypothalamus-pituitary-gonad axis Anterior pituitary hormones Human female endocrine system
Follicle-stimulating hormone
[ "Chemistry", "Biology" ]
2,465
[ "Behavior", "Biotechnology products", "Sex hormones", "Recombinant proteins", "Glycoproteins", "Glycobiology", "Sexuality" ]
323,371
https://en.wikipedia.org/wiki/Pitch%20%28resin%29
Pitch is a viscoelastic polymer which can be natural or manufactured, derived from petroleum, coal tar, or plants. Pitch produced from petroleum may be called bitumen or asphalt, while plant-derived pitch, a resin, is known as rosin in its solid form. Tar is sometimes used interchangeably with pitch, but generally refers to a more liquid substance derived from coal production, including coal tar, or from plants, as in pine tar. Uses Pitch, a traditional naval store, was traditionally used to help caulk the seams of wooden sailing vessels (see shipbuilding). Other important historic uses included coating earthenware vessels for the preservation of wine, waterproofing wooden containers, and making torches. It was also used to make patent fuel from coal slack around the turn of the 19th century. Petroleum-derived pitch is black in colour, hence the adjectival phrase, "pitch-black". The viscoelastic properties of pitch make it well suited for the polishing of high-quality optical lenses and mirrors. In use, the pitch is formed into a lap or polishing surface, which is charged with iron oxide (Jewelers' rouge) or cerium oxide. The surface to be polished is pressed into the pitch, then rubbed against the surface so formed. The ability of pitch to flow, albeit slowly, keeps it in constant uniform contact with the optical surface. Chasers pitch is a combination of pitch and other substances, used in jewelry making. Viscoelastic properties Naturally occurring asphalt/bitumen, a type of pitch, is a viscoelastic polymer. This means that even though it seems to be solid at room temperature and can be shattered with a hard impact, it is actually fluid and will flow over time, but extremely slowly. The pitch drop experiment taking place at University of Queensland is a long-term experiment which demonstrates the flow of a piece of pitch over many years. For the experiment, pitch was put in a glass funnel and allowed to slowly drip out. Since the pitch was allowed to start dripping in 1930, only nine drops have fallen. It was calculated in the 1980s that the pitch in the experiment has a viscosity approximately 100 billion (10¹¹) times that of water. The eighth drop fell on 28 November 2000, and the ninth drop fell on 17 April 2014. Another experiment was started by a colleague of Nobel Prize winner Ernest Walton in the physics department of Trinity College in Ireland in 1944. Over the years, the pitch had produced several drops, but none had been recorded. On July 11, 2013, scientists at Trinity College caught pitch dripping from a funnel on camera for the first time. Winchester College has a 'pitch glacier' demonstration which has been running since 21 July 1906, but does not have records of regular measurements. Production The heating (dry distilling) of wood causes tar and pitch to drip away from the wood and leave behind charcoal. Birchbark is used to make birch-tar, a particularly fine tar. The terms tar and pitch are often used interchangeably. However, pitch is considered more solid, while tar is more liquid. Traditionally, pitch that was used for waterproofing buckets, barrels and ships was drawn from pine. It is used to make cutler's resin. 
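The "100 billion times that of water" comparison above can be turned into an order-of-magnitude figure for the viscosity of pitch. The short sketch below assumes a room-temperature viscosity for water of about 1 mPa·s; it is only a back-of-the-envelope restatement of the published comparison, not the experimenters' own calculation.

# Order-of-magnitude estimate of the viscosity of pitch from the comparison above.
WATER_VISCOSITY_PA_S = 1e-3   # roughly 1 mPa·s for water near room temperature
RATIO_TO_WATER = 1e11         # "100 billion times" that of water

pitch_viscosity = WATER_VISCOSITY_PA_S * RATIO_TO_WATER
print(f"Estimated pitch viscosity: {pitch_viscosity:.0e} Pa·s")   # about 1e+08 Pa·s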
A 10th-century redaction of an earlier Greek Byzantine agricultural work brings down the ancient method of applying pitch to ceramic wine casks: [A wood-based pitch] is put into an earthen vessel, and it is put over a small fire in the sun, then some hot water percolated through wood-ashes is poured on it, and the pitch is stirred; when it has afterward stood, it is poured out after two hours, then there is as much water again poured in. Having therefore done this thrice every day for three days and having taken up the moisture on the surface, they make the pitch that is left exceedingly good. Dry pitch is also bitter, but being boiled with wine it becomes more useful; [...] and having boiled the mixture to a third part, they use it as properly qualified: but some throw wood-ashes into it and boil it down. In Italy they use pitch of this kind: forty minæ of pitch, one of wax, eight drams of sal ammoniac, six drams of manna. Thus, having pounded them and boiled them together, they sprinkle eight ounces (uncia) of well-ground fenugreek over them and they pitch the cask with them when they are well mixed. The ceramic ware was pitched, both inside and out, immediately while they were removed from the kiln and still hot. Literary references The ability of pitch to contaminate those in contact with it is mentioned by Dogberry, a character in Shakespeare's Much Ado About Nothing, and the same point is made in a speech by Falstaff in Henry IV, Part 1, who refers to "ancient writers" who have made this observation. The Jewish deuterocanonical Book of Sirach states that "whoever touches pitch gets dirty, and whoever associates with a proud person becomes like him". See also Asphaltene Creosote Pine tar Tar Notes References External links The Pitch Drop Experiment Pine Tar Production Primitive tar and charcoal production Materials Chemical mixtures Amorphous solids Non-timber forest products
Pitch (resin)
[ "Physics", "Chemistry" ]
1,087
[ "Unsolved problems in physics", "Materials", "Chemical mixtures", "nan", "Amorphous solids", "Matter" ]
323,392
https://en.wikipedia.org/wiki/Theoretical%20computer%20science
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation. It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description: History While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved. Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below: Topics Algorithms An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Automata theory Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). Automata comes from the Greek word αὐτόματα meaning "self-acting". Automata Theory is the study of self-operating virtual machines to help in the logical understanding of input and output process, without or with intermediate stage(s) of computation (or any function/process). Coding theory Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and more recently also for network coding. Codes are studied by various scientific disciplines – such as information theory, electrical engineering, mathematics, and computer science – for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data. Computational complexity theory Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. 
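As a concrete companion to the automata-theory paragraph above, the fragment below simulates a deterministic finite automaton that accepts binary strings containing an even number of 1s. The two-state machine and its transition table are an illustrative construction chosen for brevity, not taken from any particular source.

# A deterministic finite automaton (DFA) accepting binary strings with an even number of 1s.
# States: "even" (the accepting start state) and "odd"; reading a '1' toggles the state.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(word):
    """Run the DFA over the input word and report whether it ends in the accepting state."""
    state = "even"
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

for w in ["", "0", "1", "1010", "1110"]:
    print(w or "(empty)", accepts(w))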
A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Computational geometry Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), computer vision (3D reconstruction). Computational learning theory Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples. Computational number theory Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization. Cryptography Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. 
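Integer factorization, named above as the best-known problem in computational number theory, is easy to state in code even though no efficient general algorithm is known. The sketch below uses naive trial division; it is fine for small inputs, but its running time grows quickly with the size of the number, which is one illustration of why the problem is considered computationally hard and why parts of modern cryptography can rely on it.

def trial_division(n):
    """Return the prime factorization of n (n >= 2) as a list of prime factors."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains after trial division is itself prime
        factors.append(n)
    return factors

print(trial_division(5040))    # [2, 2, 2, 2, 3, 3, 5, 7]
print(trial_division(1234577)) # a larger example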
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. Data structures A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval and compilers and databases use dynamic hash tables as look up tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory. Distributed computation Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency. Information-based complexity Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. Formal methods Formal methods are a particular kind of mathematics based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. 
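Returning to the one-time pad mentioned in the cryptography discussion above: it is simple enough to show in a few lines, since each message byte is combined by XOR with a random key byte that is used only once. The snippet below is a toy illustration of the XOR mechanics only; actual use would additionally require a genuinely random, secretly shared and never-reused key, and the example message is of course an assumption.

import secrets

def xor_bytes(data, key):
    """XOR two equal-length byte strings; the same operation encrypts and decrypts."""
    if len(data) != len(key):
        raise ValueError("one-time pad key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh random key, as long as the message

ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)    # XOR with the same key inverts the operation
print(recovered == message)               # True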
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification. Information theory Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis. Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information. Machine learning Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions. Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field." Natural computation Parallel computation Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. 
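Picking up the information-theory section above, one of the standard measures of information is Shannon entropy, the average number of bits needed per symbol under a given probability distribution. The function below computes it directly from the usual definition H = -sum(p * log2(p)); the example distributions are made up for illustration.

import math

def shannon_entropy(probabilities):
    """Entropy in bits of a discrete distribution given as probabilities summing to 1."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))    # about 0.47 bits: a heavily biased coin
print(shannon_entropy([0.25] * 4))    # 2.0 bits: four equally likely symbols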
As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is known as Amdahl's law. Programming language theory and program semantics Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of theoretical computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved. In such a case that the evaluation would be of syntactically illegal strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will execute on a certain platform, hence creating a model of computation. Quantum computation A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis. Symbolic computation Computer algebra, also called symbolic computation or algebraic computation is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. 
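Before the discussion of computer algebra continues, here is a minimal illustration of what exact, symbolic manipulation means in contrast to approximate floating-point computation: differentiating a polynomial represented by its list of coefficients. The coefficient-list representation and the helper name are illustrative choices, far simpler than the expression trees real computer algebra systems use.

from fractions import Fraction

def differentiate(coeffs):
    """
    Differentiate a polynomial given as [c0, c1, c2, ...], meaning c0 + c1*x + c2*x^2 + ...
    The result is computed exactly, term by term: the derivative of ck*x^k is k*ck*x^(k-1).
    """
    return [Fraction(k) * c for k, c in enumerate(coeffs)][1:] or [Fraction(0)]

# p(x) = 1 + 2x + 3x^2, so p'(x) = 2 + 6x
print(differentiate([Fraction(1), Fraction(2), Fraction(3)]))   # [Fraction(2, 1), Fraction(6, 1)]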
Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have not any given value and are thus manipulated as symbols (therefore the name of symbolic computation). Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, a large set of routines to perform usual operations, like simplification of expressions, differentiation using chain rule, polynomial factorization, indefinite integration, etc. Very-large-scale integration Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip. Organizations European Association for Theoretical Computer Science SIGACT Simons Institute for the Theory of Computing Journals and newsletters Discrete Mathematics and Theoretical Computer Science Information and Computation Theory of Computing (open access journal) Formal Aspects of Computing Journal of the ACM SIAM Journal on Computing (SICOMP) SIGACT News Theoretical Computer Science Theory of Computing Systems TheoretiCS (open access journal) International Journal of Foundations of Computer Science Chicago Journal of Theoretical Computer Science (open access journal) Foundations and Trends in Theoretical Computer Science Journal of Automata, Languages and Combinatorics Acta Informatica Fundamenta Informaticae ACM Transactions on Computation Theory Computational Complexity Journal of Complexity ACM Transactions on Algorithms Information Processing Letters Open Computer Science (open access journal) Conferences Annual ACM Symposium on Theory of Computing (STOC) Annual IEEE Symposium on Foundations of Computer Science (FOCS) Innovations in Theoretical Computer Science (ITCS) Mathematical Foundations of Computer Science (MFCS) International Computer Science Symposium in Russia (CSR) ACM–SIAM Symposium on Discrete Algorithms (SODA) IEEE Symposium on Logic in Computer Science (LICS) Computational Complexity Conference (CCC) International Colloquium on Automata, Languages and Programming (ICALP) Annual Symposium on Computational Geometry (SoCG) ACM Symposium on Principles of Distributed Computing (PODC) ACM Symposium on Parallelism in Algorithms and Architectures (SPAA) Annual Conference on Learning Theory (COLT) International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM) Symposium on Theoretical Aspects of Computer Science (STACS) European Symposium on Algorithms (ESA) Workshop on Approximation Algorithms for Combinatorial Optimization Problems 
Very-large-scale integration
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.

Organizations
European Association for Theoretical Computer Science
SIGACT
Simons Institute for the Theory of Computing

Journals and newsletters
Discrete Mathematics and Theoretical Computer Science
Information and Computation
Theory of Computing (open access journal)
Formal Aspects of Computing
Journal of the ACM
SIAM Journal on Computing (SICOMP)
SIGACT News
Theoretical Computer Science
Theory of Computing Systems
TheoretiCS (open access journal)
International Journal of Foundations of Computer Science
Chicago Journal of Theoretical Computer Science (open access journal)
Foundations and Trends in Theoretical Computer Science
Journal of Automata, Languages and Combinatorics
Acta Informatica
Fundamenta Informaticae
ACM Transactions on Computation Theory
Computational Complexity
Journal of Complexity
ACM Transactions on Algorithms
Information Processing Letters
Open Computer Science (open access journal)

Conferences
Annual ACM Symposium on Theory of Computing (STOC)
Annual IEEE Symposium on Foundations of Computer Science (FOCS)
Innovations in Theoretical Computer Science (ITCS)
Mathematical Foundations of Computer Science (MFCS)
International Computer Science Symposium in Russia (CSR)
ACM–SIAM Symposium on Discrete Algorithms (SODA)
IEEE Symposium on Logic in Computer Science (LICS)
Computational Complexity Conference (CCC)
International Colloquium on Automata, Languages and Programming (ICALP)
Annual Symposium on Computational Geometry (SoCG)
ACM Symposium on Principles of Distributed Computing (PODC)
ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
Annual Conference on Learning Theory (COLT)
International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM)
Symposium on Theoretical Aspects of Computer Science (STACS)
European Symposium on Algorithms (ESA)
Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX)
Workshop on Randomization and Computation (RANDOM)
International Symposium on Algorithms and Computation (ISAAC)
International Symposium on Fundamentals of Computation Theory (FCT)
International Workshop on Graph-Theoretic Concepts in Computer Science (WG)

See also
Formal science
Unsolved problems in computer science
Sun–Ni law

Notes

Further reading
Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994. Covers theory of computation, but also program semantics and quantification theory. Aimed at graduate students.

External links
SIGACT directory of additional theory links (archived 15 July 2017)
Theory Matters Wiki – Theoretical Computer Science (TCS) Advocacy Wiki
List of academic conferences in the area of theoretical computer science at confsearch
Theoretical Computer Science – StackExchange, a Question and Answer site for researchers in theoretical computer science
Computer Science Animated
Theory of computation at the Massachusetts Institute of Technology

Formal sciences
Theoretical computer science
[ "Mathematics" ]
3,882
[ "Theoretical computer science", "Applied mathematics" ]
323,413
https://en.wikipedia.org/wiki/Alpenglow
Alpenglow (from German Alpenglühen) is an optical phenomenon that appears as a horizontal reddish glow near the horizon opposite to the Sun when the solar disk is just below the horizon.

Description
Strictly speaking, alpenglow refers to indirect sunlight reflected or diffracted by the atmosphere after sunset or before sunrise. This diffuse illumination creates soft shadows in addition to the reddish color. The term is also used informally to include direct illumination by the reddish light of the rising or setting sun, with sharply defined shadows.

Reflected sunlight
When the Sun is below the horizon, sunlight has no direct path to reach a mountain. Unlike the direct sunlight around sunrise or sunset, the light that causes alpenglow is reflected off airborne precipitation, ice crystals, or particulates in the lower atmosphere. This distinction separates alpenglow in the strict sense from direct illumination around sunrise or sunset. The term is often loosely applied to any sunrise or sunset light reflected off mountains or clouds, but alpenglow in the strict sense of the word is not direct sunlight and is visible only after sunset or before sunrise. After sunset, if mountains are absent, aerosols in the eastern sky can be illuminated in a similar way by the remaining scattered reddish light above the fringe of Earth's shadow. This backscattered light produces a pinkish band opposite the Sun's direction, called the Belt of Venus.

Direct sunlight
Alpenglow in a looser sense may refer to any illumination by the rosy or reddish light of the setting or rising Sun.

See also
Golden hour (photography)
Belt of Venus

References

Atmospheric optical phenomena
Alpenglow
[ "Physics" ]
324
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
323,428
https://en.wikipedia.org/wiki/List%20of%20craters%20on%20the%20Moon
This is a list of named lunar craters. The large majority of these features are impact craters. The crater nomenclature is governed by the International Astronomical Union, and this listing only includes features that are officially recognized by that scientific society. Craters The lunar craters are listed in the following subsections. Where a formation has associated satellite craters, these are detailed on the main crater description pages. Catalog Lunar craters are listed alphabetically on the following partial lists: List of craters on the Moon: A–B List of craters on the Moon: C–F List of craters on the Moon: G–K List of craters on the Moon: L–N List of craters on the Moon: O–Q List of craters on the Moon: R–S List of craters on the Moon: T–Z Prominent craters Locations and diameters of some prominent craters on the near side of the Moon: See also List of lunar features List of people with craters of the Moon named after them List of maria on the Moon List of mountains on the Moon List of valleys on the Moon Selenography References The following sources were used as references on the individual crater pages. External links The following reference sites were also used during the assembly of the crater information. Astronomica Langrenus — Italian Lunar Web Site Gazetteer of Planetary Nomenclature Moon map. List of craters on the Moon Lunar Atlases at the Lunar & Planetary Institute Digital Lunar Orbiter Photographic Atlas of the Moon Lunar Nomenclature Lunar Photo of the Day by Charles A. Wood et al. Moon
List of craters on the Moon
[ "Astronomy" ]
308
[ "Astronomy-related lists", "Lists of impact craters" ]
323,478
https://en.wikipedia.org/wiki/Critical%20regionalism
Critical regionalism is an approach to architecture that strives to counter the placelessness and lack of identity of the International Style, but also rejects the whimsical individualism and ornamentation of Postmodern architecture. The stylings of critical regionalism seek to provide an architecture rooted in the modern tradition, but tied to geographical and cultural context. Critical regionalism is not simply regionalism in the sense of vernacular architecture. It is a progressive approach to design that seeks to mediate between the global and the local languages of architecture. The phrase "critical regionalism" was first presented in 1981, in ‘The Grid and the Pathway,’ an essay published in Architecture in Greece, by the architectural theorists Alexander Tzonis and Liane Lefaivre and, with a slightly different meaning, by the historian-theorist Kenneth Frampton. Sri Lankan Architect Minnette De Silva was one of the pioneers in practicing this architecture style in the 1950s and termed it 'Regional Modernism'. Critical Regionalists thus hold that both modern and post-modern architecture are "deeply problematic". Kenneth Frampton In "Towards a Critical Regionalism: Six points for an architecture of resistance", Frampton recalls Paul Ricoeur's "how to become modern and to return to sources; how to revive an old, dormant civilization and take part in universal civilization". According to Frampton's proposal, critical regionalism should adopt modern architecture, critically, for its universal progressive qualities but at the same time value should be placed on the geographical context of the building. Emphasis, Frampton says, should be on topography, climate, light; on tectonic form rather than on scenography (i.e. painting theatrical scenery) and should be on the sense of touch rather than visual sense. Frampton draws on phenomenology for his argument. Two examples Frampton briefly discusses are Jørn Utzon and Alvar Aalto. In Frampton's view, Utzon's Bagsværd Church (1973–6), near Copenhagen is a self-conscious synthesis between universal civilization and world culture. This is revealed by the rational, modular, neutral and economic, partly prefabricated concrete outer shell (i.e. universal civilization) versus the specially-designed, 'uneconomic', organic, reinforced concrete shell of the interior, signifying with its manipulation of light sacred space and 'multiple cross-cultural references', which Frampton sees no precedent for in Western culture, but rather in the Chinese pagoda roof (i.e. world culture). In the case of Aalto, Frampton discusses the red brick Säynätsalo Town Hall (1952), where, he argues, there is a resistance to universal technology and vision, affected by using the tactile qualities of the building's materials. He notes, for instance, feeling the contrast between the friction of the brick surface of the stairs and the springy wooden floor of the council chamber. In addition to his own writings on the topic, Frampton has furthered the intellectual reach of these ideas through contributions, in the form of introductions, prefaces and forewords, written for publications on architects and architectural practices that conform with the ethics of critical regionalism. William J. R. Curtis and Suha Ozkan There have been two different perceptions of Regionalism in architecture. 
One of these is that of Western writers, such as Curtis, whose definitions are not encompassing enough to analyse architectural styles, especially those of the last two centuries in Islamic countries such as Iran. However, Ozkan's definition of Regionalism is more objective. Alexander Tzonis and Liane Lefaivre According to Alexander Tzonis and Liane Lefaivre, critical regionalism need not directly draw from the context; rather, elements can be stripped of context but used in unfamiliar ways. Here the aim is to make evident a disruption and loss of place that is already a fait accompli, through reflection and self-evaluation. Critical regionalist architects In addition to Aalto and Utzon, the following architects have used Critical Regionalism (in the Frampton sense) in their work: Álvaro Siza Vieira, Studio Granda, Mario Botta, Eduardo Souto de Moura, Mahesh Naik, Sahil Ahmed, Mazharul Islam, B. V. Doshi, Max Strang, Charles Correa, Christopher Benninger, Alvaro Siza, Jorge Ferreira Chaves, Rafael Moneo, Geoffrey Bawa, Raj Rewal, Dharmesh Vadavala, Ashok "Bihari" Lall, Neelkanth Chhaya (Kaka), P.K. Das, Soumitro Ghosh, Nisha Mathew Ghosh, Ngô Viết Thụ, Tadao Ando, Mack Scogin / Merrill Elam, Glenn Murcutt, Johnsen Schmaling Architects, Ken Yeang, Philippe Madec, William S.W. Lim, Tay Kheng Soon, WOHA Architects (Singapore), Juhani Pallasmaa, Wang Shu, Juha Leiviskä, Peter Zumthor, Carlo Scarpa, Miller | Hull, Tan Hock Beng, Peter Stutchbury, Lake Flato, Rick Joy, Tom Kundig, and Sverre Fehn. Suzana & Dimitris Antonakakis are the two Greek architects for whom the term was first used by Alexander Tzonis and Liane Lefaivre. Critical regionalism has developed into unique sub-styles across the world. Glenn Murcutt's simple vernacular architectural style is representative of an Australian variant of critical regionalism. In Singapore, WOHA has developed a unique architectural vocabulary based on an appreciation of the local climate and culture. Criticism Although supportive of Critical Regionalism's attempt to adapt design to local climate, site conditions, and locally available materials, considering it an improvement over the International Style of Modernism, architecture theorist Nikos Salingaros criticizes its anti-regional and anti-traditional tendencies derived from Critical Theory. Nikos Salingaros states that "In practice, critical regionalism willfully perpetuates the form languages of Modernism. Our understanding, however, is that regionalism has to protect and re-use traditional form languages. True regionalism has to free itself from any global form language imposed from above, and from any forces of uniformization and conformity." In cultural studies Subsequently, the phrase "critical regionalism" has also been used in cultural studies, literary studies, and political theory, specifically in the work of Gayatri Chakravorty Spivak. In her 2007 work "Who Sings the Nation-State?", co-authored with Judith Butler, Spivak proposes a deconstructive alternative to nationalism that is predicated on the deconstruction of borders and rigid national identity. Douglas Reichert Powell's book Critical Regionalism: Connecting Politics and Culture in the American Landscape (2007) traces the trajectory of the term critical regionalism from its original use in architectural theory to its inclusion in literary, cultural, and political studies and proposes a methodology based on the intersection of those fields. 
See also Contextual architecture Complementary architecture Critical theory Neo-Historicism Notes References Vincent B. Canizaro," Architectural Regionalism: Collected Writings on Place, Identity, Modernity, and Tradition," (2007) Princeton Architectural Press. Kenneth Frampton, "Towards a Critical Regionalism: Six Points for an Architecture of Resistance", in The Anti-Aesthetic. Essays on Postmodern Culture (1983) edited by Hal Foster, Bay Press, Seattle. Stylianos Giamarelos (2022). Resisting Postmodern Architecture: Critical Regionalism before Globalisation. London: UCL Press. DOI: https://doi.org/10.14324/111.9781800081338 Alex Tzonis and Liliane Lefaivre, "The grid and the pathway. An introduction to the work of Dimitris and Suzana Antonakakis", Architecture in Greece (1981) 15, Athens. Judith Butler and Gayatri Chakravorty Spivak, "Who Sings the Nation-State?: Language, Politics, Belonging" (2007), Seagull Books. Douglas Powell, Critical Regionalism: Connecting Politics and Culture in the American Landscape (2007), University of North Carolina Press. Thorsten Botz-Bornstein, "Is Critical Regionalist Philosophy Possible? Some Meta-Philosophical Considerations" in Comparative and Continental Philosophy (2010) 2:1. Thorsten Botz-Bornstein, Transcultural Architecture: Limits and Opportunities of Critical Regionalism (2015), Ashgate. Tom Avermaete, Veronique Patteeuw, Hans Teerds, Lea-Catherine Szacka (eds), Oase #103: Critical Regionalism Revisited, (2019), . External links Critical Analysis of "Towards a Critical Regionalism" The Theoretical Inapplicability of Regionalism Alexander Tzonis Authorised website Architectural theory 20th-century architectural styles
Critical regionalism
[ "Engineering" ]
1,863
[ "Architectural theory", "Architecture" ]
323,592
https://en.wikipedia.org/wiki/Nicolaus%20Copernicus
Nicolaus Copernicus (19 February 1473 – 24 May 1543) was a Renaissance polymath, active as a mathematician, astronomer, and Catholic canon, who formulated a model of the universe that placed the Sun rather than Earth at its center. In all likelihood, Copernicus developed his model independently of Aristarchus of Samos, an ancient Greek astronomer who had formulated such a model some eighteen centuries earlier. The publication of Copernicus's model in his book (On the Revolutions of the Celestial Spheres), just before his death in 1543, was a major event in the history of science, triggering the Copernican Revolution and making a pioneering contribution to the Scientific Revolution. Copernicus was born and died in Royal Prussia, a semiautonomous and multilingual region created within the Crown of the Kingdom of Poland from part of the lands regained from the Teutonic Order after the Thirteen Years' War. A polyglot and polymath, he obtained a doctorate in canon law and was a mathematician, astronomer, physician, classics scholar, translator, governor, diplomat, and economist. From 1497 he was a Warmian Cathedral chapter canon. In 1517 he derived a quantity theory of money—a key concept in economics—and in 1519 he formulated an economic principle that later came to be called Gresham's law. Life Nicolaus Copernicus was born on 19 February 1473 in the city of Toruń (Thorn), in the province of Royal Prussia, in the Crown of the Kingdom of Poland, to German-speaking parents. His father was a merchant from Kraków and his mother was the daughter of a wealthy Toruń merchant. Nicolaus was the youngest of four children. His brother Andreas (Andrew) became an Augustinian canon at Frombork (Frauenburg). His sister Barbara, named after her mother, became a Benedictine nun and, in her final years, prioress of a convent in Chełmno (Kulm); she died after 1517. His sister Katharina married the businessman and Toruń city councilor Barthel Gertner and left five children, whom Copernicus looked after to the end of his life. Copernicus never married and is not known to have had children, but from at least 1531 until 1539 his relations with Anna Schilling, a live-in housekeeper, were seen as scandalous by two bishops of Warmia who urged him over the years to break off relations with his "mistress". Father's family Copernicus's father's family can be traced to a village in Silesia between Nysa (Neiße) and Prudnik (Neustadt). The village's name has been variously spelled Kopernik, Copernik, Copernic, Kopernic, Coprirnik, and modern Koperniki. In the 14th century, members of the family began moving to various other Silesian cities, to the Polish capital, Kraków (1367), and to Toruń (1400). The father, Mikołaj the Elder (or ), likely the son of Jan (or Johann), came from the Kraków line. Nicolaus was named after his father, who appears in records for the first time as a well-to-do merchant who dealt in copper, selling it mostly in Danzig (Gdańsk). He moved from Kraków to Toruń around 1458. Toruń, situated on the Vistula River, was at that time embroiled in the Thirteen Years' War, in which the Kingdom of Poland and the Prussian Confederation, an alliance of Prussian cities, gentry and clergy, fought the Teutonic Order over control of the region. In this war, Hanseatic cities like Danzig and Toruń, Nicolaus Copernicus's hometown, chose to support the Polish King, Casimir IV Jagiellon, who promised to respect the cities' traditional vast independence, which the Teutonic Order had challenged. 
Nicolaus's father was actively engaged in the politics of the day and supported Poland and the cities against the Teutonic Order. In 1454 he mediated negotiations between Poland's Cardinal Zbigniew Oleśnicki and the Prussian cities for repayment of war loans. In the Second Peace of Thorn (1466), the Teutonic Order formally renounced all claims to the conquered lands, which returned to Poland as Royal Prussia and remained part of it until the First (1772) and Second (1793) Partitions of Poland. Copernicus's father married Barbara Watzenrode, the astronomer's mother, between 1461 and 1464. He died about 1483. Mother's family Nicolaus's mother, Barbara Watzenrode, was the daughter of a wealthy Toruń patrician and city councillor, Lucas Watzenrode the Elder (deceased 1462), and Katarzyna (widow of Jan Peckau), mentioned in other sources as Katarzyna Rüdiger gente Modlibóg (deceased 1476). The Modlibógs were a prominent Polish family who had been well known in Poland's history since 1271. The Watzenrode family, like the Kopernik family, had come from Silesia from near Schweidnitz (Świdnica), and after 1360 had settled in Toruń. They soon became one of the wealthiest and most influential patrician families. Through the Watzenrodes' extensive family relationships by marriage, Copernicus was related to wealthy families of Toruń (Thorn), Danzig (Gdansk) and Elbing (Elbląg), and to prominent Polish noble families of Prussia: the Czapskis, Działyńskis, Konopackis and Kościeleckis. Lucas and Katherine had three children: Lucas Watzenrode the Younger (1447–1512), who would become Bishop of Warmia and Copernicus's patron; Barbara, the astronomer's mother (deceased after 1495); and Christina (deceased before 1502), who in 1459 married the Toruń merchant and mayor, Tiedeman von Allen. Lucas Watzenrode the Elder, a wealthy merchant and in 1439–62 president of the judicial bench, was a decided opponent of the Teutonic Knights. In 1453 he was the delegate from Toruń at the Grudziądz (Graudenz) conference that planned the uprising against them. During the ensuing Thirteen Years' War, he actively supported the Prussian cities' war effort with substantial monetary subsidies (only part of which he later re-claimed), with political activity in Toruń and Danzig, and by personally fighting in battles at Łasin (Lessen) and Malbork (Marienburg). He died in 1462. Lucas Watzenrode the Younger, the astronomer's maternal uncle and patron, was educated at the University of Kraków and at the universities of Cologne and Bologna. He was a bitter opponent of the Teutonic Order, and its Grand Master once referred to him as "the devil incarnate". In 1489 Watzenrode was elected Bishop of Warmia (Ermeland, Ermland) against the preference of King Casimir IV, who had hoped to install his own son in that seat. As a result, Watzenrode quarreled with the king until Casimir IV's death three years later. Watzenrode was then able to form close relations with three successive Polish monarchs: John I Albert, Alexander Jagiellon, and Sigismund I the Old. He was a friend and key advisor to each ruler, and his influence greatly strengthened the ties between Warmia and Poland proper. Watzenrode came to be considered the most powerful man in Warmia, and his wealth, connections and influence allowed him to secure Copernicus's education and career as a canon at Frombork Cathedral. Education Early education Copernicus' father died around 1483, when the boy was 10. 
His maternal uncle, Lucas Watzenrode the Younger (1447–1512), took Copernicus under his wing and saw to his education and career. Six years later, Watzenrode was elected Bishop of Warmia. Watzenrode maintained contacts with leading intellectual figures in Poland and was a friend of the influential Italian-born humanist and Kraków courtier Filippo Buonaccorsi. There are no surviving primary documents on the early years of Copernicus's childhood and education. Copernicus biographers assume that Watzenrode first sent young Copernicus to St. John's School, at Toruń, where he himself had been a master. Later, according to Armitage, the boy attended the Cathedral School at Włocławek, up the Vistula River from Toruń, which prepared pupils for entrance to the University of Kraków. University of Kraków 1491–1495 In the winter semester of 1491–92 Copernicus, as "Nicolaus Nicolai de Thuronia", matriculated together with his brother Andrew at the University of Kraków. Copernicus began his studies in the Department of Arts (from the fall of 1491, presumably until the summer or fall of 1495) in the heyday of the Kraków astronomical-mathematical school, acquiring the foundations for his subsequent mathematical achievements. According to a later but credible tradition (Jan Brożek), Copernicus was a pupil of Albert Brudzewski, who by then (from 1491) was a professor of Aristotelian philosophy but taught astronomy privately outside the university; Copernicus became familiar with Brudzewski's widely read commentary to Georg von Peuerbach's Theoricæ novæ planetarum and almost certainly attended the lectures of Bernard of Biskupie and Wojciech Krypa of Szamotuły, and probably other astronomical lectures by Jan of Głogów, Michał of Wrocław (Breslau), Wojciech of Pniewy, and Marcin Bylica of Olkusz. Mathematical astronomy Copernicus's Kraków studies gave him a thorough grounding in the mathematical astronomy taught at the university (arithmetic, geometry, geometric optics, cosmography, theoretical and computational astronomy) and a good knowledge of the philosophical and natural-science writings of Aristotle (De coelo, Metaphysics) and Averroes, stimulating his interest in learning and making him conversant with humanistic culture. Copernicus broadened the knowledge that he took from the university lecture halls with independent reading of books that he acquired during his Kraków years (Euclid, Haly Abenragel, the Alfonsine Tables, Johannes Regiomontanus' Tabulae directionum); to this period, probably, also date his earliest scientific notes, preserved partly at Uppsala University. At Kraków Copernicus began collecting a large library on astronomy; it would later be carried off as war booty by the Swedes during the Deluge in the 1650s and has been preserved at the Uppsala University Library. Contradictions in the systems of Aristotle and Ptolemy Copernicus's four years at Kraków played an important role in the development of his critical faculties and initiated his analysis of logical contradictions in the two "official" systems of astronomy—Aristotle's theory of homocentric spheres, and Ptolemy's mechanism of eccentrics and epicycles—the surmounting and discarding of which would be the first step toward the creation of Copernicus's own doctrine of the structure of the universe. 
Warmia 1495–96 Without taking a degree, probably in the fall of 1495, Copernicus left Kraków for the court of his uncle Watzenrode, who in 1489 had been elevated to Prince-Bishop of Warmia and soon (before November 1495) sought to place his nephew in the Warmia canonry vacated by 26 August 1495 death of its previous tenant, Jan Czanow. For unclear reasons—probably due to opposition from part of the chapter, who appealed to Rome—Copernicus's installation was delayed, inclining Watzenrode to send both his nephews to study canon law in Italy, seemingly with a view to furthering their ecclesiastic careers and thereby also strengthening his own influence in the Warmia chapter. On 20 October 1497, Copernicus, by proxy, formally succeeded to the Warmia canonry which had been granted to him two years earlier. To this, by a document dated 10 January 1503 at Padua, he would add a sinecure at the Collegiate Church of the Holy Cross and St. Bartholomew in Wrocław (at the time in the Crown of Bohemia). Despite having been granted a papal indult on 29 November 1508 to receive further benefices, through his ecclesiastic career Copernicus not only did not acquire further prebends and higher stations (prelacies) at the chapter, but in 1538 he relinquished the Wrocław sinecure. It is unclear whether he was ever ordained a priest. Edward Rosen asserts that he was not. Copernicus did take minor orders, which sufficed for assuming a chapter canonry. The Catholic Encyclopedia proposes that his ordination was probable, as in 1537 he was one of four candidates for the episcopal seat of Warmia, a position that required ordination. Italy University of Bologna 1496–1501 Meanwhile, leaving Warmia in mid-1496—possibly with the retinue of the chapter's chancellor, Jerzy Pranghe, who was going to Italy—in the fall, possibly in October, Copernicus arrived in Bologna and a few months later (after 6 January 1497) signed himself into the register of the Bologna University of Jurists' "German nation", which included young Poles from Silesia, Prussia and Pomerania as well as students of other nationalities. During his three-year stay at Bologna, which occurred between fall 1496 and spring 1501, Copernicus seems to have devoted himself less keenly to studying canon law (he received his doctorate in canon law only after seven years, following a second return to Italy in 1503) than to studying the humanities—probably attending lectures by Filippo Beroaldo, Antonio Urceo, called Codro, Giovanni Garzoni, and Alessandro Achillini—and to studying astronomy. He met the famous astronomer Domenico Maria Novara da Ferrara and became his disciple and assistant. Copernicus was developing new ideas inspired by reading the "Epitome of the Almagest" (Epitome in Almagestum Ptolemei) by George von Peuerbach and Johannes Regiomontanus (Venice, 1496). He verified its observations about certain peculiarities in Ptolemy's theory of the Moon's motion, by conducting on 9 March 1497 at Bologna a memorable observation of the occultation of Aldebaran, the brightest star in the Taurus constellation, by the Moon. Copernicus the humanist sought confirmation for his growing doubts through close reading of Greek and Latin authors (Pythagoras, Aristarchos of Samos, Cleomedes, Cicero, Pliny the Elder, Plutarch, Philolaus, Heraclides, Ecphantos, Plato), gathering, especially while at Padua, fragmentary historic information about ancient astronomical, cosmological and calendar systems. 
Rome 1500 Copernicus spent the jubilee year 1500 in Rome, where he arrived with his brother Andrew that spring, doubtless to perform an apprenticeship at the Papal Curia. Here, too, however, he continued his astronomical work begun at Bologna, observing, for example, a lunar eclipse on the night of 5–6 November 1500. According to a later account by Rheticus, Copernicus also—probably privately, rather than at the Roman Sapienza—as a "Professor Mathematum" (professor of astronomy) delivered, "to numerous ... students and ... leading masters of the science", public lectures devoted probably to a critique of the mathematical solutions of contemporary astronomy. University of Padua 1501–1503 On his return journey doubtless stopping briefly at Bologna, in mid-1501 Copernicus arrived back in Warmia. After on 28 July receiving from the chapter a two-year extension of leave in order to study medicine (since "he may in future be a useful medical advisor to our Reverend Superior [Bishop Lucas Watzenrode] and the gentlemen of the chapter"), in late summer or in the fall he returned again to Italy, probably accompanied by his brother Andrew and by Canon Bernhard Sculteti. This time he studied at the University of Padua, famous as a seat of medical learning, and—except for a brief visit to Ferrara in May–June 1503 to pass examinations for, and receive, his doctorate in canon law—he remained at Padua from fall 1501 to summer 1503. Copernicus studied medicine probably under the direction of leading Padua professors—Bartolomeo da Montagnana, Girolamo Fracastoro, Gabriele Zerbi, Alessandro Benedetti—and read medical treatises that he acquired at this time, by Valescus de Taranta, Jan Mesue, Hugo Senensis, Jan Ketham, Arnold de Villa Nova, and Michele Savonarola, which would form the embryo of his later medical library. Astrology One of the subjects that Copernicus must have studied was astrology, since it was considered an important part of a medical education. However, unlike most other prominent Renaissance astronomers, he appears never to have practiced or expressed any interest in astrology. Greek studies As at Bologna, Copernicus did not limit himself to his official studies. It was probably the Padua years that saw the beginning of his Hellenistic interests. He familiarized himself with Greek language and culture with the aid of Theodorus Gaza's grammar (1495) and Johannes Baptista Chrestonius's dictionary (1499), expanding his studies of antiquity, begun at Bologna, to the writings of Bessarion, Lorenzo Valla, and others. There also seems to be evidence that it was during his Padua stay that the idea finally crystallized, of basing a new system of the world on the movement of the Earth. As the time approached for Copernicus to return home, in spring 1503 he journeyed to Ferrara where, on 31 May 1503, having passed the obligatory examinations, he was granted the degree of Doctor of Canon Law (Nicolaus Copernich de Prusia, Jure Canonico ... et doctoratus). No doubt it was soon after (at latest, in fall 1503) that he left Italy for good to return to Warmia. Planetary observations Copernicus made three observations of Mercury, with errors of −3, −15 and −1 minutes of arc. He made one of Venus, with an error of −24 minutes. Four were made of Mars, with errors of 2, 20, 77, and 137 minutes. Four observations were made of Jupiter, with errors of 32, 51, −11 and 25 minutes. He made four of Saturn, with errors of 31, 20, 23 and −4 minutes. 
Other observations With Novara, Copernicus observed an occultation of Aldebaran by the Moon on 9 March 1497. Copernicus also observed a conjunction of Saturn and the Moon on 4 March 1500. He saw an eclipse of the Moon on 6 November 1500. Work Having completed all his studies in Italy, 30-year-old Copernicus returned to Warmia, where he would live out the remaining 40 years of his life, apart from brief journeys to Kraków and to nearby Prussian cities: Toruń (Thorn), Gdańsk (Danzig), Elbląg (Elbing), Grudziądz (Graudenz), Malbork (Marienburg), Königsberg (Królewiec). The Prince-Bishopric of Warmia enjoyed substantial autonomy, with its own diet (parliament) and monetary unit (the same as in the other parts of Royal Prussia) and treasury. Copernicus was his uncle's secretary and physician from 1503 to 1510 (or perhaps until his uncle's death on 29 March 1512) and resided in the Bishop's castle at Lidzbark (Heilsberg), where he began work on his heliocentric theory. In his official capacity, he took part in nearly all his uncle's political, ecclesiastic and administrative-economic duties. From the beginning of 1504, Copernicus accompanied Watzenrode to sessions of the Royal Prussian diet held at Malbork and Elbląg and, write Dobrzycki and Hajdukiewicz, "participated ... in all the more important events in the complex diplomatic game that ambitious politician and statesman played in defense of the particular interests of Prussia and Warmia, between hostility to the [Teutonic] Order and loyalty to the Polish Crown." In 1504–1512 Copernicus made numerous journeys as part of his uncle's retinue—in 1504, to Toruń and Gdańsk, to a session of the Royal Prussian Council in the presence of Poland's King Alexander Jagiellon; to sessions of the Prussian diet at Malbork (1506), Elbląg (1507) and Sztum (Stuhm) (1512); and he may have attended a Poznań (Posen) session (1510) and the coronation of Poland's King Sigismund I the Old in Kraków (1507). Watzenrode's itinerary suggests that in spring 1509 Copernicus may have attended the Kraków sejm. It was probably on the latter occasion, in Kraków, that Copernicus submitted for printing at Jan Haller's press his translation, from Greek to Latin, of a collection, by the 7th-century Byzantine historian Theophylact Simocatta, of 85 brief poems called Epistles, or letters, supposed to have passed between various characters in a Greek story. They are of three kinds—"moral," offering advice on how people should live; "pastoral", giving little pictures of shepherd life; and "amorous", comprising love poems. They are arranged to follow one another in a regular rotation of subjects. Copernicus had translated the Greek verses into Latin prose, and he published his version as Theophilacti scolastici Simocati epistolae morales, rurales et amatoriae interpretatione latina, which he dedicated to his uncle in gratitude for all the benefits he had received from him. With this translation, Copernicus declared himself on the side of the humanists in the struggle over the question of whether Greek literature should be revived. Copernicus's first poetic work was a Greek epigram, composed probably during a visit to Kraków, for Johannes Dantiscus's epithalamium for Barbara Zapolya's 1512 wedding to King Zygmunt I the Old. 
Commentariolus – an initial outline of a heliocentric theory Some time before 1514, Copernicus wrote an initial outline of his heliocentric theory known only from later transcripts, by the title (perhaps given to it by a copyist), Nicolai Copernici de hypothesibus motuum coelestium a se constitutis commentariolus—commonly referred to as the Commentariolus. It was a succinct theoretical description of the world's heliocentric mechanism, without mathematical apparatus, and differed in some important details of geometric construction from De revolutionibus; but it was already based on the same assumptions regarding Earth's triple motions. The Commentariolus, which Copernicus consciously saw as merely a first sketch for his planned book, was not intended for printed distribution. He made only a very few manuscript copies available to his closest acquaintances, including, it seems, several Kraków astronomers with whom he collaborated in 1515–1530 in observing eclipses. Tycho Brahe would include a fragment from the Commentariolus in his own treatise, Astronomiae instauratae progymnasmata, published in Prague in 1602, based on a manuscript that he had received from the Bohemian physician and astronomer Tadeáš Hájek, a friend of Rheticus. The Commentariolus would appear complete in print for the first time only in 1878. Astronomical observations 1513–1516 In 1510 or 1512 Copernicus moved to Frombork, a town to the northwest at the Vistula Lagoon on the Baltic Sea coast. There, in April 1512, he participated in the election of Fabian of Lossainen as Prince-Bishop of Warmia. It was only in early June 1512 that the chapter gave Copernicus an "external curia"—a house outside the defensive walls of the cathedral mount. In 1514 he purchased the northwestern tower within the walls of the Frombork stronghold. He would maintain both these residences to the end of his life, despite the devastation of the chapter's buildings by a raid against Frauenburg carried out by the Teutonic Order in January 1520, during which Copernicus's astronomical instruments were probably destroyed. Copernicus conducted astronomical observations in 1513–1516 presumably from his external curia; and in 1522–1543, from an unidentified "small tower" (turricula), using primitive instruments modeled on ancient ones—the quadrant, triquetrum, armillary sphere. At Frombork Copernicus conducted over half of his more than 60 registered astronomical observations. Administrative duties in Warmia Having settled permanently at Frombork, where he would reside to the end of his life, with interruptions in 1516–1519 and 1520–21, Copernicus found himself at the Warmia chapter's economic and administrative center, which was also one of Warmia's two chief centers of political life. In the difficult, politically complex situation of Warmia, threatened externally by the Teutonic Order's aggressions (attacks by Teutonic bands; the Polish–Teutonic War of 1519–1521; Albert's plans to annex Warmia), internally subject to strong separatist pressures (the selection of the prince-bishops of Warmia; currency reform), he, together with part of the chapter, represented a program of strict cooperation with the Polish Crown and demonstrated in all his public activities (the defense of his country against the Order's plans of conquest; proposals to unify its monetary system with the Polish Crown's; support for Poland's interests in the Warmia dominion's ecclesiastic administration) that he was consciously a citizen of the Polish–Lithuanian Republic. 
Soon after the death of uncle Bishop Watzenrode, he participated in the signing of the Second Treaty of Piotrków Trybunalski (7 December 1512), governing the appointment of the Bishop of Warmia, declaring, despite opposition from part of the chapter, for loyal cooperation with the Polish Crown. That same year (before 8 November 1512) Copernicus assumed responsibility, as magister pistoriae, for administering the chapter's economic enterprises (he would hold this office again in 1530), having already since 1511 fulfilled the duties of chancellor and visitor of the chapter's estates. His administrative and economic duties did not distract Copernicus, in 1512–1515, from intensive observational activity. The results of his observations of Mars and Saturn in this period, and especially a series of four observations of the Sun made in 1515, led to the discovery of the variability of Earth's eccentricity and of the movement of the solar apogee in relation to the fixed stars, which in 1515–1519 prompted his first revisions of certain assumptions of his system. Some of the observations that he made in this period may have had a connection with a proposed reform of the Julian calendar made in the first half of 1513 at the request of the Bishop of Fossombrone, Paul of Middelburg. Their contacts in this matter in the period of the Fifth Lateran Council were later memorialized in a complimentary mention in Copernicus's dedicatory epistle in Dē revolutionibus orbium coelestium and in a treatise by Paul of Middelburg, Secundum compendium correctionis Calendarii (1516), which mentions Copernicus among the learned men who had sent the Council proposals for the calendar's emendation. During 1516–1521, Copernicus resided at Olsztyn (Allenstein) Castle as economic administrator of Warmia, including Olsztyn (Allenstein) and Pieniężno (Mehlsack). While there, he wrote a manuscript, Locationes mansorum desertorum (Locations of Deserted Fiefs), with a view to populating those fiefs with industrious farmers and so bolstering the economy of Warmia. When Olsztyn was besieged by the Teutonic Knights during the Polish–Teutonic War, Copernicus directed the defense of Olsztyn and Warmia by Royal Polish forces. He also represented the Polish side in the ensuing peace negotiations. Advisor on monetary reform Copernicus for years advised the Royal Prussian sejmik on monetary reform, particularly in the 1520s when that was a major question in regional Prussian politics. In 1526 he wrote a study on the value of money, "Monetae cudendae ratio". In it he formulated an early iteration of the theory called Gresham's law, that "bad" (debased) coinage drives "good" (un-debased) coinage out of circulation—several decades before Thomas Gresham. He also, in 1517, set down a quantity theory of money, a principal concept in modern economics. Copernicus's recommendations on monetary reform were widely read by leaders of both Prussia and Poland in their attempts to stabilize currency. Copernican system presented to the Pope In 1533, Johann Widmanstetter, secretary to Pope Clement VII, explained Copernicus's heliocentric system to the Pope and two cardinals. The Pope was so pleased that he gave Widmanstetter a valuable gift. In 1535 Bernard Wapowski wrote a letter to a gentleman in Vienna, urging him to publish an enclosed almanac, which he claimed had been written by Copernicus. This is the only mention of a Copernicus almanac in the historical records. The "almanac" was likely Copernicus's tables of planetary positions. 
Wapowski's letter mentions Copernicus's theory about the motions of the Earth. Nothing came of Wapowski's request, because he died a couple of weeks later. Following the death of Prince-Bishop of Warmia Mauritius Ferber (1 July 1537), Copernicus participated in the election of his successor, Johannes Dantiscus (20 September 1537). Copernicus was one of four candidates for the post, written in at the initiative of Tiedemann Giese; but his candidacy was actually pro forma, since Dantiscus had earlier been named coadjutor bishop to Ferber and since Dantiscus had the backing of Poland's King Sigismund I. At first Copernicus maintained friendly relations with the new Prince-Bishop, assisting him medically in spring 1538 and accompanying him that summer on an inspection tour of Chapter holdings. But that autumn, their friendship was strained by suspicions over Copernicus's housekeeper, Anna Schilling, whom Dantiscus banished from Frombork in spring 1539. Medical work In his younger days, Copernicus the physician had treated his uncle, brother and other chapter members. In later years he was called upon to attend the elderly bishops who in turn occupied the see of Warmia—Mauritius Ferber and Johannes Dantiscus—and, in 1539, his old friend Tiedemann Giese, Bishop of Chełmno (Kulm). In treating such important patients, he sometimes sought consultations from other physicians, including the physician to Duke Albert and, by letter, the Polish Royal Physician. In the spring of 1541, Duke Albert—former Grand Master of the Teutonic Order who had converted the Monastic State of the Teutonic Knights into a Lutheran and hereditary realm, the Duchy of Prussia, upon doing homage to his uncle, the King of Poland, Sigismund I—summoned Copernicus to Königsberg to attend the Duke's counselor, George von Kunheim, who had fallen seriously ill, and for whom the Prussian doctors seemed unable to do anything. Copernicus went willingly; he had met von Kunheim during negotiations over reform of the coinage. And Copernicus had come to feel that Albert himself was not such a bad person; the two had many intellectual interests in common. The Chapter readily gave Copernicus permission to go, as it wished to remain on good terms with the Duke, despite his Lutheran faith. In about a month the patient recovered, and Copernicus returned to Frombork. For a time, he continued to receive reports on von Kunheim's condition, and to send him medical advice by letter. Protestant attacks on the Copernican system Some of Copernicus's close friends turned Protestant, but Copernicus never showed a tendency in that direction. The first attacks on him came from Protestants. Wilhelm Gnapheus, a Dutch refugee settled in Elbląg, wrote a comedy in Latin, Morosophus (The Foolish Sage), and staged it at the Latin school that he had established there. In the play, Copernicus was caricatured as the eponymous Morosophus, a haughty, cold, aloof man who dabbled in astrology, considered himself inspired by God, and was rumored to have written a large work that was moldering in a chest. Elsewhere Protestants were the first to react to news of Copernicus's theory. Melanchthon wrote: Nevertheless, in 1551, eight years after Copernicus's death, astronomer Erasmus Reinhold published, under the sponsorship of Copernicus's former military adversary, the Protestant Duke Albert, the Prussian Tables, a set of astronomical tables based on Copernicus's work. Astronomers and astrologers quickly adopted it in place of its predecessors. 
Heliocentrism Some time before 1514 Copernicus made available to friends his "Commentariolus" ("Little Commentary"), a manuscript describing his ideas about the heliocentric hypothesis. It contained seven basic assumptions (detailed below). Thereafter he continued gathering data for a more detailed work. At about 1532, Copernicus had basically completed his work on the manuscript of Dē revolutionibus orbium coelestium; but despite urging by his closest friends, he resisted openly publishing his views, not wishing—as he confessed—to risk the scorn "to which he would expose himself on account of the novelty and incomprehensibility of his theses." Reception of the Copernican system in Rome In 1533, Johann Albrecht Widmannstetter delivered a series of lectures in Rome outlining Copernicus's theory. Pope Clement VII and several Catholic cardinals heard the lectures and were interested in the theory. On 1 November 1536, Cardinal Nikolaus von Schönberg, Archbishop of Capua, wrote to Copernicus from Rome: By then, Copernicus's work was nearing its definitive form, and rumors about his theory had reached educated people all over Europe. Despite urgings from many quarters, Copernicus delayed publication of his book, perhaps from fear of criticism—a fear delicately expressed in the subsequent dedication of his masterpiece to Pope Paul III. Scholars disagree on whether Copernicus's concern was limited to possible astronomical and philosophical objections, or whether he was also concerned about religious objections. De revolutionibus orbium coelestium Copernicus was still working on De revolutionibus orbium coelestium (even if not certain that he wanted to publish it) when in 1539 Georg Joachim Rheticus, a Wittenberg mathematician, arrived in Frombork. Philipp Melanchthon, a close theological ally of Martin Luther, had arranged for Rheticus to visit several astronomers and study with them. Rheticus became Copernicus's pupil, staying with him for two years and writing a book, Narratio prima (First Account), outlining the essence of Copernicus's theory. In 1542 Rheticus published a treatise on trigonometry by Copernicus (later included as chapters 13 and 14 of Book I of De revolutionibus). Under strong pressure from Rheticus, and having seen the favorable first general reception of his work, Copernicus finally agreed to give De revolutionibus to his close friend, Tiedemann Giese, bishop of Chełmno (Kulm), to be delivered to Rheticus for printing by the German printer Johannes Petreius at Nuremberg (Nürnberg), Germany. While Rheticus initially supervised the printing, he had to leave Nuremberg before it was completed, and he handed over the task of supervising the rest of the printing to a Lutheran theologian, Andreas Osiander. Osiander added an unauthorised and unsigned preface, defending Copernicus's work against those who might be offended by its novel hypotheses. He argued that "different hypotheses are sometimes offered for one and the same motion [and therefore] the astronomer will take as his first choice that hypothesis which is the easiest to grasp." According to Osiander, "these hypotheses need not be true nor even probable. [I]f they provide a calculus consistent with the observations, that alone is enough." Death Toward the close of 1542, Copernicus was seized with apoplexy and paralysis, and he died at age 70 on 24 May 1543. 
Legend has it that he was presented with the final printed pages of his Dē revolutionibus orbium coelestium on the very day that he died, allowing him to take farewell of his life's work. He is reputed to have awoken from a stroke-induced coma, looked at his book, and then died peacefully. Copernicus was reportedly buried in Frombork Cathedral, where a 1580 epitaph stood until being defaced; it was replaced in 1735. For over two centuries, archaeologists searched the cathedral in vain for Copernicus's remains. Efforts to locate them in 1802, 1909, 1939 had come to nought. In 2004 a team led by Jerzy Gąssowski, head of an archaeology and anthropology institute in Pułtusk, began a new search, guided by the research of historian Jerzy Sikorski. In August 2005, after scanning beneath the cathedral floor, they discovered what they believed to be Copernicus's remains. The discovery was announced only after further research, on 3 November 2008. Gąssowski said he was "almost 100 percent sure it is Copernicus". Forensic expert Capt. Dariusz Zajdel of the Polish Police Central Forensic Laboratory used the skull to reconstruct a face that closely resembled the features—including a broken nose and a scar above the left eye—on a Copernicus self-portrait. The expert also determined that the skull belonged to a man who had died around age 70—Copernicus's age at the time of his death. The grave was in poor condition, and not all the remains of the skeleton were found; missing, among other things, was the lower jaw. The DNA from the bones found in the grave matched hair samples taken from a book owned by Copernicus which was kept at the library of the University of Uppsala in Sweden. On 22 May 2010, Copernicus was given a second funeral in a Mass led by Józef Kowalczyk, the former papal nuncio to Poland and newly named Primate of Poland. Copernicus's remains were reburied in the same spot in Frombork Cathedral where part of his skull and other bones had been found. A black granite tombstone identifies him as the founder of the heliocentric theory and also a church canon. The tombstone bears a representation of Copernicus's model of the Solar System—a golden Sun encircled by six of the planets. Copernican system Predecessors Philolaus (c. 470 – c. 385 BCE) described an astronomical system in which a Central Fire (different from the Sun) occupied the centre of the universe, and a counter-Earth, the Earth, Moon, the Sun itself, planets, and stars all revolved around it, in that order outward from the centre. Heraclides Ponticus (387–312 BCE) proposed that the Earth rotates on its axis. Aristarchus of Samos (c. 310 BCE – c. 230 BCE) was the first to advance a theory that the Earth orbited the Sun. Further mathematical details of Aristarchus's heliocentric system were worked out around 150 BCE by the Hellenistic astronomer Seleucus of Seleucia. Though Aristarchus's original text has been lost, a reference in Archimedes' book The Sand Reckoner (Archimedis Syracusani Arenarius & Dimensio Circuli) describes a work by Aristarchus in which he advanced the heliocentric model. Thomas Heath gives the following English translation of Archimedes's text: In an early unpublished manuscript of De Revolutionibus (which still survives), Copernicus mentioned the (non-heliocentric) 'moving Earth' theory of Philolaus and the possibility that Aristarchus also had a 'moving Earth' theory (though it is unlikely that he was aware that it was a heliocentric theory). He removed both references from his final published manuscript. 
Copernicus was probably aware that Pythagoras's system involved a moving Earth. The Pythagorean system was mentioned by Aristotle. Copernicus owned a copy of Giorgio Valla's De expetendis et fugiendis rebus, which included a translation of Plutarch's reference to Aristarchus's heliostaticism. In Copernicus's dedication of On the Revolutions to Pope Paul III—which Copernicus hoped would dampen criticism of his heliocentric theory by "babblers ... completely ignorant of [astronomy]"—the book's author wrote that, in rereading all of philosophy, in the pages of Cicero and Plutarch he had found references to those few thinkers who dared to move the Earth "against the traditional opinion of astronomers and almost against common sense." The prevailing theory during Copernicus's lifetime was the one that Ptolemy published in his Almagest ; the Earth was the stationary center of the universe. Stars were embedded in a large outer sphere that rotated rapidly, approximately daily, while each of the planets, the Sun, and the Moon were embedded in their own, smaller spheres. Ptolemy's system employed devices, including epicycles, deferents and equants, to account for observations that the paths of these bodies differed from simple, circular orbits centered on the Earth. Beginning in the 10th century, a tradition criticizing Ptolemy developed within Islamic astronomy, which climaxed with Ibn al-Haytham of Basra's Al-Shukūk 'alā Baṭalamiyūs ("Doubts Concerning Ptolemy"). Several Islamic astronomers questioned the Earth's apparent immobility, and centrality within the universe. Some accepted that the earth rotates around its axis, such as Abu Sa'id al-Sijzi (d. ). According to al-Biruni, al-Sijzi invented an astrolabe based on a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky." That others besides al-Sijzi held this view is further confirmed by a reference from an Arabic work in the 13th century which states: According to the geometers [or engineers] (muhandisīn), the earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the earth and not the stars. In the 12th century, Nur ad-Din al-Bitruji proposed a complete alternative to the Ptolemaic system (although not heliocentric). He declared the Ptolemaic system as an imaginary model, successful at predicting planetary positions, but not real or physical. Al-Bitruji's alternative system spread through most of Europe during the 13th century, with debates and refutations of his ideas continued up to the 16th century. Mathematical techniques developed in the 13th to 14th centuries by Mo'ayyeduddin al-Urdi, Nasir al-Din al-Tusi, and Ibn al-Shatir for geocentric models of planetary motions closely resemble some of those used later by Copernicus in his heliocentric models. Copernicus used what is now known as the Urdi lemma and the Tusi couple in the same planetary models as found in Arabic sources. Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by Ibn al-Shatir (d. c. 1375) of Damascus. Ibn al-Shatir's lunar and Mercury models are also identical to those of Copernicus. This has led some scholars to argue that Copernicus must have had access to some yet to be identified work on the ideas of those earlier astronomers. 
However, no likely candidate for this conjectured work has yet come to light, and other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition. Nevertheless, Copernicus cited some of the Islamic astronomers whose theories and observations he used in De Revolutionibus, namely al-Battani, Thabit ibn Qurra, al-Zarqali, Averroes, and al-Bitruji. It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science; Gregory Chioniades translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi-couple are still extant in Italy. Copernicus Copernicus's major work on his heliocentric theory was Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), published in the year of his death, 1543. He had formulated his theory by 1510. "He wrote out a short overview of his new heavenly arrangement [known as the Commentariolus, or Brief Sketch], also probably in 1510 [but no later than May 1514], and sent it off to at least one correspondent beyond Varmia [the Latin for "Warmia"]. That person in turn copied the document for further circulation, and presumably the new recipients did, too ...". Copernicus's Commentariolus summarized his heliocentric theory. It listed the "assumptions" upon which the theory was based, as follows: There is no one center of all the celestial circles or spheres. The center of the earth is not the center of the universe, but only the center towards which heavy bodies move and the center of the lunar sphere. All the spheres surround the sun as if it were in the middle of them all, and therefore the center of the universe is near the sun. The ratio of the earth's distance from the sun to the height of the firmament (outermost celestial sphere containing the stars) is so much smaller than the ratio of the earth's radius to its distance from the sun that the distance from the earth to the sun is imperceptible in comparison with the height of the firmament. Whatever motion appears in the firmament arises not from any motion of the firmament, but from the earth's motion. The earth together with its circumjacent elements performs a complete rotation on its fixed poles in a daily motion, while the firmament and highest heaven abide unchanged. What appear to us as motions of the sun arise not from its motion but from the motion of the earth and our sphere, with which we revolve about the sun like any other planet. The earth has, then, more than one motion. The apparent retrograde and direct motion of the planets arises not from their motion but from the earth's. The motion of the earth alone, therefore, suffices to explain so many apparent inequalities in the heavens. 
De revolutionibus itself was divided into six sections or parts, called "books": General vision of the heliocentric theory, and a summarized exposition of his idea of the World Mainly theoretical, presents the principles of spherical astronomy and a list of stars (as a basis for the arguments developed in the subsequent books) Mainly dedicated to the apparent motions of the Sun and to related phenomena Description of the Moon and its orbital motions Exposition of the motions in longitude of the non-terrestrial planets Exposition of the motions in latitude of the non-terrestrial planets Successors Georg Joachim Rheticus could have been Copernicus's successor, but did not rise to the occasion. Erasmus Reinhold could have been his successor, but died prematurely. The first of the great successors was Tycho Brahe (though he did not think the Earth orbited the Sun), followed by Johannes Kepler, who had collaborated with Tycho in Prague and benefited from Tycho's decades' worth of detailed observational data. Despite the near universal acceptance later of the heliocentric idea (though not the epicycles or the circular orbits), Copernicus's theory was originally slow to catch on. Scholars hold that sixty years after the publication of The Revolutions there were only around 15 astronomers espousing Copernicanism in all of Europe: "Thomas Digges and Thomas Harriot in England; Giordano Bruno and Galileo Galilei in Italy; Diego Zuniga in Spain; Simon Stevin in the Low Countries; and in Germany, the largest group—Georg Joachim Rheticus, Michael Maestlin, Christoph Rothmann (who may have later recanted), and Johannes Kepler." Additional possibilities are Englishman William Gilbert, along with Achilles Gasser, Georg Vogelin, Valentin Otto, and Tiedemann Giese. The Barnabite priest Redento Baranzano supported Copernicus's view in his Uranoscopia (1617) but was forced to retract it. Arthur Koestler, in his popular book The Sleepwalkers, asserted that Copernicus's book had not been widely read on its first publication. This claim was trenchantly criticised by Edward Rosen, and has been decisively disproved by Owen Gingerich, who examined nearly every surviving copy of the first two editions and found copious marginal notes by their owners throughout many of them. Gingerich published his conclusions in 2004 in The Book Nobody Read. The intellectual climate of the time "remained dominated by Aristotelian philosophy and the corresponding Ptolemaic astronomy. At that time there was no reason to accept the Copernican theory, except for its mathematical simplicity [by avoiding using the equant in determining planetary positions]." Tycho Brahe's system ("that the earth is stationary, the sun revolves about the earth, and the other planets revolve about the sun") also directly competed with Copernicus's. It was only a half-century later with the work of Kepler and Galileo that any substantial evidence defending Copernicanism appeared, starting "from the time when Galileo formulated the principle of inertia ... [which] helped to explain why everything would not fall off the earth if it were in motion." "[Not until] after Isaac Newton formulated the universal law of gravitation and the laws of mechanics [in his 1687 Principia], which unified terrestrial and celestial mechanics, was the heliocentric view generally accepted." Controversy The immediate result of the 1543 publication of Copernicus's book was only mild controversy. 
At the Council of Trent (1545–1563) neither Copernicus's theory nor calendar reform (which would later use tables deduced from Copernicus's calculations) was discussed. It has been much debated why it was not until six decades after the publication of De revolutionibus that the Catholic Church took any official action against it, with even the efforts of Tolosani going unheeded. Opposition from the Catholic side commenced only seventy-three years later, when it was occasioned by Galileo. Tolosani The first notable to move against Copernicanism was the Magister of the Holy Palace (i.e., the Catholic Church's chief censor), Dominican Bartolomeo Spina, who "expressed a desire to stamp out the Copernican doctrine". But with Spina's death in 1546, his cause fell to his friend, the well-known theologian-astronomer, the Dominican Giovanni Maria Tolosani of the Convent of St. Mark in Florence. Tolosani had written a treatise on reforming the calendar (in which astronomy would play a large role) and had attended the Fifth Lateran Council (1512–1517) to discuss the matter. He had obtained a copy of De Revolutionibus in 1544. His denunciation of Copernicanism was written a year later, in 1545, in an appendix to his unpublished work, On the Truth of Sacred Scripture. Emulating the rationalistic style of Thomas Aquinas, Tolosani sought to refute Copernicanism by philosophical argument. Copernicanism was absurd, according to Tolosani, because it was scientifically unproven and unfounded. First, Copernicus had assumed the motion of the Earth but offered no physical theory whereby one would deduce this motion. (No one realized that the investigation into Copernicanism would result in a rethinking of the entire field of physics.) Second, Tolosani charged that Copernicus's thought process was backwards. He held that Copernicus had come up with his idea and then sought phenomena that would support it, rather than observing phenomena and deducing from them the idea of what caused them. In this, Tolosani was linking Copernicus's mathematical equations with the practices of the Pythagoreans, against whom Aristotle had made arguments that were later picked up by Thomas Aquinas. It was argued that mathematical numbers were a mere product of the intellect without any physical reality, and as such could not provide physical causes in the investigation of nature. Some astronomical hypotheses at the time (such as epicycles and eccentrics) were seen as mere mathematical devices to adjust calculations of where the heavenly bodies would appear, rather than an explanation of the cause of those motions. (As Copernicus still maintained the idea of perfectly spherical orbits, he relied on epicycles.) This "saving the phenomena" was seen as proof that astronomy and mathematics could not be taken as serious means to determine physical causes. Tolosani invoked this view in his final critique of Copernicus, saying that his biggest error was that he had started with "inferior" fields of science to make pronouncements about "superior" fields. Copernicus had used mathematics and astronomy to postulate about physics and cosmology, rather than beginning with the accepted principles of physics and cosmology to determine things about astronomy and mathematics. Thus Copernicus seemed to be undermining the whole system of the philosophy of science at the time. 
Tolosani held that Copernicus had fallen into philosophical error because he had not been versed in physics and logic; anyone without such knowledge would make a poor astronomer and be unable to distinguish truth from falsehood. Because Copernicanism had not met the criteria for scientific truth set out by Thomas Aquinas, Tolosani held that it could only be viewed as a wild unproven theory. Tolosani recognized that the Ad Lectorem preface to Copernicus's book was not actually by him. Its thesis that astronomy as a whole would never be able to make truth claims was rejected by Tolosani (though he still held that Copernicus's attempt to describe physical reality had been faulty); he found it ridiculous that Ad Lectorem had been included in the book (unaware that Copernicus had not authorized its inclusion). Tolosani wrote: "By means of these words [of the Ad Lectorem], the foolishness of this book's author is rebuked. For by a foolish effort he [Copernicus] tried to revive the weak Pythagorean opinion [that the element of fire was at the center of the Universe], long ago deservedly destroyed, since it is expressly contrary to human reason and also opposes holy writ. From this situation, there could easily arise disagreements between Catholic expositors of holy scripture and those who might wish to adhere obstinately to this false opinion." Tolosani declared: "Nicolaus Copernicus neither read nor understood the arguments of Aristotle the philosopher and Ptolemy the astronomer." Tolosani wrote that Copernicus "is expert indeed in the sciences of mathematics and astronomy, but he is very deficient in the sciences of physics and logic. Moreover, it appears that he is unskilled with regard to [the interpretation of] holy scripture, since he contradicts several of its principles, not without danger of infidelity to himself and the readers of his book. ... his arguments have no force and can very easily be taken apart. For it is stupid to contradict an opinion accepted by everyone over a very long time for the strongest reasons, unless the impugner uses more powerful and insoluble demonstrations and completely dissolves the opposed reasons. But he does not do this in the least." Tolosani declared that he had written against Copernicus "for the purpose of preserving the truth to the common advantage of the Holy Church." Despite this, his work remained unpublished and there is no evidence that it received serious consideration. Robert Westman describes it as becoming a "dormant" viewpoint with "no audience in the Catholic world" of the late sixteenth century, but also notes that there is some evidence that it did become known to Tommaso Caccini, who would criticize Galileo in a sermon in December 1613. Theology Tolosani may have criticized the Copernican theory as scientifically unproven and unfounded, but the theory also conflicted with the theology of the time, as can be seen in a sample of the works of John Calvin. In his Commentary on Genesis he said that "We indeed are not ignorant that the circuit of the heavens is finite, and that the earth, like a little globe, is placed in the centre." In his commentary on Psalms 93:1 he states that "The heavens revolve daily, and, immense as is their fabric and inconceivable the rapidity of their revolutions, we experience no concussion ... How could the earth hang suspended in the air were it not upheld by God's hand? 
By what means could it maintain itself unmoved, while the heavens above are in constant rapid motion, did not its Divine Maker fix and establish it." One sharp point of conflict between Copernicus's theory and the Bible concerned the story of the Battle of Gibeon in the Book of Joshua, where the Hebrew forces were winning but whose opponents were likely to escape once night fell. This is averted by Joshua's prayers causing the Sun and the Moon to stand still. Martin Luther once made a remark about Copernicus, although without mentioning his name. According to Anthony Lauterbach, the topic of Copernicus arose during dinner with Martin Luther on 4 June 1539 (in the same year as professor George Joachim Rheticus of the local university had been granted leave to visit him). Luther is said to have remarked "So it goes now. Whoever wants to be clever must agree with nothing others esteem. He must do something of his own. This is what that fellow does who wishes to turn the whole of astronomy upside down. Even in these things that are thrown into disorder I believe the Holy Scriptures, for Joshua commanded the sun to stand still and not the earth." These remarks were made four years before the publication of On the Revolutions of the Heavenly Spheres and a year before Rheticus's Narratio Prima. In John Aurifaber's account of the conversation Luther calls Copernicus "that fool" rather than "that fellow"; this version is viewed by historians as less reliably sourced. Luther's collaborator Philipp Melanchthon also took issue with Copernicanism. After receiving the first pages of Narratio Prima from Rheticus himself, Melanchthon wrote to Mithobius (physician and mathematician Burkard Mithob of Feldkirch) on 16 October 1541 condemning the theory and calling for it to be repressed by governmental force, writing "certain people believe it is a marvelous achievement to extol so crazy a thing, like that Polish astronomer who makes the earth move and the sun stand still. Really, wise governments ought to repress impudence of mind." It had appeared to Rheticus that Melanchthon would understand the theory and would be open to it. This was because Melanchthon had taught Ptolemaic astronomy and had even recommended his friend Rheticus to an appointment to the Deanship of the Faculty of Arts & Sciences at the University of Wittenberg after he had returned from studying with Copernicus. Rheticus's hopes were dashed when, six years after the publication of De Revolutionibus, Melanchthon published his Initia Doctrinae Physicae presenting three grounds to reject Copernicanism. These were "the evidence of the senses, the thousand-year consensus of men of science, and the authority of the Bible". Blasting the new theory, Melanchthon wrote: "Out of love for novelty or in order to make a show of their cleverness, some people have argued that the earth moves. They maintain that neither the eighth sphere nor the sun moves, whereas they attribute motion to the other celestial spheres, and also place the earth among the heavenly bodies. Nor were these jokes invented recently. There is still extant Archimedes's book on The Sand Reckoner; in which he reports that Aristarchus of Samos propounded the paradox that the sun stands still and the earth revolves around the sun. Even though subtle experts institute many investigations for the sake of exercising their ingenuity, nevertheless public proclamation of absurd opinions is indecent and sets a harmful example." 
Melanchthon went on to cite Bible passages and then declare "Encouraged by this divine evidence, let us cherish the truth and let us not permit ourselves to be alienated from it by the tricks of those who deem it an intellectual honor to introduce confusion into the arts." In the first edition of Initia Doctrinae Physicae, Melanchthon even questioned Copernicus's character, claiming his motivation was "either from love of novelty or from desire to appear clever"; these more personal attacks were largely removed by the second edition in 1550. Another Protestant theologian who disparaged heliocentrism on scriptural grounds was John Owen. In a passing remark in an essay on the origin of the sabbath, he characterised "the late hypothesis, fixing the sun as in the centre of the world" as being "built on fallible phenomena, and advanced by many arbitrary presumptions against evident testimonies of Scripture." In Roman Catholic circles, Copernicus's book was incorporated into scholarly curricula throughout the 16th century. For example, at the University of Salamanca in 1561 it became one of four textbooks that students of astronomy could choose from, and in 1594 it was made mandatory. German Jesuit Nicolaus Serarius was one of the first Catholics to write against Copernicus's theory as heretical, citing the Joshua passage, in a work published in 1609–1610, and again in a book in 1612. In his 12 April 1615 letter to a Catholic defender of Copernicus, Paolo Antonio Foscarini, Catholic Cardinal Robert Bellarmine condemned Copernican theory, writing, "not only the Holy Fathers, but also the modern commentaries on Genesis, the Psalms, Ecclesiastes, and Joshua, you will find all agreeing in the literal interpretation that the sun is in heaven and turns around the earth with great speed, and that the earth is very far from heaven and sits motionless at the center of the world ... Nor can one answer that this is not a matter of faith, since if it is not a matter of faith 'as regards the topic,' it is a matter of faith 'as regards the speaker': and so it would be heretical to say that Abraham did not have two children and Jacob twelve, as well as to say that Christ was not born of a virgin, because both are said by the Holy Spirit through the mouth of prophets and apostles." One year later, the Roman Inquisition prohibited Copernicus's work. Nevertheless, the Spanish Inquisition never banned De revolutionibus, which continued to be taught at Salamanca. Ingoli Perhaps the most influential opponent of the Copernican theory was Francesco Ingoli, a Catholic priest. Ingoli wrote a January 1616 essay to Galileo presenting more than twenty arguments against the Copernican theory. Though "it is not certain, it is probable that he [Ingoli] was commissioned by the Inquisition to write an expert opinion on the controversy" (after the Congregation of the Index's decree against Copernicanism on 5 March 1616, Ingoli was officially appointed its consultant). Galileo himself was of the opinion that the essay played an important role in the rejection of the theory by church authorities, writing in a later letter to Ingoli that he was concerned that people thought the theory was rejected because Ingoli was right. Ingoli presented five physical arguments against the theory, thirteen mathematical arguments (plus a separate discussion of the sizes of stars), and four theological arguments. 
The physical and mathematical arguments were of uneven quality, but many of them came directly from the writings of Tycho Brahe, and Ingoli repeatedly cited Brahe, the leading astronomer of the era. These included arguments about the effect of a moving Earth on the trajectory of projectiles, and about parallax and Brahe's argument that the Copernican theory required that stars be absurdly large. Two of Ingoli's theological issues with the Copernican theory were "common Catholic beliefs not directly traceable to Scripture: the doctrine that hell is located at the center of Earth and is most distant from heaven; and the explicit assertion that Earth is motionless in a hymn sung on Tuesdays as part of the Liturgy of the Hours of the Divine Office prayers regularly recited by priests." Ingoli cited Robert Bellarmine in regard to both of these arguments, and may have been trying to convey to Galileo a sense of Bellarmine's opinion. Ingoli also cited Genesis 1:14 where God places "lights in the firmament of the heavens to divide the day from the night." Ingoli did not think the central location of the Sun in the Copernican theory was compatible with its being described as one of the lights placed in the firmament. Like previous commentators, Ingoli also pointed to the passages about the Battle of Gibeon. He dismissed arguments that they should be taken metaphorically, saying "Replies which assert that Scripture speaks according to our mode of understanding are not satisfactory: both because in explaining the Sacred Writings the rule is always to preserve the literal sense, when it is possible, as it is in this case; and also because all the [Church] Fathers unanimously take this passage to mean that the Sun which was truly moving stopped at Joshua's request. An interpretation that is contrary to the unanimous consent of the Fathers is condemned by the Council of Trent, Session IV, in the decree on the edition and use of the Sacred Books. Furthermore, although the Council speaks about matters of faith and morals, nevertheless it cannot be denied that the Holy Fathers would be displeased with an interpretation of Sacred Scriptures which is contrary to their common agreement." However, Ingoli closed the essay by suggesting Galileo respond primarily to the better of his physical and mathematical arguments rather than to his theological arguments, writing "Let it be your choice to respond to this either entirely or in part—clearly at least to the mathematical and physical arguments, and not to all even of these, but to the more weighty ones." When Galileo wrote a letter in reply to Ingoli years later, he in fact only addressed the mathematical and physical arguments. In March 1616, in connection with the Galileo affair, the Roman Catholic Church's Congregation of the Index issued a decree suspending De revolutionibus until it could be "corrected," on the grounds of ensuring that Copernicanism, which it described as a "false Pythagorean doctrine, altogether contrary to the Holy Scripture," would not "creep any further to the prejudice of Catholic truth." The corrections consisted largely of removing or altering wording that spoke of heliocentrism as a fact, rather than a hypothesis. The corrections were made based largely on work by Ingoli. Galileo On the orders of Pope Paul V, Cardinal Robert Bellarmine gave Galileo prior notice that the decree was about to be issued, and warned him that he could not "hold or defend" the Copernican doctrine. 
The corrections to De revolutionibus, which omitted or altered nine sentences, were issued four years later, in 1620. In 1633, Galileo Galilei was convicted of grave suspicion of heresy for "following the position of Copernicus, which is contrary to the true sense and authority of Holy Scripture", and was placed under house arrest for the rest of his life. At the instance of Roger Boscovich, the Catholic Church's 1758 Index of Prohibited Books omitted the general prohibition of works defending heliocentrism, but retained the specific prohibitions of the original uncensored versions of De revolutionibus and Galileo's Dialogue Concerning the Two Chief World Systems. Those prohibitions were finally dropped from the 1835 Index. Languages, name, nationality Languages Copernicus is postulated to have spoken Latin, German, and Polish with equal fluency; he also spoke Greek and Italian. The vast majority of Copernicus's extant writings are in Latin, the language of European academia in his lifetime. Arguments for German being Copernicus's native tongue are that he was born into a predominantly German-speaking urban patrician class using German, next to Latin, as language of trade and commerce in written documents, and that, while studying canon law at the University of Bologna in 1496, he signed into the German natio (Natio Germanorum)—a student organization which, according to its 1497 by-laws, was open to students of all kingdoms and states whose mother-tongue was German. However, according to French philosopher Alexandre Koyré, Copernicus's registration with the Natio Germanorum does not in itself imply that Copernicus considered himself German, since students from Prussia and Silesia were routinely so categorized, which carried certain privileges that made it a natural choice for German-speaking students, regardless of their ethnicity or self-identification. Name The surname Kopernik, Copernik, Koppernigk, in various spellings, is recorded in Kraków from c. 1350, apparently given to people from the village of Koperniki (prior to 1845 rendered Kopernik, Copernik, Copirnik, and Koppirnik) in the Duchy of Nysa, 10 km south of Nysa, and now 10 km north of the Polish-Czech border. Nicolaus Copernicus's great-grandfather is recorded as having received citizenship in Kraków in 1386. The toponym Kopernik (modern Koperniki) has been variously tied to the Polish word for "dill" (koper) and the German word for "copper" (Kupfer). The suffix -nik (or plural, -niki) denotes a Slavic and Polish agent noun. As was common in the period, the spellings of both the toponym and the surname vary greatly. Copernicus "was rather indifferent about orthography". During his childhood, about 1480, the name of his father (and thus of the future astronomer) was recorded in Thorn as Niclas Koppernigk. At Kraków he signed himself, in Latin, Nicolaus Nicolai de Torunia (Nicolaus, son of Nicolaus, of Toruń). At Bologna, in 1496, he registered in the Matricula Nobilissimi Germanorum Collegii, resp. Annales Clarissimae Nacionis Germanorum, of the Natio Germanica Bononiae, as Dominus Nicolaus Kopperlingk de Thorn – IX grosseti. At Padua he signed himself "Nicolaus Copernik", later "Coppernicus". The astronomer thus Latinized his name to Coppernicus, generally with two "p"s (in 23 of 31 documents studied), but later in life he used a single "p". On the title page of De revolutionibus, Rheticus published the name (in the genitive, or possessive, case) as "Nicolai Copernici". 
Nationality There has been discussion of Copernicus's nationality and of whether it is meaningful to ascribe to him a nationality in the modern sense. Nicolaus Copernicus was born and raised in Royal Prussia, a semiautonomous and multilingual region of the Kingdom of Poland. He was the child of German-speaking parents and grew up with German as his mother tongue. His first alma mater was the University of Kraków in Poland. When he later studied in Italy, at the University of Bologna, he joined the German Nation, a student organization for German-speakers of all allegiances (Germany would not become a nation-state until 1871). His family stood against the Teutonic Order and actively supported the city of Toruń during the Thirteen Years' War. Copernicus's father lent money to Poland's King Casimir IV Jagiellon to finance the war against the Teutonic Knights, but the inhabitants of Royal Prussia also resisted the Polish crown's efforts for greater control over the region. Encyclopedia Americana, The Concise Columbia Encyclopedia, The Oxford World Encyclopedia, and World Book Encyclopedia refer to Copernicus as a "Polish astronomer". Sheila Rabin, writing in the Stanford Encyclopedia of Philosophy, describes Copernicus as a "child of a German family [who] was a subject of the Polish crown", while Manfred Weissenbacher writes that Copernicus's father was a Germanized Pole. It has been noted that most of the 19th- and 20th-century encyclopedias, particularly the English-language sources, described Copernicus as a "German scientist". Kasparek and Kasparek stated that it is incorrect to ascribe him German or Polish nationality, as "a 16th century figure cannot be described with the use of 19th and 20th century concepts". No Polish texts by Copernicus survive due to the rarity of the Polish language in literature before the writings of the Polish Renaissance poets Mikołaj Rej and Jan Kochanowski (educated Poles had generally written in Latin); but it is known that Copernicus knew Polish on a par with German and Latin. Historian Michael Burleigh describes the nationality debate as a "totally insignificant battle" between German and Polish scholars during the interwar period. Polish astronomer Konrad Rudnicki calls the discussion a "fierce scholarly quarrel in ... times of nationalism" and describes Copernicus as an inhabitant of a German-speaking territory that belonged to Poland, himself being of mixed Polish-German extraction. Czesław Miłosz describes the debate as an "absurd" projection of a modern understanding of nationality onto Renaissance people, who identified with their home territories rather than with a nation. Similarly, historian Norman Davies writes that Copernicus, as was common in his era, was "largely indifferent" to nationality, being a local patriot who considered himself "Prussian". Miłosz and Davies both write that Copernicus had a German-language cultural background, while his working language was Latin in accord with the usage of the time. Additionally, according to Davies, "there is ample evidence that he knew the Polish language". Davies concludes that, "Taking everything into consideration, there is good reason to regard him both as a German and as a Pole: and yet, in the sense that modern nationalists understand it, he was neither." Commemoration Orbiting Astronomical Observatory 3 The third in NASA's Orbiting Astronomical Observatory series of missions, launched on 21 August 1972, was named Copernicus after its successful launch. 
The satellite carried an X-ray detector and an ultraviolet telescope, and operated until February 1981. Copernicia Copernicia, a genus of palm trees native to South America and the Greater Antilles, was named after Copernicus in 1837. In some of the species, the leaves are coated with a thin layer of wax, known as carnauba wax. Copernicium On 14 July 2009, the discoverers, from the Gesellschaft für Schwerionenforschung in Darmstadt, Germany, of chemical element 112 (temporarily named ununbium) proposed to the International Union of Pure and Applied Chemistry (IUPAC) that its permanent name be "copernicium" (symbol Cn). "After we had named elements after our city and our state, we wanted to make a statement with a name that was known to everyone," said Hofmann. "We didn't want to select someone who was a German. We were looking world-wide." On the 537th anniversary of his birthday the name became official. 55 Cancri A In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced that the winning name for 55 Cancri A was Copernicus. Poland Copernicus is commemorated by the Nicolaus Copernicus Monument in Warsaw, designed by Bertel Thorvaldsen (1822), completed in 1830; and by Jan Matejko's 1873 painting, Astronomer Copernicus, or Conversations with God. Named for Copernicus are Nicolaus Copernicus University in Toruń; Warsaw's Copernicus Science Centre; the Centrum Astronomiczne im. Mikołaja Kopernika (a principal Polish research institution in astrophysics); and Copernicus Hospital in Poland's fourth-largest city, Łódź. In arts and literature Contemporary literary and artistic works inspired by Copernicus include: Symphony No. 2 (Górecki), a choral symphony, by composer Henryk Górecki, commissioned by the Kosciuszko Foundation. The piece was composed in honor of the 500th anniversary of the birthday of Nicolaus Copernicus. Mover of the Earth, Stopper of the Sun, overture for symphony orchestra, by composer Svitlana Azarova, commissioned by ONDIF. Doctor Copernicus, 1975 novel by John Banville, sketching the life of Copernicus and the 16th-century world in which he lived. Orb: On the Movements of the Earth, a Japanese manga series from 2020, later adapted into anime See also Copernican principle Copernicus Science Centre History of philosophy in Poland, Renaissance List of multiple discoveries List of Roman Catholic scientist-clerics Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences Notes References Sources Davies, Norman, God's Playground: A History of Poland, 2 vols., New York, Columbia University Press, 1982. Dobrzycki, Jerzy, and Leszek Hajdukiewicz, "Kopernik, Mikołaj", Polski słownik biograficzny (Polish Biographical Dictionary), vol. XIV, Wrocław, Polish Academy of Sciences, 1969, pp. 3–16. Miłosz, Czesław, The History of Polish Literature, second edition, Berkeley, University of California Press, 1969. Mizwa, Stephen, Nicolaus Copernicus, 1543–1943, Kessinger Publishing, 1943. 
External links Primary sources De Revolutionibus, autograph manuscript – Full digital facsimile, Jagiellonian University Polish translations of letters written by Copernicus in Latin or German Online Galleries, History of Science Collections, University of Oklahoma Libraries High resolution images of works by and/or portraits of Nicolaus Copernicus in .jpg and .tiff format. Works by Nicolaus Copernicus in digital library Polona General Copernicus in Torun Copernicus House, District Museum in Toruń Nicolaus Copernicus Thorunensis by the Copernican Academic Portal Nicolaus Copernicus Museum in Frombork Portraits of Copernicus: Copernicus's face reconstructed; Portrait ; Nicolaus Copernicus Copernicus and Astrology Stanford Encyclopedia of Philosophy entry 'Body of Copernicus' identified – BBC article including image of Copernicus using facial reconstruction based on located skull Nicolaus Copernicus on the 1000 Polish Zloty banknote. Copernicus's model for Mars Retrograde Motion Copernicus's explanation for retrograde motion Geometry of Maximum Elongation Copernican Model Portraits of Nicolaus Copernicus About De Revolutionibus The Copernican Universe from the De Revolutionibus De Revolutionibus, 1543 first edition – Full digital facsimile, Lehigh University The text of the De Revolutionibus Digitized edition of De Revolutionibus Orbium Coelestium (1543) with annotations of Michael Maestlin on e-rara Prizes Nicolaus Copernicus Prize, founded by the City of Kraków, awarded since 1995 German-Polish cooperation German-Polish "Copernicus Prize" awarded to German and Polish scientists (DFG website) Büro Kopernikus – An initiative of German Federal Cultural Foundation German-Polish school project on Copernicus 1473 births 1543 deaths 16th-century German writers 16th-century German male writers 16th-century writers in Latin 16th-century mathematicians 16th-century Polish writers Anglican saints Burials at Frombork Cathedral Canons of Warmia Copernican Revolution 16th-century German astronomers German economists 16th-century German philosophers German Roman Catholics Jagiellonian University alumni People celebrated in the Lutheran liturgical calendar People from Royal Prussia People from Toruń People of the Polish–Teutonic War (1519–1521) 15th-century Polish astronomers Polish economists 16th-century Polish philosophers 16th-century Polish scientists Polish Roman Catholic writers Catholic clergy scientists University of Bologna alumni University of Ferrara alumni University of Padua alumni 16th-century German mathematicians 16th-century Polish astronomers Polish writers in Latin 16th-century economists 15th-century German philosophers Canon law jurists
Nicolaus Copernicus
[ "Astronomy" ]
17,440
[ "Copernican Revolution", "History of astronomy" ]
323,608
https://en.wikipedia.org/wiki/Global%20alert
Global alert is used as the global radio-communications network during times of international crises or threats to international security. Global alerts are also issued by agencies such as the World Health Organization (WHO) when there is a perceived threat of an international pandemic (global epidemic), such as the threat of a SARS (severe acute respiratory syndrome) pandemic during March 2003, owing to the disease's high contagiousness and its rapid spread by travelers sharing international flights. The global alert released by the World Health Organization regarding the SARS outbreak and its rapid contagion saved many lives: the alert about the disease, precautionary measures, and preventive measures to be taken by individuals, including the specific hygiene information needed to arrest the spread of SARS, was communicated instantly throughout the world. Global Outbreak Alert & Response Network (GOARN) GOARN is a system of cooperating institutions and networks that are constantly ready to respond to disease outbreaks. Established in 2000, it is a branch of the World Health Organization. GOARN's partners include the Red Cross and divisions of the United Nations such as UNICEF and UNHCR. In addition to providing aid to areas affected by disease outbreaks, GOARN also works to standardize protocols for medical response systems. References Alert measurement systems
Global alert
[ "Technology" ]
265
[ "Warning systems", "Alert measurement systems" ]
323,631
https://en.wikipedia.org/wiki/Wieferich%20prime
In number theory, a Wieferich prime is a prime number p such that p^2 divides 2^(p−1) − 1, therefore connecting these primes with Fermat's little theorem, which states that every odd prime p divides 2^(p−1) − 1. Wieferich primes were first described by Arthur Wieferich in 1909 in works pertaining to Fermat's Last Theorem, at which time both of Fermat's theorems were already well known to mathematicians. Since then, connections between Wieferich primes and various other topics in mathematics have been discovered, including other types of numbers and primes, such as Mersenne and Fermat numbers, specific types of pseudoprimes and some types of numbers generalized from the original definition of a Wieferich prime. Over time, those connections discovered have extended to cover more properties of certain prime numbers as well as more general subjects such as number fields and the abc conjecture. To date, the only known Wieferich primes are 1093 and 3511. Equivalent definitions The stronger version of Fermat's little theorem, which a Wieferich prime satisfies, is usually expressed as a congruence relation 2^(p−1) ≡ 1 (mod p^2). From the definition of the congruence relation on integers, it follows that this property is equivalent to the definition given at the beginning. Thus if a prime p satisfies this congruence, this prime divides the Fermat quotient (2^(p−1) − 1)/p. The following are two illustrative examples using the primes 11 and 1093: For p = 11, we get (2^10 − 1)/11, which is 93 and leaves a remainder of 5 after division by 11; hence 11 is not a Wieferich prime. For p = 1093, we get (2^1092 − 1)/1093, or 485439490310...852893958515 (302 intermediate digits omitted for clarity), which leaves a remainder of 0 after division by 1093, and thus 1093 is a Wieferich prime. Wieferich primes can be defined by other equivalent congruences. If p is a Wieferich prime, one can multiply both sides of the congruence by 2 to get 2^p ≡ 2 (mod p^2). Raising both sides of the congruence to the power p shows that a Wieferich prime also satisfies 2^(p^2) ≡ 2 (mod p^2), and hence 2^(p^k) ≡ 2 (mod p^2) for all k ≥ 1. The converse is also true: 2^(p^k) ≡ 2 (mod p^2) for some k ≥ 1 implies that the multiplicative order of 2 modulo p^2 divides gcd(p^k − 1, φ(p^2)) = p − 1, that is, 2^(p−1) ≡ 1 (mod p^2), and thus p is a Wieferich prime. This also implies that Wieferich primes can be defined as primes p such that the multiplicative orders of 2 modulo p and modulo p^2 coincide: ord_(p^2) 2 = ord_p 2 (by the way, ord_1093 2 = 364, and ord_3511 2 = 1755). H. S. Vandiver proved that 2^(p−1) ≡ 1 (mod p^3) if and only if 1 + 1/3 + 1/5 + ... + 1/(p − 2) ≡ 0 (mod p^2). History and search status In 1902, Meyer proved a theorem about solutions of the congruence a^(p−1) ≡ 1 (mod p^r). Later in that decade Arthur Wieferich showed specifically that if the first case of Fermat's last theorem has solutions for an odd prime exponent, then that prime must satisfy that congruence for a = 2 and r = 2. In other words, if there exist solutions to x^p + y^p + z^p = 0 in integers x, y, z and p an odd prime with p ∤ xyz, then p satisfies 2^(p−1) ≡ 1 (mod p^2). In 1913, Bachmann examined the residues of (2^(p−1) − 1)/p mod p. He asked when this residue vanishes and tried to find expressions for answering this question. The prime 1093 was found to be a Wieferich prime by Meissner in 1913 and confirmed to be the only such prime below 2000. He calculated the smallest residue of (2^t − 1)/p mod p for all primes p < 2000 and found this residue to be zero for t = 364 and p = 1093, thereby providing a counterexample to a conjecture by Grave about the impossibility of the Wieferich congruence. The correctness of Meissner's congruence was later verified by means of only elementary calculations. 
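The congruence above is cheap to test with modular exponentiation, since 2^(p−1) mod p^2 can be computed without ever forming the huge number 2^(p−1) − 1 itself. The following Python sketch (illustrative only, not part of the original article; the function names are ad hoc) checks the Wieferich condition and recomputes the two Fermat-quotient examples worked out above:

# Minimal sketch: Wieferich test, 2^(p-1) == 1 (mod p^2).
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for the small ranges used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_wieferich(p: int) -> bool:
    """True iff p is prime and p**2 divides 2**(p-1) - 1."""
    return is_prime(p) and pow(2, p - 1, p * p) == 1

def fermat_quotient_mod_p(p: int) -> int:
    """Residue of (2**(p-1) - 1)/p modulo p, for an odd prime p."""
    return ((pow(2, p - 1, p * p) - 1) // p) % p

print(fermat_quotient_mod_p(11))    # 5, so 11 is not a Wieferich prime
print(fermat_quotient_mod_p(1093))  # 0, so 1093 is a Wieferich prime
print([p for p in range(2, 10_000) if is_wieferich(p)])  # [1093, 3511]

Because three-argument pow reduces modulo p^2 at every step, the test needs only O(log p) multiplications of numbers below p^4.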
Inspired by an earlier work of Euler, this later verification simplified Meissner's proof by showing that 1093^2 | (2^182 + 1) and remarking that (2^182 + 1) is a factor of (2^364 − 1). It was also shown that it is possible to prove that 1093 is a Wieferich prime without using complex numbers, contrary to the method used by Meissner, although Meissner himself hinted that he was aware of a proof without complex values. The prime 3511 was first found to be a Wieferich prime by N. G. W. H. Beeger in 1922, and another proof of it being a Wieferich prime was published in 1965 by Guy. In 1960, Kravitz doubled a previous search record, and in 1961 Riesel extended the search to 500000 with the aid of BESK. Around 1980, Lehmer was able to reach the search limit of 6×10^9. This limit was extended to over 2.5×10^15 in 2006, finally reaching 3×10^15. Eventually, it was shown that if any other Wieferich primes exist, they must be greater than 6.7×10^15. In 2007–2016, a search for Wieferich primes was performed by the distributed computing project Wieferich@Home. In 2011–2017, another search was performed by the PrimeGrid project, although the work done in this project was later claimed to have been wasted. While these projects reached search bounds above 1×10^17, neither of them reported any sustainable results. In 2020, PrimeGrid started another project that searched for Wieferich and Wall–Sun–Sun primes simultaneously. The new project used checksums to enable independent double-checking of each subinterval, thus minimizing the risk of missing an instance because of faulty hardware. The project ended in December 2022, definitively proving that a third Wieferich prime must exceed 2^64 (about 1.8×10^19). It has been conjectured (as for Wilson primes) that infinitely many Wieferich primes exist, and that the number of Wieferich primes below x is approximately log(log(x)), which is a heuristic result that follows from the plausible assumption that for a prime p, the (p − 1)th degree roots of unity modulo p^2 are uniformly distributed in the multiplicative group of integers modulo p^2. Properties Connection with Fermat's Last Theorem The following theorem connecting Wieferich primes and Fermat's Last Theorem was proven by Wieferich in 1909: Let p be prime, and let x, y, z be integers such that x^p + y^p + z^p = 0. Furthermore, assume that p does not divide the product xyz. Then p is a Wieferich prime. The above case (where p does not divide any of x, y or z) is commonly known as the first case of Fermat's Last Theorem (FLTI), and FLTI is said to fail for a prime p if solutions to the Fermat equation exist for that p; otherwise FLTI holds for p. In 1910, Mirimanoff expanded the theorem by showing that, if the preconditions of the theorem hold true for some prime p, then p^2 must also divide 3^(p−1) − 1. Granville and Monagan further proved that p^2 must actually divide m^(p−1) − 1 for every prime m ≤ 89. Suzuki extended the proof to all primes m ≤ 113. Let H_p be a set of pairs of integers with 1 as their greatest common divisor, p being prime to x, y and x + y, (x + y)^(p−1) ≡ 1 (mod p^2), and (x + ξy) being the pth power of an ideal of K, with ξ defined as cos 2π/p + i sin 2π/p. K = Q(ξ) is the field extension obtained by adjoining all polynomials in the algebraic number ξ to the field of rational numbers (such an extension is known as a number field or, in this particular case, where ξ is a root of unity, a cyclotomic number field). From uniqueness of factorization of ideals in Q(ξ) it follows that if the first case of Fermat's last theorem has solutions x, y, z then p divides x + y + z and (x, y), (y, z) and (z, x) are elements of H_p. 
Granville and Monagan showed that (1, 1) ∈ H_p if and only if p is a Wieferich prime. Connection with the abc conjecture and non-Wieferich primes A non-Wieferich prime is a prime p satisfying 2^(p−1) ≢ 1 (mod p^2). J. H. Silverman showed in 1988 that if the abc conjecture holds, then there exist infinitely many non-Wieferich primes. More precisely he showed that the abc conjecture implies the existence of a constant c(α) depending only on α such that the number of non-Wieferich primes to base α with p less than or equal to a variable X is greater than c(α)·log(X) as X goes to infinity. Numerical evidence suggests that very few of the prime numbers in a given interval are Wieferich primes. The set of Wieferich primes and the set of non-Wieferich primes, sometimes denoted by W2 and W2^c respectively, are complementary sets, so if one of them is shown to be finite, the other one would necessarily have to be infinite. It was later shown that the existence of infinitely many non-Wieferich primes already follows from a weaker version of the abc conjecture, called the ABC-(k, ε) conjecture. Additionally, the existence of infinitely many non-Wieferich primes would also follow if there exist infinitely many square-free Mersenne numbers as well as if there exists a real number ξ such that the set {n ∈ N : λ(2^n − 1) < 2 − ξ} is of density one, where the index of composition λ(n) of an integer n is defined as λ(n) = ln(n)/ln(γ(n)), with γ(n) giving the product of all distinct prime factors of n. Connection with Mersenne and Fermat primes It is known that the nth Mersenne number M_n = 2^n − 1 is prime only if n is prime. Fermat's little theorem implies that if p is prime, then M_(p−1) = 2^(p−1) − 1 is always divisible by p. Since Mersenne numbers of prime indices M_p and M_q are co-prime, a prime divisor p of M_q, where q is prime, is a Wieferich prime if and only if p^2 divides M_q. Thus, a Mersenne prime cannot also be a Wieferich prime. A notable open problem is to determine whether or not all Mersenne numbers of prime index are square-free. If q is prime and the Mersenne number M_q is not square-free, that is, there exists a prime p for which p^2 divides M_q, then p is a Wieferich prime. Therefore, if there are only finitely many Wieferich primes, then there will be at most finitely many Mersenne numbers with prime index that are not square-free. Rotkiewicz showed a related result: if there are infinitely many square-free Mersenne numbers, then there are infinitely many non-Wieferich primes. Similarly, if p is prime and p^2 divides some Fermat number F_n = 2^(2^n) + 1, then p must be a Wieferich prime. In fact, for a prime p there exists a natural number n such that p^2 divides Φ_n(2) (where Φ_n is the n-th cyclotomic polynomial) if and only if p is a Wieferich prime. For example, 1093^2 divides Φ_364(2), and 3511^2 divides Φ_1755(2). Mersenne and Fermat numbers are just special situations of Φ_n(2). Thus, if 1093 and 3511 are the only two Wieferich primes, then all Φ_n(2) are square-free except Φ_364(2) and Φ_1755(2) (in fact, when there exists a prime p whose square divides some Φ_n(2), then it is a Wieferich prime); and clearly, if Φ_n(2) is a prime, then it cannot be a Wieferich prime. (Any odd prime p divides only one Φ_n(2) with n dividing p − 1, and p divides Φ_n(2) if and only if the period length of 1/p in binary is n. Besides, if and only if p is a Wieferich prime, the period lengths of 1/p and 1/p^2 (in binary) are the same; otherwise, the latter is p times the former.) For the primes 1093 and 3511, it was shown that neither of them is a divisor of any Mersenne number with prime index nor a divisor of any Fermat number, because 364 and 1755 are neither prime nor powers of 2. 
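The coincidence of orders described above is easy to observe numerically. Here is a small Python sketch (again illustrative, not from the article) that computes the multiplicative order of 2 modulo m — equivalently, the binary period length of 1/m — and confirms that it is the same modulo p and modulo p^2 for the two known Wieferich primes:

# Sketch: multiplicative order of 2 modulo an odd m
# (= period length of 1/m in binary).
def mult_order_of_2(m: int) -> int:
    """Smallest n >= 1 with 2**n == 1 (mod m); assumes m is odd and > 1."""
    n, x = 1, 2 % m
    while x != 1:
        x = (2 * x) % m
        n += 1
    return n

for p in (1093, 3511):
    print(p, mult_order_of_2(p), mult_order_of_2(p * p))
# prints: 1093 364 364   and   3511 1755 1755

For a non-Wieferich prime the second value would be p times the first, exactly as stated above for the binary periods of 1/p and 1/p^2.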
Connection with other equations Scott and Styer showed that the equation p^x − 2^y = d has at most one solution in positive integers (x, y), unless p^4 | 2^(ord_p 2) − 1 if p ≢ 65 (mod 192), or unconditionally unless p^2 | 2^(ord_p 2) − 1, where ord_p 2 denotes the multiplicative order of 2 modulo p. They also showed that a solution to the equation ±a^(x_1) ± 2^(y_1) = ±a^(x_2) ± 2^(y_2) = c must be from a specific set of equations, but that this does not hold if a is a Wieferich prime greater than 1.25×10^15. Binary periodicity of p − 1 Johnson observed that the two known Wieferich primes are one greater than numbers with periodic binary expansions (1092 = 010001000100_2 = 444_16; 3510 = 110110110110_2 = 6666_8). The Wieferich@Home project searched for Wieferich primes by testing numbers that are one greater than a number with a periodic binary expansion, but up to a "bit pseudo-length" of 3500 of the tested binary numbers generated by combination of bit strings with a bit length of up to 24 it has not found a new Wieferich prime. Abundancy of p − 1 It has been noted that the known Wieferich primes are one greater than mutually friendly numbers (the shared abundancy index being 112/39). Connection with pseudoprimes It was observed that the two known Wieferich primes are the square factors of all non-square-free base-2 Fermat pseudoprimes up to 25×10^9. Later computations showed that the only repeated factors of the pseudoprimes up to 10^12 are 1093 and 3511. In addition, the following connection exists: Let n be a base 2 pseudoprime and p be a prime divisor of n. If (2^(n−1) − 1)/p ≢ 0 (mod p), then also (2^(p−1) − 1)/p ≢ 0 (mod p). Furthermore, if p is a Wieferich prime, then p^2 is a Catalan pseudoprime. Connection with directed graphs For all primes p up to 100000, L(p^2) = L(p) only in two cases: L(1093^2) = L(1093) = 364 and L(3511^2) = L(3511) = 1755, where L(m) is the number of vertices in the cycle of 1 in the doubling diagram modulo m. Here the doubling diagram represents the directed graph with the non-negative integers less than m as vertices and with directed edges going from each vertex x to vertex 2x reduced modulo m. It was shown that for all odd prime numbers p, either L(p^2) = L(p) or L(p^2) = p·L(p). Properties related to number fields It was shown that and if and only if where p is an odd prime and is the fundamental discriminant of the imaginary quadratic field . Furthermore, the following was shown: Let p be a Wieferich prime. If , let be the fundamental discriminant of the imaginary quadratic field , and if , let be the fundamental discriminant of the imaginary quadratic field . Then and (χ and λ in this context denote Iwasawa invariants). Furthermore, the following result was obtained: Let q be an odd prime number, and let k and p be primes such that and the order of q modulo k is . Assume that q divides h+, the class number of the real cyclotomic field , the cyclotomic field obtained by adjoining the sum of a p-th root of unity and its reciprocal to the field of rational numbers. Then q is a Wieferich prime. This also holds if the conditions and are replaced by and as well as when the condition is replaced by (in which case q is a Wall–Sun–Sun prime) and the incongruence condition replaced by . Generalizations Near-Wieferich primes A prime p satisfying the congruence 2^((p−1)/2) ≡ ±1 + A·p (mod p^2) with small |A| is commonly called a near-Wieferich prime. Near-Wieferich primes with A = 0 represent Wieferich primes. Recent searches, in addition to their primary search for Wieferich primes, also tried to find near-Wieferich primes. The following table lists all near-Wieferich primes with |A| ≤ 10 in the interval [1×10^9, 3×10^15]. This search bound was reached in 2006 in a search effort by P. 
Carlisle, R. Crandall and M. Rodenkirch. Bigger entries are by PrimeGrid. The sign +1 or −1 above can be easily predicted by Euler's criterion (and the second supplement to the law of quadratic reciprocity). Dorais and Klyve used a different definition of a near-Wieferich prime, defining it as a prime p with a small value of |ω(p)/p|, where ω(p) is the Fermat quotient of 2 with respect to p modulo p (the modulo operation here gives the residue with the smallest absolute value). The following table lists all primes p ≤ 6.7×10^15 with small |ω(p)/p|. The two notions of nearness are related as follows. If 2^((p−1)/2) ≡ ±1 + A·p (mod p^2), then by squaring, clearly 2^(p−1) ≡ 1 ± 2A·p (mod p^2). So if A had been chosen with |A| small, then clearly |±2A| is also (quite) small, and an even number. However, when ω(p) is odd above, the related A from before the last squaring was not "small": for such a prime, |A| is close to p/2, which reads as extremely non-near, but after squaring the quotient ±2A reduces modulo p to a residue of small absolute value, making the prime a near-Wieferich by the second definition. Base-a Wieferich primes A Wieferich prime base a is a prime p that satisfies a^(p−1) ≡ 1 (mod p^2), with a less than p but greater than 1. Such a prime cannot divide a, since then it would also divide 1. It is conjectured that for every natural number a, there are infinitely many Wieferich primes in base a. Bolyai showed that if p and q are primes and a is a positive integer not divisible by p and q such that a^(p−1) ≡ 1 (mod q) and a^(q−1) ≡ 1 (mod p), then a^(pq−1) ≡ 1 (mod pq). Setting p = q leads to a^(p^2−1) ≡ 1 (mod p^2). It was shown that a^(p^2−1) ≡ 1 (mod p^2) if and only if a^(p−1) ≡ 1 (mod p^2). Known solutions of a^(p−1) ≡ 1 (mod p^2) for small values of a are (checked up to 5×10^13): {| class="wikitable" |- ! a ! primes p such that a^(p−1) ≡ 1 (mod p^2) |- | 1 || 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ... (All primes) |- | 2 || 1093, 3511, ... |- | 3 || 11, 1006003, ... |- | 4 || 1093, 3511, ... |- | 5 || 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801, ... |- | 6 || 66161, 534851, 3152573, ... |- | 7 || 5, 491531, ... |- | 8 || 3, 1093, 3511, ... |- | 9 || 2, 11, 1006003, ... |- | 10 || 3, 487, 56598313, ... |- | 11 || 71, ... |- | 12 || 2693, 123653, ... |- | 13 || 2, 863, 1747591, ... |- | 14 || 29, 353, 7596952219, ... |- | 15 || 29131, 119327070011, ... |- | 16 || 1093, 3511, ... |- | 17 || 2, 3, 46021, 48947, 478225523351, ... |- | 18 || 5, 7, 37, 331, 33923, 1284043, ... |- | 19 || 3, 7, 13, 43, 137, 63061489, ... |- | 20 || 281, 46457, 9377747, 122959073, ... |- | 21 || 2, ... |- | 22 || 13, 673, 1595813, 492366587, 9809862296159, ... |- | 23 || 13, 2481757, 13703077, 15546404183, 2549536629329, ... |- | 24 || 5, 25633, ... |- | 25 || 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801, ... |- | 26 || 3, 5, 71, 486999673, 6695256707, ... |- | 27 || 11, 1006003, ... |- | 28 || 3, 19, 23, ... |- | 29 || 2, ... |- | 30 || 7, 160541, 94727075783, ... |- | 31 || 7, 79, 6451, 2806861, ... |- | 32 || 5, 1093, 3511, ... |- | 33 || 2, 233, 47441, 9639595369, ... |- | 34 || 46145917691, ... |- | 35 || 3, 1613, 3571, ... |- | 36 || 66161, 534851, 3152573, ... |- | 37 || 2, 3, 77867, 76407520781, ... |- | 38 || 17, 127, ... |- | 39 || 8039, ... |- | 40 || 11, 17, 307, 66431, 7036306088681, ... |- | 41 || 2, 29, 1025273, 138200401, ... |- | 42 || 23, 719867822369, ... |- | 43 || 5, 103, 13368932516573, ... |- | 44 || 3, 229, 5851, ... |- | 45 || 2, 1283, 131759, 157635607, ... |- | 46 || 3, 829, ... |- | 47 || ... |- | 48 || 7, 257, ... |- | 49 || 2, 5, 491531, ... |- | 50 || 7, ... |} (Note that the solutions to a = b^k are the union of the prime divisors of k which do not divide b and the solutions to a = b.) 
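A brute-force search is enough to reproduce the small entries of the table above; the published bound of 5×10^13 of course requires far more serious machinery. The following Python sketch is illustrative only and is not part of the article:

# Sketch: base-a Wieferich primes, a^(p-1) == 1 (mod p^2) with p not dividing a.
def primes_up_to(limit: int):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def base_a_wieferich(a: int, limit: int):
    return [p for p in primes_up_to(limit)
            if a % p != 0 and pow(a, p - 1, p * p) == 1]

print(base_a_wieferich(3, 10**4))  # [11]  (1006003 lies beyond this bound)
print(base_a_wieferich(5, 10**5))  # [2, 20771, 40487]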
The smallest solutions of a^(p−1) ≡ 1 (mod p^2), starting with a = 1, are 2, 1093, 11, 1093, 2, 66161, 5, 3, 2, 3, 71, 2693, 2, 29, 29131, 1093, 2, 5, 3, 281, 2, 13, 13, 5, 2, 3, 11, 3, 2, 7, 7, 5, 2, 46145917691, 3, 66161, 2, 17, 8039, 11, 2, 23, 5, 3, 2, 3, ... (the next term is > 4.9×10^13). There are no known solutions of n^(p−1) ≡ 1 (mod p^2) for n = 47, 72, 186, 187, 200, 203, 222, 231, 304, 311, 335, 355, 435, 454, 546, 554, 610, 639, 662, 760, 772, 798, 808, 812, 858, 860, 871, 983, 986, 1002, 1023, 1130, 1136, 1138, .... It is a conjecture that there are infinitely many solutions of a^(p−1) ≡ 1 (mod p^2) for every natural number a. The bases b < p^2 for which p is a Wieferich prime are given below (for b > p^2, the solutions are just shifted by k·p^2 for k > 0); there are p − 1 solutions b < p^2 for each p, and, reduced modulo p, these solutions run through the full set {1, 2, 3, ..., p − 1}. {| class="wikitable" |- ! p ! values of b < p^2 |- | 2 | 1 |- | 3 | 1, 8 |- | 5 | 1, 7, 18, 24 |- | 7 | 1, 18, 19, 30, 31, 48 |- | 11 | 1, 3, 9, 27, 40, 81, 94, 112, 118, 120 |- | 13 | 1, 19, 22, 23, 70, 80, 89, 99, 146, 147, 150, 168 |- | 17 | 1, 38, 40, 65, 75, 110, 131, 134, 155, 158, 179, 214, 224, 249, 251, 288 |- | 19 | 1, 28, 54, 62, 68, 69, 99, 116, 127, 234, 245, 262, 292, 293, 299, 307, 333, 360 |- | 23 | 1, 28, 42, 63, 118, 130, 170, 177, 195, 255, 263, 266, 274, 334, 352, 359, 399, 411, 466, 487, 501, 528 |- | 29 | 1, 14, 41, 60, 63, 137, 190, 196, 221, 236, 267, 270, 374, 416, 425, 467, 571, 574, 605, 620, 645, 651, 704, 778, 781, 800, 827, 840 |} The least bases b > 1 for which prime(n) is a Wieferich prime are 5, 8, 7, 18, 3, 19, 38, 28, 28, 14, 115, 18, 51, 19, 53, 338, 53, 264, 143, 11, 306, 31, 99, 184, 53, 181, 43, 164, 96, 68, 38, 58, 19, 328, 313, 78, 226, 65, 253, 259, 532, 78, 176, 276, 143, 174, 165, 69, 330, 44, 33, 332, 94, 263, 48, 79, 171, 747, 731, 20, ... We can also consider the formula (a + 1)^(p−1) − a^(p−1) ≡ 0 (mod p^2) (because of the generalized Fermat little theorem, (a + 1)^(p−1) ≡ a^(p−1) (mod p) is true for all primes p and all natural numbers a such that both a and a + 1 are not divisible by p). It is a conjecture that for every natural number a, there are infinitely many primes p such that (a + 1)^(p−1) − a^(p−1) ≡ 0 (mod p^2). Known solutions for small a are (checked up to 4×10^11): {| class="wikitable" |- ! a ! primes p such that (a + 1)^(p−1) − a^(p−1) ≡ 0 (mod p^2) |- | 1 | 1093, 3511, ... |- | 2 | 23, 3842760169, 41975417117, ... |- | 3 | 5, 250829, ... |- | 4 | 3, 67, ... |- | 5 | 3457, 893122907, ... |- | 6 | 72673, 1108905403, 2375385997, ... |- | 7 | 13, 819381943, ... |- | 8 | 67, 139, 499, 26325777341, ... |- | 9 | 67, 887, 9257, 83449, 111539, 31832131, ... |- | 10 | ... |- | 11 | 107, 4637, 239357, ... |- | 12 | 5, 11, 51563, 363901, 224189011, ... |- | 13 | 3, ... |- | 14 | 11, 5749, 17733170113, 140328785783, ... |- | 15 | 292381, ... |- | 16 | 4157, ... |- | 17 | 751, 46070159, ... |- | 18 | 7, 142671309349, ... |- | 19 | 17, 269, ... |- | 20 | 29, 162703, ... |- | 21 | 5, 2711, 104651, 112922981, 331325567, 13315963127, ... |- | 22 | 3, 7, 13, 94447, 1198427, 23536243, ... |- | 23 | 43, 179, 1637, 69073, ... |- | 24 | 7, 353, 402153391, ... |- | 25 | 43, 5399, 21107, 35879, ... |- | 26 | 7, 131, 653, 5237, 97003, ... |- | 27 | 2437, 1704732131, ... |- | 28 | 5, 617, 677, 2273, 16243697, ... |- | 29 | 73, 101, 6217, ... |- | 30 | 7, 11, 23, 3301, 48589, 549667, ... |- | 31 | 3, 41, 416797, ... |- | 32 | 95989, 2276682269, ... |- | 33 | 139, 1341678275933, ... |- | 34 | 83, 139, ... |- | 35 | ... |- | 36 | 107, 137, 613, 2423, 74304856177, ... |- | 37 | 5, ... |- | 38 | 167, 2039, ... |- | 39 | 659, 9413, ... |- | 40 | 3, 23, 21029249, ... |- | 41 | 31, 71, 1934399021, 474528373843, ... |- | 42 | 4639, 1672609, ... 
|- | 43 | 31, 4962186419, ... |- | 44 | 36677, 17786501, ... |- | 45 | 241, 26120375473, ... |- | 46 | 5, 13877, ... |- | 47 | 13, 311, 797, 906165497, ... |- | 48 | ... |- | 49 | 3, 13, 2141, 281833, 1703287, 4805298913, ... |- | 50 | 2953, 22409, 99241, 5427425917, ... |} Wieferich pairs A Wieferich pair is a pair of primes p and q that satisfy p^(q−1) ≡ 1 (mod q^2) and q^(p−1) ≡ 1 (mod p^2), so that a Wieferich prime p ≡ 1 (mod 4) will form such a pair (p, 2): the only known instance in this case is p = 1093. There are only 7 known Wieferich pairs: (2, 1093), (3, 1006003), (5, 1645333507), (5, 188748146801), (83, 4871), (911, 318917), and (2903, 18787). Wieferich sequence Start with a(1) any natural number (> 1), and let a(n) be the smallest prime p such that (a(n − 1))^(p−1) ≡ 1 (mod p^2) but p^2 does not divide a(n − 1) − 1 or a(n − 1) + 1 (if p^2 divides a(n − 1) − 1 or a(n − 1) + 1, then the solution is a trivial solution). It is a conjecture that every natural number k = a(1) > 1 makes this sequence become periodic. For example, let a(1) = 2: 2, 1093, 5, 20771, 18043, 5, 20771, 18043, 5, ..., which reaches a cycle: {5, 20771, 18043}. Let a(1) = 83: 83, 4871, 83, 4871, 83, 4871, 83, ..., which reaches a cycle: {83, 4871}. Let a(1) = 59 (a longer sequence): 59, 2777, 133287067, 13, 863, 7, 5, 20771, 18043, 5, ..., which also reaches 5. However, there are many values of a(1) with unknown status; for example, let a(1) = 3: 3, 11, 71, 47, ? (there are no known Wieferich primes in base 47). Let a(1) = 14: 14, 29, ? (there are no known Wieferich primes in base 29 except 2, but 2^2 = 4 divides 29 − 1 = 28). Let a(1) = 39 (a longer sequence): 39, 8039, 617, 101, 1050139, 29, ? (it also reaches 29). It is unknown whether values for a(1) > 1 exist such that the resulting sequence does not eventually become periodic. When a(n − 1) = k, a(n) will be (starting with k = 2): 1093, 11, 1093, 20771, 66161, 5, 1093, 11, 487, 71, 2693, 863, 29, 29131, 1093, 46021, 5, 7, 281, ?, 13, 13, 25633, 20771, 71, 11, 19, ?, 7, 7, 5, 233, 46145917691, 1613, 66161, 77867, 17, 8039, 11, 29, 23, 5, 229, 1283, 829, ?, 257, 491531, ?, ... (for k = 21, 29, 47, 50, even the next value is unknown). Wieferich numbers A Wieferich number is an odd natural number n satisfying the congruence 2^φ(n) ≡ 1 (mod n^2), where φ denotes Euler's totient function (according to Euler's theorem, 2^φ(n) ≡ 1 (mod n) for every odd natural number n). If a Wieferich number n is prime, then it is a Wieferich prime. The first few Wieferich numbers are: 1, 1093, 3279, 3511, 7651, 10533, 14209, 17555, 22953, 31599, 42627, 45643, 52665, 68859, 94797, 99463, ... It can be shown that if there are only finitely many Wieferich primes, then there are only finitely many Wieferich numbers. In particular, if the only Wieferich primes are 1093 and 3511, then there exist exactly 104 Wieferich numbers, which matches the number of Wieferich numbers currently known. More generally, a natural number n is a Wieferich number to base a if a^φ(n) ≡ 1 (mod n^2). Another definition specifies a Wieferich number as an odd natural number n such that n and (2^m − 1)/n are not coprime, where m is the multiplicative order of 2 modulo n. The first of these numbers are: 21, 39, 55, 57, 105, 111, 147, 155, 165, 171, 183, 195, 201, 203, 205, 219, 231, 237, 253, 273, 285, 291, 301, 305, 309, 327, 333, 355, 357, 385, 399, ... As above, if a Wieferich number q is prime, then it is a Wieferich prime. 
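The totient-based definition of a Wieferich number translates directly into code. The following Python sketch (illustrative, with a naive trial-division totient; not from the article) reproduces the beginning of the first list above:

# Sketch: Wieferich numbers, odd n with 2^phi(n) == 1 (mod n^2).
def phi(n: int) -> int:
    """Euler's totient function via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

wieferich_numbers = [n for n in range(1, 20_000, 2)
                     if n == 1 or pow(2, phi(n), n * n) == 1]
print(wieferich_numbers)
# begins: 1, 1093, 3279, 3511, 7651, 10533, 14209, 17555, ...

(The special case n = 1 is listed directly, since every integer is congruent to every other modulo 1.)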
Weak Wieferich prime
A weak Wieferich prime to base a is a prime p that satisfies the condition
a^p ≡ a (mod p^2)
Every Wieferich prime to base a is also a weak Wieferich prime to base a. If the base a is squarefree, then a prime p is a weak Wieferich prime to base a if and only if p is a Wieferich prime to base a.
The smallest weak Wieferich primes to base n are (starting with n = 0):
2, 2, 1093, 11, 2, 2, 66161, 5, 2, 2, 3, 71, 2, 2, 29, 29131, 2, 2, 3, 3, 2, 2, 13, 13, 2, 2, 3, 3, 2, 2, 7, 7, 2, 2, 46145917691, 3, 2, 2, 17, 8039, 2, 2, 23, 5, 2, 2, 3, ...

Wieferich prime with order n
For an integer n ≥ 2, a Wieferich prime to base a with order n is a prime p that satisfies the condition
a^(p−1) ≡ 1 (mod p^n)
Clearly, a Wieferich prime to base a with order n is also a Wieferich prime to base a with order m for all 2 ≤ m ≤ n, and a Wieferich prime to base a with order 2 is equivalent to a Wieferich prime to base a, so we can restrict attention to the case n ≥ 3. However, there are no known Wieferich primes to base 2 with order 3. The first base with a known Wieferich prime with order 3 is 9, where 2 is a Wieferich prime to base 9 with order 3. Besides, both 5 and 113 are Wieferich primes to base 68 with order 3.

Lucas–Wieferich primes
Let P and Q be integers. The Lucas sequence of the first kind associated with the pair (P, Q) is defined by U_0(P, Q) = 0, U_1(P, Q) = 1, and U_n(P, Q) = P·U_{n−1}(P, Q) − Q·U_{n−2}(P, Q) for all n ≥ 2. A Lucas–Wieferich prime associated with (P, Q) is a prime p such that U_{p−ε}(P, Q) ≡ 0 (mod p^2), where ε equals the Legendre symbol (D/p) with D = P^2 − 4Q. All Wieferich primes are Lucas–Wieferich primes associated with the pair (3, 2).

Wieferich places
Let K be a global field, i.e. a number field or a function field in one variable over a finite field, and let E be an elliptic curve. If v is a non-archimedean place of norm q_v of K and a ∈ K with v(a) = 0, then v(a^(q_v − 1) − 1) ≥ 1. v is called a Wieferich place for base a if v(a^(q_v − 1) − 1) > 1, an elliptic Wieferich place for base P ∈ E if N_v P ∈ E_2, and a strong elliptic Wieferich place for base P ∈ E if n_v P ∈ E_2, where n_v is the order of P modulo v and N_v gives the number of rational points (over the residue field of v) of the reduction of E at v.

See also
Wall–Sun–Sun prime – another type of prime number which in the broadest sense also resulted from the study of FLT
Wolstenholme prime – another type of prime number which in the broadest sense also resulted from the study of FLT
Wilson prime
Table of congruences – lists other congruences satisfied by prime numbers
PrimeGrid – primes search project
BOINC
Distributed computing
References
Further reading
External links
Fermat-/Euler-quotients with arbitrary k
A note on the two known Wieferich primes
PrimeGrid's Wieferich Prime Search project page
Classes of prime numbers
Unsolved problems in number theory
Abc conjecture
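To make the Lucas–Wieferich condition above concrete, here is an illustrative Python sketch (the function names are this example's own, and the Lucas sequence is iterated naively, which is adequate for small p; a serious search would use matrix exponentiation). For (P, Q) = (3, 2) we have U_n = 2^n − 1, so the check recovers the ordinary Wieferich prime 1093, matching the remark above.

# Illustrative sketch: test the Lucas-Wieferich condition
# U_{p-eps}(P, Q) ≡ 0 (mod p^2), with eps the Legendre symbol of D = P^2 - 4Q.
from sympy.ntheory import legendre_symbol

def lucas_u_mod(P: int, Q: int, n: int, m: int) -> int:
    # U_n(P, Q) mod m by direct iteration from U_0 = 0, U_1 = 1.
    if n == 0:
        return 0
    u_prev, u = 0, 1
    for _ in range(n - 1):
        u_prev, u = u, (P * u - Q * u_prev) % m
    return u % m

def is_lucas_wieferich(p: int, P: int, Q: int) -> bool:
    D = P * P - 4 * Q
    if p == 2 or D % p == 0:
        return False          # skip the even and ramified cases for simplicity
    eps = legendre_symbol(D, p)
    return lucas_u_mod(P, Q, p - eps, p * p) == 0

print(is_lucas_wieferich(1093, 3, 2))   # True, since U_n(3, 2) = 2^n - 1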
Wieferich prime
[ "Mathematics" ]
8,777
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Abc conjecture", "Mathematical problems", "Number theory" ]
323,646
https://en.wikipedia.org/wiki/Wilson%20prime
In number theory, a Wilson prime is a prime number p such that p^2 divides (p − 1)! + 1, where "!" denotes the factorial function; compare this with Wilson's theorem, which states that every prime p divides (p − 1)! + 1. Both are named for 18th-century English mathematician John Wilson; in 1770, Edward Waring credited the theorem to Wilson, although it had been stated centuries earlier by Ibn al-Haytham.

The only known Wilson primes are 5, 13, and 563. Costa et al. write that "the case p = 5 is trivial", and credit the observation that 13 is a Wilson prime to an earlier author. Early work on these numbers included searches by N. G. W. H. Beeger and Emma Lehmer, but 563 was not discovered until the early 1950s, when computer searches could be applied to the problem. If any others exist, they must be greater than 2 × 10^13. It has been conjectured that infinitely many Wilson primes exist, and that the number of Wilson primes in an interval [x, y] is about log(log(y)/log(x)).

Several computer searches have been done in the hope of finding new Wilson primes. The Ibercivis distributed computing project includes a search for Wilson primes. Another search was coordinated at the Great Internet Mersenne Prime Search forum.

Generalizations
Wilson primes of order n
Wilson's theorem can be expressed in general as (n − 1)!(p − n)! ≡ (−1)^n (mod p) for every integer n ≥ 1 and prime p ≥ n. Generalized Wilson primes of order n are the primes p such that p^2 divides (n − 1)!(p − n)! − (−1)^n. It was conjectured that for every natural number n, there are infinitely many Wilson primes of order n. The smallest generalized Wilson primes of order n are:

Near-Wilson primes
A prime p satisfying the congruence (p − 1)! ≡ −1 + Bp (mod p^2) with small |B| can be called a near-Wilson prime. Near-Wilson primes with B = 0 are bona fide Wilson primes. The table on the right lists all such primes with small |B| over the searched range.

Wilson numbers
A Wilson number is a natural number n such that W(n) ≡ 0 (mod n^2), where W(n) denotes the product of the integers less than or equal to n that are coprime to n, plus a term e, and where the term e is positive (e = 1) if and only if n has a primitive root and negative (e = −1) otherwise. For every natural number n, W(n) is divisible by n, and the quotients (called generalized Wilson quotients) are listed in the OEIS. The Wilson numbers are listed in the OEIS. If a Wilson number n is prime, then n is a Wilson prime. There are 13 Wilson numbers up to 5 × 10^8.

See also
PrimeGrid
Table of congruences
Wall–Sun–Sun prime
Wieferich prime
Wolstenholme prime
References
Further reading
External links
The Prime Glossary: Wilson prime
Status of the search for Wilson primes
Classes of prime numbers
Factorial and binomial topics
Unsolved problems in number theory
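The defining congruence is easy to check directly by accumulating the factorial modulo p^2, so intermediate values never exceed p^4. A minimal illustrative sketch (the helper name is this example's assumption, not from the article):

# Illustrative sketch: test whether p^2 divides (p-1)! + 1.
def is_wilson_prime(p: int) -> bool:
    m = p * p
    f = 1
    for k in range(2, p):     # accumulate (p-1)! modulo p^2
        f = (f * k) % m
    return (f + 1) % m == 0

# The three known Wilson primes from the article all pass:
print([p for p in [5, 13, 563] if is_wilson_prime(p)])   # [5, 13, 563]
# A non-example: 6! + 1 = 721 = 7 * 103 is not divisible by 7^2.
print(is_wilson_prime(7))   # False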
Wilson prime
[ "Mathematics" ]
523
[ "Unsolved problems in mathematics", "Factorial and binomial topics", "Unsolved problems in number theory", "Combinatorics", "Mathematical problems", "Number theory" ]
323,651
https://en.wikipedia.org/wiki/Wall%E2%80%93Sun%E2%80%93Sun%20prime
In number theory, a Wall–Sun–Sun prime or Fibonacci–Wieferich prime is a certain kind of prime number which is conjectured to exist, although none are known.

Definition
Let p be a prime number. When each term in the sequence of Fibonacci numbers is reduced modulo p, the result is a periodic sequence. The (minimal) period length of this sequence is called the Pisano period and denoted π(p). Since F_0 = 0, periodicity gives F_{π(p)} ≡ 0 (mod p); it follows that p divides F_{π(p)}. A prime p such that p^2 divides F_{π(p)} is called a Wall–Sun–Sun prime.

Equivalent definitions
If α(m) denotes the rank of apparition modulo m (i.e., α(m) is the smallest positive index n such that m divides F_n), then a Wall–Sun–Sun prime can be equivalently defined as a prime p such that p^2 divides F_{α(p)}.
For a prime p ≠ 2, 5, the rank of apparition is known to divide p − (5/p), where the Legendre symbol (5/p) has the values +1 when p ≡ ±1 (mod 5) and −1 when p ≡ ±2 (mod 5). This observation gives rise to an equivalent characterization of Wall–Sun–Sun primes as primes p such that p^2 divides the Fibonacci number F_{p − (5/p)}.
A prime p is a Wall–Sun–Sun prime if and only if π(p^2) = π(p).
A prime p is a Wall–Sun–Sun prime if and only if L_p ≡ 1 (mod p^2), where L_p is the p-th Lucas number.
McIntosh and Roettger establish several equivalent characterizations of Lucas–Wieferich primes. In particular, let ε = (5/p); then the following are equivalent:

Existence
In a study of the Pisano period π(p), Donald Dines Wall determined that there are no Wall–Sun–Sun primes less than 10,000. In 1960, he wrote:
It has since been conjectured that there are infinitely many Wall–Sun–Sun primes. In 2007, Richard J. McIntosh and Eric L. Roettger showed that if any exist, they must be > 2 × 10^14. Dorais and Klyve extended this range to 9.7 × 10^14 without finding such a prime.
In December 2011, another search was started by the PrimeGrid project; however, it was suspended in May 2017. In November 2020, PrimeGrid started another project that searches for Wieferich and Wall–Sun–Sun primes simultaneously. The project ended in December 2022, definitively proving that any Wall–Sun–Sun prime must exceed the bound reached by that search.

History
Wall–Sun–Sun primes are named after Donald Dines Wall, Zhi Hong Sun and Zhi Wei Sun; Z. H. Sun and Z. W. Sun showed in 1992 that if the first case of Fermat's Last Theorem was false for a certain prime p, then p would have to be a Wall–Sun–Sun prime. As a result, prior to Andrew Wiles' proof of Fermat's Last Theorem, the search for Wall–Sun–Sun primes was also the search for a potential counterexample to this centuries-old conjecture.

Generalizations
A tribonacci–Wieferich prime is a prime p satisfying h(p) = h(p^2), where h(m) is the least positive integer h satisfying [T_h, T_{h+1}, T_{h+2}] ≡ [T_0, T_1, T_2] (mod m) and T_n denotes the n-th tribonacci number. No tribonacci–Wieferich prime exists below 10^11.
A Pell–Wieferich prime is a prime p satisfying p^2 divides P_{p−1}, when p is congruent to 1 or 7 (mod 8), or p^2 divides P_{p+1}, when p is congruent to 3 or 5 (mod 8), where P_n denotes the n-th Pell number. For example, 13, 31, and 1546463 are Pell–Wieferich primes, and there are no others below 10^9. In fact, Pell–Wieferich primes are 2-Wall–Sun–Sun primes.

Near-Wall–Sun–Sun primes
A prime p such that F_{p − (5/p)} ≡ Ap (mod p^2) with small |A| is called a near-Wall–Sun–Sun prime. Near-Wall–Sun–Sun primes with A = 0 would be Wall–Sun–Sun primes. PrimeGrid recorded cases with |A| ≤ 1000. A dozen cases are known where A = ±1.

Wall–Sun–Sun primes with discriminant D
Wall–Sun–Sun primes can be considered for the quadratic field with discriminant D. For the conventional Wall–Sun–Sun primes, D = 5. In the general case, a Lucas–Wieferich prime p associated with (P, Q) is a Wieferich prime to base Q and a Wall–Sun–Sun prime with discriminant D = P^2 − 4Q.
In this definition, the prime p should be odd and not divide D.
It is conjectured that for every natural number D, there are infinitely many Wall–Sun–Sun primes with discriminant D.
The case of D = k^2 + 4 corresponds to the k-Wall–Sun–Sun primes, for which Wall–Sun–Sun primes represent the special case k = 1. The k-Wall–Sun–Sun primes can be explicitly defined as primes p such that p^2 divides the k-Fibonacci number F_k(π_k(p)), where F_k(n) = U_n(k, −1) is a Lucas sequence of the first kind with discriminant D = k^2 + 4 and π_k(p) is the Pisano period of k-Fibonacci numbers modulo p. For a prime p ≠ 2 and not dividing D, this condition is equivalent to either of the following.
p^2 divides F_k(p − (D/p)), where (D/p) is the Kronecker symbol;
V_p(k, −1) ≡ k (mod p^2), where V_n(k, −1) is a Lucas sequence of the second kind.
The smallest k-Wall–Sun–Sun primes for k = 2, 3, ... are
13, 241, 2, 3, 191, 5, 2, 3, 2683, ...

See also
Wieferich prime
Wolstenholme prime
Wilson prime
PrimeGrid
Fibonacci prime
Pisano period
Table of congruences
References
Further reading
External links
Chris Caldwell, The Prime Glossary: Wall–Sun–Sun prime at the Prime Pages.
Richard McIntosh, Status of the search for Wall–Sun–Sun primes (October 2003)
Classes of prime numbers
Unsolved problems in number theory
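Assuming the characterization stated earlier, that p ≠ 2, 5 is a Wall–Sun–Sun prime exactly when p^2 divides F_{p − (5/p)}, the search is mechanical. The sketch below is illustrative (the function names are this example's own; Fibonacci numbers are reduced modulo p^2 with the standard fast-doubling identities, a generic technique rather than the method of any cited search):

# Illustrative sketch: no Wall-Sun-Sun prime is known, so the scan prints [].
from sympy import primerange
from sympy.ntheory import legendre_symbol

def fib_mod(n: int, m: int) -> int:
    # F_n mod m via fast doubling: F(2k) = F(k)(2F(k+1) - F(k)),
    # F(2k+1) = F(k)^2 + F(k+1)^2.
    def fd(k):
        if k == 0:
            return (0, 1)                  # (F(0), F(1))
        a, b = fd(k // 2)
        c = (a * (2 * b - a)) % m
        d = (a * a + b * b) % m
        return (d, (c + d) % m) if k & 1 else (c, d)
    return fd(n)[0]

def is_wall_sun_sun(p: int) -> bool:
    if p in (2, 5):
        return False
    return fib_mod(p - legendre_symbol(5, p), p * p) == 0

print([p for p in primerange(3, 10_000) if is_wall_sun_sun(p)])   # []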
Wall–Sun–Sun prime
[ "Mathematics" ]
1,311
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in number theory", "Number theory" ]
323,689
https://en.wikipedia.org/wiki/Regular%20prime
In number theory, a regular prime is a special kind of prime number, defined by Ernst Kummer in 1850 to prove certain cases of Fermat's Last Theorem. Regular primes may be defined via the divisibility of either class numbers or of Bernoulli numbers. The first few regular odd primes are:
3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 43, 47, 53, 61, 71, 73, 79, 83, 89, 97, 107, 109, 113, 127, 137, 139, 151, 163, 167, 173, 179, 181, 191, 193, 197, 199, ...

History and motivation
In 1850, Kummer proved that Fermat's Last Theorem is true for a prime exponent p if p is regular. This focused attention on the irregular primes. In 1852, Genocchi was able to prove that the first case of Fermat's Last Theorem is true for an exponent p if (p, p − 3) is not an irregular pair. Kummer improved this further in 1857 by showing that for the "first case" of Fermat's Last Theorem (see Sophie Germain's theorem) it is sufficient to establish that either (p, p − 3) or (p, p − 5) fails to be an irregular pair. ((p, 2k) is an irregular pair when p is irregular due to a certain condition described below being realized at 2k.)
Kummer found the irregular primes less than 165. In 1963, Lehmer reported results up to 10000 and Selfridge and Pollack announced in 1964 to have completed the table of irregular primes up to 25000. Although the two latter tables did not appear in print, Johnson found that (p, p − 3) is in fact an irregular pair for p = 16843 and that this is the first and only time this occurs for p < 30000. It was found in 1993 that the next time this happens is for p = 2124679; see Wolstenholme prime.

Definition
Class number criterion
An odd prime number p is defined to be regular if it does not divide the class number of the pth cyclotomic field Q(ζp), where ζp is a primitive pth root of unity. The prime number 2 is often considered regular as well.
The class number of the cyclotomic field is the number of ideals of the ring of integers Z[ζp] up to equivalence. Two ideals I, J are considered equivalent if there is a nonzero u in Q(ζp) so that I = uJ. The first few of these class numbers are listed in the OEIS.

Kummer's criterion
Ernst Kummer showed that an equivalent criterion for regularity is that p does not divide the numerator of any of the Bernoulli numbers B_k for k = 2, 4, 6, ..., p − 3. Kummer's proof that this is equivalent to the class number definition is strengthened by the Herbrand–Ribet theorem, which states certain consequences of p dividing the numerator of one of these Bernoulli numbers.

Siegel's conjecture
It has been conjectured that there are infinitely many regular primes. More precisely, Carl Ludwig Siegel conjectured that e^(−1/2), or about 60.65%, of all prime numbers are regular, in the asymptotic sense of natural density. Taking Kummer's criterion, the chance that any one of the (p − 3)/2 numerators of the Bernoulli numbers B_2, B_4, ..., B_{p−3} is not divisible by the prime p is 1 − 1/p, so that the chance that none of these numerators is divisible by p is (1 − 1/p)^((p−3)/2). By the limit definition of the mathematical constant e, we have (1 − 1/p)^p → e^(−1), so that we obtain the probability (1 − 1/p)^((p−3)/2) ≈ e^(−1/2). It follows that about e^(−1/2) of the primes should be regular by chance. Hart et al. report that the observed proportion of regular primes below their computational bound agrees with this estimate.

Irregular primes
An odd prime that is not regular is an irregular prime (or Bernoulli irregular or B-irregular to distinguish from other types of irregularity discussed below). The first few irregular primes are:
37, 59, 67, 101, 103, 131, 149, 157, 233, 257, 263, 271, 283, 293, 307, 311, 347, 353, 379, 389, 401, 409, 421, 433, 461, 463, 467, 491, 523, 541, 547, 557, 577, 587, 593, ...

Infinitude
K. L. Jensen (a student of Nielsen) proved in 1915 that there are infinitely many irregular primes of the form 4n + 3. In 1954 Carlitz gave a simple proof of the weaker result that there are in general infinitely many irregular primes. Metsänkylä proved in 1971 that for any integer T > 6 there are infinitely many irregular primes not of the form mT + 1 or mT − 1, and later generalized this.

Irregular pairs
If p is an irregular prime and p divides the numerator of the Bernoulli number B_2k for 0 < 2k < p − 1, then (p, 2k) is called an irregular pair. In other words, an irregular pair is a bookkeeping device to record, for an irregular prime p, the particular indices of the Bernoulli numbers at which regularity fails. The first few irregular pairs (when ordered by k) are:
(691, 12), (3617, 16), (43867, 18), (283, 20), (617, 20), (131, 22), (593, 22), (103, 24), (2294797, 24), (657931, 26), (9349, 28), (362903, 28), ...
The smallest even k such that the nth irregular prime divides B_k are
32, 44, 58, 68, 24, 22, 130, 62, 84, 164, 100, 84, 20, 156, 88, 292, 280, 186, 100, 200, 382, 126, 240, 366, 196, 130, 94, 292, 400, 86, 270, 222, 52, 90, 22, ...
For a given prime p, the number of such pairs is called the index of irregularity of p. Hence, a prime is regular if and only if its index of irregularity is zero. Similarly, a prime is irregular if and only if its index of irregularity is positive.
It was discovered that (p, p − 3) is in fact an irregular pair for p = 16843, as well as for p = 2124679. There are no more occurrences below the current search bound.

Irregular index
An odd prime p has irregular index n if and only if there are n values of k for which p divides B_2k, these ks being less than (p − 1)/2. The first irregular prime with irregular index greater than 1 is 157, which divides B_62 and B_110, so it has irregular index 2. Clearly, the irregular index of a regular prime is 0.
The irregular index of the nth prime is
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 2, 0, ... (starting with n = 2, i.e. the prime 3)
The irregular index of the nth irregular prime is
1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 2, 1, 1, 2, 1, 1, 1, 3, 1, 2, 3, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, ...
The primes having irregular index 1 are
37, 59, 67, 101, 103, 131, 149, 233, 257, 263, 271, 283, 293, 307, 311, 347, 389, 401, 409, 421, 433, 461, 463, 523, 541, 557, 577, 593, 607, 613, 619, 653, 659, 677, 683, 727, 751, 757, 761, 773, 797, 811, 821, 827, 839, 877, 881, 887, 953, 971, ...
The primes having irregular index 2 are
157, 353, 379, 467, 547, 587, 631, 673, 691, 809, 929, 1291, 1297, 1307, 1663, 1669, 1733, 1789, 1933, 1997, 2003, 2087, 2273, 2309, 2371, 2383, 2423, 2441, 2591, 2671, 2789, 2909, 2957, ...
The primes having irregular index 3 are
491, 617, 647, 1151, 1217, 1811, 1847, 2939, 3833, 4003, 4657, 4951, 6763, 7687, 8831, 9011, 10463, 10589, 12073, 13217, 14533, 14737, 14957, 15287, 15787, 15823, 16007, 17681, 17863, 18713, 18869, ...
The least primes having irregular index n are
2, 3, 37, 157, 491, 12613, 78233, 527377, 3238481, ... (this sequence defines "the irregular index of 2" as −1, and accordingly starts at n = −1.)

Generalizations
Euler irregular primes
Similarly, we can define an Euler irregular prime (or E-irregular) as a prime p that divides at least one Euler number E_2n with 0 < 2n ≤ p − 3.
The first few Euler irregular primes are
19, 31, 43, 47, 61, 67, 71, 79, 101, 137, 139, 149, 193, 223, 241, 251, 263, 277, 307, 311, 349, 353, 359, 373, 379, 419, 433, 461, 463, 491, 509, 541, 563, 571, 577, 587, ...
The Euler irregular pairs are
(61, 6), (277, 8), (19, 10), (2659, 10), (43, 12), (967, 12), (47, 14), (4241723, 14), (228135437, 16), (79, 18), (349, 18), (84224971, 18), (41737, 20), (354957173, 20), (31, 22), (1567103, 22), (1427513357, 22), (2137, 24), (111691689741601, 24), (67, 26), (61001082228255580483, 26), (71, 28), (30211, 28), (2717447, 28), (77980901, 28), ...
Vandiver proved in 1940 that Fermat's Last Theorem (x^p + y^p = z^p) has no solution for integers x, y, z with p not dividing xyz if p is Euler-regular. Gut proved that the corresponding equation has no solution if p has an E-irregularity index less than 5.
It was proven that there is an infinity of E-irregular primes. A stronger result was obtained: there is an infinity of E-irregular primes congruent to 1 modulo 8. As in the case of Kummer's B-regular primes, there is as yet no proof that there are infinitely many E-regular primes, though this seems likely to be true.

Strong irregular primes
A prime p is called strong irregular if it is both B-irregular and E-irregular (the indexes of Bernoulli and Euler numbers that are divisible by p can be either the same or different). The first few strong irregular primes are
67, 101, 149, 263, 307, 311, 353, 379, 433, 461, 463, 491, 541, 577, 587, 619, 677, 691, 751, 761, 773, 811, 821, 877, 887, 929, 971, 1151, 1229, 1279, 1283, 1291, 1307, 1319, 1381, 1409, 1429, 1439, ...
Proving Fermat's Last Theorem for a strong irregular prime p is more difficult (Kummer proved the first case of Fermat's Last Theorem for B-regular primes, and Vandiver proved the first case for E-regular primes); the most difficult case is when p is not only a strong irregular prime, but 2p + 1, 4p + 1, 8p + 1, 10p + 1, 14p + 1, and 16p + 1 are also all composite (Legendre proved the first case of Fermat's Last Theorem for primes p such that at least one of 2p + 1, 4p + 1, 8p + 1, 10p + 1, 14p + 1, and 16p + 1 is prime). The first few such p are
263, 311, 379, 461, 463, 541, 751, 773, 887, 971, 1283, ...

Weak irregular primes
A prime p is weak irregular if it is either B-irregular or E-irregular (or both). The first few weak irregular primes are
19, 31, 37, 43, 47, 59, 61, 67, 71, 79, 101, 103, 131, 137, 139, 149, 157, 193, 223, 233, 241, 251, 257, 263, 271, 277, 283, 293, 307, 311, 347, 349, 353, 373, 379, 389, 401, 409, 419, 421, 433, 461, 463, 491, 509, 523, 541, 547, 557, 563, 571, 577, 587, 593, ...
Like the Bernoulli irregularity, the weak regularity relates to the divisibility of class numbers of cyclotomic fields. In fact, a prime p is weak irregular if and only if p divides the class number of the 4pth cyclotomic field Q(ζ4p).

Weak irregular pairs
In this section, "a_n" means the numerator of the nth Bernoulli number if n is even, and "a_n" means the (n − 1)th Euler number if n is odd. Since for every odd prime p, p divides a_p if and only if p is congruent to 1 mod 4, and since p divides the denominator of the (p − 1)th Bernoulli number for every odd prime p, no odd prime p can divide a_{p−1}. Besides, if an odd prime p divides a_n (and 2p does not divide n), then p also divides a_{n+k(p−1)} for every integer k, provided the resulting index is > 1. (If 2p divides n, the statement should be changed to "p also divides a_{n+2kp}"; in fact, if 2p divides n and does not divide n, then p divides a_n.) For example, since 19 divides a_11 and 2 × 19 = 38 does not divide 11, 19 divides a_{18k+11} for all k.
Thus, in the definition of an irregular pair (p, n), the index n should be at most p − 2.
The following table shows all irregular pairs for small odd primes p:
The only primes below 1000 with weak irregular index 3 are 307, 311, 353, 379, 577, 587, 617, 619, 647, 691, 751, and 929. Besides, 491 is the only prime below 1000 with weak irregular index 4, and all other odd primes below 1000 have weak irregular index 0, 1, or 2. (The weak irregular index is defined as the number of integers n such that p divides a_n.)
The following table shows all irregular pairs with n ≤ 63. (To get these irregular pairs, we only need to factorize a_n.) For more information (even n up to 300 and odd n up to 201), see the references.
The following table shows irregular pairs (p, p − n); it is conjectured that there are infinitely many such irregular pairs for every natural number n, but only a few were found for fixed n, and for some values of n even no such prime p is known.

See also
Wolstenholme prime
References
Further reading
External links
Chris Caldwell, The Prime Glossary: regular prime at The Prime Pages.
Keith Conrad, Fermat's last theorem for regular primes.
Bernoulli irregular prime
Euler irregular prime
Bernoulli and Euler irregular primes.
Factorization of Bernoulli and Euler numbers
Algebraic number theory
Cyclotomic fields
Classes of prime numbers
Unsolved problems in number theory
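Kummer's criterion from the Definition section can be checked directly for small primes using exact rational Bernoulli numbers. The sketch below is illustrative and leans on SymPy (a choice of this example, not of the article); it reproduces the first few irregular primes and the smallest irregular pair for 37 noted above.

# Illustrative sketch of Kummer's criterion: an odd prime p is regular iff
# p divides no numerator of B_2, B_4, ..., B_{p-3}. Exact rationals make
# this practical only for small p.
from sympy import bernoulli, isprime

def is_regular(p: int) -> bool:
    assert isprime(p) and p > 2
    return all(bernoulli(k).p % p != 0 for k in range(2, p - 2, 2))

def irregular_pairs(p: int):
    # .p is the numerator of the exact Rational returned by bernoulli().
    return [(p, k) for k in range(2, p - 2, 2) if bernoulli(k).p % p == 0]

print([p for p in [3, 5, 7, 31, 37, 59, 67] if not is_regular(p)])  # [37, 59, 67]
print(irregular_pairs(37))   # [(37, 32)], matching the smallest-k list above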
Regular prime
[ "Mathematics" ]
3,739
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Algebraic number theory", "Mathematical problems", "Number theory" ]
323,705
https://en.wikipedia.org/wiki/Newman%E2%80%93Shanks%E2%80%93Williams%20prime
In mathematics, a Newman–Shanks–Williams prime (NSW prime) is a prime number p which can be written in the form p = S(2m + 1), an odd-indexed term of the sequence S defined below. NSW primes were first described by Morris Newman, Daniel Shanks and Hugh C. Williams in 1981 during the study of finite simple groups with square order.

The first few NSW primes are 7, 41, 239, 9369319, 63018038201, … , corresponding to the indices 3, 5, 7, 19, 29, … .

The sequence S alluded to in the formula can be described by the following recurrence relation: S(0) = 1, S(1) = 1, and S(n) = 2S(n − 1) + S(n − 2) for all n ≥ 2. The first few terms of the sequence are 1, 1, 3, 7, 17, 41, 99, … . Each term in this sequence is half the corresponding term in the sequence of companion Pell numbers. These numbers also appear in the continued fraction convergents to √2.

Further reading
External links
The Prime Glossary: NSW number
Classes of prime numbers
Unsolved problems in mathematics
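The recurrence makes the known NSW primes easy to reproduce. A small illustrative sketch (the helper name is this example's assumption; primality testing is delegated to SymPy):

# Illustrative sketch: generate S(n) and keep the odd-indexed prime values.
from sympy import isprime

def nsw_primes(max_index: int):
    s_prev, s = 1, 1                      # S(0), S(1)
    hits = []
    for n in range(2, max_index + 1):
        s_prev, s = s, 2 * s + s_prev     # S(n) = 2*S(n-1) + S(n-2)
        if n % 2 == 1 and isprime(s):
            hits.append((n, s))
    return hits

print(nsw_primes(30))
# [(3, 7), (5, 41), (7, 239), (19, 9369319), (29, 63018038201)]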
Newman–Shanks–Williams prime
[ "Mathematics" ]
200
[ "Unsolved problems in mathematics", "Mathematical problems" ]
323,707
https://en.wikipedia.org/wiki/Finite%20group
In abstract algebra, a finite group is a group whose underlying set is finite. Finite groups often arise when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. Important examples of finite groups include cyclic groups and permutation groups. The study of finite groups has been an integral part of group theory since it arose in the 19th century. One major area of study has been classification: the classification of finite simple groups (those with no nontrivial normal subgroup) was completed in 2004.

History
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known. During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields. Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.

Examples
Permutation groups
The symmetric group Sn on a finite set of n symbols is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (the number of elements) of the symmetric group Sn is n!.

Cyclic groups
A cyclic group Zn is a group all of whose elements are powers of a particular element a, where a^n = e, the identity. A typical realization of this group is as the complex nth roots of unity. Sending a to a primitive root of unity gives an isomorphism between the two. This can be done with any finite cyclic group.

Finite abelian groups
An abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on their order (the axiom of commutativity). They are named after Niels Henrik Abel. An arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. The automorphism group of a finite abelian group can be described directly in terms of these invariants. The theory had been first developed in the 1879 paper of Georg Frobenius and Ludwig Stickelberger and later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter of linear algebra.

Groups of Lie type
A group of Lie type is a group closely related to the group G(k) of rational points of a reductive linear algebraic group G with values in the field k.
Finite groups of Lie type give the bulk of nonabelian finite simple groups. Special cases include the classical groups, the Chevalley groups, the Steinberg groups, and the Suzuki–Ree groups. Finite groups of Lie type were among the first groups to be considered in mathematics, after cyclic, symmetric and alternating groups, with the projective special linear groups over prime finite fields, PSL(2, p), being constructed by Évariste Galois in the 1830s. The systematic exploration of finite groups of Lie type started with Camille Jordan's theorem that the projective special linear group PSL(2, q) is simple for q ≠ 2, 3. This theorem generalizes to projective groups of higher dimensions and gives an important infinite family PSL(n, q) of finite simple groups. Other classical groups were studied by Leonard Dickson in the beginning of the 20th century. In the 1950s Claude Chevalley realized that after an appropriate reformulation, many theorems about semisimple Lie groups admit analogues for algebraic groups over an arbitrary field k, leading to construction of what are now called Chevalley groups. Moreover, as in the case of compact simple Lie groups, the corresponding groups turned out to be almost simple as abstract groups (Tits simplicity theorem). Although it was known since the 19th century that other finite simple groups exist (for example, Mathieu groups), gradually a belief formed that nearly all finite simple groups can be accounted for by appropriate extensions of Chevalley's construction, together with cyclic and alternating groups. Moreover, the exceptions, the sporadic groups, share many properties with the finite groups of Lie type, and in particular, can be constructed and characterized based on their geometry in the sense of Tits. The belief has now become a theorem – the classification of finite simple groups. Inspection of the list of finite simple groups shows that groups of Lie type over a finite field include all the finite simple groups other than the cyclic groups, the alternating groups, the Tits group, and the 26 sporadic simple groups.

Main theorems
Lagrange's theorem
For any finite group G, the order (number of elements) of every subgroup H of G divides the order of G. The theorem is named after Joseph-Louis Lagrange.

Sylow theorems
These provide a partial converse to Lagrange's theorem, giving information about how many subgroups of a given order are contained in G.

Cayley's theorem
Cayley's theorem, named in honour of Arthur Cayley, states that every group G is isomorphic to a subgroup of the symmetric group acting on G. This can be understood as an example of the group action of G on the elements of G.

Burnside's theorem
Burnside's theorem in group theory states that if G is a finite group of order p^a q^b, where p and q are prime numbers, and a and b are non-negative integers, then G is solvable. Hence each non-Abelian finite simple group has order divisible by at least three distinct primes.

Feit–Thompson theorem
The Feit–Thompson theorem, or odd order theorem, states that every finite group of odd order is solvable. It was proved by Walter Feit and John G. Thompson in the early 1960s.

Classification of finite simple groups
The classification of finite simple groups is a theorem stating that every finite simple group belongs to one of the following families:
A cyclic group with prime order;
An alternating group of degree at least 5;
A simple group of Lie type;
One of the 26 sporadic simple groups;
The Tits group (sometimes considered as a 27th sporadic group).
The finite simple groups can be seen as the basic building blocks of all finite groups, in a way reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference with respect to the case of integer factorization is that such "building blocks" do not necessarily determine uniquely a group, since there might be many non-isomorphic groups with the same composition series or, put in another way, the extension problem does not have a unique solution. The proof of the theorem consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004. Gorenstein (d. 1992), Lyons, and Solomon are gradually publishing a simplified and revised version of the proof.

Number of groups of a given order
Given a positive integer n, it is not at all a routine matter to determine how many isomorphism types of groups of order n there are. Every group of prime order is cyclic, because Lagrange's theorem implies that the cyclic subgroup generated by any of its non-identity elements is the whole group. If n is the square of a prime, then there are exactly two possible isomorphism types of group of order n, both of which are abelian. If n is a higher power of a prime, then results of Graham Higman and Charles Sims give asymptotically correct estimates for the number of isomorphism types of groups of order n, and the number grows very rapidly as the power increases. Depending on the prime factorization of n, some restrictions may be placed on the structure of groups of order n, as a consequence, for example, of results such as the Sylow theorems. For example, every group of order pq is cyclic when q < p are primes with p − 1 not divisible by q. For a necessary and sufficient condition, see cyclic number. If n is squarefree, then any group of order n is solvable. Burnside's theorem, proved using group characters, states that every group of order n is solvable when n is divisible by fewer than three distinct primes, i.e. if n = p^a q^b, where p and q are prime numbers, and a and b are non-negative integers. By the Feit–Thompson theorem, which has a long and complicated proof, every group of order n is solvable when n is odd. For every positive integer n, most groups of order n are solvable. To see this for any particular order is usually not difficult (for example, there is, up to isomorphism, one non-solvable group and 12 solvable groups of order 60) but the proof of this for all orders uses the classification of finite simple groups. For any positive integer n there are at most two simple groups of order n, and there are infinitely many positive integers n for which there are two non-isomorphic simple groups of order n.

Table of distinct groups of order n

See also
Association scheme
Cauchy's theorem (group theory)
Classification of finite simple groups
Commuting probability
Finite ring
Finite-state machine
Infinite group
List of finite simple groups
List of small groups
Modular representation theory
Monstrous moonshine
P-group
Profinite group
Representation theory of finite groups
References
Further reading
External links
Small groups on GroupNames
A classifier for groups of small order
Properties of groups
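Several of the statements above (Lagrange's theorem, solvability, non-abelianness) can be checked experimentally on small groups. The sketch below uses SymPy's permutation-group module purely as an illustration; the choice of library is this example's assumption, not something used by the article.

# Illustrative sketch: Lagrange's theorem and solvability on S_4.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)            # S_4, of order 4! = 24
print(G.order())                 # 24

# The cyclic subgroup generated by one 4-cycle has order 4, and 4 divides 24,
# as Lagrange's theorem requires.
H = PermutationGroup([Permutation([1, 2, 3, 0])])
print(H.order(), G.order() % H.order() == 0)   # 4 True

# Order 24 = 2^3 * 3 involves only two primes, so Burnside's theorem already
# guarantees solvability; S_4 is solvable but not abelian.
print(G.is_abelian, G.is_solvable)   # False True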
Finite group
[ "Mathematics" ]
2,116
[ "Mathematical structures", "Algebraic structures", "Finite groups", "Properties of groups" ]
323,725
https://en.wikipedia.org/wiki/Oracle%20Database
Oracle Database (commonly referred to as Oracle DBMS, Oracle Autonomous Database, or simply as Oracle) is a proprietary multi-model database management system produced and marketed by Oracle Corporation. It is a database commonly used for running online transaction processing (OLTP), data warehousing (DW) and mixed (OLTP & DW) database workloads. Oracle Database is available by several service providers on-premises, on-cloud, or as a hybrid cloud installation. It may be run on third party servers as well as on Oracle hardware (Exadata on-premises, on Oracle Cloud or at Cloud at Customer). Oracle Database uses SQL for database updating and retrieval. History Larry Ellison and his two friends and former co-workers, Bob Miner and Ed Oates, started a consultancy called Software Development Laboratories (SDL) in 1977. SDL developed the original version of the Oracle software. The name Oracle comes from the code-name of a CIA-funded project Ellison had worked on while formerly employed by Ampex. Releases and versions Oracle products follow a custom release-numbering and -naming convention. The "ai" in the current release, Oracle Database 23ai, stands for "Artificial Intelligence". Previous releases (e.g. Oracle Database 19c, 10g, and Oracle9i Database) have used suffixes of "c", "g", and "i" which stand for "Cloud", "Grid", and "Internet" respectively. Prior to the release of Oracle8i Database, no suffixes featured in Oracle Database naming conventions. There was no v1 of Oracle Database, as co-founder Larry Ellison "knew no one would want to buy version 1". For some database releases, Oracle also provides an Express Edition (XE) that is free to use. Oracle Database release numbering has used the following codes: The Introduction to Oracle Database includes a brief history on some of the key innovations introduced with each major release of Oracle Database. See My Oracle Support (MOS) note Release Schedule of Current Database Releases (Doc ID 742060.1) for the current Oracle Database releases and their patching end dates. Patch updates and security alerts Prior to Oracle Database 18c, Oracle Corporation released Critical Patch Updates (CPUs) and Security Patch Updates (SPUs) and Security Alerts to close security vulnerabilities. These releases are issued quarterly; some of these releases have updates issued prior to the next quarterly release. Starting with Oracle Database 18c, Oracle Corporation releases Release Updates (RUs) and Release Update Revisions (RURs). RUs usually contain security, regression (bug), optimizer, and functional fixes which may include feature extensions as well. RURs include all fixes from their corresponding RU but only add new security and regression fixes. However, no new optimizer or functional fixes are included. Competition In the market for relational databases, Oracle Database competes against commercial products such as IBM Db2 and Microsoft SQL Server. Oracle and IBM tend to battle for the mid-range database market on Unix and Linux platforms, while Microsoft dominates the mid-range database market on Microsoft Windows platforms. However, since they share many of the same customers, Oracle and IBM tend to support each other's products in many middleware and application categories (for example: WebSphere, PeopleSoft, and Siebel Systems CRM), and IBM's hardware divisions work closely with Oracle on performance-optimizing server-technologies (for example, Linux on IBM Z). 
Niche commercial competitors include Teradata (in data warehousing and business intelligence), Software AG's ADABAS, Sybase, and IBM's Informix, among many others. In the cloud, Oracle Database competes against the database services of AWS, Microsoft Azure, and Google Cloud Platform. Increasingly, the Oracle database products compete against open-source software relational and non-relational database systems such as PostgreSQL, MongoDB, Couchbase, Neo4j, ArangoDB and others. Oracle acquired Innobase, supplier of the InnoDB codebase to MySQL, in part to compete better against open source alternatives, and acquired Sun Microsystems, owner of MySQL, in 2010. Database products licensed as open-source are, by the legal terms of the Open Source Definition, free to distribute and free of royalty or other licensing fees. See also Comparison of relational database management systems Comparison of object–relational database management systems Database management system List of relational database management systems List of databases using MVCC Oracle SQL Developer Oracle Real Application Testing References External links Overview provided by Oracle Corporation. Client-server database management systems Relational database management systems Proprietary database management systems Database engines Relational database management software for Linux Cloud infrastructure Oracle Cloud Services Database management systems
Oracle Database
[ "Technology" ]
985
[ "Cloud infrastructure", "IT infrastructure" ]
323,737
https://en.wikipedia.org/wiki/Superfund
Superfund is a United States federal environmental remediation program established by the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). The program is administered by the Environmental Protection Agency (EPA) and is designed to pay for investigating and cleaning up sites contaminated with hazardous substances. Sites managed under this program are referred to as Superfund sites. Of the tens of thousands of sites selected for possible action under the Superfund program, 1178 (as of 2024) remain on the National Priorities List (NPL) that makes them eligible for cleanup under the Superfund program. Sites on the NPL are considered the most highly contaminated and undergo longer-term remedial investigation and remedial action (cleanups). The state of New Jersey, the fifth smallest state in the U.S., is the location of about ten percent of the priority Superfund sites, a disproportionate amount. The EPA seeks to identify parties responsible for hazardous substances released to the environment (polluters) and either compel them to clean up the sites, or it may undertake the cleanup on its own using the Superfund (a trust fund) and seek to recover those costs from the responsible parties through settlements or other legal means. Approximately 70% of Superfund cleanup activities historically have been paid for by the potentially responsible parties (PRPs), reflecting the polluter pays principle. However, 30% of the time the responsible party either cannot be found or is unable to pay for the cleanup. In these circumstances, taxpayers had been paying for the cleanup operations. Through the 1980s, most of the funding came from an excise tax on petroleum and chemical manufacturers. However, in 1995, Congress chose not to renew this tax and the burden of the cost was shifted to taxpayers in the general public. Since 2001, most of the cleanup of hazardous waste sites has been funded through taxpayers generally. Despite its name, the program suffered from under-funding, and by 2014 Superfund NPL cleanups had decreased to only 8 sites, out of over 1,200. In November 2021, the Infrastructure Investment and Jobs Act reauthorized an excise tax on chemical manufacturers, for ten years starting in July 2022. The EPA and state agencies use the Hazard Ranking System (HRS) to calculate a site score (ranging from 0 to 100) based on the actual or potential release of hazardous substances from a site. A score of 28.5 places a site on the National Priorities List, eligible for long-term, remedial action (i.e., cleanup) under the Superfund program. , there were 1,333 sites listed; an additional 448 had been delisted, and 43 new sites have been proposed. Superfund also authorizes natural resource trustees, which may be federal, state, and/or tribal, to perform a Natural Resource Damage Assessment (NRDA). Natural resource trustees determine and quantify injuries caused to natural resources through either releases of hazardous substances or cleanup actions and then seek to restore ecosystem services to the public through conservation, restoration, and/or acquisition of equivalent habitat. Responsible parties are assessed damages for the cost of the assessment and the restoration of ecosystem services. For the federal government, EPA, US Fish and Wildlife Service, or the National Oceanic and Atmospheric Administration may act as natural resource trustees. The US Department of Interior keeps a list of the natural resource trustees appointed by state's governors. 
Federally recognized Tribes may act as trustees for natural resources, including natural resources related to Tribal subsistence, cultural uses, spiritual values, and uses that are preserved by treaties. Tribal natural resource trustees are appointed by tribal governments. Some states have their own versions of a state Superfund law and may perform NRDA either through state laws or through other federal authorities such as the Oil Pollution Act. CERCLA created the Agency for Toxic Substances and Disease Registry (ATSDR). The primary goal of a Superfund cleanup is to reduce the risks to human health through a combination of cleanup, engineered controls like caps and site restrictions such as groundwater use restrictions. A secondary goal is to return the site to productive use as a business, recreation or as a natural ecosystem. Identifying the intended reuse early in the cleanup often results in faster and less expensive cleanups. EPA's Superfund Redevelopment Program provides tools and support for site redevelopment. History CERCLA was enacted by Congress in 1980 in response to the threat of hazardous waste sites, typified by the Love Canal disaster in New York, and the Valley of the Drums in Kentucky. It was recognized that funding would be difficult, since the responsible parties were not easily found, and so the Superfund was established to provide funding through a taxing mechanism on certain industries and to create a comprehensive liability framework to be able to hold a broader range of parties responsible. The initial Superfund trust fund to clean up sites where a polluter could not be identified, could not or would not pay (bankruptcy or refusal), consisted of about $1.6 billion and then increased to $8.5 billion. Initially, the framework for implementing the program came from the oil and hazardous substances National Contingency Plan. The EPA published the first Hazard Ranking System in 1981, and the first National Priorities List in 1983. Implementation of the program in early years, during the Ronald Reagan administration, was ineffective, with only 16 of the 799 Superfund sites cleaned up and only $40 million of $700 million in recoverable funds from responsible parties collected. The mismanagement of the program under Anne Gorsuch Burford, Reagan's first chosen Administrator of the agency, led to a congressional investigation and the reauthorization of the program in 1986 through an act amending CERCLA. 1986 amendments The Superfund Amendments and Reauthorization Act of 1986 (SARA) added minimum cleanup requirements in Section 121 and required that most cleanup agreements with polluters be entered in federal court as a consent decree subject to public comment (section 122). This was to address sweetheart deals between industry and the Reagan-era EPA that Congress had discovered. Environmental justice initiative In 1994 President Bill Clinton issued Executive Order 12898, which called for federal agencies to make achieving environmental justice a requirement by addressing low income populations and minority populations that have experienced disproportionate adverse health and environmental effects as a result of their programs, policies, and activities. The EPA regional offices had to apply required guidelines for its Superfund managers to take into consideration data analysis, managed public participation, and economic opportunity when considering the geography of toxic waste site remediation. 
Some environmentalists and industry lobbyists saw the Clinton administration's environmental justice policy as an improvement, but the order did not receive bipartisan support. The newly elected Republican Congress made numerous unsuccessful efforts to significantly weaken the program. The Clinton administration then adopted some industry favored reforms as policy and blocked most major changes. Decline of excise tax Until the mid-1990s, most of the funding came from an excise tax on the petroleum and chemical industries, reflecting the polluter pays principle. Even though by 1995 the Superfund balance had decreased to about $4 billion, Congress chose not to reauthorize collection of the tax, and by 2003 the fund was empty. Since 2001, most of the funding for cleanups of hazardous waste sites has come from taxpayers. State governments pay 10 percent of cleanup costs in general, and at least 50 percent of cleanup costs if the state operated the facility responsible for contamination. By 2013 federal funding for the program had decreased from $2 billion in 1999 to less than $1.1 billion (in constant dollars). In 2001, the EPA used funds from the Superfund program to institute the cleanup of anthrax on Capitol Hill after the 2001 anthrax attacks. It was the first time the agency dealt with a biological release rather than a chemical or oil spill. From 2000 to 2015, Congress allocated about $1.26 billion of general revenue to the Superfund program each year. Consequently, less than half the number of sites were cleaned up from 2001 to 2008, compared to before. The decrease continued during the Obama administration, and since under the direction of EPA Administrator Gina McCarthy Superfund cleanups decreased even more from 20 in 2009 to a mere 8 in 2014. Reauthorization of excise tax In November 2021, Congress reauthorized an excise tax on chemical manufacturers, under the Infrastructure Investment and Jobs Act. The new chemical excise tax is effective July 1, 2022, and is double the rate of the previous Superfund tax. The 2021 law also authorized $3.5 billion in emergency appropriations from the U.S. government general fund for hazardous site cleanups in the immediate future. Provisions CERCLA authorizes two kinds of response actions: Removal actions. These are typically short-term response actions, where actions may be taken to address releases or threatened releases requiring prompt response. Removal actions are classified as: (1) emergency; (2) time-critical; and (3) non-time critical. Removal responses are generally used to address localized risks such as abandoned drums containing hazardous substances, and contaminated surface soils posing acute risks to human health or the environment. Remedial actions. These are usually long-term response actions. Remedial actions seek to permanently and significantly reduce the risks associated with releases or threats of releases of hazardous substances, and are generally larger, more expensive actions. They can include measures such as using containment to prevent pollutants from migrating, and combinations of removing, treating, or neutralizing toxic substances. These actions can be conducted with federal funding only at sites listed on the EPA National Priorities List (NPL) in the United States and the territories. 
Remedial action by responsible parties under consent decrees or unilateral administrative orders with EPA oversight may be performed at both NPL and non-NPL sites, commonly called Superfund Alternative Sites in published EPA guidance and policy documents. A potentially responsible party (PRP) is a possible polluter who may eventually be held liable under CERCLA for the contamination or misuse of a particular property or resource. Four classes of PRPs may be liable for contamination at a Superfund site: the current owner or operator of the site; the owner or operator of a site at the time that disposal of a hazardous substance, pollutant or contaminant occurred; a person who arranged for the disposal of a hazardous substance, pollutant or contaminant at a site; and a person who transported a hazardous substance, pollutant or contaminant to a site, who also has selected that site for the disposal of the hazardous substances, pollutants or contaminants. The liability scheme of CERCLA changed commercial and industrial real estate, making sellers liable for contamination from past activities, meaning they can't pass liability onto unknowing buyers without any responsibility. Buyers also have to be aware of future liabilities. The CERCLA also required the revision of the National Oil and Hazardous Substances Pollution Contingency Plan 9605(a)(NCP). The NCP guides how to respond to releases and threatened releases of hazardous substances, pollutants, or contaminants. The NCP established the National Priorities List, which appears as Appendix B to the NCP, and serves as EPA's information and management tool. The NPL is updated periodically by federal rulemaking. The identification of a site for the NPL is intended primarily to guide the EPA in: Determining which sites warrant further investigation to assess the nature and extent of risks to human health and the environment Identifying what CERCLA-financed remedial actions may be appropriate Notifying the public of sites, the EPA believes warrant further investigation Notifying PRPs that the EPA may initiate CERCLA-financed remedial action. Despite the name, the Superfund trust fund has lacked sufficient funds to clean up even a small number of the sites on the NPL. As a result, the EPA typically negotiates consent orders with PRPs to study sites and develop cleanup alternatives, subject to EPA oversight and approval of all such activities. The EPA then issues a Proposed Plans for remedial action for a site on which it takes public comment, after which it makes a cleanup decision in a Record of Decision (ROD). RODs are typically implemented under consent decrees by PRPs or under unilateral orders if consent cannot be reached. If a party fails to comply with such an order, it may be fined up to $37,500 for each day that non-compliance continues. A party that spends money to clean up a site may sue other PRPs in a contribution action under the CERCLA. CERCLA liability has generally been judicially established as joint and several among PRPs to the government for cleanup costs (i.e., each PRP is hypothetically responsible for all costs subject to contribution), but CERCLA liability is allocable among PRPs in contribution based on comparative fault. An "orphan share" is the share of costs at a Superfund site that is attributable to a PRP that is either unidentifiable or insolvent. The EPA tries to treat all PRPs equitably and fairly. Budgetary cuts and constraints can make more equitable treatment of PRPs more difficult. 
Procedures Upon notification of a potentially hazardous waste site, the EPA conducts a Preliminary Assessment/Site Inspection (PA/SI), which involves records reviews, interviews, visual inspections, and limited field sampling. Information from the PA/SI is used by the EPA to develop a Hazard Ranking System (HRS) score to determine the CERCLA status of the site. Sites that score high enough to be listed typically proceed to a Remedial Investigation/Feasibility Study (RI/FS). The RI includes an extensive sampling program and risk assessment that defines the nature and extent of the site contamination and risks. The FS is used to develop and evaluate various remediation alternatives. The preferred alternative is presented in a Proposed Plan for public review and comment, followed by a selected alternative in a ROD. The site then enters into a Remedial Design phase and then the Remedial Action phase. Many sites include long-term monitoring. Once the Remedial Action has been completed, reviews are required every five years, whenever hazardous substances are left onsite above levels safe for unrestricted use. The CERCLA information system (CERCLIS) is a database maintained by the EPA and the states that lists sites where releases may have occurred, must be addressed, or have been addressed. CERCLIS consists of three inventories: the CERCLIS Removal Inventory, the CERCLIS Remedial Inventory, and the CERCLIS Enforcement Inventory. The Superfund Innovative Technology Evaluation (SITE) program supports development of technologies for assessing and treating waste at Superfund sites. The EPA evaluates the technology and provides an assessment of its potential for future use in Superfund remediation actions. The SITE program consists of four related components: the Demonstration Program, the Emerging Technologies Program, the Monitoring and Measurement Technologies Program, and Technology Transfer activities. A reportable quantity (RQ) is the minimum quantity of a hazardous substance which, if released, must be reported. A source control action represents the construction or installation and start-up of those actions necessary to prevent the continued release of hazardous substances (primarily from a source on top of or within the ground, or in buildings or other structures) into the environment. A section 104(e) letter is a request by the government for information about a site. It may include general notice to a potentially responsible party that CERCLA-related action may be undertaken at a site for which the recipient may be responsible. This section also authorizes the EPA to enter facilities and obtain information relating to PRPs, hazardous substances releases, and liability, and to order access for CERCLA activities. The 104(e) letter information-gathering resembles written interrogatories in civil litigation. A section 106 order is a unilateral administrative order issued by EPA to PRP(s) to perform remedial actions at a Superfund site when the EPA determines there may be an imminent and substantial endangerment to the public health or welfare or the environment because of an actual or threatened release of a hazardous substance from a facility, subject to treble damages and daily fines if the order is not obeyed. A remedial response is a long-term action that stops or substantially reduces a release of a hazardous substance that could affect public health or the environment. 
The term remediation, or cleanup, is sometimes used interchangeably with the terms remedial action, removal action, response action, remedy, or corrective action. A nonbinding allocation of responsibility (NBAR) is a device, established in the Superfund Amendments and Reauthorization Act, that allows the EPA to make a nonbinding estimate of the proportional share that each of the various responsible parties at a Superfund site should pay toward the costs of cleanup. Relevant and appropriate requirements are those United States federal or state cleanup requirements that, while not "applicable," address problems sufficiently similar to those encountered at the CERCLA site that their use is appropriate. Requirements may be relevant and appropriate if they would be "applicable" except for jurisdictional restrictions associated with the requirement. Implementation , there were 1,322 sites listed; an additional 447 had been delisted, and 51 new sites have been proposed. Historically about 70 percent of Superfund cleanup activities have been paid for by potentially responsible party (PRPs). When the party either cannot be found or is unable to pay for the cleanup, the Superfund law originally paid for site cleanups through an excise tax on petroleum and chemical manufacturers. The last full fiscal year (FY) in which the Department of the Treasury collected the excise tax was 1995. At the end of FY 1996, the invested trust fund balance was $6.0 billion. This fund was exhausted by the end of FY 2003. Since that time Superfund sites for which the PRPs could not pay have been paid for from the general fund. Under the 2021 authorization by Congress, collection of excise taxes from chemical manufacturers will resume in 2022. Hazard Ranking System The Hazard Ranking System is a scoring system used to evaluate potential relative risks to public health and the environment from releases or threatened releases of hazardous wastes at uncontrolled waste sites. Under the Superfund program, the EPA and state agencies use the HRS to calculate a site score (ranging from 0 to 100) based on the actual or potential release of hazardous substances from a site through air, surface water or groundwater. A score of 28.5 places the site on the National Priorities List, making the site eligible for long-term remedial action (i.e., cleanup) under the Superfund program. Environmental discrimination Federal actions to address the disproportionate health and environmental disparities that minority and low-income populations face through Executive Order 12898 required federal agencies to make environmental justice central to their programs and policies. Superfund sites have been shown to impact minority communities the most. Despite legislation specifically designed to ensure equity in Superfund listing, marginalized populations still experience a lesser chance of successful listing and cleanup than areas with higher income levels. After the executive order had been put in place, there persisted a discrepancy between the demographics of the communities living near toxic waste sites and their listing as Superfund sites, which would otherwise grant them federally funded cleanup projects. Communities with both increased minority and low-income populations were found to have lowered their chances of site listing after the executive order, while on the other hand, increases in income led to greater chances of site listing. 
Of the population living within a 1-mile radius of a Superfund site, 44% are minorities, although minorities make up only around 37% of the nation's population. As of January 2021, more than 9,000 federally subsidized properties, including ones with hundreds of dwellings, were less than a mile from a Superfund site.

Case studies in African American communities

In 1978, residents of the rural black community of Triana, Alabama were found to be contaminated with DDT and PCB; some had the highest levels of DDT ever recorded in human history. The DDT was found in high levels in Indian Creek, which many residents relied on for sustenance fishing. Although this major health threat to residents of Triana was discovered in 1978, the federal government did not act until five years later, after the mayor of Triana filed a class-action lawsuit in 1980.

In West Dallas, Texas, a mostly African American and Latino community, a lead smelter poisoned the surrounding neighborhood, elementary school, and day cares for more than five decades. Dallas city officials were informed in 1972 that children in the proximity of the smelter were being exposed to lead contamination. The city sued the lead smelters in 1974, then reduced its lead regulations in 1976. It was not until 1981 that the EPA commissioned a study on the lead contamination in this neighborhood, which found the same results that had been found a decade earlier. In 1983, the surrounding day cares had to close because of the lead exposure while the lead smelter remained operating. It was later revealed that EPA Deputy Administrator John Hernandez had deliberately stalled the cleanup of the lead-contaminated hot spots. The site was not declared a Superfund site until 1993, and at the time it was one of the largest. It was not until 2004 that the EPA completed the clean-up efforts and eliminated the lead pollutant sources from the site.

The Afton community of Warren County, North Carolina is the site of one of the most prominent environmental injustice cases and is often pointed to as the root of the environmental justice movement. PCB-contaminated waste was illegally dumped in the area, which eventually became the site of a PCB landfill. Community leaders pressed the state for the site to be cleaned up for an entire decade until it was finally detoxified. However, this decontamination did not return the site to its pre-1982 conditions. There has been a call for reparations to the community which has not yet been met.

Bayview-Hunters Point, San Francisco, a historically African American community, has faced persistent environmental discrimination due to the poor remediation of the San Francisco Naval Shipyard, a federally declared Superfund site. The negligence of multiple agencies in adequately cleaning this site has subjected Bayview residents to high rates of pollution, which has been tied to high rates of cancer, asthma, and overall higher health hazards than in other regions of San Francisco.

Case studies in Native American communities

One example is the Church Rock uranium mill spill on the Navajo Nation. It was the largest radioactive spill in the US but received a long delay in government response and cleanup after being placed as a lower-priority site. Two sets of five-year cleanup plans have been put in place by the US Congress, but contamination from the Church Rock incident has still not been completely cleaned up.
Today, uranium contamination from mining during the Cold War era remains throughout the Navajo Nation, posing health risks to the Navajo community.

Accessing data

The data in the Superfund Program are available to the public.

Superfund Site Search
Superfund Policy, Reports and Other Documents

TOXMAP was a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that was deprecated on December 16, 2019. The application used maps of the United States to help users visually explore data from the EPA Toxics Release Inventory (TRI) and Superfund programs. TOXMAP was a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information was taken from NLM's Toxicology Data Network (TOXNET), PubMed, and other authoritative sources.

Future challenges

While the simple and relatively easy sites have been cleaned up, the EPA is now addressing a residual number of difficult and massive sites, such as large-area mining and sediment sites, which is tying up a significant amount of funding. Also, while the federal government has reserved funding for cleanup of federal facility sites, this cleanup is going much more slowly. The delay is due to a number of reasons, including the EPA's limited ability to require performance, the difficulty of dealing with Department of Energy radioactive wastes, and the sheer number of federal facility sites.

See also
Brownfield land
Formerly Used Defense Sites - Environmental restoration program
Hazardous Materials Transportation Act
National Oil and Hazardous Substances Contingency Plan
Phase I Environmental Site Assessment
Pollution
Toxin

References

Further reading
"High Court Limits Liability in Superfund Cases." – New York Times, 2009-05-05

External links
Common Chemicals found at Superfund Sites, August 1994
Superfund Program – EPA
Superfund sites by state – EPA
Superfund: A Half Century of Progress, a report by the EPA Alumni Association
Agency for Toxic Substances and Disease Registry
National Priorities List of Hazardous Substances
42 U.S.C. chapter 103 (CERCLA) of the United States Code from the LII
42 U.S.C. chapter 103 (CERCLA) of the United States Code from the US House of Representatives
CERCLA (PDF/details) as amended in the GPO Statute Compilations collection
Hazardous Substance Superfund account on USAspending.gov

Pollution in the United States
United States Environmental Protection Agency
United States federal environmental legislation
1980 in the environment
1980 in American law
96th United States Congress
Environmental issues in the United States
Love Canal
Superfund
[ "Technology" ]
5,285
[ "Hazardous waste", "Superfund sites" ]
323,792
https://en.wikipedia.org/wiki/Genome%20project
Genome projects are scientific endeavours that ultimately aim to determine the complete genome sequence of an organism (be it an animal, a plant, a fungus, a bacterium, an archaean, a protist or a virus) and to annotate protein-coding genes and other important genome-encoded features. The genome sequence of an organism includes the collective DNA sequences of each chromosome in the organism. For a bacterium containing a single chromosome, a genome project will aim to map the sequence of that chromosome. For the human species, whose genome includes 22 pairs of autosomes and 2 sex chromosomes, a complete genome sequence will involve 46 separate chromosome sequences. The Human Genome Project is a well-known example of a genome project.

Genome assembly

Genome assembly refers to the process of taking a large number of short DNA sequences and reassembling them to create a representation of the original chromosomes from which the DNA originated. In a shotgun sequencing project, all the DNA from a source (usually a single organism, anything from a bacterium to a mammal) is first fractured into millions of small pieces. These pieces are then "read" by automated sequencing machines. A genome assembly algorithm works by taking all the pieces and aligning them to one another, and detecting all places where two of the short sequences, or reads, overlap. These overlapping reads can be merged, and the process continues. Genome assembly is a very difficult computational problem, made more difficult because many genomes contain large numbers of identical sequences, known as repeats. These repeats can be thousands of nucleotides long, and occur in different locations, especially in the large genomes of plants and animals. The resulting (draft) genome sequence is produced by combining the information from sequenced contigs and then employing linking information to create scaffolds. Scaffolds are positioned along the physical map of the chromosomes, creating a "golden path".

Assembly software

Originally, most large-scale DNA sequencing centers developed their own software for assembling the sequences that they produced. However, this has changed as the software has grown more complex and as the number of sequencing centers has increased. An example of such an assembler is the Short Oligonucleotide Analysis Package, developed by BGI for de novo assembly of human-sized genomes, alignment, SNP detection, resequencing, indel finding, and structural variation analysis.

Genome annotation

Since the 1980s, molecular biology and bioinformatics have created the need for DNA annotation. DNA annotation or genome annotation is the process of identifying and attaching biological information to sequences, and particularly of identifying the locations of genes and determining what those genes do.

Time of completion

When sequencing a genome, there are usually regions that are difficult to sequence (often regions with highly repetitive DNA). Thus, 'completed' genome sequences are rarely ever complete, and terms such as 'working draft' or 'essentially complete' have been used to more accurately describe the status of such genome projects. Even when every base pair of a genome sequence has been determined, there are still likely to be errors present because DNA sequencing is not a completely accurate process. It could also be argued that a complete genome project should include the sequences of mitochondria and (for plants) chloroplasts, as these organelles have their own genomes.
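The overlap-and-merge step described under Genome assembly can be illustrated with a minimal sketch. The C fragment below finds the longest suffix of one read that matches a prefix of another and merges the two; the toy reads and the simple pairwise scan are illustrative assumptions only, as production assemblers use indexing, quality scores, and graph structures to cope with millions of reads and with repeats.

#include <stdio.h>
#include <string.h>

/* Length of the longest suffix of a that equals a prefix of b.
   This is the basic overlap test an assembler applies to read pairs. */
static size_t overlap_len(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    size_t max = la < lb ? la : lb;
    for (size_t k = max; k > 0; k--) {
        if (strncmp(a + la - k, b, k) == 0)
            return k;
    }
    return 0;
}

int main(void)
{
    /* Toy reads; real reads are hundreds of bases long. */
    const char *read1 = "ACGTTAGC";
    const char *read2 = "TAGCCGTA";

    size_t k = overlap_len(read1, read2);
    /* Merging across the 4-base overlap yields ACGTTAGCCGTA. */
    printf("overlap = %zu\n", k);
    printf("merged  = %s%s\n", read1, read2 + k);
    return 0;
}

A repeat longer than the read length makes several merges equally plausible, which is one reason draft assemblies also rely on the linking information mentioned above.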
It is often reported that the goal of sequencing a genome is to obtain information about the complete set of genes in that particular genome sequence. The proportion of a genome that encodes genes may be very small (particularly in eukaryotes such as humans, where coding DNA may only account for a few percent of the entire sequence). However, it is not always possible (or desirable) to sequence only the coding regions separately. Also, as scientists understand more about the role of this noncoding DNA (often referred to as junk DNA), it will become more important to have a complete genome sequence as a background to understanding the genetics and biology of any given organism.

In many ways genome projects do not confine themselves to only determining a DNA sequence of an organism. Such projects may also include gene prediction to find out where the genes are in a genome, and what those genes do. There may also be related projects to sequence ESTs or mRNAs to help find out where the genes actually are.

Historical and technological perspectives

Historically, when sequencing eukaryotic genomes (such as the worm Caenorhabditis elegans) it was common to first map the genome to provide a series of landmarks across it. Rather than sequence a chromosome in one go, it would be sequenced piece by piece (with prior knowledge of approximately where each piece was located on the larger chromosome). Changes in technology, and in particular improvements to the processing power of computers, mean that genomes can now be 'shotgun sequenced' in one go (there are caveats to this approach, though, when compared to the traditional approach).

Improvements in DNA sequencing technology have meant that the cost of sequencing a new genome has steadily fallen (in terms of cost per base pair) and newer technology has also meant that genomes can be sequenced far more quickly. When research agencies decide what new genomes to sequence, the emphasis has been on species which are either of high importance as model organisms or have relevance to human health (e.g. pathogenic bacteria or vectors of disease such as mosquitos), or species which have commercial importance (e.g. livestock and crop plants). Secondary emphasis is placed on species whose genomes will help answer important questions in molecular evolution (e.g. the common chimpanzee).

In the future, it is likely that it will become even cheaper and quicker to sequence a genome. This will allow for complete genome sequences to be determined from many different individuals of the same species. For humans, this will allow us to better understand aspects of human genetic diversity.
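As a minimal illustration of the gene prediction mentioned above, the sketch below scans a toy DNA string for open reading frames, one of the simplest signals used to locate candidate genes. The sequence and the forward-strand-only scan are assumptions made for brevity; real gene predictors also examine the reverse strand, splice sites, and statistical models of coding sequence.

#include <stdio.h>
#include <string.h>

/* Report open reading frames (ORFs) on the forward strand: an ATG start
   codon followed, in frame, by a TAA/TAG/TGA stop codon. Positions are
   0-based; this is a deliberately simplified illustration. */
static int is_stop(const char *c)
{
    return strncmp(c, "TAA", 3) == 0 || strncmp(c, "TAG", 3) == 0 ||
           strncmp(c, "TGA", 3) == 0;
}

int main(void)
{
    const char *seq = "CCATGAAATAAGG"; /* toy sequence */
    size_t n = strlen(seq);

    for (size_t i = 0; i + 3 <= n; i++) {
        if (strncmp(seq + i, "ATG", 3) != 0)
            continue;
        /* Step through codons in frame until a stop codon appears. */
        for (size_t j = i + 3; j + 3 <= n; j += 3) {
            if (is_stop(seq + j)) {
                printf("ORF at %zu..%zu (length %zu)\n", i, j + 2, j + 3 - i);
                break;
            }
        }
    }
    return 0;
}

Run on the toy sequence, this reports a single ORF spanning positions 2 to 10: ATG, AAA, TAA.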
Examples

Many organisms have genome projects that have either been completed or will be completed shortly, including:

Humans, Homo sapiens; see Human genome project
Humans, Homo sapiens; see The Human Genome Project–Write
Palaeo-Eskimo, an ancient human
Neanderthal, Homo sapiens neanderthalensis (partial); see Neanderthal Genome Project
Common chimpanzee, Pan troglodytes; see Chimpanzee Genome Project
Woolly mammoth, Mammuthus primigenius
Domestic cow, Bos taurus
Bovine genome
Honey Bee Genome Sequencing Consortium
Horse genome
HRDetect
Human microbiome project
International Grape Genome Program
International HapMap Project
Tomato 150+ genome resequencing project
100,000 Genomes Project
100K Pathogen Genome Project
International Mouse Phenotyping Consortium IMPC
Knockout Mouse Phenotyping Project KOMP2
Giant Sequoia, Sequoiadendron giganteum

See also

Joint Genome Institute
Illumina, private company involved in genome sequencing
Knome, private company offering genome analysis & sequencing
Model organism
National Center for Biotechnology Information

References

External links

GOLD: Genomes OnLine Database
Genome Project Database
The Protein Naming Utility
SUPERFAMILY
EchinoBase, an echinoderm genomic database (previously SpBase, a sea urchin genome database)
NRCPB.
Global Invertebrate Genomics Alliance (GIGA)
Wellcome Sanger Institute
Wellcome Genome Campus
Genome project
[ "Biology" ]
1,496
[ "Genome projects" ]
323,912
https://en.wikipedia.org/wiki/Cultural%20ecology
Cultural ecology is the study of human adaptations to social and physical environments. Human adaptation refers to both biological and cultural processes that enable a population to survive and reproduce within a given or changing environment. This may be carried out diachronically (examining entities that existed in different epochs), or synchronically (examining a present system and its components). The central argument is that the natural environment, in small scale or subsistence societies dependent in part upon it, is a major contributor to social organization and other human institutions. In the academic realm, when combined with the study of political economy (the study of economies as polities), it becomes political ecology, another academic subfield. It also helps interrogate historical events like the Easter Island Syndrome.

History

Anthropologist Julian Steward (1902-1972) coined the term, envisioning cultural ecology as a methodology for understanding how humans adapt to such a wide variety of environments. In his Theory of Culture Change: The Methodology of Multilinear Evolution (1955), cultural ecology represents the "ways in which culture change is induced by adaptation to the environment". A key point is that any particular human adaptation is in part historically inherited and involves the technologies, practices, and knowledge that allow people to live in an environment. This means that while the environment influences the character of human adaptation, it does not determine it. In this way, Steward separated the vagaries of the environment from the inner workings of a culture that occupied a given environment. Viewed over the long term, this means that environment and culture are on more or less separate evolutionary tracks and that the ability of one to influence the other is dependent on how each is structured. It is this assertion - that the physical and biological environment affects culture - that has proved controversial, because it implies an element of environmental determinism over human actions, which some social scientists find problematic, particularly those writing from a Marxist perspective. Cultural ecology recognizes that ecological locale plays a significant role in shaping the cultures of a region.

Steward's method was to:

Document the technologies and methods used to exploit the environment to get a living from it.
Look at patterns of human behavior/culture associated with using the environment.
Assess how much these patterns of behavior influenced other aspects of culture (e.g., how, in a drought-prone region, great concern over rainfall patterns meant this became central to everyday life, and led to the development of a religious belief system in which rainfall and water figured very strongly. This belief system may not appear in a society where good rainfall for crops can be taken for granted, or where irrigation was practiced).

Steward's concept of cultural ecology became widespread among anthropologists and archaeologists of the mid-20th century, though they would later be critiqued for their environmental determinism. Cultural ecology was one of the central tenets and driving factors in the development of processual archaeology in the 1960s, as archaeologists understood cultural change through the framework of technology and its effects on environmental adaptation.

In anthropology

Cultural ecology as developed by Steward is a major subdiscipline of anthropology.
It derives from the work of Franz Boas and has branched out to cover a number of aspects of human society, in particular the distribution of wealth and power in a society, and how that affects such behaviour as hoarding or gifting (e.g. the tradition of the potlatch on the Northwest North American coast).

As transdisciplinary project

One 2000s-era conception of cultural ecology is as a general theory that regards ecology as a paradigm not only for the natural and human sciences, but for cultural studies as well. In his Die Ökologie des Wissens (The Ecology of Knowledge), Peter Finke explains that this theory brings together the various cultures of knowledge that have evolved in history, and that have been separated into more and more specialized disciplines and subdisciplines in the evolution of modern science (Finke 2005). In this view, cultural ecology considers the sphere of human culture not as separate from but as interdependent with and transfused by ecological processes and natural energy cycles. At the same time, it recognizes the relative independence and self-reflexive dynamics of cultural processes. As the dependency of culture on nature, and the ineradicable presence of nature in culture, are gaining interdisciplinary attention, the difference between cultural evolution and natural evolution is increasingly acknowledged by cultural ecologists. Rather than genetic laws, information and communication have become major driving forces of cultural evolution (see Finke 2006, 2007). Thus, causal deterministic laws do not apply to culture in a strict sense, but there are nevertheless productive analogies that can be drawn between ecological and cultural processes.

Gregory Bateson was the first to draw such analogies in his project of an Ecology of Mind (Bateson 1973), which was based on general principles of complex dynamic life processes, e.g. the concept of feedback loops, which he saw as operating both between the mind and the world and within the mind itself. Bateson thinks of the mind neither as an autonomous metaphysical force nor as a mere neurological function of the brain, but as a "dehierarchized concept of a mutual dependency between the (human) organism and its (natural) environment, subject and object, culture and nature", and thus as "a synonym for a cybernetic system of information circuits that are relevant for the survival of the species." (Gersdorf/ Mayer 2005: 9).

Finke fuses these ideas with concepts from systems theory. He describes the various sections and subsystems of society as 'cultural ecosystems' with their own processes of production, consumption, and reduction of energy (physical as well as psychic energy). This also applies to the cultural ecosystems of art and of literature, which follow their own internal forces of selection and self-renewal, but also have an important function within the cultural system as a whole (see next section).

In literary studies

The interrelatedness between culture and nature has been a special focus of literary culture from its archaic beginnings in myth, ritual, and oral story-telling, in legends and fairy tales, and in the genres of pastoral literature and nature poetry. Important texts in this tradition include the stories of mutual transformations between human and nonhuman life, most famously collected in Ovid's Metamorphoses, which became a highly influential text throughout literary history and across different cultures.
This attention to culture-nature interaction became especially prominent in the era of romanticism, but continues to be characteristic of literary stagings of human experience up to the present. The mutual opening and symbolic reconnection of culture and nature, mind and body, human and nonhuman life in a holistic and yet radically pluralistic way seems to be one significant mode in which literature functions and in which literary knowledge is produced. From this perspective, literature can itself be described as the symbolic medium of a particularly powerful form of "cultural ecology" (Zapf 2002). Literary texts have staged and explored, in ever new scenarios, the complex feedback relationship of prevailing cultural systems with the needs and manifestations of human and nonhuman "nature." From this paradoxical act of creative regression they have derived their specific power of innovation and cultural self-renewal.

German ecocritic Hubert Zapf argues that literature draws its cognitive and creative potential from a threefold dynamics in its relationship to the larger cultural system: as a "cultural-critical metadiscourse," an "imaginative counterdiscourse," and a "reintegrative interdiscourse" (Zapf 2001, 2002). It is a textual form which breaks up ossified social structures and ideologies, symbolically empowers the marginalized, and reconnects what is culturally separated. In that way, literature counteracts economic, political or pragmatic forms of interpreting and instrumentalizing human life, and breaks up one-dimensional views of the world and the self, opening them up towards their repressed or excluded other. Literature is thus, on the one hand, a sensorium for what goes wrong in a society, for the biophobic, life-paralyzing implications of one-sided forms of consciousness and civilizational uniformity, and it is, on the other hand, a medium of constant cultural self-renewal, in which the neglected biophilic energies can find a symbolic space of expression and of (re-)integration into the larger ecology of cultural discourses. This approach has been applied and widened in volumes of essays by scholars from all over the world (ed. Zapf 2008, 2016), as well as in a recent monograph (Zapf 2016). Similar approaches have also been developed in adjacent fields, such as film studies (Paalman 2011).

In geography

In geography, cultural ecology developed in response to the "landscape morphology" approach of Carl O. Sauer. Sauer's school was criticized for being unscientific and later for holding a "reified" or "superorganic" conception of culture. Cultural ecology applied ideas from ecology and systems theory to understand the adaptation of humans to their environment. These cultural ecologists focused on flows of energy and materials, examining how beliefs and institutions in a culture regulated its interchanges with the natural ecology that surrounded it. In this perspective humans were as much a part of the ecology as any other organism. Important practitioners of this form of cultural ecology include Karl Butzer and David Stoddart.

The second form of cultural ecology introduced decision theory from agricultural economics, particularly inspired by the works of Alexander Chayanov and Ester Boserup. These cultural ecologists were concerned with how human groups made decisions about how they use their natural environment. They were particularly concerned with the question of agricultural intensification, refining the competing models of Thomas Malthus and Boserup.
Notable cultural ecologists in this second tradition include Harold Brookfield and Billie Lee Turner II. Starting in the 1980s, cultural ecology came under criticism from political ecology. Political ecologists charged that cultural ecology ignored the connections between the local-scale systems they studied and the global political economy. Today few geographers self-identify as cultural ecologists, but ideas from cultural ecology have been adopted and built on by political ecology, land change science, and sustainability science.

Conceptual views

Human species

Books about culture and ecology began to emerge in the 1950s and 1960s. One of the first to be published in the United Kingdom was The Human Species by a zoologist, Anthony Barnett. It came out in 1950, subtitled The Biology of Man, but was about a much narrower subset of topics. It dealt with the cultural bearing of some outstanding areas of environmental knowledge about health and disease, food, the sizes and quality of human populations, and the diversity of human types and their abilities. Barnett's view was that his selected areas of information "... are all topics on which knowledge is not only desirable, but for a twentieth-century adult, necessary". He went on to relate some of the concepts underpinning human ecology to the social problems facing his readers in the 1950s, as well as the assertion that human nature cannot change, what this statement could mean, and whether it is true. The third chapter deals in more detail with some aspects of human genetics. Then come five chapters on the evolution of man, and the differences between groups of men (or races) and between individual men and women today in relation to population growth (the topic of 'human diversity'). Finally, there is a series of chapters on various aspects of human populations (the topic of "life and death"). Like other animals man must, in order to survive, overcome the dangers of starvation and infection; at the same time he must be fertile. Four chapters therefore deal with food, disease and the growth and decline of human populations.

Barnett anticipated that his personal scheme might be criticized on the grounds that it omits an account of those human characteristics which distinguish humankind most clearly and sharply from other animals. That is to say, the point might be expressed by saying that human behaviour is ignored; or some might say that human psychology is left out, or that no account is taken of the human mind. He justified his limited view, not because little importance was attached to what was left out, but because the omitted topics were so important that each needed a book of similar size even for a summary account. In other words, the author was embedded in a world of academic specialists and therefore somewhat worried about taking a partial conceptual, and idiosyncratic view of the zoology of Homo sapiens.

Ecology of man

Moves to produce prescriptions for adjusting human culture to ecological realities were also afoot in North America. In his 1957 Condon Lecture at the University of Oregon, entitled "The Ecology of Man", American ecologist Paul Sears called for "serious attention to the ecology of man" and demanded "its skillful application to human affairs". Sears was one of the few prominent ecologists to successfully write for popular audiences. Sears documents the mistakes American farmers made in creating conditions that led to the disastrous Dust Bowl.
This book gave momentum to the soil conservation movement in the United States. The "ecology of man" as a limiting factor which "should be respected", placing boundaries around the extent to which the human species can be manipulated, is reflected in the views of Popes Benedict XVI and Francis.

Impact on nature

From this same period came J.A. Lauwerys' Man's Impact on Nature, part of a series on 'Interdependence in Nature' published in 1969. Both Russel's and Lauwerys' books were about cultural ecology, although not titled as such. People still had difficulty in escaping from their labels. Even Beginnings and Blunders, produced in 1970 by the polymath zoologist Lancelot Hogben, with the subtitle Before Science Began, clung to anthropology as a traditional reference point. However, its slant makes it clear that 'cultural ecology' would be a more apt title to cover his wide-ranging description of how early societies adapted to environment with tools, technologies and social groupings. In 1973 the physicist Jacob Bronowski produced The Ascent of Man, which summarised a magnificent thirteen-part BBC television series about all the ways in which humans have moulded the Earth and its future.

Changing the Earth

By the 1980s the human ecological-functional view had prevailed. It had become a conventional way to present scientific concepts in the ecological perspective of human animals dominating an overpopulated world, with the practical aim of producing a greener culture. This is exemplified by I. G. Simmons' book Changing the Face of the Earth, with its telling subtitle "Culture, Environment, History", which was published in 1989. Simmons was a geographer, and his book was a tribute to the influence of W.L. Thomas' edited collection, Man's Role in Changing the Face of the Earth, that came out in 1956.

Simmons' book was one of many interdisciplinary culture/environment publications of the 1970s and 1980s, which triggered a crisis in geography with regard to its subject matter, academic sub-divisions, and boundaries. This was resolved by officially adopting conceptual frameworks as an approach to facilitate the organisation of research and teaching that cuts across old subject divisions. Cultural ecology is in fact a conceptual arena that has, over the past six decades, allowed sociologists, physicists, zoologists and geographers to enter common intellectual ground from the sidelines of their specialist subjects.

21st Century

In the first decade of the 21st century, publications appeared dealing with the ways in which humans can develop a more acceptable cultural relationship with the environment. An example is sacred ecology, a sub-topic of cultural ecology, produced by Fikret Berkes in 1999. It seeks lessons from traditional ways of life in Northern Canada to shape a new environmental perception for urban dwellers. This particular conceptualisation of people and environment comes from various cultural levels of local knowledge about species and place, resource management systems using local experience, social institutions with their rules and codes of behaviour, and a world view through religion, ethics and broadly defined belief systems.

Despite the differences in information concepts, all of the publications carry the message that culture is a balancing act between the mindset devoted to the exploitation of natural resources and that which conserves them.
Perhaps the best model of cultural ecology in this context is, paradoxically, the mismatch of culture and ecology that has occurred when Europeans suppressed the age-old native methods of land use and tried to settle European farming cultures on soils manifestly incapable of supporting them. There is a sacred ecology associated with environmental awareness, and the task of cultural ecology is to inspire urban dwellers to develop a more acceptable sustainable cultural relationship with the environment that supports them.

Educational framework

Cultural Core

To further develop the field of cultural ecology, Julian Steward developed a framework which he referred to as the cultural core. This framework, a "constellation" as Steward describes it, organizes the fundamental features of a culture that are most closely related to subsistence and economic arrangements. At the core of this framework is the fundamental human-environment relationship as it pertains to subsistence. Outside of the core, in the second layer, lie the innumerable direct features of this relationship - tools, knowledge, economics, labor, etc. Outside of that second, directly correlated layer is the less direct but still influential layer, typically associated with larger historical, institutional, political or social factors. According to Steward, the secondary features are determined greatly by "cultural-historical factors", and they contribute to the uniqueness of the outward appearance of cultures when compared to others with similar cores. The field of cultural ecology is able to utilize the cultural core framework as a tool for better determining and understanding the features that are most closely involved in the utilization of the environment by humans and cultural groups.

See also

Cultural materialism
Dual inheritance theory
Ecological anthropology
Environmental history
Environmental racism
Human behavioral ecology
Political ecology
Sexecology

References

Sources

Barnett, A. 1950. The Human Species. MacGibbon and Kee, London.
Bateson, G. 1973. Steps to an Ecology of Mind. Paladin, London.
Berkes, F. 1999. Sacred Ecology: Traditional Ecological Knowledge and Resource Management. Taylor and Francis.
Bronowski, J. 1973. The Ascent of Man. BBC Publications, London.
Finke, P. 2005. Die Ökologie des Wissens. Exkursionen in eine gefährdete Landschaft. Alber, Freiburg and Munich.
Finke, P. 2006. "Die Evolutionäre Kulturökologie: Hintergründe, Prinzipien und Perspektiven einer neuen Theorie der Kultur", in: Anglia 124.1, 2006, p. 175-217.
Finke, P. 2013. "A Brief Outline of Evolutionary Cultural Ecology", in Traditions of Systems Theory: Major Figures and Contemporary Developments, ed. Darrell P. Arnold. Routledge, New York.
Frake, Charles O. 1962. "Cultural Ecology and Ethnography". American Anthropologist 64 (1): 53–59. ISSN 0002-7294.
Gersdorf, C. and S. Mayer, eds. Natur – Kultur – Text: Beiträge zu Ökologie und Literaturwissenschaft. Winter, Heidelberg.
Hamilton, G. 1947. History of the Homeland. George Allen and Unwin, London.
Hogben, L. 1970. Beginnings and Blunders. Heinemann, London.
Hornborg, Alf. Cultural Ecology.
Lauwerys, J.A. 1969. Man's Impact on Nature. Aldus Books, London.
Maass, Petra. 2008. The Cultural Context of Biodiversity Conservation. Seen and Unseen Dimensions of Indigenous Knowledge among Q'eqchi' Communities in Guatemala. Göttinger Beiträge zur Ethnologie - Band 2, Göttingen: Göttinger Universitätsverlag. online-version
Paalman, F. 2011. Cinematic Rotterdam: The Times and Tides of a Modern City. 010 Publishers, Rotterdam.
Russel, W.M.S. 1967. Man, Nature and History. Aldus Books, London.
Simmons, I.G. 1989. Changing the Face of the Earth. Blackwell, Oxford.
Steward, Julian H. 1972. Theory of Culture Change: The Methodology of Multilinear Evolution. University of Illinois Press.
Technical Report PNW-GTR-369. 1996. Defining social responsibility in ecosystem management. A workshop proceedings. United States Department of Agriculture Forest Service.
Turner, B. L., II. 2002. "Contested identities: human-environment geography and disciplinary implications in a restructuring academy". Annals of the Association of American Geographers 92(1): 52–74.
Worster, D. 1977. Nature's Economy. Cambridge University Press.
Zapf, H. 2001. "Literature as Cultural Ecology: Notes Towards a Functional Theory of Imaginative Texts, with Examples from American Literature", in: REAL: Yearbook of Research in English and American Literature 17, 2001, p. 85-100.
Zapf, H. 2002. Literatur als kulturelle Ökologie. Zur kulturellen Funktion imaginativer Texte an Beispielen des amerikanischen Romans. Niemeyer, Tübingen.
Zapf, H. 2008. Kulturökologie und Literatur: Beiträge zu einem transdisziplinären Paradigma der Literaturwissenschaft (Cultural Ecology and Literature: Contributions on a Transdisciplinary Paradigm of Literary Studies). Winter, Heidelberg.
Zapf, H. 2016. Literature as Cultural Ecology: Sustainable Texts. Bloomsbury Academic, London.
Zapf, H., ed. 2016. Handbook of Ecocriticism and Cultural Ecology. De Gruyter, Berlin.

External links

Cultural and Political Ecology Specialty Group of the Association of American Geographers. Archive of newsletters, officers, award and honor recipients, as well as other resources associated with this community of scholars.
Notes on the development of cultural ecology with an excellent reference list: Catherine Marquette
Cultural ecology: an ideational scaffold for environmental education: an outcome of the EC LIFE ENVIRONMENT programme

Cultural anthropology
Ecology terminology
Environmental humanities
Human geography
Interdisciplinary historical research
Cultural ecology
[ "Biology", "Environmental_science" ]
4,540
[ "Ecology terminology", "Environmental social science", "Human geography" ]
324,132
https://en.wikipedia.org/wiki/Registered%20jack
A registered jack (RJ) is a standardized telecommunication network interface for connecting voice and data equipment to a computer service provided by a local exchange carrier or long distance carrier. Registered interfaces were first defined in the Universal Service Ordering Code (USOC) of the Bell System in the United States for complying with the registration program for customer-supplied telephone equipment mandated by the Federal Communications Commission (FCC) in the 1970s. Subsequently, in 1980, they were codified in title 47 of the Code of Federal Regulations Part 68. Registered jack connections began to see use after their invention in 1973 by Bell Labs. The specification includes physical construction, wiring, and signal semantics. Accordingly, registered jacks are primarily named by the letters RJ, followed by two digits that express the type. Additional letter suffixes indicate minor variations. For example, RJ11, RJ14, and RJ25 are the most commonly used interfaces for telephone connections for one-, two-, and three-line service, respectively. Although these standards are legal definitions in the United States, some interfaces are used worldwide.

The connectors used for registered jack installations are primarily the modular connector and the 50-pin miniature ribbon connector. For example, RJ11 and RJ14 use female six-position modular connectors, and RJ21 uses a 25-pair (50-pin) miniature ribbon connector. RJ11 uses two conductors in a six-position female modular connector, so it can be made with any female six-position modular connector, while RJ14 uses four, so it can be made with either a 6P4C or a 6P6C connector.

Naming standard

The registered jack designations originated in the standardization process of telephone connections in the Bell System in the United States, and describe application circuits and not just the physical geometry of the connectors. The same modular connector type may be used for different registered jack applications. Modular connectors were developed to replace older telephone installation methods that used hardwired cords or bulkier varieties of telephone plugs. Strictly, Registered Jack refers to both the female physical connector (modular connector) and specific wiring patterns, but the term is often used loosely to refer to modular connectors regardless of wiring, gender, or use, commonly for telephone line connections, but also for Ethernet over twisted pair, resulting in confusion over the various connection standards and applications. For example, the six-position physical connector, plug and jack, is identically dimensioned and inter-connectable, whether it is wired for one, two, or three lines. These are the RJ11, RJ14, and RJ25 interfaces. The RJ standards designations only pertain to the wiring of the (female) jacks, hence the name Registered Jack. It is commonplace, but not strictly correct, to refer to the unwired connectors or the (male) plugs by these names.

The nomenclature for modular connectors is based on the number of contact positions and the number of contacts present. 6P indicates a six-position modular plug or jack. A six-position modular plug with conductors in only the middle two positions is designated 6P2C; 6P4C has four conductors in the middle positions, and 6P6C has all six. An RJ11 without power, if made with a 6P6C connector, has four unused contacts.
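The center-outward pairing implied by this naming convention can be sketched in a few lines. The code below assumes the conventional US assignment (line 1 on the two center pins, each additional line one step outward, tip on the lower-numbered pin) and pins numbered 1 to 6 across the connector, as detailed in the Pinout section below; the struct and function names are invented for the example.

#include <stdio.h>

/* Conventional center-outward pairing on a six-position (6P) connector:
   line 1 on the center pins, each further line one step outward.
   RJ11 uses line 1 only, RJ14 lines 1-2, RJ25 lines 1-3. */
struct pin_pair { int tip; int ring; };

static struct pin_pair line_pins(int line)
{
    /* Pins are numbered 1..6 across the 6P connector:
       line 1 -> pins 3,4; line 2 -> pins 2,5; line 3 -> pins 1,6. */
    struct pin_pair p = { 4 - line, 3 + line };
    return p;
}

int main(void)
{
    for (int line = 1; line <= 3; line++) {
        struct pin_pair p = line_pins(line);
        printf("line %d: tip pin %d, ring pin %d\n", line, p.tip, p.ring);
    }
    return 0;
}

This shows why a 6P2C plug suffices for RJ11 (only pins 3 and 4 carry a signal), while RJ14 needs 6P4C and RJ25 needs 6P6C.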
History and authority

Registration interfaces were created by the Bell System under a Federal Communications Commission order for the standard interconnection between telephone company equipment and customer premises equipment. These interfaces used newly standardized jacks and plugs, primarily based on miniature modular connectors.

The wired communications provider (telephone company) is responsible for delivery of services to a minimum (or main) point of entry (MPOE). The MPOE is a utility box, usually containing surge protective circuitry, which connects the wiring on the customer's property to the communication provider's network. Customers are responsible for all jacks, wiring, and equipment on their side of the MPOE. The intent was to establish a universal standard for wiring and interfaces, and to separate ownership of in-home (or in-office) telephone wiring from the wiring owned by the service provider.

In the Bell System, following the Communications Act of 1934, the telephone companies owned all telecommunications equipment and they did not allow interconnection of third-party equipment. Telephones were generally hardwired, but may have been installed with Bell System connectors to permit portability. The legal case Hush-A-Phone v. United States (1956) and the Federal Communications Commission's (FCC) Carterfone (1968) decision brought changes to this policy, and required the Bell System to allow some interconnection, culminating in the development of registered interfaces using new types of miniature connectors. Registered jacks replaced the use of protective couplers provided exclusively by the telephone company. The new modular connectors were much smaller and cheaper to produce than the earlier, bulkier connectors that were used in the Bell System since the 1930s.

The Bell System issued specifications for the modular connectors and their wiring as Universal Service Order Codes (USOC), which were the only standards at the time. Large customers of telephone services commonly use the USOC to specify the interconnection type and, when necessary, pin assignments, when placing service orders with a network provider. When the U.S. telephone industry was reformed to foster competition in the 1980s, the connection specifications became federal law, ordered by the FCC and codified in the Code of Federal Regulations (CFR), Title 47 CFR Part 68, Subpart F, superseded by T1.TR5-1999.

In January 2001, the FCC delegated responsibility for standardizing connections to the telephone network to a new private industry organization, the Administrative Council for Terminal Attachments (ACTA). For this delegation, the FCC removed Subpart F from the CFR and added Subpart G. The ACTA derives its recommendations for terminal attachments from the standards published by the engineering committees of the Telecommunications Industry Association (TIA). ACTA and TIA jointly published the standard TIA/EIA-IS-968, replacing the CFR information. TIA-968-A, the current version of that standard, details the physical aspects of modular connectors, but not the wiring. Instead, TIA-968-A incorporates the standard T1.TR5-1999, "Network and Customer Installation Interface Connector Wiring Configuration Catalog", by reference. With the publication of TIA-968-B, the connector descriptions have been moved to TIA-1096-A. A registered jack name, such as RJ11, still identifies both the physical connectors and the wiring (pinout) for each application.
Types

The most widely implemented registered jack in telecommunications is the RJ11. This is a modular connector wired for one telephone line, using the center two contacts of six available positions. This configuration is also used for single-line telephones in many countries other than the United States. It may also use a 6P4C connector, to use an additional wire pair for powering lamps on the telephone set. RJ14 is similar to RJ11, but is wired for two lines, and RJ25 has three lines. RJ61 is a similar registered jack for four lines, but uses an 8P8C connector. The RJ45S jack is rarely used in telephone applications, and the keyed 8P8C modular plug used for RJ45S mechanically cannot be inserted into an Ethernet port, but a similar plug, the non-keyed 8P8C modular plug (never used for RJ45S), is used in Ethernet networks, and the connector is often, however improperly, referred to as RJ45 in this context.

Many of the basic names have suffixes that indicate subtypes:

C: flush-mount or surface mount
F: flex-mount
W: wall-mount
L: lamp-mount
S: single-line
M: multi-line
X: complex jack

For example, RJ11 comes in two forms: RJ11W is a jack from which a wall telephone can be hung, while RJ11C is a jack designed to have a cord plugged into it. A cord can be plugged into an RJ11W as well.

RJ11, RJ14, RJ25 wiring

All of these registered jacks are described as containing a number of potential contact positions and the actual number of contacts installed within these positions. RJ11, RJ14, and RJ25 all use the same six-position modular connector, thus are physically identical except for the different number of contacts (two, four and six respectively), allowing connections for one, two, or three telephone lines respectively. Cords connecting to an RJ11 interface require a 6P2C connector. Nevertheless, cords sold as RJ11 often use 6P4C connectors (six position, four conductor) with four wires. Two of the six possible contact positions connect tip and ring, and the other two conductors are unused. RJ11 is commonly used to connect DSL modems to the customer line. The conductors other than the two central tip and ring conductors are in practice variously used for a second or third telephone line, a ground for selective ringers, low-voltage power for a dial light, or for anti-tinkle circuitry to prevent pulse dialing phones from sounding the bell on other extensions.

Pinout

The pins of the 6P6C connector are numbered 1 to 6, counting left to right when holding the connector tab side down with the opening for the cable facing the viewer.

Provisioning of power

Some telephones such as the Western Electric Princess and Trimline telephone models require additional power (~6 V AC) for operation of the incandescent dial light. This power is delivered to the telephone set from a transformer by the second wire pair (pins 2 and 5) of the 6P4C connector.

RJ21

RJ21 is a registered jack standard using a micro ribbon connector with contacts for up to fifty conductors. It is used to implement connections for up to 25 lines, or circuits that require many wire pairs, such as used in the 1A2 key telephone system. The miniature ribbon connector of this interface is also known as a 50-pin telco connector, CHAMP (AMP), or Amphenol connector, the last being a genericized trademark, as Amphenol was a prominent manufacturer of these at one time.
A cable color scheme, known as the even-count color code, is defined for the 25 pairs of conductors as follows: for each ring conductor, the primary, more prominent color is chosen from the set blue, orange, green, brown, and slate, in that order, and the secondary, thinner stripe color from the set of white, red, black, yellow, and violet, in that order. The tip conductor color scheme uses the same colors as the matching ring but switches the thickness of the primary and secondary colored stripes. Since the sets are ordered, an orange (color 2 in its set) with a yellow (color 4 in its set) is the color scheme for the 17th pair of wires (5 × 4 + 2 − 5 = 17). If the yellow is the more prominent, thicker stripe, then the wire is a tip conductor connecting to the pin numbered 25 + the pair number, which is pin 42 in this case. Ring conductors connect to the same pin number as the pair number. (A short worked example of this calculation appears after the RJ61 description below.)

A conventional enumeration of wire color pairs then begins blue (and white), orange (and white), green (and white) and brown (and white), which subsumes a color-coding convention used in cables of 4 or fewer pairs (8 wires or less) with 8P and 6P connectors.

Dual 50-pin ribbon connectors are often used on punch blocks to create a breakout box for private branch exchange (PBX) and other key telephone systems.

RJ45S

The RJ45S, an obsolete standard jack once specified for modem or data interfaces, has a slot on one side to allow mating with a special variation of the 8P plug: a mechanically keyed plug with an extra tab on one side that prevents it from mating with regular (non-keyed) 8P jacks. The visual difference from the more common 8P female is subtle. The RJ45S keyed 8P modular connector has only pins 5 and 4 wired for tip and ring (respectively) of a single telephone line, and a "programming" resistor connected to pins 7 and 8.

RJ48

RJ48 is used for T1 and ISDN termination, local-area data channels, and subrate digital services. It uses the eight-position modular connector (8P8C). RJ48C is commonly used for T1 circuits and uses pin numbers 1, 2, 4 and 5. RJ48X is a variation that contains shorting blocks in the jack for troubleshooting: with no plug inserted, pins 2 and 5 (the two tip wires) are connected to each other, and likewise 1 and 4 (ring), creating a loopback so that a signal received on one pair is returned on the other. Sometimes this is referred to as a self-looping jack. RJ48S is typically used for local-area data channels and subrate digital services and carries one line. It accepts a keyed variety of the 8P modular connector. RJ48 connectors are fastened to shielded twisted pair (STP) cables, not the unshielded twisted pair (UTP) commonly used in other installations.

RJ61

RJ61 is a physical interface that was often used for terminating twisted pair cables. It uses an eight-position, eight-conductor (8P8C) modular connector. This wiring pattern is for multi-line analog telephone use only; RJ61 is unsuitable for use with high-speed data because the pins for pairs 3 and 4 are too widely spaced for high signaling frequencies. T1 lines use another wiring for the same connector, designated RJ48. Ethernet over twisted pair (10BASE-T, 100BASE-TX and 1000BASE-T) also uses different wiring for the same connector, either T568A or T568B. RJ48, T568A, and T568B are all designed to keep both wires of each pair close together. The flat eight-conductor silver-satin cable conventionally used with four-line analog telephones and RJ61 jacks is also unsuitable for use with high-speed data.
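Here is the worked sketch of the even-count color code promised in the RJ21 section above. It encodes the two ordered color sets and the pair-number arithmetic described there (pair = 5 × stripe index + ring index − 5), then maps the pair to its ring and tip pins; the function and array names are invented for the illustration.

#include <stdio.h>

/* Even-count color code for a 25-pair cable, as described above.
   Ring colors (primary): blue, orange, green, brown, slate      -> index 1..5
   Stripe colors (secondary): white, red, black, yellow, violet  -> index 1..5 */
static const char *ring_colors[]   = { "blue", "orange", "green", "brown", "slate" };
static const char *stripe_colors[] = { "white", "red", "black", "yellow", "violet" };

/* Pair number from the two 1-based color indices:
   pair = 5*stripe + ring - 5, e.g. yellow(4)/orange(2) -> pair 17. */
static int pair_number(int stripe, int ring)
{
    return 5 * stripe + ring - 5;
}

int main(void)
{
    int stripe = 4; /* yellow */
    int ring   = 2; /* orange */
    int pair   = pair_number(stripe, ring);

    /* The ring conductor lands on the pin equal to the pair number;
       the matching tip conductor lands on pin 25 + pair. */
    printf("%s/%s is pair %d: ring on pin %d, tip on pin %d\n",
           stripe_colors[stripe - 1], ring_colors[ring - 1],
           pair, pair, 25 + pair);
    return 0;
}

For the yellow/orange example this prints pair 17, with the ring conductor on pin 17 and the tip conductor on pin 42, matching the arithmetic above.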
Twisted-pair cabling is required for data applications. Twisted-pair patch cable typically used with common Ethernet and other data network standards is not compatible with RJ61, because RJ61 pairs 3 and 4 would each be split across two different twisted pairs in the patch cable, causing excessive cross-talk between voice lines 3 and 4, with conversations on each line literally being audible on the other. With the advent of structured wiring systems and TIA/EIA-568 (now ANSI/TIA-568) conventions, the RJ61 wiring pattern is falling into disuse. The T568A and T568B standards are used in place of RJ61 so that a single wiring standard in a facility can be used for both voice and data.

Similar jacks and unofficial names

The following RJ-style names do not refer to official ACTA types.

The labels RJ9, RJ10, and RJ22 are variously used for 4P4C and 4P2C modular connectors, most typically installed on telephone handsets and their cordage. Telephone handsets do not connect directly to the public network, and therefore have no registered jack designation.

RJ45 is often incorrectly used when referring to an 8P8C connector used for ANSI/TIA-568 T568A and T568B and Ethernet; however, the plug used for RJ45 is both mechanically and electrically incompatible with any Ethernet port: it cannot fit into an Ethernet port, and it is wired in a way that is incompatible with Ethernet. The connector commonly used for twisted-pair Ethernet is a non-keyed 8P8C connector, quite distinct from that used for RJ45S. The new ARJ45 interface, however, is a plug and jack allowing higher transmission rates, and the jack can, optionally, be backward-compatible with the common 8P8C plugs of Gigabit Ethernet and earlier standards.

RJ50 often denotes a 10P10C interface, used for data applications.

The micro ribbon connector, first made by Amphenol, that is used in the RJ21 interface has also been used to connect Ethernet ports in bulk from a switch with 50-pin ports to a Cat-5 rated patch panel, or between two patch panels. A cable with a 50-pin connector on one end can support six fully wired 8P8C connectors or Ethernet ports on a patch panel with one spare pair. Alternatively, only the necessary pairs for 10/100 Ethernet can be wired, allowing twelve Ethernet ports with a single spare pair. This connector is also used with spring bail locks for SCSI-1 connections. Some computer printers use a shorter 36-pin version known as a Centronics connector.

The 8P8C modular jack was chosen as a candidate for ISDN systems. In order to be considered, the connector system had to be defined by an international standard, leading to the creation of the ISO 8877 standard. Under the rules of the IEEE 802 standards project, international standards are to be preferred over national standards, so when the original 10BASE-T twisted-pair wiring version of Ethernet was developed, this modular connector was chosen as the basis for IEEE 802.3i-1990.

See also

Audio and video interfaces and connectors generic article
BS 6312 British equivalent to RJ25
EtherCON ruggedized 8P8C Ethernet connector
Key telephone system
Modified Modular Jack a variation used by Digital Equipment Corporation for serial computer connections, and also for CEA-909 antennas.
Protea (telephone) South African telephone jack standard
Telecommunications Industry Association Standards Developing Organization for ACTA

References

External links

RJ glossary
ANSI/TIA-968-B documents of FCC specifications from the Administrative Council for Terminal Attachments, section 6.2 in particular
ANSI/TIA-1096-A
Administrative Council for Terminal Attachments
Doing your own telephone wiring
Connecting a second phone line

Telephone connectors
Computer connectors
Networking hardware
Registered jack
[ "Engineering" ]
3,825
[ "Computer networks engineering", "Networking hardware" ]
324,134
https://en.wikipedia.org/wiki/Slide%20library
A slide library is a library that houses a collection of photographic slides, either as a part of a larger library or image archive, or standing alone within a larger organization, such as an academic department of a college or university, a museum, or a corporation. Typically, a "slide library" contains slides depicting artwork, architecture, or cultural objects, and is typically used for the study, teaching, and documentation of art history, architectural history, and visual culture. Other academic disciplines, such as biology and other sciences, also maintain image collections akin to slide libraries. Corporations may also have image libraries to maintain and document their publications and history. Increasingly, these types of libraries are known as "Visual Resources Collections," as they may be responsible for all "visual" materials for the study of a subject and include still and moving images in a variety of physical and virtual formats. They may contain: 35mm slides lantern slides mounted study photographs born digital images 35mm, 8mm film Many educational institutions have changed the names of their slide libraries over the years, to a variety of titles like Visual Resources Center, Imaging & AV Center, Digital Collections Center, etc. The titles and duties of slide librarians have therefore expanded greatly. As keepers of these important historical images, visual resources librarians have continuously cataloged and inventoried slide collections, circulated them to faculty for teaching, and more recently, digitized slides and placed them online via content management systems. History of visual resources collections The first American lantern slide collections, developed by museums to reflect and augment their collections, got their start between 1860 and 1879: the American Natural History Museum, the New York State Military Museum, the Smithsonian Institution, and the Winterthur Museum. American colleges and universities began their collections during the same period of time: DePauw University, Columbia University, Oberlin College, Princeton University, University of Rochester. Colleges and university collections were used primarily for classroom instruction. The first illustrated architectural history course west of the Mississippi was John Galen Howard's Architecture 5A-F at the University of California, Berkeley in 1905. The six-semester course was required for all architecture students, and like other architectural history courses of its time, at MIT and Cornell University at least, were multi-year in duration. Of course, the lecture was illustrated by lantern slides. In the U.S., lantern slides generally measured 3"x 4.25". The 1950s was a period of transition from black and white lantern slides, which heretofore had often been hand colored, to color positive film. Lantern slides were shot directly onto color film, and the 35mm slide (2"x2" with an image of 24mm x 36mm) gained in popularity. The heyday of the lantern slide lasted one hundred years, more or less, from 1860 to 1960. The reign of the 35mm slide, more or less, was about half as long, fifty years, 1955–2005. Timeline: Development of visual resources (collections and profession) in the U.S. 1865. First lantern slide collections begin developing in the U.S. These 3.25" x 4.0" glass slides projected clearly with great detail. However, projectors required lime light which was dirty and dangerous 1887. First transparent, flexible nitrocellulose film base developed 1888. First perforated film stock developed 1889. 
Eastman combined nitrocellulose film stock, perforated edges, and dry-gelatino-bromide emulsion to create the first paperless film stock 1902. Court denies Eastman's exclusive patent, allowing any company to develop 35mm film 1905. UC Berkeley's Architecture Library acquires its first lantern slide, the tree of architecture, made from Banister Fletcher's book, A History of Architecture on the Comparative Method 1909. 35mm adopted as the international standard gauge by Motion Picture Patents Company, an Edison trust 1913. 35mm film format introduced into still photography 1925. Leica Camera introduced, using 35mm still film 1930. Safety film introduced (cellulose diacetate) 1934–1936. Kodachrome 35mm slide film introduced, but not widely adopted by colleges and universities. Film stock was either flammable or brittle 1949. Kodak replaces all nitrate-based films with its safety film, a cellulose-triacetate base 1952. All camera film is now triacetate based, paving the way for widespread adoption of 35mm film in both amateur and academic markets 1952+ American faculty widely divided in their allegiances to lantern slides for their clarity or to 35mm slides for their ease of production and transport to class. Huge debates begin about whether 35mm color film is stable enough for adoption and whether the loss of clarity will ruin the teaching of art history. Younger faculty adopt 35mm film, while older faculty prefer lantern slides 1968. Visual resources curators begin meeting during annual College Art Association (CAA) conferences 1969. Art Libraries Society, established in the United Kingdom and Ireland, founded 1969. The first "universal" classification system published by Luraine Tansey and Wendell Simons under the title, A slide classification system for the organization and automatic indexing of interdisciplinary collections of slides and pictures 1972. Art Libraries Society of North America (ARLIS/NA) founded by a group of art librarians attending the American Library Association annual conference in Chicago 1972. Nancy DeLaurier organizes the visual resources curators of Mid-America College Art Association 1974. Slide libraries; a guide for academic institutions and museums, by Betty Jo Irvine. Published by Libraries Unlimited for Art Libraries Society 1974. Mid-America College Art Association slides and photographs newsletter begins publishing under the leadership of Nancy DeLaurier 1974. Slide buyer's guide, revised edition, edited by Nancy DeLaurier, published by University of Missouri-Kansas City, "for The College Art Association of America". Limited to 500 copies 1976. Slide buyer's guide, 3rd edition, edited by Nancy DeLaurier, published by the College Art Association 1978. Guide for Photograph Collections, edited by Nancy Schuller and Susan Tamulonis, published by MACAA/VR 1978. Guide to Equipment for Slide Maintenance and Viewing, edited by Gillian Scott, published by MACAA/VR 1979. Slide libraries : a guide for academic institutions, museums, and special collections, by Betty Jo Irvine with assistance from P. Eileen Fry. Libraries Unlimited 1979. Guide for the Management of Visual Resources Collections, edited by Nancy Schuller and published by MACAA/VR (Mid-America College Art Association Visual Resources Committee) 1980. Guide to Copy Photography for Visual Resource Collections, edited by Rosemary Kuehn and Arlene Zelda Richardson, published by MACAA/VR 1980. 
Standard for staffing fine arts slide collections, by the Ad-hoc Committee on Professional Standards for Visual Resources Collections 1980. Slide buyer's guide, 4th edition, edited by Nancy DeLaurier, published by Mid-America College Art Association, Visual Resources Committee 1980. MACAA slides and photographs newsletter reborn as the International Bulletin for Photograph Documentation of the Visual Arts 1980. Visual Resources: an international journal of documentation launched by Helene Roberts, published by Iconographic Publications 1980. Art and Architecture Thesaurus project launched to provide subject access for art and architecture 1982–1983. Visual Resources curators from MACAA/VR, CAA, and ARLIS/NA launch Visual Resources Association (VRA) 1983. Standards for art libraries and fine arts slide collections, published as Occasional Paper No. 2 of ARLIS/NA 1985. Slide buyers' guide: an international directory of slide sources for art and architecture, 5th edition, edited by Norine Duncan Cashman, index by Mark Braunstein, published by Libraries Unlimited as part of their Visual resources series 1986. Sara Shatford Layne publishes "Analyzing the Subject of a Picture: A Theoretical Approach" in Cataloging and Classification Quarterly, vol. 6(3) 1987. Toni Petersen, President of ARLIS/NA, urges the Visual Resources Division to begin developing some standard authorities for shared cataloging 1988. Barneyscan, first dedicated 35mm slide scanner, introduced 1989. Visual Resources Association launches its bulletin 1990. Art and Architecture Thesaurus, Toni Petersen, editor, published by Oxford University Press in 3 volumes. Critical step in providing subject access to individual 35mm slides in visual resources collections 1990. Slide buyers' guide: an international directory of slide sources for art and architecture, 6th edition, edited by Norine Duncan Cashman, published by Libraries Unlimited, Visual resources series. At head of title: Visual Resources Association 1990. Beyond the Book: Extending MARC for Subject Access, edited by Toni Petersen and Pat Molholt, published by G.K. Hall. Several papers on visual resources, including: "Access to Diverse Collections in University Settings: the Berkeley Dilemma", by Howard Besser and Maryly Snow, and "Visual Depictions and the Use of MARC: A View from the Trenches of Slide Librarianship", by Maryly Snow 1990. Tim Berners-Lee starts work on a hypertext graphical-user-interface (GUI) and coins the name World Wide Web for the program 1991. Facilities Standards for Art Libraries and Visual Resources Collections, edited by Betty Jo Irvine. Published by Libraries Unlimited for ARLIS/NA 1991. World Architecture Index: a Guide to Illustrations, compiled by Edward H. Teague, published by Greenwood Press as part of its Art Reference Collection No. 12 1991. Visual Resources Association creates its listserv, VRA-L, a vital communication tool for its visual resources curator members 1992. Users' Guide to The Art and Architecture Thesaurus, published along with the electronic edition by Oxford University Press 1993. Visual Resources Association established its Data Standards Committee 1994. March. Marc Andreessen leaves National Center for Supercomputing Applications (NCSA) to found Mosaic Communications Corp., which later becomes Netscape. Mosaic launches the World Wide Web for the general public 1994. September. First image database, SPIRO, debuts on the World Wide Web. 1995. 
Concordance of Ancient Site Names, edited by Eileen Fry and Maryly Snow, published as Topical Paper No. 2 of ARLIS/NA (see 1987 call for visual resources authority work). This is one of the first scholarly authorities created by visual resources curators for visual resources cataloging 1995. Criteria for the Hiring and Retention of Visual Resources Professionals adopted by the executive boards of both ARLIS/NA and VRA 1996. Art and Architecture Thesaurus Sourcebook, edited by Toni Petersen, published as Occasional Paper No. 10 of ARLIS/NA 1996. Staffing Standards for Art Libraries and Visual Resources Collections, published as Occasional Paper No. 11 of ARLIS/NA 1996. VRA Core 1.0 released 1998. Vision Project, sponsored by Research Libraries Group. First shared cataloging project with 32 visual resources collections cataloging and sharing images. Vision Project also served as a test of VRA Core 1.0 1998. VRA Core 2.0 released 1998. ArtMARC Sourcebook: Cataloging Art, Architecture, and Their Visual Images, edited by Linda McRae and Lynda White, published by American Library Association 2000. Guidelines for the Visual Resources Profession, edited by Kim Kopatz. A joint publication of ARLIS/NA and VRA 2000. Collection Development Policies for Libraries and Visual Collections in the Arts, compiled by Ann Baird Whiteside, Pamela Born, Adeane Alpert Bregman, published as Occasional Paper No. 12 of ARLIS/NA 2001. VRA Copy Photography Computator (for determining intellectual property restrictions and fair use) released 2002. VRA Core 3.0 released 2002. Criteria for the Hiring and Retention of Visual Resources Professionals updated, and adopted by ARLIS/NA, VRA, and College Art Association 2004. ARTstor image database, a project of the Andrew Mellon Foundation, is available for licensing. ARTstor combines finding, organizing, and presenting images in one integrated software environment 2004. Kodak discontinues manufacturing its 35mm carousel projectors and carousels. This sends a strong signal to American professors that the time to switch from 35mm slides to digital images is now 2004. North American Lantern Slide Survey begun, jointly sponsored by ARLIS/NA and VRA 2005. VRA Core 4.0 Beta released 2006. Cataloging Cultural Objects published by American Library Association. Edited by Murtha Baca, Patricia Harpring, Elisa Lanzi, Linda McRae, Ann Baird Whiteside on behalf of the Visual Resources Association 2007. 
VRA Core 4.0 released External links Visual Resource Collections: Slides and Digital Images, Fine Arts Library of the Harvard College Library Architecture Visual Resources Library, Architecture Department, University of California, Berkeley Visual Resources Collection, University of Oregon, Eugene Visual Resources Center, Rice University, Houston, Texas Visual Resources Collection, College of Built Environments, University of Washington, Seattle Imaging Center, Smith College, Northampton, Massachusetts Roger Williams University Visual Resources Center, Bristol, Rhode Island Visual Media Center, Duke University, Durham, North Carolina Visual Resources Collection, School of Architecture, The University of Texas at Austin Visual Resources Collection, Department of Art History, Ithaca College Visual Resources Collection, Fine Arts Library, The University of Texas at Austin University of Michigan, Department of History of Art, Visual Resources Collections Image Databases: ARTstor Digital Public Library of America Visual Resources Center, Pratt Libraries, list of databases North Carolina State University University of Colorado, Boulder Oxford University University of Pennsylvania How to Digitize Slide Libraries: Workflow, American Museum of Natural History Scanning slides, Dartmouth College Library Workflow, Ball State University Grant proposal, Fisher Fine Arts Library, University of Pennsylvania Best practices, J. Willard Marriott Library, University of Utah Basics of Scanning, Library of Congress Professional Organizations: The Visual Resources Division (VRD) of Art Libraries Society of North America (ARLIS/NA) Art Libraries Society of North America (ARLIS/NA) Visual Resources Association (VRA) Visual Materials Section, Society of American Archivists Architectural history Art history Libraries by type Photography Types of library Photo archives
Slide library
[ "Engineering" ]
2,867
[ "Architectural history", "Architecture" ]
324,317
https://en.wikipedia.org/wiki/Theatrical%20scenery
Theatrical scenery is that which is used as a setting for a theatrical production. Scenery may be just about anything, from a single chair to an elaborately re-created street, no matter how large or how small, whether the item was custom-made or is the genuine item, appropriated for theatrical use. History The history of theatrical scenery is as old as the theatre itself, and just as obtuse and tradition-bound. What we tend to think of as 'traditional scenery', i.e. two-dimensional canvas-covered 'flats' painted to resemble a three-dimensional surface or vista, is a relatively recent innovation and a significant departure from the more ancient forms of theatrical expression, which tended to rely less on the actual representation of space and more on the conveyance of action and mood. By the Shakespearean era, the occasional painted backdrop or theatrical prop was in evidence, but the show itself was written so as not to rely on such items to convey itself to the audience. However, this means that today's set designers must be that much more careful, so as to convey the setting without taking away from the actors. Contemporary scenery Our more modern notion of scenery, which dates back to the 19th century, finds its origins in the dramatic spectacle of opera buffa, from which the modern opera is descended. Its elaborate settings were appropriated by the 'straight', or dramatic, theatre, through their use in comic operettas, burlesques, pantomimes and the like. As time progressed, stage settings grew more realistic, reaching their peak in the Belasco realism of the 1910s–20s, in which complete diners, with working soda fountains and freshly made food, were recreated onstage. Perhaps as a reaction to such excess and in parallel with trends in the arts and architecture, scenery began a trend towards abstraction, although realistic settings remained in evidence, and are still used today. At the same time, the musical theatre was evolving its own set of scenic traditions, borrowing heavily from the burlesque and vaudeville style, with occasional nods to the trends of the 'straight' theatre. These strands converged in the 1980s and 1990s, and continue to today, to the point that there is no single established style of scenic production and pretty much anything goes. Modern stagecraft has grown so complex as to require the highly specialized skills of hundreds of artists and craftspeople to mount a single production. Types of scenery The construction of theatrical scenery is frequently one of the most time-consuming tasks when preparing for a show. As a result, many theatres have a place for storing scenery (such as a loft) so that it can be used for multiple shows. Since future shows typically are not known far in advance, theatres will often construct stock scenery that can be easily adapted to fit a variety of shows. Common stock scenery types include: Curtains Flats Platforms Scenery wagons Gallery See also Set (film and TV scenery) Scenic design Set construction Scenography References Scenic design Stagecraft
Theatrical scenery
[ "Engineering" ]
602
[ "Scenic design", "Design" ]
324,402
https://en.wikipedia.org/wiki/Hydrazone
Hydrazones are a class of organic compounds with the structure R2C=N−NH2. They are related to ketones and aldehydes by the replacement of the oxygen =O with the =N−NH2 functional group. They are usually formed by the action of hydrazine on ketones or aldehydes. Synthesis Hydrazine, organohydrazines, and 1,1-diorganohydrazines react with aldehydes and ketones to give hydrazones. Phenylhydrazine reacts with reducing sugars to form hydrazones known as osazones, a reaction developed by German chemist Emil Fischer as a test to differentiate monosaccharides. Uses Hydrazones are the basis for various analyses of ketones and aldehydes. For example, dinitrophenylhydrazine coated onto a silica sorbent is the basis of an adsorption cartridge. The hydrazones are then eluted and analyzed by high-performance liquid chromatography (HPLC) using a UV detector. The compound carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (abbreviated as FCCP) is used to uncouple ATP synthesis and reduction of oxygen in oxidative phosphorylation in molecular biology. Hydrazones are the basis of bioconjugation strategies. Hydrazone-based coupling methods are used in medical biotechnology to couple drugs to targeted antibodies (see ADC), e.g. antibodies against a certain type of cancer cell. The hydrazone-based bond is stable at neutral pH (in the blood), but is rapidly destroyed in the acidic environment of lysosomes of the cell. The drug is thereby released in the cell, where it exerts its function. Reactions Hydrazones are susceptible to hydrolysis, the reverse of their formation: R2C=N−NH2 + H2O → R2C=O + H2N−NH2. Alkyl hydrazones are 10²- to 10³-fold more sensitive to hydrolysis than analogous oximes. When derived from hydrazine itself, hydrazones condense with a second equivalent of a carbonyl to give azines: R2C=N−NH2 + O=CR2 → R2C=N−N=CR2 + H2O. Hydrazones are intermediates in the Wolff–Kishner reduction. Hydrazones are reactants in hydrazone iodination, the Shapiro reaction, and the Bamford–Stevens reaction to vinyl compounds. Hydrazones can also be synthesized by the Japp–Klingemann reaction via β-keto acids or β-keto-esters and aryl diazonium salts. Hydrazones are converted to azines when used in the preparation of 3,5-disubstituted 1H-pyrazoles, a reaction also well known using hydrazine hydrate. With a transition metal catalyst, hydrazones can serve as organometallic reagent surrogates to react with various electrophiles. N,N-dialkylhydrazones In N,N-dialkylhydrazones the C=N bond can be hydrolysed, oxidised and reduced, and the N–N bond can be reduced to the free amine. The carbon atom of the C=N bond can react with organometallic nucleophiles. The alpha-hydrogen atom is more acidic by 10 orders of magnitude compared to the ketone and therefore more nucleophilic. Deprotonation with, for instance, lithium diisopropylamide (LDA) gives an azaenolate which can be alkylated by alkyl halides. The hydrazines SAMP and RAMP function as chiral auxiliaries. Recovery of carbonyl compounds from N,N-dialkylhydrazones Several methods are known to recover carbonyl compounds from N,N-dialkylhydrazones. Procedures include oxidative, hydrolytic or reductive cleavage conditions and can be compatible with a wide range of functional groups. Gallery See also Azo compound Imine Nitrosamine Hydrogenation of carbon–nitrogen double bonds References Functional groups
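The condensation described under Synthesis can be summarized schematically; this is a generic sketch, with R and R′ standing for arbitrary alkyl or aryl substituents (or H in the case of aldehydes), not a depiction of any specific preparation from the text:

```latex
% Generic hydrazone formation: a ketone or aldehyde condenses with
% hydrazine, losing one equivalent of water (R, R' = alkyl, aryl, or H).
\[
  \underset{\text{ketone/aldehyde}}{RR'C{=}O}
  \;+\;
  \underset{\text{hydrazine}}{H_2N{-}NH_2}
  \;\longrightarrow\;
  \underset{\text{hydrazone}}{RR'C{=}N{-}NH_2}
  \;+\; H_2O
\]
```

Hydrolysis, as noted above, simply runs this equilibrium in reverse.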
Hydrazone
[ "Chemistry" ]
821
[ "Hydrazones", "Functional groups" ]
324,412
https://en.wikipedia.org/wiki/Chromic%20acid
Chromic acid is jargon for a solution formed by the addition of sulfuric acid to aqueous solutions of dichromate. It consists at least in part of chromium trioxide. The term chromic acid is usually used for a mixture made by adding concentrated sulfuric acid to a dichromate, which may contain a variety of compounds, including solid chromium trioxide. This kind of chromic acid may be used as a cleaning mixture for glass. Chromic acid may also refer to the molecular species, of which the trioxide is the anhydride. Chromic acid features chromium in an oxidation state of +6 (and a valence of VI or 6). It is a strong and corrosive oxidizing agent and a moderate carcinogen. Molecular chromic acid Molecular chromic acid, H2CrO4, in principle, resembles sulfuric acid, H2SO4. It would ionize accordingly: H2CrO4 ⇌ [HCrO4]− + H+. The pKa for the equilibrium is not well characterized. Reported values vary between about −0.8 and 1.6. The structure of the mono anion has been determined by X-ray crystallography. In this tetrahedral oxyanion, three Cr–O bond lengths are 156 pm and the Cr–OH bond is 201 pm. [HCrO4]− condenses to form dichromate: 2 [HCrO4]− ⇌ [Cr2O7]2− + H2O, log KD = 2.05. Furthermore, the dichromate can be protonated: [HCr2O7]− ⇌ [Cr2O7]2− + H+, pKa = 1.8. Loss of the second proton occurs in the pH range 4–8, making the ion [HCrO4]− a weak acid. Molecular chromic acid could in principle be made by adding chromium trioxide to water (cf. manufacture of sulfuric acid). In practice, the reverse reaction occurs: molecular chromic acid dehydrates. Some insights can be gleaned from observations on the reaction of dichromate solutions with sulfuric acid. The first colour change from orange to red signals the conversion of dichromate to chromic acid. Under these conditions deep red crystals of chromium trioxide precipitate from the mixture, without further colour change. Chromium trioxide is the anhydride of molecular chromic acid. It is a Lewis acid and can react with a Lewis base, such as pyridine in a non-aqueous medium such as dichloromethane (Collins reagent). Higher chromic acids with the formula H2CrnO3n+1 are probable components of concentrated solutions of chromic acid. Uses Chromic acid is an intermediate in chromium plating, and is also used in ceramic glazes, and colored glass. Because a solution of chromic acid in sulfuric acid (also known as a sulfochromic mixture or chromosulfuric acid) is a powerful oxidizing agent, it can be used to clean laboratory glassware, particularly of otherwise insoluble organic residues. This application has declined due to environmental concerns. Furthermore, the acid leaves trace amounts of paramagnetic chromic ions () that can interfere with certain applications, such as NMR spectroscopy. This is especially the case for NMR tubes. Piranha solution can be used for the same task, without leaving metallic residues behind. Chromic acid was widely used in the musical instrument repair industry, due to its ability to "brighten" raw brass. A chromic acid dip leaves behind a bright yellow patina on the brass. Due to growing health and environmental concerns, many have discontinued use of this chemical in their repair shops. It was used in hair dye in the 1940s, under the name Melereon. It is used as a bleach in processing black and white photographic reversal film. 
Reactions Chromic acid is capable of oxidizing many kinds of organic compounds and many variations on this reagent have been developed: Chromic acid in aqueous sulfuric acid and acetone is known as the Jones reagent, which will oxidize primary and secondary alcohols to carboxylic acids and ketones respectively, while rarely affecting unsaturated bonds. Pyridinium chlorochromate is generated from chromium trioxide and pyridinium chloride. This reagent converts primary alcohols to the corresponding aldehydes (R–CHO). Collins reagent is an adduct of chromium trioxide and pyridine used for diverse oxidations. Chromyl chloride, CrO2Cl2, is a well-defined molecular compound that is generated from chromic acid. Illustrative transformations Oxidation of methylbenzenes to benzoic acids. Oxidative scission of indene to homophthalic acid. Oxidation of secondary alcohol to ketone (cyclooctanone) and nortricyclanone. Use in qualitative organic analysis In organic chemistry, dilute solutions of chromic acid can be used to oxidize primary or secondary alcohols to the corresponding aldehydes and ketones. Similarly, it can also be used to oxidize an aldehyde to its corresponding carboxylic acid. Tertiary alcohols and ketones are unaffected. Because the oxidation is signaled by a color change from orange to brownish green (indicating chromium being reduced from oxidation state +6 to +3), chromic acid is commonly used as a lab reagent in high school or undergraduate college chemistry as a qualitative analytical test for the presence of primary or secondary alcohols, or aldehydes. Alternative reagents In oxidations of alcohols or aldehydes into carboxylic acids, chromic acid is one of several reagents, including several that are catalytic. For example, nickel(II) salts catalyze oxidations by bleach (hypochlorite). Aldehydes are relatively easily oxidized to carboxylic acids, and mild oxidizing agents are sufficient. Silver(I) compounds have been used for this purpose. Each oxidant offers advantages and disadvantages. Instead of using chemical oxidants, electrochemical oxidation is often possible. Safety Hexavalent chromium compounds (including chromium trioxide, chromic acids, chromates, chlorochromates) are toxic and carcinogenic. Chromium trioxide and chromic acids are strong oxidizers and may react violently if mixed with easily oxidizable organic substances. Chromic acid burns are treated with a dilute sodium thiosulfate solution. Notes References Alcohols from Carbonyl Compounds: Oxidation-Reduction and Organometallic Compounds (PDF) External links IARC Monograph "Chromium and Chromium compounds" Chromates Mineral acids Oxidizing acids Oxoacids Transition metal oxoacids
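As a worked example of the alcohol oxidations listed above, the net equation for the chromic-acid oxidation of a secondary alcohol to a ketone can be sketched as follows. This is a standard textbook balance written with [HCrO4]− as the oxidant, not a procedure from any particular source; the stoichiometry follows from combining the three-electron Cr(VI)→Cr(III) and two-electron alcohol→ketone half-reactions, and the Cr3+ product accounts for the orange-to-green color change used in the qualitative test:

```latex
% Half-reactions (acidic solution):
%   HCrO4^- + 7 H^+ + 3 e^-  ->  Cr^{3+} + 4 H2O      (reduction, 3 e^-)
%   R2CH-OH  ->  R2C=O + 2 H^+ + 2 e^-                 (oxidation, 2 e^-)
% Taking 2x the first plus 3x the second gives the net equation:
\[
  3\,R_2CH{-}OH \;+\; 2\,[HCrO_4]^- \;+\; 8\,H^+
  \;\longrightarrow\;
  3\,R_2C{=}O \;+\; 2\,Cr^{3+} \;+\; 8\,H_2O
\]
```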
Chromic acid
[ "Chemistry" ]
1,423
[ "Acids", "Inorganic compounds", "Mineral acids", "Oxidizing agents", "Salts", "Chromates", "Oxidizing acids" ]
324,498
https://en.wikipedia.org/wiki/Mortar%20%28masonry%29
Mortar is a workable paste which hardens to bind building blocks such as stones, bricks, and concrete masonry units, to fill and seal the irregular gaps between them, spread the weight of them evenly, and sometimes to add decorative colours or patterns to masonry walls. In its broadest sense, mortar includes pitch, asphalt, and soft clay, such as those used between bricks, as well as cement mortar. The word "mortar" comes from the Old French word mortier, "builder's mortar, plaster; bowl for mixing." (13c.). Cement mortar becomes hard when it cures, resulting in a rigid aggregate structure; however, the mortar functions as a weaker component than the building blocks and serves as the sacrificial element in the masonry, because mortar is easier and less expensive to repair than the building blocks. Bricklayers typically make mortars using a mixture of sand, a binder, and water. The most common binder since the early 20th century is Portland cement, but the ancient binder lime (producing lime mortar) is still used in some specialty new construction. Lime, lime mortar, and gypsum in the form of plaster of Paris are used particularly in the repair and repointing of historic buildings and structures, so that the repair materials will be similar in performance and appearance to the original materials. Several types of cement mortars and additives exist. Ancient mortar The first mortars were made of mud and clay, as demonstrated in the 10th millennium BCE buildings of Jericho, and the 8th millennium BCE buildings of Ganj Dareh. According to Roman Ghirshman, the first evidence of humans using a form of mortar was at Mehrgarh in Baluchistan, in what is today Pakistan, built of sun-dried bricks in 6500 BCE. Gypsum mortar, also called plaster of Paris, was used in the construction of many ancient structures. It is made from gypsum, which requires a lower firing temperature. It is therefore easier to make than lime mortar and sets up much faster, which may be a reason it was used as the typical mortar in ancient brick arch and vault construction. Gypsum mortar is not as durable as other mortars in damp conditions. In the Indian subcontinent, multiple cement types have been observed in the sites of the Indus Valley civilization, with gypsum appearing at sites such as the Mohenjo-daro city-settlement, which dates to earlier than 2600 BCE. Gypsum cement that was "light grey and contained sand, clay, traces of calcium carbonate, and a high percentage of lime" was used in the construction of wells, drains, and on the exteriors of "important looking buildings." Bitumen mortar was also used at a lower frequency, including in the Great Bath at Mohenjo-daro. In early Egyptian pyramids, which were constructed during the Old Kingdom (~2600–2500 BCE), the limestone blocks were bound by a mortar of mud and clay, or clay and sand. In later Egyptian pyramids, the mortar was made of gypsum, or lime. Gypsum mortar was essentially a mixture of plaster and sand and was quite soft. 2nd millennium BCE Babylonian constructions used lime or pitch for mortar. Historically, building with concrete and mortar next appeared in Greece. The excavation of the underground aqueduct of Megara revealed that a reservoir was coated with a pozzolanic mortar 12 mm thick. This aqueduct dates back to c. 500 BCE. Pozzolanic mortar is a lime-based mortar, but is made with an additive of volcanic ash that allows it to be hardened underwater; thus it is known as hydraulic cement. 
The Greeks obtained the volcanic ash from the Greek islands Thira and Nisiros, or from the then Greek colony of Dicaearchia (Pozzuoli) near Naples, Italy. The Romans later improved the use and methods of making what became known as pozzolanic mortar and cement. Even later, the Romans used a mortar without pozzolana using crushed terra cotta, introducing aluminum oxide and silicon dioxide into the mix. This mortar was not as strong as pozzolanic mortar, but, because it was denser, it better resisted penetration by water. Hydraulic mortar was not available in ancient China, possibly due to a lack of volcanic ash. Around 500 CE, sticky rice soup was mixed with slaked lime to make an inorganic-organic composite sticky rice mortar that had more strength and water resistance than lime mortar. It is not understood how the art of making hydraulic mortar and cement, which was perfected and in such widespread use by both the Greeks and Romans, was then lost for almost two millennia. During the Middle Ages when the Gothic cathedrals were being built, the only active ingredient in the mortar was lime. Since cured lime mortar can be degraded by contact with water, many structures suffered over the centuries from wind-blown rain. Ordinary Portland cement mortar Ordinary Portland cement mortar, commonly known as OPC mortar or just cement mortar, is created by mixing powdered ordinary Portland cement, fine aggregate and water. Portland cement was developed by Joseph Aspdin and patented on 18 December 1824, largely as a result of efforts to develop stronger mortars. It was made popular during the late nineteenth century, and had by 1930 become more popular than lime mortar as a construction material. The advantage of Portland cement is that it sets hard and quickly, allowing a faster pace of construction. Furthermore, fewer skilled workers are required in building a structure with Portland cement. As a general rule, however, Portland cement should not be used for the repair or repointing of older buildings built in lime mortar, which require the flexibility, softness and breathability of lime if they are to function correctly. In the United States and other countries, five standard types of mortar (available as dry pre-mixed products) are generally used for both new construction and repair. Strengths of mortar change based on the mix ratio for each type of mortar, which are specified under the ASTM standards. These premixed mortar products are designated by one of the five letters, M, S, N, O, and K. Type M mortar is the strongest, and Type K the weakest. The mix ratio is always expressed by volume of cement to lime to sand. These type letters are taken from the alternate letters of the words "MaSoN wOrK". Polymer cement mortar Polymer cement mortars (PCM) are the materials which are made by partially replacing the cement hydrate binders of conventional cement mortar with polymers. The polymeric admixtures include latexes or emulsions, redispersible polymer powders, water-soluble polymers, liquid thermoset resins and monomers. Although they increase the cost of mortars when used as additives, they enhance their properties. Polymer mortar has low permeability, which may lead to detrimental moisture accumulation when used to repair a traditional brick, block or stone wall. It is mainly designed for repairing concrete structures. The use of recovered plastics in mortars is being researched and is gaining ground. Depolymerizing PET to use as a polymeric binder to enhance mortars is actively being studied. 
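To make the volume-ratio convention above concrete, the sketch below lists commonly quoted nominal cement : lime : sand proportions for the five mortar types. These are typical textbook values rather than the specification itself (ASTM C270 defines permissible ranges), so treat them as illustrative:

```python
# Commonly quoted nominal mix ratios (by volume) for the five standard
# mortar types, expressed as cement : lime : sand. Illustrative textbook
# values only; ASTM C270 specifies ranges, not single ratios.
NOMINAL_MIX_RATIOS = {
    "M": (1, 0.25, 3.0),   # strongest
    "S": (1, 0.5, 4.5),
    "N": (1, 1.0, 6.0),    # common general-purpose mortar
    "O": (1, 2.0, 9.0),
    "K": (1, 3.0, 11.0),   # weakest
}

for mortar_type, (cement, lime, sand) in NOMINAL_MIX_RATIOS.items():
    print(f"Type {mortar_type}: {cement} : {lime} : {sand} (cement : lime : sand)")
```

Note that the sand volume tracks the total cementitious content (cement plus lime), which is why the leaner types carry proportionally more sand.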
Lime mortar The setting speed can be increased by using impure limestone in the kiln, to form a hydraulic lime that will set on contact with water. Such a lime must be stored as a dry powder. Alternatively, a pozzolanic material such as calcined clay or brick dust may be added to the mortar mix. Addition of a pozzolanic material will make the mortar set reasonably quickly by reaction with the water. It would be problematic to use Portland cement mortars to repair older buildings originally constructed using lime mortar. Lime mortar is softer than cement mortar, allowing brickwork a certain degree of flexibility to adapt to shifting ground or other changing conditions. Cement mortar is harder and allows little flexibility. The contrast can cause brickwork to crack where the two mortars are present in a single wall. Lime mortar is considered breathable in that it will allow moisture to freely move through and evaporate from the surface. In old buildings with walls that shift over time, cracks can be found which allow rain water into the structure. The lime mortar allows this moisture to escape through evaporation and keeps the wall dry. Re-pointing or rendering an old wall with cement mortar stops the evaporation and can cause problems associated with moisture behind the cement. Pozzolanic mortar Pozzolana is a fine, sandy volcanic ash. It was originally discovered and dug at Pozzuoli, nearby Mount Vesuvius in Italy, and was subsequently mined at other sites, too. The Romans learned that pozzolana added to lime mortar allowed the lime to set relatively quickly and even under water. Vitruvius, the Roman architect, spoke of four types of pozzolana. It is found in all the volcanic areas of Italy in various colours: black, white, grey and red. Pozzolana has since become a generic term for any siliceous and/or aluminous additive to slaked lime to create hydraulic cement. Finely ground and mixed with lime it is a hydraulic cement, like Portland cement, and makes a strong mortar that will also set under water. The fact that the materials involved in the creation of pozzolana are found in abundance within certain territories makes its use more common there, with areas inside of Central Europe as well as inside of Southern Europe being an example (significantly because of the many European volcanoes of note). It has, as such, been commonly associated with a variety of large structures constructed by the Roman Empire. Radiocarbon dating As the mortar hardens, carbon dioxide from the atmosphere of that moment is encased in the mortar and thus provides a sample for analysis. Various factors affect the sample and raise the margin of error for the analysis. Radiocarbon dating of mortar began as early as the 1960s, soon after the method was established (Delibrias and Labeyrie 1964; Stuiver and Smith 1965; Folk and Valastro 1976). The very first data were provided by van Strydonck et al. (1983), Heinemeier et al. (1997) and Ringbom and Remmer (1995). Methodological aspects were further developed by different groups (an international team headed by Åbo Akademi University, and teams from CIRCE, CIRCe, ETHZ, Poznań, RICH and the Milano-Bicocca laboratory). To evaluate the different anthropogenic carbon extraction methods for radiocarbon dating as well as to compare the different dating methods, i.e. radiocarbon and OSL, the first intercomparison study (MODIS) was set up and published in 2017. 
See also Cement accelerator Concrete Energetically modified cement Grout Thick bed mortar (technique) Thinset Tuckpointing References Concrete blowouts in Post-tension slabs Technical data sheets, Mortar Industry Association, www.mortar.org.uk Masonry Bricks Cement Concrete Soil-based building materials
Mortar (masonry)
[ "Engineering" ]
2,239
[ "Structural engineering", "Concrete", "Construction", "Masonry" ]
324,502
https://en.wikipedia.org/wiki/Tantric%20sex
Tantric sex or sexual yoga refers to a range of practices in Hindu and Buddhist tantra that utilize sexual activity in a ritual or yogic context. Tantric sex is associated with antinomian elements such as the consumption of alcohol, and the offerings of substances like meat to deities. Moreover, sexual fluids may be viewed as power substances and used for ritual purposes, either externally or internally. The actual terms used in the classical texts to refer to this practice include "Karmamudra" (Tibetan: ལས་ཀྱི་ཕྱག་རྒྱ las kyi phyag rgya, "action seal") in Buddhist tantras and "Maithuna" (Devanagari: मैथुन, "coupling") in Hindu sources. In Hindu Tantra, Maithuna is the most important of the five makara (five tantric substances) and constitutes the main part of the Grand Ritual of Tantra variously known as Panchamakara, Panchatattva, and Tattva Chakra. In Tibetan Buddhism, karmamudra is often an important part of the completion stage of tantric practice. While there may be some connection between these practices and the Kāmashāstra literature (which include the Kāmasūtra), the two practice traditions are separate methods with separate goals. As the British Indologist Geoffrey Samuel notes, while the kāmasāstra literature is about the pursuit of sexual pleasure (kāmā), sexual yoga practices are often aimed towards the quest for liberation (moksha). History In its earliest forms, Tantric intercourse was usually directed to generate sexual fluids that constituted the "preferred offering of the Tantric deities." While there is already a mention of ascetics practicing it in the 4th century CE Mahabharata, those techniques were rare until late Buddhist Tantra. Up to that point, sexual emission was both allowed and emphasized. Around the start of the first millennium, Tantra began to include practices of semen retention, like the penance ceremony of asidharavrata and the posterior yogic technique of vajroli mudra. They were probably adopted from ancient, non-Tantric celibate schools, like those mentioned in Mahabharata. The Brhadaranyaka Upanisad contains various sexual rituals and practices which are mostly aimed at obtaining a child which are concerned with the loss of male virility and power. One passage from the Brhadaranyaka Upanishad states: Her vulva is the sacrificial ground; her pubic hair is the sacred grass; her labia majora are the Soma-press; and her labia minora are the fire blazing at the centre. A man who engages in sexual intercourse with this knowledge obtains as great a world as a man who performs a Soma sacrifice, and he appropriates to himself the merits of the women with whom he has sex. The women, on the other hand, appropriate to themselves the merits of a man who engages in sexual intercourse with them without this knowledge. (Brhadaranyaka Upanishad 6.4.3, trans. Olivelle 1998: 88) According to Samuel, late Vedic texts like the Jaiminiya Brahmana, the Chandogya Upanisad, and the Brhadaranyaka Upanisad, "treat sexual intercourse as symbolically equivalent to the Vedic sacrifice, and ejaculation of semen as the offering." However, he also writes that while it is possible that some kind of sexual yoga existed in the fourth or fifth centuries, "Substantial evidence for such practices, however, dates from considerably later, from the seventh and eighth centuries, and derives from Saiva and Buddhist Tantric circles." Tantric sexual practices are often seen as exceptional and elite, and not accepted by all sects. 
They are found only in some tantric literature belonging to Buddhist and Hindu Tantra, but are entirely absent from Jain Tantra. In the Kaula tradition and others where sexual fluids as power substances and ritual sex are mentioned, scholars disagree in their translations, interpretations and practical significance. Emotions, eroticism and sex are universally regarded in Tantric literature as natural, desirable, a means of transformation of the deity within. Pleasure and sex are another aspect of life and a "root of the universe", whose purpose extends beyond procreation and is another means to spiritual journey and fulfillment. This idea flowers with the inclusion of kama art in Hindu temple arts, and its various temple architecture and design manuals such as the Shilpa-prakasha by the Hindu scholar Ramachandra Kulacara. Practice In Hinduism The actual term used in Hindu classical texts to refer to this practice is maithuna (Devanagari: मैथुन, "coupling"). In the Hindu Tantras, maithuna is always presented in the context of panchamakara (the five makara, or tantric substances), which constitutes the primary ritual of Tantra. These may also be referred to as "the five Ms", or the panchatattva, which consist of madya (alcohol), mamsa (meat), matsya (fish), mudra (parched grain), and maithuna (sexual intercourse). Taboo-breaking elements are only practiced literally by "left-hand path" tantrics (vāmācārins), whereas "right-hand path" tantrics (dakṣiṇācārins) use symbolic substitutes. Jayanta Bhatta, the 9th-century scholar of the Nyaya school of Hindu philosophy who commented on Tantra literature, stated that the Tantric ideas and spiritual practices are mostly well placed, but that it also has "immoral teachings" such as by the so-called "Nilambara" sect where its practitioners "wear simply one blue garment, and then as a group engage in unconstrained public sex" on festivals. He wrote, this practice is unnecessary and it threatens fundamental values of society. This sect might have been an offshoot of the Pashupata Shaivite school, or possibly a Buddhist cult of Vajrapani. Ascetics of the Shaivite school of Mantramarga, in order to gain supernatural power, reenacted the penance of Shiva after cutting off one of Brahma's heads (Bhikshatana). They worshipped Shiva with impure substances like alcohol, blood and sexual fluids generated in orgiastic rites with their consorts. Douglas Renfrew Brooks states that the antinomian elements such as the use of intoxicating substances and sex were not animistic, but were adopted in some Kaula traditions to challenge the Tantric devotee to break down the "distinctions between the ultimate reality of Brahman and the mundane physical and mundane world". By combining erotic and ascetic techniques, states Brooks, the Tantric broke down all social and internal assumptions, became Shiva-like. In Kashmir Shaivism, states David Gray, the antinomian transgressive ideas were internalized, for meditation and reflection, and as a means to "realize a transcendent subjectivity". As part of tantric inversion of social regulations, sexual yoga often recommends the usage of consorts from the most taboo groups available, such as close relatives or people from the lowest, most contaminated castes. They must be young and beautiful, as well as initiates in tantra. In Buddhism According to the scholar Elizabeth English, Buddhist sexual rites were incorporated from Shaiva tantra. One of the earliest mentions of sexual yoga is in the Mahayana Buddhist Mahāyānasūtrālamkāra of Asanga (c. 5th century). 
The passage states: Supreme self-control is achieved in the reversal of sexual intercourse in the blissful Buddha-poise and the untrammelled vision of one's spouse. According to David Snellgrove, the text's mention of a ‘reversal of sexual intercourse’ might indicate the practice of withholding ejaculation. Snellgrove states: It is by no means improbable that already by the fifth century when Asanga was writing, these techniques of sexual yoga were being used in reputable Buddhist circles, and that Asanga himself accepted such a practice as valid. The natural power of the breath, inhaling and exhaling, was certainly accepted as an essential force to be controlled in Buddhist as well as Hindu yoga. Why therefore not the natural power of the sexual force? [...] Once it is established that sexual yoga was already regarded by Asanga as an acceptable yogic practice, it becomes far easier to understand how Tantric treatises, despite their apparent contradiction of previous Buddhist teachings, were so readily canonized in the following centuries. Deities like Vajrayogini, sexually suggestive and streaming with blood, overturn traditional separation between intercourse and menstruation. Some extreme texts would go further, such as the 9th-century Buddhist text Candamaharosana-tantra, which advocated consumption of bodily waste products of the practitioner's sexual partner, like wash-water of her anus and genitalia. Those were thought to be "power substances", teaching that the waste should be consumed as a diet "eaten by all the Buddhas." Japanese Buddhism The 12th-century Japanese school Tachikawa-ryu did not discourage ejaculation in itself, considering it a "shower of love that contained thousands of potential Buddhas". They employed emission of sexual fluids in combination with the worship of human skulls, which would be coated in the resultant mix in order to create honzon. However, those practices were considered heretical, leading to the sect's suppression. Tibetan Buddhism In Tibetan Buddhism, the higher tantric yogas are generally preceded by preliminary practices (Tib. ngondro), which include sutrayana practices (i.e. non-tantric Mahayana practices) as well as preliminary tantric meditations. Tantric initiation is required to enter into the practice of tantra. Tibetan tantric practice refers to the main tantric practices in Tibetan Buddhism. The great Rime scholar Jamgön Kongtrül refers to this as "the Process of Meditation in the Indestructible Way of Secret Mantra" and also as "the way of mantra," "way of method" and "the secret way" in his Treasury of Knowledge. These Vajrayāna Buddhist practices are mainly drawn from the Buddhist tantras and are generally not found in "common" (i.e. non-tantric) Mahayana. These practices are seen by Tibetan Buddhists as the fastest and most powerful path to Buddhahood. Unsurpassable Yoga Tantra (Skt. anuttarayogatantra, also known as Mahayoga) is in turn seen as the highest class of tantric practice in Tibetan Buddhism. Anuttarayoga tantric practice is divided into two stages, the generation stage and the completion stage. In the generation stage, one meditates on emptiness and visualizes one's chosen deity (yidam), its mandala and companion deities, resulting in identification with this divine reality (called "divine pride"). This is also known as deity yoga (devata yoga). In the completion stage, the focus is shifted from the form of the deity to direct realization of ultimate reality (which is defined and explained in various ways). 
Completion stage practices also include techniques that work with the subtle body substances (Skt. bindu, Tib. thigle) and "vital winds" (vayu, lung), as well as the luminous or clear light nature of the mind. They are often grouped into different systems, such as the six dharmas of Naropa, or the six yogas of Kalachakra. Karmamudrā refers both to the yogini who engages in such a practice and to the technique itself, which makes use of sexual union with a physical or visualized consort, together with the practice of inner heat (tummo), to achieve a non-dual state of bliss and insight into emptiness. In Tibetan Buddhism, proficiency in tummo yoga, a completion stage practice, is generally seen as a prerequisite to the practice of karmamudrā. See also Coitus reservatus Mahamudra Yab-yum Yogini References Works cited Further reading Tantra Vajrayana Tantric practices Human sexuality Sexual acts Sexuality and religion
Tantric sex
[ "Biology" ]
2,517
[ "Human sexuality", "Behavior", "Human behavior", "Sexual acts", "Sexuality", "Mating" ]
324,744
https://en.wikipedia.org/wiki/Financial%20engineering
Financial engineering is a multidisciplinary field involving financial theory, methods of engineering, tools of mathematics and the practice of programming. It has also been defined as the application of technical methods, especially from mathematical finance and computational finance, in the practice of finance. Financial engineering plays a key role in a bank's customer-driven derivatives business — delivering bespoke OTC contracts and "exotics", and implementing various structured products — which encompasses quantitative modelling, quantitative programming and risk managing financial products in compliance with the regulations and Basel capital/liquidity requirements. An older use of the term "financial engineering" that is less common today is aggressive restructuring of corporate balance sheets. Mathematical finance is the application of mathematics to finance. Computational finance and mathematical finance are both subfields of financial engineering. Computational finance is a field in computer science and deals with the data and algorithms that arise in financial modeling. Discipline Financial engineering draws on tools from applied mathematics, computer science, statistics and economic theory. In the broadest sense, anyone who uses technical tools in finance could be called a financial engineer, for example any computer programmer in a bank or any statistician in a government economic bureau. However, most practitioners restrict the term to someone educated in the full range of tools of modern finance and whose work is informed by financial theory. It is sometimes restricted even further, to cover only those originating new financial products and strategies. Despite its name, financial engineering does not belong to any of the fields in traditional professional engineering even though many financial engineers have studied engineering beforehand and many universities offering a postgraduate degree in this field require applicants to have a background in engineering as well. In the United States, the Accreditation Board for Engineering and Technology (ABET) does not accredit financial engineering degrees. In the United States, financial engineering programs are accredited by the International Association of Quantitative Finance. Quantitative analyst ("Quant") is a broad term that covers any person who uses math for practical purposes, including financial engineers. Quant is often taken to mean "financial quant", in which case it is similar to financial engineer. The difference is that it is possible to be a theoretical quant, or a quant in only one specialized niche in finance, while "financial engineer" usually implies a practitioner with broad expertise. "Rocket scientist" (aerospace engineer) is an older term, first coined in the development of rockets in WWII (Wernher von Braun), and later, the NASA space program; it was adapted by the first generation of financial quants who arrived on Wall Street in the late 1970s and early 1980s. While basically synonymous with financial engineer, it implies adventurousness and fondness for disruptive innovation. Financial "rocket scientists" were usually trained in applied mathematics, statistics or finance and spent their entire careers in risk-taking. They were not hired for their mathematical talents; they either worked for themselves or applied mathematical techniques to traditional financial jobs. 
The later generation of financial engineers were more likely to have PhDs in mathematics, physics, electrical and computer engineering, and often started their careers in academia or non-financial fields. Criticisms One of the prominent critics of financial engineering is Nassim Taleb, a professor of financial engineering at Polytechnic Institute of New York University, who argues that it replaces common sense and leads to disaster. A series of economic collapses has led many governments to argue for a return to "real" engineering from financial engineering. A gentler criticism came from Emanuel Derman, who heads a financial engineering degree program at Columbia University. He blames over-reliance on models for financial problems; see Financial Modelers' Manifesto. Many other authors have identified specific problems in financial engineering that caused catastrophes: Aaron Brown named confusion between quants and regulators over the meaning of "capital" Felix Salmon gently pointed to the Gaussian copula Ian Stewart criticized the Black-Scholes formula Pablo Triana (along with others including Taleb and Brown) dislikes value at risk Scott Patterson blamed quantitative traders and, later, high-frequency traders. The financial innovation often associated with financial engineers was mocked by former chairman of the Federal Reserve Paul Volcker in 2009 when he said it was a code word for risky securities that brought no benefits to society. For most people, he said, the advent of the ATM was more crucial than any asset-backed bond. Education The first Master of Financial Engineering degree programs were set up in the early 1990s. The number and size of programs has grown rapidly, to the extent that some now use the term "financial engineer" to refer to a graduate in the field. The financial engineering program at New York University Polytechnic School of Engineering was the first curriculum to be certified by the International Association of Financial Engineers. The number, and variation, of these programs has grown over the subsequent decades and lately includes undergraduate study, as well as designations such as the Certificate in Quantitative Finance. See also Actuarial science Computational finance Financial modeling List of finance topics Mathematical finance Quantitative analyst References Further reading Mathematical finance Engineering disciplines
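As a concrete illustration of the kind of model discussed (and criticized) above, the sketch below implements the Black–Scholes price of a European call option. This is the standard textbook formula rather than any firm's production code; the function and parameter names are illustrative conventions, and real desks layer dividends, term structures, and numerical safeguards on top of this:

```python
from math import exp, log, sqrt
from statistics import NormalDist


def black_scholes_call(spot, strike, rate, vol, time_to_expiry):
    """Black-Scholes price of a European call on a non-dividend-paying asset.

    spot: current underlying price; strike: option strike;
    rate: continuously compounded risk-free rate; vol: annualized
    volatility; time_to_expiry: time to expiry in years.
    """
    n = NormalDist().cdf  # standard normal cumulative distribution
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * time_to_expiry) / (
        vol * sqrt(time_to_expiry)
    )
    d2 = d1 - vol * sqrt(time_to_expiry)
    return spot * n(d1) - strike * exp(-rate * time_to_expiry) * n(d2)


# Example: at-the-money one-year call, 5% rate, 20% volatility.
print(round(black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0), 4))
```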
Financial engineering
[ "Mathematics", "Engineering" ]
1,018
[ "Applied mathematics", "Mathematical finance", "nan" ]
324,749
https://en.wikipedia.org/wiki/Sine%20wave
A sine wave, sinusoidal wave, or sinusoid (symbol: ∿) is a periodic wave whose waveform (shape) is the trigonometric sine function. In mechanics, as a linear motion over time, this is simple harmonic motion; as rotation, it corresponds to uniform circular motion. Sine waves occur often in physics, including wind waves, sound waves, and light waves, such as monochromatic radiation. In engineering, signal processing, and mathematics, Fourier analysis decomposes general functions into a sum of sine waves of various frequencies, relative phases, and magnitudes. When any two sine waves of the same frequency (but arbitrary phase) are linearly combined, the result is another sine wave of the same frequency; this property is unique among periodic waves. Conversely, if some phase is chosen as a zero reference, a sine wave of arbitrary phase can be written as the linear combination of two sine waves with phases of zero and a quarter cycle, the sine and cosine components, respectively. Audio example A sine wave represents a single frequency with no harmonics and is considered an acoustically pure tone. Adding sine waves of different frequencies results in a different waveform. Presence of higher harmonics in addition to the fundamental causes variation in the timbre, which is the reason why the same musical pitch played on different instruments sounds different. Sinusoid form Sine waves of arbitrary phase and amplitude are called sinusoids and have the general form: y(t) = A sin(ωt + φ) where: A, amplitude, the peak deviation of the function from zero. t, the real independent variable, usually representing time in seconds. ω = 2πf, angular frequency, the rate of change of the function argument in units of radians per second. f, ordinary frequency, the number of oscillations (cycles) that occur each second of time. φ, phase, specifies (in radians) where in its cycle the oscillation is at t = 0. When φ is non-zero, the entire waveform appears to be shifted backwards in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance. Adding or subtracting 2π radians (one cycle) to the phase results in an equivalent wave. As a function of both position and time Sinusoids that exist in both position and time also have: a spatial variable x that represents the position on the dimension on which the wave propagates. a wave number (or angular wave number) k, which represents the proportionality between the angular frequency ω and the linear speed (speed of propagation) v: k = ω/v. The wavenumber is related to the angular frequency and the wavelength by k = ω/v = 2πf/v = 2π/λ, where λ (lambda) is the wavelength. Depending on their direction of travel, they can take the form: y(x, t) = A sin(kx − ωt + φ), if the wave is moving to the right, or y(x, t) = A sin(kx + ωt + φ), if the wave is moving to the left. Since sine waves propagate without changing form in distributed linear systems, they are often used to analyze wave propagation. Standing waves When two waves with the same amplitude and frequency traveling in opposite directions superpose each other, then a standing wave pattern is created. On a plucked string, the superimposing waves are the waves reflected from the fixed endpoints of the string. The string's resonant frequencies are the string's only possible standing waves, which only occur for wavelengths that are twice the string's length (corresponding to the fundamental frequency) and integer divisions of that (corresponding to higher harmonics). Multiple spatial dimensions The earlier equation gives the displacement y of the wave at a position x at time t along a single line. 
This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position and wavenumber are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. Sinusoidal plane wave Fourier analysis French mathematician Joseph Fourier discovered that sinusoidal waves can be summed as simple building blocks to approximate any periodic waveform, including square waves. These Fourier series are frequently used in signal processing and the statistical analysis of time series. The Fourier transform then extended Fourier series to handle general functions, and birthed the field of Fourier analysis. Differentiation and integration Differentiation Differentiating any sinusoid with respect to time can be viewed as multiplying its amplitude by its angular frequency and advancing it by a quarter cycle: d/dt [A sin(ωt + φ)] = Aω cos(ωt + φ) = Aω sin(ωt + φ + π/2). A differentiator has a zero at the origin of the complex frequency plane. The gain of its frequency response increases at a rate of +20 dB per decade of frequency (for root-power quantities), the same positive slope as a 1st-order high-pass filter's stopband, although a differentiator doesn't have a cutoff frequency or a flat passband. An nth-order high-pass filter approximately applies the nth time derivative of signals whose frequency band is significantly lower than the filter's cutoff frequency. Integration Integrating any sinusoid with respect to time can be viewed as dividing its amplitude by its angular frequency and delaying it a quarter cycle: ∫ A sin(ωt + φ) dt = −(A/ω) cos(ωt + φ) + C = (A/ω) sin(ωt + φ − π/2) + C. The constant of integration C will be zero if the interval of integration is an integer multiple of the sinusoid's period. An integrator has a pole at the origin of the complex frequency plane. The gain of its frequency response falls off at a rate of -20 dB per decade of frequency (for root-power quantities), the same negative slope as a 1st-order low-pass filter's stopband, although an integrator doesn't have a cutoff frequency or a flat passband. An nth-order low-pass filter approximately performs the nth time integral of signals whose frequency band is significantly higher than the filter's cutoff frequency. See also Crest (physics) Complex exponential Damped sine wave Euler's formula Fourier transform Harmonic analysis Harmonic series (mathematics) Harmonic series (music) Helmholtz equation Instantaneous phase In-phase and quadrature components Least-squares spectral analysis Oscilloscope Phasor Pure tone Simple harmonic motion Sinusoidal model Wave (physics) Wave equation ∿ the sine wave symbol (U+223F) References External links Trigonometry Wave mechanics Waves Waveforms Sound Acoustics
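A short numerical sketch (plain Python, standard library only; the variable names are illustrative) of two properties stated above: summing two sinusoids of the same frequency yields another sinusoid of that frequency, and differentiation scales the amplitude by ω while advancing the phase a quarter cycle:

```python
import math

A1, phi1 = 1.0, 0.3          # first sinusoid: A1*sin(w*t + phi1)
A2, phi2 = 0.7, 2.1          # second sinusoid, same angular frequency
w = 2 * math.pi * 5.0        # angular frequency for f = 5 Hz

# Sum of same-frequency sinusoids: resolve each into its sine and
# cosine (quadrature) components, add them, and recombine.
x = A1 * math.cos(phi1) + A2 * math.cos(phi2)   # net sine component
y = A1 * math.sin(phi1) + A2 * math.sin(phi2)   # net cosine component
A3, phi3 = math.hypot(x, y), math.atan2(y, x)

t = 0.0123  # arbitrary test instant
lhs = A1 * math.sin(w * t + phi1) + A2 * math.sin(w * t + phi2)
rhs = A3 * math.sin(w * t + phi3)
assert math.isclose(lhs, rhs)  # same frequency, new amplitude and phase

# Differentiation: d/dt [A*sin(w*t + phi)] = A*w*sin(w*t + phi + pi/2).
h = 1e-7  # small step for a forward-difference approximation
numeric = (A1 * math.sin(w * (t + h) + phi1) - A1 * math.sin(w * t + phi1)) / h
analytic = A1 * w * math.sin(w * t + phi1 + math.pi / 2)
assert math.isclose(numeric, analytic, rel_tol=1e-4)
```

The quadrature decomposition used for the sum is exactly the sine-and-cosine-components representation described in the opening paragraph.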
Sine wave
[ "Physics" ]
1,288
[ "Physical phenomena", "Classical mechanics", "Acoustics", "Waves", "Wave mechanics", "Motion (physics)", "Waveforms" ]
324,752
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Singer%20index%20theorem
In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics. History The index problem for elliptic differential operators was posed by Israel Gel'fand. He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961). The Atiyah–Singer theorem was announced in 1963. The proof sketched in this announcement was never published by them, though it appears in Palais's book. It appears also in the "Séminaire Cartan-Schwartz 1963/64" that was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary. Their first published proof replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers. 1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds. Robion Kirby and Laurent C. Siebenmann's results, combined with René Thom's paper, proved the existence of rational Pontryagin classes on topological manifolds. The rational Pontryagin classes are essential ingredients of the index theorem on smooth and topological manifolds. 1969: Michael Atiyah defines abstract elliptic operators on arbitrary metric spaces. Abstract elliptic operators became protagonists in Kasparov's theory and Connes's noncommutative differential geometry. 1971: Isadore Singer proposes a comprehensive program for future extensions of index theory. 1972: Gennadi G. Kasparov publishes his work on the realization of K-homology by abstract elliptic operators. 1973: Atiyah, Raoul Bott, and Vijay Patodi gave a new proof of the index theorem using the heat equation, described in a paper by Melrose. 1977: Dennis Sullivan establishes his theorem on the existence and uniqueness of Lipschitz and quasiconformal structures on topological manifolds of dimension different from 4. 1983: Ezra Getzler, motivated by ideas of Edward Witten and Luis Alvarez-Gaume, gave a short proof of the local index theorem for operators that are locally Dirac operators; this covers many of the useful cases. 1983: Nicolae Teleman proves that the analytical indices of signature operators with values in vector bundles are topological invariants. 1984: Teleman establishes the index theorem on topological manifolds. 1986: Alain Connes publishes his fundamental paper on noncommutative geometry. 1989: Simon K. Donaldson and Sullivan study Yang–Mills theory on quasiconformal manifolds of dimension 4. They introduce the signature operator S defined on differential forms of degree two.
1990: Connes and Henri Moscovici prove the local index formula in the context of non-commutative geometry. 1994: Connes, Sullivan, and Teleman prove the index theorem for signature operators on quasiconformal manifolds. Notation X is a compact smooth manifold (without boundary). E and F are smooth vector bundles over X. D is an elliptic differential operator from E to F. So in local coordinates it acts as a differential operator, taking smooth sections of E to smooth sections of F. Symbol of a differential operator If D is a differential operator on a Euclidean space of order n in k variables x1, ..., xk, then its symbol is the function of 2k variables (x1, ..., xk, y1, ..., yk), given by dropping all terms of order less than n and replacing ∂/∂xi by yi. So the symbol is homogeneous in the variables y, of degree n. The symbol is well defined even though ∂/∂xi does not commute with xi because we keep only the highest order terms and differential operators commute "up to lower-order terms". The operator is called elliptic if the symbol is nonzero whenever at least one y is nonzero. Example: The Laplace operator in k variables has symbol y1² + ⋯ + yk², and so is elliptic as this is nonzero whenever any of the yi are nonzero. The wave operator has symbol y1² − y2² − ⋯ − yk², which is not elliptic if k ≥ 2, as the symbol vanishes for some non-zero values of the ys. The symbol of a differential operator of order n on a smooth manifold X is defined in much the same way using local coordinate charts, and is a function on the cotangent bundle of X, homogeneous of degree n on each cotangent space. (In general, differential operators transform in a rather complicated way under coordinate transforms (see jet bundle); however, the highest order terms transform like tensors so we get well defined homogeneous functions on the cotangent spaces that are independent of the choice of local charts.) More generally, the symbol of a differential operator between two vector bundles E and F is a section of the pullback of the bundle Hom(E, F) to the cotangent space of X. The differential operator is called elliptic if the element of Hom(Ex, Fx) defined by the symbol is invertible for all non-zero cotangent vectors at any point x of X. A key property of elliptic operators is that they are almost invertible; this is closely related to the fact that their symbols are almost invertible. More precisely, an elliptic operator D on a compact manifold has a (non-unique) parametrix (or pseudoinverse) D′ such that DD′ − 1 and D′D − 1 are both compact operators. An important consequence is that the kernel of D is finite-dimensional, because all eigenspaces of compact operators, other than the kernel, are finite-dimensional. (The pseudoinverse of an elliptic differential operator is almost never a differential operator. However, it is an elliptic pseudodifferential operator.) Analytical index As the elliptic differential operator D has a pseudoinverse, it is a Fredholm operator. Any Fredholm operator has an index, defined as the difference between the (finite) dimension of the kernel of D (solutions of Df = 0), and the (finite) dimension of the cokernel of D (the constraints on the right-hand-side of an inhomogeneous equation like Df = g, or equivalently the kernel of the adjoint operator). In other words, Index(D) = dim Ker(D) − dim Coker(D) = dim Ker(D) − dim Ker(D*). This is sometimes called the analytical index of D. Example: Suppose that the manifold is the circle (thought of as R/Z), and D is the operator d/dx − λ for some complex constant λ. (This is the simplest example of an elliptic operator.)
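Two standard computations, spelled out here as a worked illustration (an addition, not part of the original article): first the symbols of the two operators mentioned above,

\[
\sigma(\Delta)(x,y)=\sum_{i=1}^{k}y_i^{2},
\qquad
\sigma(\square)(x,y)=y_1^{2}-\sum_{i=2}^{k}y_i^{2},
\]

the first vanishing only at y = 0 (elliptic), the second vanishing on the cone y1² = y2² + ⋯ + yk² (not elliptic for k ≥ 2). Second, for the circle operator just introduced,

\[
\Bigl(\frac{d}{dx}-\lambda\Bigr)f=0 \iff f(x)=Ce^{\lambda x},
\]

and f descends to R/Z, i.e. f(x + 1) = f(x), exactly when e^λ = 1, that is, when λ is an integral multiple of 2πi; the adjoint −d/dx − λ̄ behaves the same way with λ̄ in place of λ.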
Then the kernel is the space of multiples of exp(λx) if λ is an integral multiple of 2πi and is 0 otherwise, and the kernel of the adjoint is a similar space with λ replaced by its complex conjugate. So D has index 0. This example shows that the kernel and cokernel of elliptic operators can jump discontinuously as the elliptic operator varies, so there is no nice formula for their dimensions in terms of continuous topological data. However the jumps in the dimensions of the kernel and cokernel are the same, so the index, given by the difference of their dimensions, does indeed vary continuously, and can be given in terms of topological data by the index theorem. Topological index The topological index of an elliptic differential operator D between smooth vector bundles E and F on an n-dimensional compact manifold X is given by (−1)^n ch(D)Td(X)[X], in other words the value of the top dimensional component of the mixed cohomology class ch(D)Td(X) on the fundamental homology class [X] of the manifold, up to a difference of sign. Here, Td(X) is the Todd class of the complexified tangent bundle of X, and ch(D) is equal to φ^(−1)(ch(d(p*E, p*F, σ(D)))), where φ is the Thom isomorphism for the sphere bundle p : B(X)/S(X) → X, ch is the Chern character, d(p*E, p*F, σ(D)) is the "difference element" in K(B(X)/S(X)) associated to the two vector bundles p*E and p*F on B(X) and an isomorphism σ(D) between them on the subspace S(X), and σ(D) is the symbol of D. In some situations, it is possible to simplify the above formula for computational purposes. In particular, if X is a 2m-dimensional orientable (compact) manifold with non-zero Euler class e(TX), then applying the Thom isomorphism and dividing by the Euler class, the topological index may be expressed as the integral over X of (−1)^m ch(σ(D))Td(X)/e(TX), where division by the Euler class makes sense by pulling back from the cohomology ring of the classifying space BSO. One can also define the topological index using only K-theory (and this alternative definition is compatible in a certain sense with the Chern-character construction above). If X is a compact submanifold of a manifold Y then there is a pushforward (or "shriek") map from K(TX) to K(TY). The topological index of an element of K(TX) is defined to be the image of this operation with Y some Euclidean space, for which K(TY) can be naturally identified with the integers Z (as a consequence of Bott-periodicity). This map is independent of the embedding of X in Euclidean space. Now a differential operator as above naturally defines an element of K(TX), and the image in Z under this map "is" the topological index. As usual, D is an elliptic differential operator between vector bundles E and F over a compact manifold X. The index problem is the following: compute the (analytical) index of D using only the symbol s and topological data derived from the manifold and the vector bundle. The Atiyah–Singer index theorem solves this problem, and states: The analytical index of D is equal to its topological index. In spite of its formidable definition, the topological index is usually straightforward to evaluate explicitly. So this makes it possible to evaluate the analytical index. (The cokernel and kernel of an elliptic operator are in general extremely hard to evaluate individually; the index theorem shows that we can usually at least evaluate their difference.) Many important invariants of a manifold (such as the signature) can be given as the index of suitable differential operators, so the index theorem allows us to evaluate these invariants in terms of topological data. Although the analytical index is usually hard to evaluate directly, it is at least obviously an integer.
The topological index is by definition a rational number, but it is usually not at all obvious from the definition that it is also integral. So the Atiyah–Singer index theorem implies some deep integrality properties, as it implies that the topological index is integral. The index of an elliptic differential operator obviously vanishes if the operator is self adjoint. It also vanishes if the manifold X has odd dimension, though there are pseudodifferential elliptic operators whose index does not vanish in odd dimensions. Relation to Grothendieck–Riemann–Roch The Grothendieck–Riemann–Roch theorem was one of the main motivations behind the index theorem because the index theorem is the counterpart of this theorem in the setting of real manifolds. Now, if f : X → Y is a map of compact stably almost complex manifolds, then there is a commutative diagram relating the pushforwards in K-theory and in cohomology; if Y is a point, then we recover the statement above. Here K(X) is the Grothendieck group of complex vector bundles. This commutative diagram is formally very similar to the GRR theorem because the cohomology groups on the right are replaced by the Chow ring of a smooth variety, and the Grothendieck group on the left is given by the Grothendieck group of algebraic vector bundles. Extensions of the Atiyah–Singer index theorem Teleman index theorem Due to Teleman: For any abstract elliptic operator on a closed, oriented, topological manifold, the analytical index equals the topological index. The proof of this result goes through specific considerations, including the extension of Hodge theory to combinatorial and Lipschitz manifolds, the extension of Atiyah–Singer's signature operator to Lipschitz manifolds, Kasparov's K-homology, and topological cobordism. This result shows that the index theorem is not merely a differentiability statement, but rather a topological statement. Connes–Donaldson–Sullivan–Teleman index theorem Due to Connes, Donaldson, Sullivan, and Teleman: For any quasiconformal manifold there exists a local construction of the Hirzebruch–Thom characteristic classes. This theory is based on a signature operator S, defined on middle degree differential forms on even-dimensional quasiconformal manifolds. Using topological cobordism and K-homology one may provide a full statement of an index theorem on quasiconformal manifolds (see page 678 of the Connes–Sullivan–Teleman paper). The work "provides local constructions for characteristic classes based on higher dimensional relatives of the measurable Riemann mapping in dimension two and the Yang–Mills theory in dimension four." These results constitute significant advances along the lines of Singer's program Prospects in Mathematics. At the same time, they provide, also, an effective construction of the rational Pontryagin classes on topological manifolds. The paper provides a link between Thom's original construction of the rational Pontryagin classes and index theory. It is important to mention that the index formula is a topological statement. The obstruction theories due to Milnor, Kervaire, Kirby, Siebenmann, Sullivan, Donaldson show that only a minority of topological manifolds possess differentiable structures and these are not necessarily unique. Sullivan's result on Lipschitz and quasiconformal structures shows that any topological manifold in dimension different from 4 possesses such a structure which is unique (up to isotopy close to identity). The quasiconformal structures and more generally the Lp-structures, p > n(n+1)/2, introduced by M.
Hilsum, are the weakest analytical structures on topological manifolds of dimension n for which the index theorem is known to hold. Other extensions The Atiyah–Singer theorem applies to elliptic pseudodifferential operators in much the same way as for elliptic differential operators. In fact, for technical reasons most of the early proofs worked with pseudodifferential rather than differential operators: their extra flexibility made some steps of the proofs easier. Instead of working with an elliptic operator between two vector bundles, it is sometimes more convenient to work with an elliptic complex of vector bundles. The difference is that the symbols now form an exact sequence (off the zero section). In the case when there are just two non-zero bundles in the complex this implies that the symbol is an isomorphism off the zero section, so an elliptic complex with 2 terms is essentially the same as an elliptic operator between two vector bundles. Conversely the index theorem for an elliptic complex can easily be reduced to the case of an elliptic operator: the two vector bundles are given by the sums of the even or odd terms of the complex, and the elliptic operator is the sum of the operators of the elliptic complex and their adjoints, restricted to the sum of the even bundles. If the manifold is allowed to have boundary, then some restrictions must be put on the domain of the elliptic operator in order to ensure a finite index. These conditions can be local (like demanding that the sections in the domain vanish at the boundary) or more complicated global conditions (like requiring that the sections in the domain solve some differential equation). The local case was worked out by Atiyah and Bott, but they showed that many interesting operators (e.g., the signature operator) do not admit local boundary conditions. To handle these operators, Atiyah, Patodi and Singer introduced global boundary conditions equivalent to attaching a cylinder to the manifold along the boundary and then restricting the domain to those sections that are square integrable along the cylinder. This point of view is adopted in the proof of the Atiyah–Patodi–Singer index theorem. Instead of just one elliptic operator, one can consider a family of elliptic operators parameterized by some space Y. In this case the index is an element of the K-theory of Y, rather than an integer. If the operators in the family are real, then the index lies in the real K-theory of Y. This gives a little extra information, as the map from the real K-theory of Y to the complex K-theory is not always injective. If there is a group action of a group G on the compact manifold X, commuting with the elliptic operator, then one replaces ordinary K-theory with equivariant K-theory. Moreover, one gets generalizations of the Lefschetz fixed-point theorem, with terms coming from fixed-point submanifolds of the group G. See also: equivariant index theorem. Atiyah showed how to extend the index theorem to some non-compact manifolds, acted on by a discrete group with compact quotient. The kernel of the elliptic operator is in general infinite dimensional in this case, but it is possible to get a finite index using the dimension of a module over a von Neumann algebra; this index is in general real rather than integer valued. This version is called the L2 index theorem, and was used by Atiyah and Schmid to rederive properties of the discrete series representations of semisimple Lie groups.
The Callias index theorem is an index theorem for a Dirac operator on a noncompact odd-dimensional space. The Atiyah–Singer index is only defined on compact spaces, and vanishes when their dimension is odd. In 1978 Constantine Callias, at the suggestion of his Ph.D. advisor Roman Jackiw, used the axial anomaly to derive this index theorem on spaces equipped with a Hermitian matrix called the Higgs field. The index of the Dirac operator is a topological invariant which measures the winding of the Higgs field on a sphere at infinity. If U is the unit matrix in the direction of the Higgs field, then the index is proportional to the integral of U(dU)^(n−1) over the (n−1)-sphere at infinity. If n is even, it is always zero. The topological interpretation of this invariant and its relation to the Hörmander index proposed by Boris Fedosov, as generalized by Lars Hörmander, was published by Raoul Bott and Robert Thomas Seeley. Examples Chern-Gauss-Bonnet theorem Suppose that X is a compact oriented manifold of even dimension n = 2m. If we take E to be the sum of the even exterior powers of the cotangent bundle, and F to be the sum of the odd powers, and define D = d + d*, considered as a map from E to F, then the analytical index of D is the Euler characteristic of the Hodge cohomology of X, and the topological index is the integral of the Euler class over the manifold. The index formula for this operator yields the Chern–Gauss–Bonnet theorem. The concrete computation goes as follows: according to one variation of the splitting principle, if V is a real vector bundle of dimension 2m, in order to prove assertions involving characteristic classes, we may suppose that there are complex line bundles l1, ..., lm such that V ⊗ C = l1 ⊕ l̄1 ⊕ ⋯ ⊕ lm ⊕ l̄m. Therefore, we can consider the Chern roots x1 = c1(l1), ..., xm = c1(lm), the roots of V ⊗ C occurring in pairs ±xi. Using Chern roots as above and the standard properties of the Euler class, we have that e(TX) = x1 ⋯ xm. As for the Chern character and the Todd class, ch(E) − ch(F) = ∏i (1 − e^(xi))(1 − e^(−xi)) and Td(TX ⊗ C) = ∏i [xi/(1 − e^(−xi))][−xi/(1 − e^(xi))]. Applying the index theorem, χ(X) = ∫X e(TX), which is the "topological" version of the Chern-Gauss-Bonnet theorem (the geometric one being obtained by applying the Chern-Weil homomorphism). Hirzebruch–Riemann–Roch theorem Take X to be a complex manifold of (complex) dimension n with a holomorphic vector bundle V. We let the vector bundles E and F be the sums of the bundles of differential forms with coefficients in V of type (0, i) with i even or odd, and we let the differential operator D be the sum ∂̄ + ∂̄* restricted to E. This derivation of the Hirzebruch–Riemann–Roch theorem is more natural if we use the index theorem for elliptic complexes rather than elliptic operators. We can take the complex to be the Dolbeault complex 0 → V → V ⊗ Λ^(0,1)T*X → V ⊗ Λ^(0,2)T*X → ⋯ with the differential given by ∂̄. Then the ith cohomology group is just the coherent cohomology group Hi(X, V), so the analytical index of this complex is the holomorphic Euler characteristic of V: χ(X, V) = Σi (−1)^i dim Hi(X, V). Since we are dealing with complex bundles, the computation of the topological index is simpler. Using Chern roots and doing similar computations as in the previous example, the Euler class is given by e(TX) = x1 ⋯ xn, and the relevant combination of Chern character and Todd class reduces to ch(V)Td(X). Applying the index theorem, we obtain the Hirzebruch-Riemann-Roch theorem: χ(X, V) = ∫X ch(V)Td(X). In fact we get a generalization of it to all complex manifolds: Hirzebruch's proof only worked for projective complex manifolds X.
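As a concrete instance of the formula just obtained (a standard example, not part of the article), take X a compact Riemann surface of genus g and V = L a holomorphic line bundle. Then

\[
\chi(X,L)=\int_X \operatorname{ch}(L)\,\operatorname{Td}(X)
=\int_X \bigl(1+c_1(L)\bigr)\Bigl(1+\tfrac{1}{2}c_1(TX)\Bigr)
=\deg L+\tfrac{1}{2}(2-2g)=\deg L+1-g,
\]

which is the classical Riemann–Roch theorem.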
Hirzebruch signature theorem The Hirzebruch signature theorem states that the signature of a compact oriented manifold X of dimension 4k is given by the L genus of the manifold. This follows from the Atiyah–Singer index theorem applied to the following signature operator. The bundles E and F are given by the +1 and −1 eigenspaces of the operator on the bundle of differential forms of X that acts on k-forms as i^(k(k−1)) times the Hodge star operator. The operator D is the Hodge–de Rham operator d + d* restricted to E, where d is the Cartan exterior derivative and d* is its adjoint. The analytic index of D is the signature of the manifold X, and its topological index is the L genus of X, so these are equal. Â genus and Rochlin's theorem The Â genus is a rational number defined for any manifold, but is in general not an integer. Borel and Hirzebruch showed that it is integral for spin manifolds, and an even integer if in addition the dimension is 4 mod 8. This can be deduced from the index theorem, which implies that the Â genus for spin manifolds is the index of a Dirac operator. The extra factor of 2 in dimensions 4 mod 8 comes from the fact that in this case the kernel and cokernel of the Dirac operator have a quaternionic structure, so as complex vector spaces they have even dimensions, so the index is even. In dimension 4 this result implies Rochlin's theorem that the signature of a 4-dimensional spin manifold is divisible by 16: this follows because in dimension 4 the Â genus is minus one eighth of the signature. Proof techniques Pseudodifferential operators Pseudodifferential operators can be explained easily in the case of constant coefficient operators on Euclidean space. In this case, constant coefficient differential operators are just the Fourier transforms of multiplication by polynomials, and constant coefficient pseudodifferential operators are just the Fourier transforms of multiplication by more general functions. Many proofs of the index theorem use pseudodifferential operators rather than differential operators. The reason for this is that for many purposes there are not enough differential operators. For example, a pseudoinverse of an elliptic differential operator of positive order is not a differential operator, but is a pseudodifferential operator. Also, there is a direct correspondence between data representing elements of K(B(X), S(X)) (clutching functions) and symbols of elliptic pseudodifferential operators. Pseudodifferential operators have an order, which can be any real number or even −∞, and have symbols (which are no longer polynomials on the cotangent space), and elliptic differential operators are those whose symbols are invertible for sufficiently large cotangent vectors. Most versions of the index theorem can be extended from elliptic differential operators to elliptic pseudodifferential operators. Cobordism The initial proof was based on that of the Hirzebruch–Riemann–Roch theorem (1954), and involved cobordism theory and pseudodifferential operators. The idea of this first proof is roughly as follows. Consider the ring generated by pairs (X, V) where V is a smooth vector bundle on the compact smooth oriented manifold X, with relations that the sum and product of the ring on these generators are given by disjoint union and product of manifolds (with the obvious operations on the vector bundles), and any boundary of a manifold with vector bundle is 0. This is similar to the cobordism ring of oriented manifolds, except that the manifolds also have a vector bundle. The topological and analytical indices are both reinterpreted as functions from this ring to the integers. Then one checks that these two functions are in fact both ring homomorphisms.
In order to prove they are the same, it is then only necessary to check they are the same on a set of generators of this ring. Thom's cobordism theory gives a set of generators; for example, complex projective spaces with the trivial bundle together with certain bundles over even dimensional spheres. So the index theorem can be proved by checking it on these particularly simple cases. K-theory Atiyah and Singer's first published proof used K-theory rather than cobordism. If i is any inclusion of compact manifolds from X to Y, they defined a 'pushforward' operation i! on elliptic operators of X to elliptic operators of Y that preserves the index. By taking Y to be some sphere that X embeds in, this reduces the index theorem to the case of spheres. If Y is a sphere and X is some point embedded in Y, then any elliptic operator on Y is the image under i! of some elliptic operator on the point. This reduces the index theorem to the case of a point, where it is trivial. Heat equation Atiyah, Bott, and Patodi gave a new proof of the index theorem using the heat equation; the proof has also appeared in several later expositions. If D is a differential operator with adjoint D*, then D*D and DD* are self adjoint operators whose non-zero eigenvalues have the same multiplicities. However their zero eigenspaces may have different multiplicities, as these multiplicities are the dimensions of the kernels of D and D*. Therefore, the index of D is given by Index(D) = Tr(exp(−tD*D)) − Tr(exp(−tDD*)) for any positive t. The right hand side is given by the trace of the difference of the kernels of two heat operators. These have an asymptotic expansion for small positive t, which can be used to evaluate the limit as t tends to 0, giving a proof of the Atiyah–Singer index theorem. The asymptotic expansions for small t appear very complicated, but invariant theory shows that there are huge cancellations between the terms, which makes it possible to find the leading terms explicitly. These cancellations were later explained using supersymmetry.
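The t-independence of that trace difference (the McKean–Singer identity) can be illustrated in finite dimensions, where D is just a matrix; the sketch below is an illustration under that simplification, not part of the article:

import numpy as np
from scipy.linalg import expm

# For D: R^n -> R^m the nonzero eigenvalues of D*D and DD* agree with
# multiplicity, so Tr exp(-t D*D) - Tr exp(-t DD*) equals
# dim Ker(D) - dim Ker(D*) for every t > 0 -- here n - m = 3.
rng = np.random.default_rng(0)
m, n = 4, 7
D = rng.standard_normal((m, n))           # generic, hence full rank min(m, n)
for t in (0.1, 1.0, 10.0):
    supertrace = np.trace(expm(-t * (D.T @ D))) - np.trace(expm(-t * (D @ D.T)))
    print(t, round(supertrace, 10))       # 3.0 regardless of t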
See also Citations References The papers by Atiyah are reprinted in volumes 3 and 4 of his collected works. This reformulates the result as a sort of Lefschetz fixed-point theorem, using equivariant K-theory. An announcement of the index theorem. This gives a proof using K-theory instead of cohomology. This paper shows how to convert from the K-theory version to a version using cohomology. This paper studies families of elliptic operators, where the index is now an element of the K-theory of the space parametrizing the family. This studies families of real (rather than complex) elliptic operators, when one can sometimes squeeze out a little extra information. This states a theorem calculating the Lefschetz number of an endomorphism of an elliptic complex. These give the proofs and some applications of the results announced in the previous paper. This gives an elementary proof of the index theorem for the Dirac operator, using the heat equation and supersymmetry. Bismut proves the theorem for elliptic complexes using probabilistic methods, rather than heat equation methods. Reprinted in volume 1 of his collected works, p. 65–75. On page 120 Gel'fand suggests that the index of an elliptic operator should be expressible in terms of topological data. Free online textbook that proves the Atiyah–Singer theorem with a heat equation approach. Free online textbook. This describes the original proof of the theorem (Atiyah and Singer never published their original proof themselves, but only improved versions of it.) Personal accounts on Atiyah, Bott, Hirzebruch and Singer. External links Links on the theory Pdf presentation. Links of interviews R. T. Seeley and others (1999) Recollections from the early days of index theory and pseudo-differential operators - A partial transcript of informal post-dinner conversation during a symposium held in Roskilde, Denmark, in September 1998. Differential operators Elliptic partial differential equations Theorems in differential geometry
Atiyah–Singer index theorem
[ "Mathematics" ]
6,088
[ "Theorems in differential geometry", "Mathematical analysis", "Differential operators", "Theorems in geometry" ]
324,772
https://en.wikipedia.org/wiki/Eurotra
Eurotra was a machine translation project established and funded by the European Commission from 1978 until 1992. History In 1976, the European Commission started using the commercially developed machine translation system SYSTRAN with a plan to make it work for further languages than originally developed for (Russian-English and English-French), which however turned out to be difficult. This, together with the potential of existing systems within European research centres, led to the decision in 1978 to start the project Eurotra, first through a preparatory Eurotra Coordination Group. Four years later, the European Commission and coordination group gained the approval of the European Parliament. The goal of the project was to create a machine translation system for the official languages of the European Community, which at the time were Danish, Dutch, German, English, French, Italian, later including Greek, Spanish and Portuguese. However, as time passed, expectations became tempered; "Fully Automatic High Quality Translation" was not a reasonably attainable goal. The true character of Eurotra was eventually acknowledged to be in fact pre-competitive research rather than prototype development. The project was motivated by one of the founding principles of the EU: that all citizens had the right to read any and all proceedings of the Commission in their own language. As more countries joined, this produced a combinatorial explosion in the number of language pairs involved, and the need to translate every paper, speech and even set of meeting minutes produced by the EU into the other eight languages meant that translation rapidly became the overwhelming component in the administrative budget. To solve this problem Eurotra was devised. The project was unusual in that rather than consisting of a single research team, it had member groups distributed around the member countries, organised along language rather than national lines (for example, groups in Leuven and Utrecht worked closely together), and the secretariat was based at the European Commission in Luxembourg. The actual design of the project was unusual as MT projects go. Older systems, such as SYSTRAN, were heavily dictionary-based, with minor support for rearranging word order. More recent systems have often worked on a probabilistic approach, based on parallel corpora. Eurotra addressed the constituent structure of the text to be translated, going through first a syntactic parse followed by a second parse to produce a dependency structure followed by a final parse with a third grammar to produce what was referred to internally as Intermediate Representation (IR). Since all three modules were implemented as Prolog programs, it would then in principle be possible to put this structure backwards through the corresponding modules for another language to produce a translated text in any of the other languages. However, in practice this was not in fact how language pairs were implemented. The first "live" translation occupied a 4Mb Microvax running Ultrix and C-Prolog for a complete weekend some time in early 1987. The sentence, translated from English into Danish, was "Japan makes computers". The main problem faced by the system was the generation of so-called "Parse Forests" - often a large number of different grammar rules could be applied to any particular phrase, producing hundreds, even thousands of (often identical) parse trees. This used up huge quantities of computer store, slowing the whole process down unnecessarily.
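The staged analysis-then-generation design described above can be sketched abstractly; the toy pipeline below is purely illustrative, and every name in it is invented here (it bears no relation to Eurotra's actual Prolog modules or grammars):

# Toy sketch of a transfer-style pipeline: analyse a sentence into an
# intermediate representation (IR), then generate from the IR in the
# target language. All names here are hypothetical illustrations.

def analyze(sentence: str) -> dict:
    """Stand-in for the source-side grammars producing an IR."""
    tokens = sentence.rstrip(".").split()
    # Naive subject-verb-object assumption, enough for the demo sentence.
    return {"pred": tokens[1], "args": {"agent": tokens[0], "theme": tokens[2]}}

def generate(ir: dict, lexicon: dict) -> str:
    """Stand-in for running the target-side grammars backwards from the IR."""
    tr = lambda w: lexicon.get(w, w)
    return f'{tr(ir["args"]["agent"])} {tr(ir["pred"])} {tr(ir["args"]["theme"])}.'

# The historical first sentence, English -> Danish (illustrative gloss).
danish = {"makes": "laver", "computers": "computere"}
print(generate(analyze("Japan makes computers."), danish))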
While Eurotra never delivered a "working" MT system, the project made a far-reaching long-term impact on the nascent language industries in European member states, in particular among the southern countries of Greece, Italy, Spain, and Portugal. There is at least one commercial MT system (developed by an academic/commercial consortium in Denmark) derived from Eurotra technology. See also Apertium Google Translate References External links Eurotra Spain Eurotra Utrecht Machine translation
Eurotra
[ "Technology" ]
756
[ "Machine translation", "Natural language and computing" ]
324,806
https://en.wikipedia.org/wiki/Sporadic%20group
In the mathematical classification of finite simple groups, there are a number of groups which do not fit into any infinite family. These are called the sporadic simple groups, or the sporadic finite groups, or just the sporadic groups. A simple group is a group G that does not have any normal subgroups except for the trivial group and G itself. The classification theorem states that the list of finite simple groups consists of 18 countably infinite families plus 26 exceptions that do not follow such a systematic pattern. These 26 exceptions are the sporadic groups. The Tits group is sometimes regarded as a sporadic group because it is not strictly a group of Lie type, in which case there would be 27 sporadic groups. The monster group, or friendly giant, is the largest of the sporadic groups, and all but six of the other sporadic groups are subquotients of it. Names Five of the sporadic groups were discovered by Émile Mathieu in the 1860s and the other twenty-one were found between 1965 and 1975. Several of these groups were predicted to exist before they were constructed. Most of the groups are named after the mathematician(s) who first predicted their existence. The full list is: Mathieu groups M11, M12, M22, M23, M24 Janko groups J1, J2 or HJ, J3 or HJM, J4 Conway groups Co1, Co2, Co3 Fischer groups Fi22, Fi23, Fi24′ or F3+ Higman-Sims group HS McLaughlin group McL Held group He or F7+ or F7 Rudvalis group Ru Suzuki group Suz or F3− O'Nan group O'N (ON) Harada-Norton group HN or F5+ or F5 Lyons group Ly Thompson group Th or F3|3 or F3 Baby Monster group B or F2+ or F2 Fischer-Griess Monster group M or F1 Various constructions for these groups were first compiled in the ATLAS of Finite Groups, including character tables, individual conjugacy classes and lists of maximal subgroups, as well as Schur multipliers and orders of their outer automorphisms. These are also listed online at the ATLAS of Finite Group Representations, updated with their group presentations and semi-presentations. The degrees of minimal faithful representation or Brauer characters over fields of characteristic p ≥ 0 for all sporadic groups have also been calculated, and for some of their covering groups; these are detailed in the literature. A further exception in the classification of finite simple groups is the Tits group T, which is sometimes considered of Lie type or sporadic — it is almost but not strictly a group of Lie type — which is why in some sources the number of sporadic groups is given as 27, instead of 26. In some other sources, the Tits group is regarded as neither sporadic nor of Lie type, or both. The Tits group is the member 2F4(2)′ of the infinite family of commutator groups 2F4(2^(2n+1))′; thus in a strict sense it is not sporadic, nor of Lie type. For n ≥ 1 these finite simple groups coincide with the groups of Lie type, also known as the Ree groups of type 2F4. The earliest use of the term sporadic group may be by Burnside, where he comments about the Mathieu groups: "These apparently sporadic simple groups would probably repay a closer examination than they have yet received." (At the time, the other sporadic groups had not been discovered.) The diagram at right does not show the numerous non-sporadic simple subquotients of the sporadic groups. Organization Happy Family Of the 26 sporadic groups, 20 can be seen inside the monster group as subgroups or quotients of subgroups (sections). These twenty have been called the happy family by Robert Griess, and can be organized into three generations.
First generation (5 groups): the Mathieu groups Mn for n = 11, 12, 22, 23 and 24 are multiply transitive permutation groups on n points. They are all subgroups of M24, which is a permutation group on 24 points. Second generation (7 groups): the Leech lattice All the subquotients of the automorphism group of a lattice in 24 dimensions called the Leech lattice: Co1 is the quotient of the automorphism group by its center {±1} Co2 is the stabilizer of a type 2 (i.e., length 2) vector Co3 is the stabilizer of a type 3 (i.e., length √6) vector Suz is the group of automorphisms preserving a complex structure (modulo its center) McL is the stabilizer of a type 2-2-3 triangle HS is the stabilizer of a type 2-3-3 triangle J2 is the group of automorphisms preserving a quaternionic structure (modulo its center). Third generation (8 groups): other subgroups of the Monster Consists of subgroups which are closely related to the Monster group M: B or F2 has a double cover which is the centralizer of an element of order 2 in M Fi24′ has a triple cover which is the centralizer of an element of order 3 in M (in conjugacy class "3A") Fi23 is a subgroup of Fi24′ Fi22 has a double cover which is a subgroup of Fi23 The product of Th = F3 and a group of order 3 is the centralizer of an element of order 3 in M (in conjugacy class "3C") The product of HN = F5 and a group of order 5 is the centralizer of an element of order 5 in M The product of He = F7 and a group of order 7 is the centralizer of an element of order 7 in M. Finally, the Monster group itself is considered to be in this generation. (This series continues further: the product of M12 and a group of order 11 is the centralizer of an element of order 11 in M.) The Tits group, if regarded as a sporadic group, would belong in this generation: there is a subgroup S4 ×2F4(2)′ normalising a 2C2 subgroup of B, giving rise to a subgroup 2·S4 ×2F4(2)′ normalising a certain Q8 subgroup of the Monster. 2F4(2)′ is also a subquotient of the Fischer group Fi22, and thus also of Fi23 and Fi24′, and of the Baby Monster B. 2F4(2)′ is also a subquotient of the (pariah) Rudvalis group Ru, and has no involvements in sporadic simple groups except the ones already mentioned. Pariahs The six exceptions are J1, J3, J4, O'N, Ru, and Ly, sometimes known as the pariahs. Table of the sporadic group orders (with Tits group) Notes References Works cited (German) External links Atlas of Finite Group Representations: Sporadic groups Mathematical tables
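As a numerical aside (standard data, appended here as an illustration rather than a reconstruction of the article's missing order table), the order of the monster group can be computed from its well-known prime factorization:

from math import prod

# Prime factorization of |M|, the order of the Fischer-Griess monster group.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                 17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                 47: 1, 59: 1, 71: 1}
order_M = prod(p**e for p, e in factorization.items())
print(order_M)   # roughly 8.08 * 10**53, a 54-digit integer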
Sporadic group
[ "Mathematics" ]
1,458
[ "Mathematical tables" ]
324,807
https://en.wikipedia.org/wiki/Amatol
Amatol is a highly explosive material made from a mixture of TNT and ammonium nitrate. The British name originates from the words ammonium and toluene (the precursor of TNT). Similar mixtures (one part dinitronaphthalene and seven parts ammonium nitrate) were known as Schneiderite in France. Amatol was used extensively during World War I and World War II, typically as an explosive in military weapons such as aircraft bombs, shells, depth charges, and naval mines. It was eventually replaced with alternative explosives such as Composition B, Torpex, and Tritonal. Invention Following the Shell Crisis of 1915 in which the UK did not have enough ordnance due to a lack of explosives, a team at the Royal Arsenal laboratories produced a mixture of ammonium nitrate and TNT, known as Amatol for short. Special factories were constructed for the manufacture of ammonium nitrate by the double decomposition of sodium nitrate and ammonium sulfate in solution followed by evaporative concentration and crystallization. It became the standard filling for shells and bombs, and was later adopted by the US as their principal high explosive. Manufacture and use Amatol exploits synergy between TNT and ammonium nitrate. TNT has higher explosive velocity and brisance, but is deficient in oxygen. Oxygen deficiency causes black smoke residue from a pure TNT explosion. The oxygen surplus of ammonium nitrate increases the energy release of TNT during detonation. Depending on the ratio of ingredients used, amatol leaves a residue of white or grey smoke after detonation. Amatol has a lower explosive velocity and correspondingly lower brisance than TNT but is cheaper because of the lower cost of ammonium nitrate. Amatol allowed supplies of TNT to be expanded considerably, with little reduction in the destructive power of the final product, so long as the amount of TNT in the mixture did not fall below 60%. Mixtures containing as little as 20% TNT were used for less demanding applications. TNT is 50% deficient in oxygen. Amatol is oxygen balanced and is therefore more effective than pure TNT when exploding underground or underwater. Relatively unsophisticated cannery equipment can be adapted to amatol production. TNT is gently heated with steam or hot water until it melts, acquiring the physical characteristics of a syrup. Then the correct weight ratio of powdered ammonium nitrate is added and mixed in. Whilst this mixture is still in a molten state, it is poured into empty bomb casings and allowed to cool and solidify. The lowest grades of amatol could not be produced by casting molten TNT. Instead, flaked TNT was thoroughly mixed with powdered ammonium nitrate and then compressed or extruded. Amatol ranges from off-white to slightly yellow or pinkish brown depending on the mixture used, and remains soft for long periods of storage. It is hygroscopic, which complicates long-term storage. To prevent moisture problems, amatol charges were coated with a thin layer of pure molten TNT or alternatively bitumen. Long-term storage was rare during wars because munitions charged with amatol were generally used soon after manufacture. Amatol should not be stored in containers made from copper or brass, as it can form unstable compounds sensitive to vibration. Pressed, it is relatively insensitive but may be detonated by severe impact, whereas when cast, it is extremely insensitive. Primary explosives such as mercury fulminate were often used as a detonator, in combination with an explosive booster charge such as tetryl.
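The oxygen-balance argument above can be made quantitative with the standard textbook formula OB% = −1600(2C + H/2 − O)/MW for a compound CxHyNzOw; the sketch below is a rough illustration of that arithmetic, not a formulation recipe:

# Estimate the TNT mass fraction at which an amatol mixture is oxygen
# balanced, using the standard oxygen-balance formula. The molecular
# data are computed from the formulas of TNT and ammonium nitrate.

def oxygen_balance(C, H, O, mw):
    return -1600.0 * (2*C + H/2 - O) / mw

ob_tnt = oxygen_balance(C=7, H=5, O=6, mw=227.13)   # TNT, C7H5N3O6  (~ -74%)
ob_an  = oxygen_balance(C=0, H=4, O=3, mw=80.04)    # NH4NO3        (~ +20%)

# Solve x*ob_tnt + (1 - x)*ob_an = 0 for the TNT mass fraction x.
x = ob_an / (ob_an - ob_tnt)
print(f"TNT {x:.0%} / AN {1 - x:.0%}")   # about 21% / 79%, near the 20/80 grade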
The explosive charges hidden in HMS Campbeltown during the St. Nazaire Raid of 1942 contained amatol. The British X class midget submarines which planted explosive charges beneath the German battleship Tirpitz in September 1943 carried two "saddle charges" containing four tons of amatol. Warheads for the German V-1 flying bomb and V-2 rockets also contained amatol. A derivative of amatol is amatex, consisting of 51% ammonium nitrate, 40% TNT, and 9% RDX (which also has a negative oxygen balance). Ammonite Amatol is rare today, except in legacy munitions or unexploded ordnance. Ammonite, a form of amatol, is a civil engineering explosive popular in Eastern Europe and China. Generally comprising a 20/80 mixture of TNT and ammonium nitrate, it is typically used for quarrying or mining. Because the proportion of TNT is significantly lower than in its military counterpart, ammonite has much less destructive power. In general, a 30 kilogram charge of ammonite is roughly equivalent to 20 kilograms of TNT. Amatol, New Jersey Amatol was the name given to a munitions factory and planned community built by the United States government in Mullica Township, New Jersey during World War I. After the war, the town was dismantled. The Atlantic City Speedway was built on part of the Amatol site in 1926. The site (including the speedway) is presently (2020) abandoned. See also Ammonal Minol Hexanite RE factor References Sources Explosives Trinitrotoluene British inventions
Amatol
[ "Chemistry" ]
1,061
[ "Explosive chemicals", "Trinitrotoluene", "Explosives", "Explosions" ]
324,828
https://en.wikipedia.org/wiki/Bimonster%20group
In mathematics, the bimonster is a group that is the wreath product of the monster group M with Z2: M ≀ Z2. The Bimonster is also a quotient of the Coxeter group corresponding to the Dynkin diagram Y555, a Y-shaped graph with 16 nodes: Actually, the 3 outermost nodes are redundant. This is because the subgroup Y124 is the E8 Coxeter group. It generates the remaining node of Y125. This pattern extends all the way to Y444: it automatically generates the 3 extra nodes of Y555. John H. Conway conjectured that a presentation of the bimonster could be given by adding a certain extra relation to the presentation defined by the Y444 diagram. More specifically, the affine E6 Coxeter group is Y222, which can be reduced to a finite group by adding a single relation called the spider relation. Once this relation is added, and the diagram is extended to Y444, the group generated is the bimonster. This was proved in 1990 by Simon P. Norton; the proof was simplified in 1999 by A. A. Ivanov. Other Y-groups Many subgroups of the (bi)monster can be defined by adjoining the spider relation to smaller Coxeter diagrams, most notably the Fischer groups and the baby monster group. The groups Yij0, Yij1, Y122, Y123, and Y124 are finite even without adjoining additional relations. They are the Coxeter groups Ai+j+1, Di+j, E6, E7, and E8, respectively. Other groups, which would be infinite without the spider relation, are summarized below: See also Triality - simple Lie group D4, Y111 References External links Group theory
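For concreteness (standard group theory, not verbatim from the article), the wreath product mentioned at the start unpacks as

\[
M \wr \mathbf{Z}_2 \;=\; (M \times M) \rtimes \mathbf{Z}_2,
\]

where the nontrivial element of Z2 acts by exchanging the two direct factors, so the bimonster has order 2|M|².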
Bimonster group
[ "Mathematics" ]
395
[ "Group theory", "Fields of abstract algebra" ]
324,834
https://en.wikipedia.org/wiki/HeLa
HeLa is an immortalized cell line used in scientific research. It is the oldest human cell line and one of the most commonly used. HeLa cells are durable and prolific, allowing for extensive applications in scientific study. The line is derived from cervical cancer cells taken on February 8, 1951, from Henrietta Lacks, a 31-year-old African American mother of five, after whom the line is named. Lacks died of cancer on October 4, 1951. The cells from Lacks's cancerous cervical tumor were taken without her knowledge, which was common practice in the United States at the time. Cell biologist George Otto Gey found that they could be kept alive, and developed a cell line. Previously, cells cultured from other human tissue samples would survive for only a few days, but cells from Lacks's tumor behaved differently. History Origin In 1951, Henrietta Lacks was admitted to the Johns Hopkins Hospital with symptoms of irregular vaginal bleeding; she was subsequently treated for cervical cancer. Her first treatment was performed by Lawrence Wharton Jr., who at that time collected tissue samples from her cervix without her consent. Her cervical biopsy supplied samples of tissue for clinical evaluation and research by George Otto Gey, head of the Tissue Culture Laboratory. Gey's lab assistant Mary Kubicek used the roller-tube technique to culture the cells. It was observed that the cells grew robustly, doubling every 20–24 hours, unlike previous specimens, which died out. The cells were propagated by Gey shortly before Lacks died of her cancer in 1951. This was the first human cell line to prove successful in vitro, which was a scientific achievement with profound future benefit to medical research. Gey freely donated these cells, along with the tools and processes that his lab developed, to any scientist requesting them, simply for the benefit of science. Neither Lacks nor her family gave permission to harvest the cells. The cells were later commercialized, although never patented in their original form. There was no requirement at that time to inform patients or their relatives about such matters, because discarded material or material obtained during surgery, diagnosis, or therapy was considered the property of the physician or the medical institution. As was customary for Gey's lab assistant, the culture was named after the first two letters of Henrietta Lacks' first and last names, He + La. Before a 1973 query printed in the journal Nature obtained her real name, the "HeLa" cell line was incorrectly attributed to a "Helen Lane" or "Helen Larson". The origin of this obfuscation is unclear. In 1973, when contamination by HeLa cells was raised as a serious issue, a staff physician at Johns Hopkins contacted the Lacks family, seeking DNA samples to help identify contaminating cell lines. The family never understood the purpose of the visit, but they were distressed by their understanding of what the researchers told them. These cells are treated as cancer cells, as they are descended from a biopsy taken from a visible lesion on the cervix as part of Lacks's diagnosis of cancer. HeLa cells, like other cell lines, are termed "immortal" because they can divide an unlimited number of times in a laboratory cell culture plate, as long as fundamental cell survival conditions are met (i.e. being maintained and sustained in a suitable environment). There are many strains of HeLa cells, because they mutate during division in cell cultures, but all HeLa cells are descended from the same tumor cells removed from Lacks.
The total number of HeLa cells that have been propagated in cell culture far exceeds the total number of cells that were in Henrietta Lacks's body. Controversy Lacks's case is one of many examples of the lack of informed consent in 20th-century medicine. Communication between tissue donors and doctors was virtually nonexistent: cells were taken without patient consent, and patients were not told what the cells would be used for. Johns Hopkins Hospital, where Lacks received treatment and had her tissue harvested, was the only hospital in the Baltimore area where African American patients could receive free care. The patients who received free care from this segregated section of the hospital often became research subjects without their knowledge. Lacks's family also had no access to her patient files and had no say in who received HeLa cells or what they would be used for. Additionally, as HeLa cells were popularized and used more frequently throughout the scientific community, Lacks's relatives received no financial benefit and continued to live with limited access to healthcare. This issue of who owns tissue samples taken for research was brought up in the Supreme Court of California case of Moore v. Regents of the University of California in 1990. The court ruled that a person's discarded tissue and cells are not his or her property and can be commercialized. Lacks's case influenced the establishment of the Common Rule in 1991. The Common Rule enforces informed consent by ensuring that doctors inform patients if they plan to use any details of the patient's case in research and give them the choice of disclosing the details or not. Tissues connected to their donors' names are also strictly regulated under this rule, and samples are no longer named using donors' initials, but rather by code numbers. To further resolve the issue of patient privacy, Johns Hopkins established a joint committee with the NIH and several of Lacks's family members to determine who receives access to Henrietta Lacks's genome. In 2021, Henrietta Lacks's estate sued to get past and future payments for the alleged unauthorized and widely known sale of HeLa cells by Thermo Fisher Scientific. Lacks's family hired an attorney to seek compensation from upwards of 100 pharmaceutical companies that have used and profited from HeLa cells. Settlement of the suit with Thermo Fisher Scientific was announced in August 2023, with undisclosed terms. Subsequently, the Lacks family announced that it would be suing the company Ultragenyx next. Use in research HeLa cells were the first human cells to be successfully cloned in 1953, by Theodore Puck and Philip I. Marcus at the University of Colorado, Denver. Since then, HeLa cells have "continually been used for research into cancer, AIDS, the effects of radiation and toxic substances, gene mapping, and countless other scientific pursuits." According to author Rebecca Skloot, by 2009, "more than 60,000 scientific articles had been published about research done on HeLa [cells], and that number was increasing steadily at a rate of more than 300 papers each month." Polio eradication HeLa cells were used by Jonas Salk to test the first polio vaccine in the 1950s. They were observed to be easily infected by the poliomyelitis virus, causing infected cells to die. This made HeLa cells highly desirable for polio vaccine testing, since results could be easily obtained.
A large volume of HeLa cells were needed for the testing of Salk's polio vaccine, prompting the National Foundation for Infantile Paralysis (NFIP) to find a facility capable of mass-producing HeLa cells. In the spring of 1953, a cell culture factory was established at Tuskegee University to supply Salk and other labs with HeLa cells. Less than a year later, Salk's vaccine was ready for human trials. Virology HeLa cells have been used in testing how parvovirus infects cells of humans, dogs, and cats. These cells have also been used to study viruses such as the oropouche virus (OROV). OROV causes disruption of cells in culture; the cells start to degenerate shortly after they are infected, causing viral induction of apoptosis. HeLa cells have been used to study expression of the papillomavirus E2 and apoptosis. HeLa cells have also been used to study the ability of the canine distemper virus to induce apoptosis in cancer cell lines, which could play an important role in developing treatments for tumor cells resistant to radiation and chemotherapy. HeLa cells have also been instrumental in the development of human papilloma virus (HPV) vaccines. In the 1980s, Harald zur Hausen found that Lacks's cells from the original biopsy contained HPV-18, which was later found to be the cause of the aggressive cancer that had killed her. His work in linking HPV with cervical cancer won him a Nobel Prize and led to the development of HPV vaccines, which are predicted to reduce the number of deaths from cervical cancer by 70%. Over the years, HeLa cells have been infected with various types of viruses, including HIV, Zika, mumps, and herpes viruses to test and develop new vaccines and drugs. Dr. Richard Axel discovered that the addition of the CD4 protein to HeLa cells enabled them to be infected with HIV, allowing the virus to be studied. In 1979, scientists learned that the measles virus constantly mutates when it infects HeLa cells, and in 2019 they found that Zika cannot multiply in HeLa cells. Cancer HeLa cells have been used in a number of cancer studies, including those involving sex steroid hormones, such as estradiol and estrogen, and estrogen receptors, along with estrogen-like compounds, such as quercetin, which has cancer-reducing properties. There have also been studies on HeLa cells, involving the effects of flavonoids and antioxidants with estradiol on cancer cell proliferation. In 2011, HeLa cells were used in tests of novel heptamethine dyes IR-808 and other analogues, which are currently being explored for their unique uses in medical diagnostics, the individualized treatment of cancer patients with the aid of PDT, co-administration with other drugs, and irradiation. HeLa cells have been used in research involving fullerenes to induce apoptosis as a part of photodynamic therapy, as well as in in vitro cancer research using cell lines. HeLa cells have also been used to define cancer markers in RNA, and have been used to establish an RNAi Based Identification System and Interference of Specific Cancer Cells. In 2014, HeLa cells were shown to provide a viable cell line for tumor xenografts in C57BL/6 nude mice, and were subsequently used to examine the in vivo effects of fluoxetine and cisplatin on cervical cancer. Genetics In 1953, a lab mistake involving mixing HeLa cells with the wrong liquid allowed researchers for the first time to see and count each chromosome clearly in the HeLa cells with which they were working. 
This accidental discovery led scientists Joe Hin Tjio and Albert Levan to develop better techniques for staining and counting chromosomes. They were the first to show that humans have 23 pairs of chromosomes rather than 24, as was previously believed. This was important for the study of developmental disorders, such as Down syndrome, that involve the number of chromosomes. In 1965, Henry Harris and John Watkins created the first human-animal hybrid by fusing HeLa cells with mouse embryo cells. This enabled advances in mapping genes to specific chromosomes, which would eventually lead to the Human Genome Project. Space microbiology In the 1960s, HeLa cells were sent on the Soviet satellite Sputnik-6 and human space missions to determine the long term effects of space travel on living cells and tissues. Scientists discovered that HeLa cells divide more quickly in zero gravity. Analysis Telomerase The HeLa cell line was derived for use in cancer research. These cells proliferate abnormally rapidly, even compared with other cancer cells. Like many other cancer cells, HeLa cells have an active version of telomerase during cell division, which copies telomeres over and over again. This prevents the incremental shortening of telomeres that is implicated in aging and eventual cell death. In this way, the cells circumvent the Hayflick limit, which is the limited number of cell divisions that most normal cells can undergo before becoming senescent. This results in unlimited cell division and immortality. Chromosome number Horizontal gene transfer from human papillomavirus 18 (HPV18) to human cervical cells created the HeLa genome, which is different from Henrietta Lacks's genome in various ways, including the number of chromosomes. HeLa cells are rapidly dividing cancer cells, and the number of chromosomes varies during cancer formation and cell culture. The current estimate (excluding very tiny fragments) is a "hypertriploid chromosome number (3n+)", which means 76 to 80 total chromosomes (rather than the normal diploid number of 46) with 22–25 clonally abnormal chromosomes, known as "HeLa signature chromosomes". The signature chromosomes can be derived from multiple original chromosomes, making summary counts based on original numbering challenging. Researchers have also noted how stable these aberrant karyotypes can be. Studies that combined spectral karyotyping, FISH, and conventional cytogenic techniques have shown that the detected chromosomal aberrations may be representative of advanced cervical carcinomas and were probably present in the primary tumor, since the HeLa genome has remained stable, even after years of continued cultivation. Complete genome sequence The complete genome of HeLa cells was sequenced and published on 11 March 2013, without the Lacks family's knowledge. Concerns were raised by the family, so the authors voluntarily withheld access to the sequence data. Jay Shendure led a HeLa sequencing project at the University of Washington, which resulted in a paper that had been accepted for publication in March 2013 – but that was also put on hold while the Lacks family's privacy concerns were addressed. On 7 August 2013, NIH director Francis Collins announced a policy of controlled access to the cell line genome, based on an agreement reached after three meetings with the Lacks family. 
A data-access committee will review requests from researchers for access to the genome sequence, under the criteria that the study is for medical research and that the users will abide by terms in the HeLa Genome Data Use Agreement, which includes that all NIH-funded researchers will deposit the data in a single database for future sharing. The committee consists of six members, including representatives from the medical, scientific, and bioethics fields, as well as two members of the Lacks family. In an interview, Collins praised the Lacks family's willingness to participate in a situation that was thrust upon them. He described the whole experience with them as "powerful," saying that it brought together "science, scientific history and ethical concerns" in a unique way. Contamination HeLa cells are sometimes difficult to control, because they adapt to growth in tissue culture plates and invade and outcompete other cell lines. Through improper maintenance, they have been known to contaminate other cell cultures in the same laboratory, interfering with biological research and forcing researchers to declare many results invalid. The degree of HeLa cell contamination among other cell types is unknown, because few researchers test the identity or purity of already established cell lines. It has been shown that a substantial fraction of in vitro cell lines are contaminated with HeLa cells; estimates range from 10% to 20%. This observation suggests that any cell line may be susceptible to a degree of contamination. Stanley Gartler (1967) and Walter Nelson-Rees (1975) were the first to publish on contamination of various cell lines by HeLa cells. Gartler noted that "with the continued expansion of cell culture technology, it is almost certain that both interspecific and intraspecific contamination will occur." HeLa cell contamination has become a pervasive worldwide problem – affecting even the laboratories of many notable physicians, scientists, and researchers, including Jonas Salk. The HeLa contamination problem also contributed to Cold War tensions. The USSR and the USA had begun to cooperate in the war on cancer launched by President Richard Nixon, only to find that the exchanged cells were contaminated by HeLa. Rather than focus on how to resolve the problem of HeLa cell contamination, many scientists and science writers continue to document this problem as simply a contamination issue – caused not by human error or shortcomings but by the hardiness, proliferation, or overpowering nature of HeLa cells. Recent data suggest that cross-contamination is still a major problem with modern cell cultures. The International Cell Line Authentication Committee (ICLAC) notes that many cases of cell line misidentification are the result of cross-contamination of the culture by another, faster-growing cell line. This calls into question the validity of the research done using contaminated cell lines, as certain attributes of the contaminant, which may come from an entirely different species or tissue, may be misattributed to the cell line under investigation. New species proposal HeLa cells were described by evolutionary biologist Leigh Van Valen as an example of the contemporary creation of a new species, dubbed Helacyton gartleri, owing to their ability to replicate indefinitely and their non-human number of chromosomes. The species was named after geneticist Stanley M. Gartler, whom Van Valen credits with discovering "the remarkable success of this species". 
His argument for speciation depends on these points: the chromosomal incompatibility of HeLa cells with human cells; the ecological niche of HeLa cells; their ability to persist and expand well beyond the desires of human cultivators; and the possession by HeLa cells of their own clonal karyotype, defining them as a distinct species. Van Valen proposed the new family Helacytidae and the genus Helacyton, and in the same paper proposed a new species for HeLa cells. However, this proposal was not taken seriously by other prominent evolutionary biologists, nor by scientists in other disciplines. Van Valen's argument that HeLa is a new species does not fulfill the criteria for an independent unicellular asexually reproducing species, because of the notorious instability of HeLa's karyotype and their lack of a strict ancestral-descendant lineage. In media The 1997 documentary The Way of All Flesh by Adam Curtis explained the history of HeLa cells and their implications for medicine and society. A 2010 episode of Law & Order, "Immortal", was heavily based on the story of Henrietta Lacks and the HeLa cell line, using the fictional "NaRo" cells as a stand-in. The story of how the HeLa cell line came to be was also the subject of a 2010 episode of the podcast Radiolab. HeLa cells were the subject of a 2010 book by Rebecca Skloot, The Immortal Life of Henrietta Lacks, investigating the historical context of the cell line and how the Lacks family was involved in its use. A 2017 HBO film, The Immortal Life of Henrietta Lacks, was based on the book. The film starred Oprah Winfrey, Sylvia Grace Crim, and Rocky Carroll, with Renee Elise Goldsberry as Henrietta Lacks. Author Rebecca Skloot also appeared as a character in the film, portrayed by Rose Byrne. See also Clonally transmissible cancer Moore v. Regents of the University of California, a case that set precedent for discarded tissue List of contaminated cell lines WI-38 References Further reading External links HeLa (CCL-2 Cells) in the ATCC database HeLa Transfection and Selection Data for HeLa Cells Rebecca Skloot, The Immortal Life of Henrietta Lacks book website with additional features (photo/video/audio) The Henrietta Lacks Foundation, a foundation established to, among other things, help provide scholarship funds and health insurance to Henrietta Lacks's family. "Wonder Woman: The Life, Death, and Life After Death of Henrietta Lacks, Unwitting Heroine of Modern Medical Science" by Van Smith "What's Left of Henrietta Lacks?" by Anne Enright Cell Centered Database – HeLa cell Cellosaurus entry for HeLa The Legacy of Henrietta Lacks Human cell lines Bioethics Johns Hopkins Hospital Cellular senescence 1951 in biotechnology Cervical cancer
HeLa
[ "Technology", "Biology" ]
4,087
[ "Bioethics", "Senescence", "Cellular senescence", "Cellular processes", "Ethics of science and technology" ]
324,863
https://en.wikipedia.org/wiki/Formalism%20%28philosophy%29
The term formalism describes an emphasis on form over content or meaning in the arts, literature, or philosophy. A practitioner of formalism is called a formalist. A formalist, with respect to some discipline, holds that there is no transcendent meaning to that discipline other than the literal content created by a practitioner. For example, formalists within mathematics claim that mathematics is no more than the symbols written down by the mathematician, which are manipulated according to logic and a few elementary rules alone. This is opposed to non-formalists within that field, who hold that some things are inherently true and depend not on the symbols of mathematics so much as on a greater truth. Formalists within a discipline are completely concerned with "the rules of the game," as there is no other external truth that can be achieved beyond those given rules. In this sense, formalism lends itself well to disciplines based upon axiomatic systems. Religion Formalism in religion means an emphasis on ritual and observance over their meanings. Within Christianity, the term legalism is a derogatory term that is loosely synonymous with religious formalism. Law Formalism is a school of thought in law and jurisprudence which assumes that the law is a system of rules that can determine the outcome of any case, without reference to external norms. For example, formalism animates the commonly heard criticism that "judges should apply the law, not make it." To formalism's rival, legal realism, this criticism is incoherent, because legal realism assumes that, at least in difficult cases, all applications of the law will require that a judge refer to external (i.e. non-legal) sources, such as the judge's conception of justice, or commercial norms. Criticism In general in the study of the arts and literature, formalism refers to the style of criticism that focuses on artistic or literary techniques in themselves, in separation from the work's social and historical context. Art criticism Generally speaking, formalism is the concept that everything necessary in a work of art is contained within it. The context for the work, including the reason for its creation, the historical background, and the life of the artist, is not considered to be significant. Examples of formalist aestheticians are Clive Bell, Jerome Stolnitz, and Edward Bullough. Literary criticism In contemporary discussions of literary theory, the school of criticism of I. A. Richards and his followers, traditionally the New Criticism, has sometimes been labelled 'formalist'. The formalist approach, in this sense, is a continuation of aspects of classical rhetoric. Russian formalism was a twentieth-century school, based in Eastern Europe, with roots in linguistic studies and also theorising on fairy tales, in which content is taken as secondary since the tale 'is' the form, the princess 'is' the fairy-tale princess. The arts Poetry In modern poetry, Formalist poets may be considered the opposite of writers of free verse. These are only labels, and rarely sum up matters satisfactorily. 'Formalism' in poetry represents an attachment to poetry that recognises and uses schemes of rhyme and rhythm to create poetic effects and to innovate. To distinguish it from archaic poetry, the term 'neo-formalist' is sometimes used.
See for example: The Formalist, a literary magazine (now defunct) for formalist poetry New Formalism, a movement within the poetry of the United States The New Formalist, a literary magazine for formalist poetry. It was published from 2001 to 2010. Film In film studies, formalism is a trait in filmmaking, which overtly uses the language of film, such as editing, shot composition, camera movement, set design, etc., so as to emphasise graphical (as opposed to diegetic) qualities of the image. Strict formalism, condemned by realist film theorists such as André Bazin, has declined substantially in popularity since the 1950s, though some more postmodern filmmakers reference it to suggest the artificiality of the film experience. Examples of formalist films may include Resnais's Last Year at Marienbad and Parajanov's The Color of Pomegranates. Intellectual method Formalism can be applied to a set of notations and rules for manipulating them that yield results in agreement with experiment or other techniques of calculation. These rules and notations may or may not have a corresponding mathematical semantics. When no mathematical semantics exists, the calculations are often said to be purely formal. See for example scientific formalism. Mathematics In the foundations of mathematics, formalism is associated with a certain rigorous mathematical method: see formal system. In common usage, a formalism means the outcome of an effort towards formalisation of a given limited area. In other words, matters can be formally discussed once captured in a formal system, or commonly enough within something formalisable with claims to be one. Complete formalisation is in the domain of computer science. Formalism also more precisely refers to a certain school in the philosophy of mathematics, stressing axiomatic proofs of theorems, specifically associated with David Hilbert. In the philosophy of mathematics, therefore, a formalist is a person who belongs to the school of formalism, which is a certain mathematical-philosophical doctrine descending from Hilbert. Anthropology In economic anthropology, formalism is the theoretical perspective that the principles of neoclassical economics can be applied to our understanding of all human societies. See also Zhdanov Doctrine, a Stalinist "anti-formalist" doctrine leading to purges in the arts and culture of the USSR and satellite states References External links "Formalism in the Philosophy of Mathematics" by the Stanford Encyclopedia of Philosophy. Theories of aesthetics Theories of deduction Literary concepts
Formalism (philosophy)
[ "Mathematics" ]
1,179
[ "Theories of deduction" ]
324,888
https://en.wikipedia.org/wiki/Rational%20ignorance
Rational ignorance is refraining from acquiring knowledge when the supposed cost of educating oneself on an issue exceeds the expected potential benefit that the knowledge would provide. Ignorance about an issue is said to be "rational" when the cost of educating oneself about the issue sufficiently to make an informed decision can outweigh any potential benefit one could reasonably expect to gain from that decision, and so it would be irrational to spend time doing so. This has consequences for the quality of decisions made by large numbers of people, such as in general elections, where the probability of any one vote changing the outcome is very small. The term is most often found in economics, particularly public choice theory, but is also used in other disciplines which study rationality and choice, including philosophy (epistemology) and game theory. The term was coined by Anthony Downs in An Economic Theory of Democracy. Example Consider an employer attempting to choose between two candidates offering to complete a task at a cost of $10/hour. The length of time needed to complete the task may be longer or shorter depending on the skill of the person performing the task, so it is in the employer's best interests to find the fastest worker possible. Assume that the cost of another day of interviewing the candidates is $100. If the employer had deduced from the interviews so far that both candidates would complete the task in somewhere between 195 and 205 hours, it would be in the employer's best interests to choose one or the other by some easily applied metric (for example, flipping a coin) rather than spend the $100 on determining the better candidate, saving at most $100 in labor. In many cases, the decision may be made on the basis of heuristics: a simple decision model which may not be completely accurate. For example, in deciding which brand of prepared food is most nutritious, a shopper might simply choose the one with (for example) the lowest amount of sugar, rather than conducting a research study of all the positive and negative factors in nutrition. Applications In marketing Marketers can take advantage of rational ignorance by increasing the complexity of a decision. If the difference in value between a quality product and a poor product is less than the cost to perform the research necessary to differentiate between them, then it is more rational for a consumer to just take his chances on whichever of the two is more convenient and available. Thus, it is in the interest of the producer of a lower-value product to proliferate features, options, and package combinations which will tend to increase the number of shoppers who decide it's too much trouble to make an informed decision. In politics Politics and elections especially display the same dynamic. By increasing the number of issues that a person needs to consider to make a rational decision about candidates or policies, politicians and pundits encourage single-issue voting, party-line voting, jingoism, selling votes, or dart-throwing, all of which may tip the playing field in favor of politicians who do not actually represent the electorate. This does not mean that voters make poor and biased decisions: rather, that in carrying out their everyday responsibilities (like working and taking care of a family), many people do not have the time to devote to researching every aspect of a candidate's policies.
So many people find themselves making rational decisions, meaning that they let others who are more versed in the subject do the research, and they form their opinion based on the evidence provided. They are being rationally ignorant not because they don't care but because they simply do not have the time. Because the cost/benefit ratio worsens as costs increase or benefits decrease, the same effect can occur when politicians protect their policy decisions from the preferences of the public. To the degree that the electorate perceives their individual votes to count for less, they will have less incentive to spend any time actually learning any details about the candidate(s). A more nuanced example occurs when a voter identifies with a particular political party, akin to the adoption of a favorite movie critic. Based on prior experience, a responsible voter will seek politicians or a political party that draws conclusions about social policy that are similar to what their own conclusions would have been had they done a complete analysis. But when voters find themselves agreeing with the same party or politician across a number of election cycles, many voters simply trust that the same will continue to be true and "vote the ticket," also referred to as straight-ticket voting, instead of wasting time on a complete investigation. Criticisms Much of the empirical support for the idea of rational ignorance was drawn from studies of voter apathy, which reached particularly strong conclusions in the 1950s. However, apathy appeared to decline sharply in the 1960s as concern about issues such as the Vietnam War mounted, and political polarization increased. This is consistent with expectations from public choice theory; as voters' interest in the results of policy decisions increases, the perceived benefit of the analysis (or the trip to the ballot box) increases, so more people will consider it rational to repair their ignorance. There also may be situations when "the individual may perceive the situation as one that has carry over benefits to other situations, and treat the learning as a capital investment with payoff beyond the specific situation in which it is presented," and not a waste of time even though the time invested in the learning may not have immediate payoff (Denzau and North, 1994). Additionally, rational ignorance is scrutinized for its broadening effect on the decisions that individuals make in different matters. The investment of time and energy in learning about the specified subject has ramifications for other decision areas. Individuals sometimes ignore this when unconsciously assessing the investment cost versus payout. The external benefits of acquiring knowledge in one area—those benefits occurring in other decision areas—are therefore subject to being overlooked. See also Agency (sociology) Argument from authority Bounded rationality Mooers's law Rational irrationality Satisficing Sociology of scientific ignorance References External links "Would Rational Voters Acquire Costly Information?" by Cesar Martinelli, Journal of Economic Theory, Vol. 129, Issue 1, July 2006, pp. 225–251 "Rational Ignorance and Voting Behavior" by Cesar Martinelli, International Journal of Game Theory, Vol. 35, Issue 3, February 2007, pp. 315–335 Public choice theory Concepts in epistemology Game theory
Rational ignorance
[ "Mathematics" ]
1,287
[ "Game theory" ]
324,949
https://en.wikipedia.org/wiki/Accelerometer
An accelerometer is a device that measures the proper acceleration of an object. Proper acceleration is the acceleration (the rate of change of velocity) of the object relative to an observer who is in free fall (that is, relative to an inertial frame of reference). Proper acceleration is different from coordinate acceleration, which is acceleration with respect to a given coordinate system, which may or may not be accelerating. For example, an accelerometer at rest on the surface of the Earth will measure an acceleration due to Earth's gravity straight upwards of about g ≈ 9.81 m/s2. By contrast, an accelerometer that is in free fall will measure zero acceleration. Accelerometers have many uses in industry, consumer products, and science. Highly sensitive accelerometers are used in inertial navigation systems for aircraft and missiles. In unmanned aerial vehicles, accelerometers help to stabilize flight. Micromachined micro-electromechanical systems (MEMS) accelerometers are used in handheld electronic devices such as smartphones, cameras and video-game controllers to detect movement and orientation of these devices. Vibration in industrial machinery is monitored by accelerometers. Seismometers are sensitive accelerometers for monitoring ground movement such as earthquakes. When two or more accelerometers are coordinated with one another, they can measure differences in proper acceleration, particularly gravity, over their separation in space—that is, the gradient of the gravitational field. Gravity gradiometry is useful because absolute gravity is a weak effect and depends on the local density of the Earth, which is quite variable. A single-axis accelerometer measures acceleration along a specified axis. A multi-axis accelerometer detects both the magnitude and the direction of the proper acceleration, as a vector quantity, and is usually implemented as several single-axis accelerometers oriented along different axes. Physical principles An accelerometer measures proper acceleration, which is the acceleration it experiences relative to freefall and is the acceleration felt by people and objects. Put another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly denoted g-force; i.e., in comparison to standard gravity. An accelerometer at rest relative to the Earth's surface will indicate approximately 1 g upwards because the Earth's surface exerts a normal force upwards relative to the local inertial frame (the frame of a freely falling object near the surface). To obtain the acceleration due to motion with respect to the Earth, this "gravity offset" must be subtracted and corrections made for effects caused by the Earth's rotation relative to the inertial frame. The reason for the appearance of a gravitational offset is Einstein's equivalence principle, which states that the effects of gravity on an object are indistinguishable from acceleration. When held fixed in a gravitational field by, for example, applying a ground reaction force or an equivalent upward thrust, the reference frame for an accelerometer (its own casing) accelerates upwards with respect to a free-falling reference frame. 
The effects of this acceleration are indistinguishable from any other acceleration experienced by the instrument, so that an accelerometer cannot detect the difference between sitting in a rocket on the launch pad and being in the same rocket in deep space while it uses its engines to accelerate at 1 g. For similar reasons, an accelerometer will read zero during any type of free fall. This includes use in a coasting spaceship in deep space far from any mass, a spaceship orbiting the Earth, an airplane in a parabolic "zero-g" arc, or any free-fall in a vacuum. Another example is free-fall at a sufficiently high altitude that atmospheric effects can be neglected. However, this does not include a (non-free) fall in which air resistance produces drag forces that reduce the acceleration until constant terminal velocity is reached. At terminal velocity, the accelerometer will indicate 1 g acceleration upwards. For the same reason a skydiver, upon reaching terminal velocity, does not feel as though he or she were in "free-fall", but rather experiences a feeling similar to being supported (at 1 g) on a "bed" of uprushing air. Acceleration is quantified in the SI unit metres per second per second (m/s2), in the cgs unit gal (Gal), or popularly in terms of standard gravity (g). For the practical purpose of finding the acceleration of objects with respect to the Earth, such as for use in an inertial navigation system, a knowledge of local gravity is required. This can be obtained either by calibrating the device at rest, or from a known model of gravity at the approximate current position. Structure A basic mechanical accelerometer is a damped proof mass on a spring. When the accelerometer experiences an acceleration, the proof mass is displaced until the spring force is just sufficient to accelerate the mass along with the casing. Since the spring's force scales linearly with the amount of compression (according to Hooke's law) and the spring constant and mass are known constants, a measurement of the spring's compression is also a measurement of acceleration. The system is damped to prevent oscillations of the mass and spring from interfering with measurements. However, the damping limits the accelerometer's frequency response. Many animals have sensory organs to detect acceleration, especially gravity. In these, the proof mass is usually one or more crystals of calcium carbonate otoliths (Greek for "ear stone") or statoconia, acting against a bed of hairs connected to neurons. The hairs form the springs, with the neurons as sensors. The damping is usually by a fluid. Many vertebrates, including humans, have these structures in their inner ears. Most invertebrates have similar organs, but not as part of their hearing organs. These are called statocysts. Mechanical accelerometers are often designed so that an electronic circuit senses a small amount of motion, then pushes on the proof mass with some type of linear motor to keep the proof mass from moving far. The motor might be an electromagnet or, in very small accelerometers, electrostatic. Since the circuit's electronic behavior can be carefully designed, and the proof mass does not move far, these designs can be very stable (i.e. they do not oscillate) and very linear, with a controlled frequency response. (This is called servo mode design.) In mechanical accelerometers, measurement is often electrical, piezoelectric, piezoresistive or capacitive. Piezoelectric accelerometers use piezoceramic sensors (e.g.
lead zirconate titanate) or single crystals (e.g. quartz, tourmaline). They are unmatched in high frequency measurements, low packaged weight, and resistance to high temperatures. Piezoresistive accelerometers resist shock (very high accelerations) better. Capacitive accelerometers typically use a silicon micro-machined sensing element. They measure low frequencies well. Modern mechanical accelerometers are often small micro-electro-mechanical systems (MEMS), and are often very simple MEMS devices, consisting of little more than a cantilever beam with a proof mass (also known as seismic mass). Damping results from the residual gas sealed in the device. As long as the Q-factor is not too low, damping does not result in a lower sensitivity. Under the influence of external accelerations, the proof mass deflects from its neutral position. This deflection is measured in an analog or digital manner. Most commonly, the capacitance between a set of fixed beams and a set of beams attached to the proof mass is measured. This method is simple, reliable, and inexpensive. Integrating piezoresistors in the springs to detect spring deformation, and thus deflection, is a good alternative, although a few more process steps are needed during the fabrication sequence. For very high sensitivities, quantum tunnelling is also used; this requires a dedicated process, making it very expensive. Optical measurement has been demonstrated in laboratory devices. Another MEMS-based accelerometer is a thermal (or convective) accelerometer. It contains a small heater in a very small dome. This heats the air or other fluid inside the dome. The thermal bubble acts as the proof mass. An accompanying temperature sensor (like a thermistor or thermopile) in the dome measures the temperature in one location of the dome. This measures the location of the heated bubble within the dome. When the dome is accelerated, the colder, higher-density fluid pushes the heated bubble. The measured temperature changes. The temperature measurement is interpreted as acceleration. The fluid provides the damping. Gravity acting on the fluid provides the spring. Since the proof mass is a very lightweight gas, and not held by a beam or lever, thermal accelerometers can survive high shocks. Another variation uses a wire to both heat the gas and detect the change in temperature. The change of temperature changes the resistance of the wire. A two-dimensional accelerometer can be economically constructed with one dome, one bubble and two measurement devices. Most micromechanical accelerometers operate in-plane, that is, they are designed to be sensitive only to a direction in the plane of the die. By integrating two devices perpendicularly on a single die a two-axis accelerometer can be made. By adding another out-of-plane device, three axes can be measured. Such a combination may have much lower misalignment error than three discrete models combined after packaging. Micromechanical accelerometers are available in a wide variety of measuring ranges, reaching up to thousands of g. The designer must compromise between sensitivity and the maximum acceleration that can be measured.
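To make the spring-mass principle described in the Structure section concrete, here is a minimal sketch in Python. It is illustrative only: the spring constant, proof mass, and deflection are invented, MEMS-scale assumptions rather than values from any real device. At equilibrium the spring force k·x matches the inertial force m·a on the proof mass, so a measured deflection x yields a = k·x/m.

# Minimal sketch of the spring-mass accelerometer principle.
# All numeric values are illustrative assumptions, not real device data.

G = 9.81  # standard gravity, m/s^2

def acceleration_from_displacement(k, m, x):
    """Infer proper acceleration from spring deflection via Hooke's law.

    At equilibrium the spring force k*x equals the inertial force m*a
    on the proof mass, so a = k*x / m.
    """
    return k * x / m

k = 5.0    # assumed spring constant, N/m
m = 2e-9   # assumed proof mass, kg (MEMS scale)
x = 4e-9   # assumed measured deflection, m (e.g. from a capacitance change)

a = acceleration_from_displacement(k, m, x)
print(f"a = {a:.2f} m/s^2 = {a / G:.2f} g")

With these assumed numbers the sketch reports about 10 m/s2, i.e. roughly 1 g, as a real device sitting at rest on a desk would.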
Accelerometers that measure gravity, specifically configured for use in gravimetry, are called gravimeters. Biology Accelerometers are also increasingly used in the biological sciences. High-frequency recordings of bi-axial or tri-axial acceleration allow the discrimination of behavioral patterns while animals are out of sight. Furthermore, recordings of acceleration allow researchers to quantify the rate at which an animal is expending energy in the wild, by either determination of limb-stroke frequency or measures such as overall dynamic body acceleration. Such approaches have mostly been adopted by marine scientists due to an inability to study animals in the wild using visual observations; however, an increasing number of terrestrial biologists are adopting similar approaches. For example, accelerometers have been used to study flight energy expenditure of Harris's hawk (Parabuteo unicinctus). Researchers are also using smartphone accelerometers to collect and extract mechano-biological descriptors of resistance exercise. Increasingly, researchers are deploying accelerometers with additional technology, such as cameras or microphones, to better understand animal behaviour in the wild (for example, hunting behaviour of Canada lynx). Industry Accelerometers are also used for machinery health monitoring, reporting the vibration (and its changes in time) of shafts at the bearings of rotating equipment such as turbines, pumps, fans, rollers, and compressors, and detecting bearing faults which, if not attended to promptly, can lead to costly repairs. Accelerometer vibration data allows the user to monitor machines and detect these faults before the rotating equipment fails completely. Building and structural monitoring Accelerometers are used to measure the motion and vibration of a structure that is exposed to dynamic loads. Dynamic loads originate from a variety of sources, including:
Human activities – walking, running, dancing or skipping
Working machines – inside a building or in the surrounding area
Construction work – driving piles, demolition, drilling and excavating
Moving loads on bridges
Vehicle collisions
Impact loads – falling debris
Concussion loads – internal and external explosions
Collapse of structural elements
Wind loads and wind gusts
Air blast pressure
Loss of support because of ground failure
Earthquakes and aftershocks
Under structural applications, measuring and recording how a structure dynamically responds to these inputs is critical for assessing the safety and viability of a structure. This type of monitoring is called health monitoring, which usually involves other types of instruments, such as displacement sensors (potentiometers, LVDTs, etc.), deformation sensors (strain gauges, extensometers), and load sensors (load cells, piezoelectric sensors), among others. Medical applications Zoll's AED Plus uses CPR-D•padz, which contain an accelerometer to measure the depth of CPR chest compressions. Within the last several years, several companies have produced and marketed sports watches for runners that include footpods, containing accelerometers to help determine the speed and distance for the runner wearing the unit. In Belgium, accelerometer-based step counters are promoted by the government to encourage people to walk a few thousand steps each day. Herman Digital Trainer uses accelerometers to measure strike force in physical training.
It has been suggested that football helmets be built with accelerometers in order to measure the impact of head collisions. Accelerometers have been used to calculate gait parameters, such as stance and swing phase. This kind of sensor can be used to measure or monitor the movement of people. Navigation An inertial navigation system is a navigation aid that uses a computer and motion sensors (accelerometers) to continuously calculate via dead reckoning the position, orientation, and velocity (direction and speed of movement) of a moving object without the need for external references. Other terms used to refer to inertial navigation systems or closely related devices include inertial guidance system, inertial reference platform, and many other variations. An accelerometer alone is unsuitable to determine changes in altitude over distances where the vertical decrease of gravity is significant, such as for aircraft and rockets. In the presence of a gravitational gradient, the calibration and data reduction process is numerically unstable. Transport Accelerometers are used to detect apogee in both professional and amateur rocketry. Accelerometers are also being used in Intelligent Compaction rollers. Accelerometers are used alongside gyroscopes in inertial navigation systems. One of the most common uses for MEMS accelerometers is in airbag deployment systems for modern automobiles. In this case, the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision. Another common automotive use is in electronic stability control systems, which use a lateral accelerometer to measure cornering forces. The widespread use of accelerometers in the automotive industry has pushed their cost down dramatically. Another automotive application is the monitoring of noise, vibration, and harshness (NVH), conditions that cause discomfort for drivers and passengers and may also be indicators of mechanical faults. Tilting trains use accelerometers and gyroscopes to calculate the required tilt. Volcanology Modern electronic accelerometers are used in remote sensing devices intended for the monitoring of active volcanoes to detect the motion of magma. Consumer electronics Accelerometers are increasingly being incorporated into personal electronic devices to detect the orientation of the device, for example of a display screen. A free-fall sensor (FFS) is an accelerometer used to detect if a system has been dropped and is falling. It can then apply safety measures such as parking the head of a hard disk to prevent a head crash and resulting data loss upon impact. This device is included in the many common computer and consumer electronic products that are produced by a variety of manufacturers. It is also used in some data loggers to monitor handling operations for shipping containers. The length of time in free fall is used to calculate the height of the drop and to estimate the shock to the package. Motion input Some smartphones, digital audio players and personal digital assistants contain accelerometers for user interface control; often the accelerometer is used to present landscape or portrait views of the device's screen, based on the way the device is being held. Apple has included an accelerometer in every generation of iPhone, iPad, and iPod touch, as well as in every iPod nano since the 4th generation.
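As a hedged illustration of the orientation sensing just mentioned (generic trigonometry, not any vendor's API): when a device is held still, the only measured acceleration is the 1 g reaction to gravity, and the direction of that vector across the three axes gives the device's tilt.

# Illustrative sketch: estimating tilt from a static three-axis reading.
# Axis conventions vary between devices; this assumes z points out of the screen.
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from a static accelerometer sample."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_angles(0.0, 0.0, 9.81))  # device lying flat -> (0.0, 0.0)
print(tilt_angles(0.0, 9.81, 0.0))  # device on its side -> (0.0, 90.0)

A screen-rotation routine might, for example, switch from portrait to landscape when the computed roll crosses 45 degrees.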
Along with orientation view adjustment, accelerometers in mobile devices can also be used as pedometers, in conjunction with specialized applications. Automatic Collision Notification (ACN) systems also use accelerometers in a system to call for help in the event of a vehicle crash. Prominent ACN systems include OnStar AACN service, Ford Link's 911 Assist, Toyota's Safety Connect, Lexus Link, or BMW Assist. Many accelerometer-equipped smartphones also have ACN software available for download. ACN systems are activated by detecting crash-strength accelerations. Accelerometers are used in vehicle electronic stability control systems to measure the vehicle's actual movement. A computer compares the vehicle's actual movement to the driver's steering and throttle input. The stability control computer can selectively brake individual wheels and/or reduce engine power to minimize the difference between driver input and the vehicle's actual movement. This can help prevent the vehicle from spinning or rolling over. Some pedometers use an accelerometer to more accurately measure the number of steps taken and distance traveled than a mechanical sensor can provide. Nintendo's Wii video game console uses a controller called a Wii Remote that contains a three-axis accelerometer and was designed primarily for motion input. Users also have the option of buying an additional motion-sensitive attachment, the Nunchuk, so that motion input can be recorded from both of the user's hands independently. The accelerometer is also used in the Nintendo 3DS system. Sleep phase alarm clocks use accelerometric sensors to detect the movement of a sleeper, so that they can wake the person when he or she is not in the REM phase, and thereby awaken the person more easily. Sound recording A microphone or eardrum is a membrane that responds to oscillations in air pressure. These oscillations cause acceleration, so accelerometers can be used to record sound. A 2012 study found that voices can be detected by smartphone accelerometers in 93% of typical daily situations. Conversely, carefully designed sounds can cause accelerometers to report false data. One study tested 20 models of (MEMS) smartphone accelerometers and found that a majority were susceptible to this attack. Orientation sensing A number of 21st-century devices use accelerometers to align the screen depending on the direction the device is held (e.g., switching between portrait and landscape modes). Such devices include many tablet PCs and some smartphones and digital cameras. The Amida Simputer, a handheld Linux device launched in 2004, was the first commercial handheld to have a built-in accelerometer. It incorporated many gesture-based interactions using this accelerometer, including page-turning, zoom-in and zoom-out of images, change of portrait to landscape mode, and many simple gesture-based games. As of January 2009, almost all new mobile phones and digital cameras contain at least a tilt sensor and sometimes an accelerometer for the purpose of auto image rotation, motion-sensitive mini-games, and correcting shake when taking photographs. Image stabilization Camcorders use accelerometers for image stabilization, either by moving optical elements to adjust the light path to the sensor to cancel out unintended motions or by digitally shifting the image to smooth out detected motion. Some stills cameras use accelerometers for anti-blur capturing. The camera holds off capturing the image when the camera is moving.
When the camera is still (if only for a millisecond, as could be the case for vibration), the image is captured. An example of the application of this technology is the Glogger VS2, a phone application which runs on Symbian-based phones with accelerometers, such as the Nokia N96. Some digital cameras contain accelerometers to determine the orientation of the photo being taken and also for rotating the current picture when viewing. Device integrity Many laptops feature an accelerometer which is used to detect drops. If a drop is detected, the heads of the hard disk are parked to avoid data loss and possible head or disk damage by the ensuing shock. Gravimetry A gravimeter, or gravitometer, is an instrument used in gravimetry for measuring the local gravitational field. A gravimeter is a type of accelerometer, except that accelerometers are susceptible to all vibrations, including noise, that cause oscillatory accelerations. This is counteracted in the gravimeter by integral vibration isolation and signal processing. Though the essential principle of design is the same as in accelerometers, gravimeters are typically designed to be much more sensitive than accelerometers in order to measure very tiny fractional changes within the Earth's gravity of 1 g. In contrast, other accelerometers are often designed to measure 1000 g or more, and many perform multi-axial measurements. The constraints on temporal resolution are usually less for gravimeters, so that resolution can be increased by processing the output with a longer "time constant". Types of accelerometer
Bulk micromachined capacitive
Bulk micromachined piezoelectric resistive
Capacitive spring mass system base
DC response
Electromechanical servo (servo force balance)
High gravity
High temperature
Laser accelerometer
Low frequency
Magnetic induction
Modally tuned impact hammers
Null-balance
Optical
Pendulous integrating gyroscopic accelerometer (PIGA)
Piezoelectric accelerometer
Quantum (rubidium atom cloud, laser cooled)
Resonance
Seat pad accelerometers
Shear mode accelerometer
Strain gauge
Surface acoustic wave (SAW)
Surface micromachined capacitive (MEMS)
Thermal (submicrometre CMOS process)
Triaxial
Vacuum diode with flexible anode
Potentiometric type
LVDT type accelerometer
Exploits and privacy concerns Accelerometer data, which can be accessed by third-party apps without user permission in many mobile devices, has been used to infer rich information about users based on the recorded motion patterns (e.g., driving behavior, level of intoxication, age, gender, touchscreen inputs, geographic location). If done without a user's knowledge or consent, this is referred to as an inference attack. Additionally, millions of smartphones could be vulnerable to software cracking via accelerometers. See also Accelerograph Degrees of freedom g-force Geophone Gyroscope Inclinometer Inertial measurement unit Inertial navigation system Magnetometer Seismometer Vibration calibrator References Acceleration
Accelerometer
[ "Physics", "Mathematics", "Technology", "Engineering" ]
4,916
[ "Accelerometers", "Physical quantities", "Acceleration", "Quantity", "Measuring instruments", "Wikipedia categories named after physical quantities" ]
324,997
https://en.wikipedia.org/wiki/Radiological%20warfare
Radiological warfare is any form of warfare involving deliberate radiation poisoning or contamination of an area with radiological sources. Radiological weapons are normally classified as weapons of mass destruction (WMDs), although radiological weapons can also be specific in whom they target, such as the radiation poisoning of Alexander Litvinenko by the Russian FSB using radioactive polonium-210. Numerous countries have expressed an interest in radiological weapons programs, several have actively pursued them, and three have performed radiological weapons tests. Salted nuclear weapons A salted bomb is a nuclear weapon that is equipped with a large quantity of radiologically inert salting material. The radiological warfare agents are produced when the salting material captures neutrons emitted by the nuclear weapon. This avoids the problems of having to stockpile the highly radioactive material, as it is produced when the bomb explodes. The result is a more intense fallout than from regular nuclear weapons, and it can render an area uninhabitable for a long period. The cobalt bomb is an example of a radiological warfare weapon, where cobalt-59 is converted to cobalt-60 by neutron capture. Initially, gamma radiation of the nuclear fission products from an equivalent-sized "clean" fission-fusion-fission bomb (assuming the amounts of radioactive dust particles generated are equal) is much more intense than that of cobalt-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission-product radiation drops off rapidly, so that cobalt-60 fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the cobalt-60 again after about 75 years. Other salted bomb variants that do not use cobalt have also been theorized. For example, salting with sodium-23, which transmutes to sodium-24; because of its 15-hour half-life, sodium-24 produces intense radiation. Surface-burst nuclear weapons An air burst is preferred if the effects of thermal radiation and the blast wave are to be maximized for an area (i.e. the area covered by direct line of sight and sufficient luminosity to cause burning, and by formation of the Mach stem, respectively). Both fission and fusion weapons will irradiate the detonation site with neutron radiation, causing neutron activation of the material there. Fission bombs will also contribute the bomb-material residue. Air will not form isotopes useful for radiological warfare when neutron-activated. By detonating them at or near the surface instead, the ground will be vaporized, become radioactive, and, when it cools down and condenses into particles, cause significant fallout. Dirty bombs A far lower-tech radiological weapon than those discussed above is a "dirty bomb" or radiological dispersal device, whose purpose is to disperse radioactive dust over an area. The release of radioactive material may involve no special "weapon" or accompanying effects such as a blast explosion, and may include no direct killing of people by its radiation source, but it could make whole areas or structures unusable or unfavorable for the support of human life. The radioactive material may be dispersed slowly over a large area, and it can be difficult for the victims to initially know that such a radiological attack is being carried out, especially if detectors for radioactivity are not installed beforehand.
Radiological warfare with dirty bombs could be used for nuclear terrorism, spreading or intensifying fear. In relation to these weapons, nation states can also spread rumor, disinformation, and fear. In July 2023, Ukraine and Russia each blamed the other for preparing to bomb the Zaporizhzhia nuclear power plant in Ukraine, in order to use the nuclear reactors as dirty bombs. See also Acute radiation syndrome Area denial weapons Depleted uranium Neutron bomb Nuclear detection Nuclear warfare Operation Peppermint Scorched earth and "Salting the earth" Yasser Arafat § Theories about the cause of death Further reading Kirby, R. (2020). Radiological Weapons: America's Cold War Experience. References External links Radiological Weapons as Means of Attack. Anthony H. Cordesman Radiological-weapons threats: case studies from the extreme right. BreAnne K. Fleer, 2020; The Nonproliferation Review Radiobiology Warfare by type Nuclear terrorism Radiological weapons
Radiological warfare
[ "Chemistry", "Biology" ]
897
[ "Radiobiology", "Radioactivity" ]
325,019
https://en.wikipedia.org/wiki/Modular%20group
In mathematics, the modular group is the projective special linear group PSL(2, Z) of 2 × 2 matrices with integer coefficients and determinant 1. The matrices A and −A are identified. The modular group acts on the upper half of the complex plane by fractional linear transformations, and the name "modular group" comes from the relation to moduli spaces and not from modular arithmetic. Definition The modular group Γ is the group of linear fractional transformations of the upper half of the complex plane, which have the form z ↦ (az + b)/(cz + d), where a, b, c, d are integers and ad − bc = 1. The group operation is function composition. This group of transformations is isomorphic to the projective special linear group PSL(2, Z), which is the quotient of the 2-dimensional special linear group SL(2, Z) over the integers by its center {I, −I}. In other words, PSL(2, Z) consists of all matrices ( a b ; c d ) where a, b, c, d are integers and ad − bc = 1, and pairs of matrices A and −A are considered to be identical. The group operation is the usual multiplication of matrices. Some authors define the modular group to be PSL(2, Z), and still others define the modular group to be the larger group SL(2, Z). Some mathematical relations require the consideration of the group GL(2, Z) of matrices with determinant plus or minus one. (SL(2, Z) is a subgroup of this group.) Similarly, PGL(2, Z) is the quotient group GL(2, Z)/{I, −I}. A 2 × 2 matrix with unit determinant is a symplectic matrix, and thus SL(2, Z) = Sp(2, Z), the symplectic group of 2 × 2 matrices. Finding elements To find an explicit matrix ( a b ; c d ) in SL(2, Z), begin with two coprime integers a, c, and solve the determinant equation ad − bc = 1. (Notice the determinant equation forces a and c to be coprime, since otherwise there would be a factor e > 1 such that a = ea′ and c = ec′, and e(a′d − bc′) = 1 would have no integer solutions.) For example, if a = 7 and c = 6, then the determinant equation reads 7d − 6b = 1; taking d = 1 and b = 1 gives 7 · 1 − 6 · 1 = 1, hence ( 7 1 ; 6 1 ) is a matrix in SL(2, Z). Then, using the projection, these matrices define elements in PSL(2, Z). Number-theoretic properties The unit determinant of ( a b ; c d ) implies that the fractions a/b, a/c, c/d, b/d are all irreducible, that is, having no common factors (provided the denominators are non-zero, of course). More generally, if p/q is an irreducible fraction, then (ap + bq)/(cp + dq) is also irreducible (again, provided the denominator be non-zero). Any pair of irreducible fractions can be connected in this way; that is, for any pair p/q and r/s of irreducible fractions, there exist elements ( a b ; c d ) of SL(2, Z) such that r = ap + bq and s = cp + dq. Elements of the modular group provide a symmetry on the two-dimensional lattice. Let ω₁ and ω₂ be two complex numbers whose ratio is not real. Then the set of points Λ(ω₁, ω₂) = {mω₁ + nω₂ : m, n ∈ Z} is a lattice of parallelograms on the plane. A different pair of vectors α₁ and α₂ will generate exactly the same lattice if and only if α₁ = aω₁ + bω₂ and α₂ = cω₁ + dω₂ for some matrix ( a b ; c d ) in GL(2, Z). It is for this reason that doubly periodic functions, such as elliptic functions, possess a modular group symmetry. The action of the modular group on the rational numbers can most easily be understood by envisioning a square grid, with grid point (p, q) corresponding to the fraction p/q (see Euclid's orchard). An irreducible fraction is one that is visible from the origin; the action of the modular group on a fraction never takes a visible (irreducible) fraction to a hidden (reducible) one, and vice versa. Note that any member of the modular group maps the projectively extended real line one-to-one to itself, and furthermore bijectively maps the projectively extended rational line (the rationals with infinity) to itself, the irrationals to the irrationals, the transcendental numbers to the transcendental numbers, the non-real numbers to the non-real numbers, the upper half-plane to the upper half-plane, et cetera. If pₙ₋₁/qₙ₋₁ and pₙ/qₙ are two successive convergents of a continued fraction, then the matrix ( pₙ₋₁ pₙ ; qₙ₋₁ qₙ ) belongs to GL(2, Z). In particular, if bc − ad = 1 for positive integers a, b, c, d with a < b and c < d, then a/b and c/d will be neighbours in the Farey sequence of order max(b, d). Important special cases of continued fraction convergents include the Fibonacci numbers and solutions to Pell's equation. In both cases, the numbers can be arranged to form a semigroup subset of the modular group.
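A minimal sketch in Python of the element-finding recipe above: given coprime integers a and c, the extended Euclidean algorithm produces b and d with ad − bc = 1. The particular b and d returned are one choice among infinitely many, since they are only determined up to adding multiples of a and c.

# Sketch: construct a matrix ( a b ; c d ) in SL(2, Z) from coprime a, c.
from math import gcd

def sl2z_from_coprime(a, c):
    """Return [[a, b], [c, d]] with a*d - b*c == 1 (a, c positive and coprime)."""
    if a <= 0 or c <= 0 or gcd(a, c) != 1:
        raise ValueError("a and c must be positive coprime integers")
    # Extended Euclid: find x, y with a*x + c*y == 1; then d = x, b = -y.
    old_r, r = a, c
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    d, b = old_x, -old_y
    assert a * d - b * c == 1
    return [[a, b], [c, d]]

print(sl2z_from_coprime(7, 6))  # [[7, 1], [6, 1]], matching the example above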
Group-theoretic properties Presentation The modular group can be shown to be generated by the two transformations S: z ↦ −1/z and T: z ↦ z + 1, so that every element in the modular group can be represented (in a non-unique way) by the composition of powers of S and T. Geometrically, S represents inversion in the unit circle followed by reflection with respect to the imaginary axis, while T represents a unit translation to the right. The generators S and T obey the relations S² = 1 and (ST)³ = 1. It can be shown that these are a complete set of relations, so the modular group has the presentation Γ ≅ ⟨S, T | S² = 1, (ST)³ = 1⟩. This presentation describes the modular group as the rotational triangle group (2, 3, ∞) (infinity as there is no relation on T), and it thus maps onto all triangle groups (2, 3, n) by adding the relation Tⁿ = 1; this relation holds, for instance, in the quotient by the principal congruence subgroup Γ(n). Using the generators S and ST instead of S and T, this shows that the modular group is isomorphic to the free product of the cyclic groups Z₂ and Z₃: Γ ≅ Z₂ ∗ Z₃. Braid group The braid group B₃ is the universal central extension of the modular group, with these sitting as lattices inside the (topological) universal covering group of PSL(2, R). Further, the modular group has a trivial center, and thus the modular group is isomorphic to the quotient group of B₃ modulo its center; equivalently, to the group of inner automorphisms of B₃. The braid group B₃ in turn is isomorphic to the knot group of the trefoil knot. Quotients The quotients by congruence subgroups are of significant interest. Other important quotients are the (2, 3, n) triangle groups, which correspond geometrically to descending to a cylinder, quotienting the x-coordinate modulo n, as Tⁿ = (z ↦ z + n). (2, 3, 5) is the group of icosahedral symmetry, and the (2, 3, 7) triangle group (and associated tiling) is the cover for all Hurwitz surfaces. Presenting as a matrix group The group SL(2, Z) can be generated by the two matrices S = ( 0 −1 ; 1 0 ) and T = ( 1 1 ; 0 1 ). The projection SL(2, Z) → PSL(2, Z) turns these matrices into generators of the modular group, with relations similar to the group presentation. Relationship to hyperbolic geometry The modular group is important because it forms a subgroup of the group of isometries of the hyperbolic plane. If we consider the upper half-plane model H of hyperbolic plane geometry, then the group of all orientation-preserving isometries of H consists of all Möbius transformations of the form z ↦ (az + b)/(cz + d), where a, b, c, d are real numbers and ad − bc = 1. In terms of projective coordinates, the group PSL(2, R) acts on the upper half-plane H by projectivity. This action is faithful. Since PSL(2, Z) is a subgroup of PSL(2, R), the modular group is a subgroup of the group of orientation-preserving isometries of H. Tessellation of the hyperbolic plane The modular group Γ acts on H as a discrete subgroup of PSL(2, R), that is, for each z in H we can find a neighbourhood of z which does not contain any other element of the orbit of z. This also means that we can construct fundamental domains, which (roughly) contain exactly one representative from the orbit of every z in H. (Care is needed on the boundary of the domain.) There are many ways of constructing a fundamental domain, but a common choice is the region bounded by the vertical lines Re(z) = 1/2 and Re(z) = −1/2 and the circle |z| = 1. This region is a hyperbolic triangle. It has vertices at (1 + i√3)/2 and (−1 + i√3)/2, where the angle between its sides is π/3, and a third vertex at infinity, where the angle between its sides is 0.
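The presentation above can be checked directly on the matrices, and the fundamental domain just described gives a simple reduction algorithm. The following Python sketch is illustrative (the reduction loop is the standard translate-and-invert procedure, not code from any particular library):

# Verify the defining relations of the modular group, up to sign.
def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

S = [[0, -1], [1, 0]]
T = [[1, 1], [0, 1]]

assert matmul(S, S) == [[-1, 0], [0, -1]]                 # S^2 = -I
ST = matmul(S, T)
assert matmul(ST, matmul(ST, ST)) == [[-1, 0], [0, -1]]   # (ST)^3 = -I

# Move a point of the upper half-plane into the fundamental domain:
# repeatedly translate Re(z) into [-1/2, 1/2] (a power of T), and apply S
# whenever |z| < 1.
def reduce_to_fundamental_domain(z):
    while True:
        z = complex(z.real - round(z.real), z.imag)  # apply T^(-round(Re z))
        if abs(z) >= 1:
            return z
        z = -1 / z                                   # apply S

print(reduce_to_fundamental_domain(0.3 + 0.1j))

Since a matrix and its negative define the same element of PSL(2, Z), the relations holding only "up to sign" in SL(2, Z) is exactly the statement S² = (ST)³ = 1 in the modular group.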
There is a strong connection between the modular group and elliptic curves. Each point z in the upper half-plane gives an elliptic curve, namely the quotient of C by the lattice generated by 1 and z. Two points in the upper half-plane give isomorphic elliptic curves if and only if they are related by a transformation in the modular group. Thus, the quotient of the upper half-plane by the action of the modular group is the so-called moduli space of elliptic curves: a space whose points describe isomorphism classes of elliptic curves. This is often visualized as the fundamental domain described above, with some points on its boundary identified. The modular group and its subgroups are also a source of interesting tilings of the hyperbolic plane. By transforming this fundamental domain in turn by each of the elements of the modular group, a regular tessellation of the hyperbolic plane by congruent hyperbolic triangles known as the V6.6.∞ infinite-order triangular tiling is created. Note that each such triangle has one vertex either at infinity or on the real axis Im(z) = 0. This tiling can be extended to the Poincaré disk, where every hyperbolic triangle has one vertex on the boundary of the disk. The tiling of the Poincaré disk is given in a natural way by the j-invariant, which is invariant under the modular group, and attains every complex number once in each triangle of these regions. This tessellation can be refined slightly, dividing each region into two halves (conventionally colored black and white), by adding an orientation-reversing map; the colors then correspond to orientation of the domain. Adding in the reflection z ↦ −z̄ and taking the right half of the region (where Re(z) ≥ 0) yields the usual tessellation. This tessellation first appears in print in a 1878/79 paper of Felix Klein, where it is credited to Richard Dedekind, in reference to Dedekind's work of 1877. The map of groups (2, 3, ∞) → (2, 3, n) (from modular group to triangle group) can be visualized in terms of this tiling, yielding a tiling on the modular curve. Congruence subgroups Important subgroups of the modular group Γ, called congruence subgroups, are given by imposing congruence relations on the associated matrices. There is a natural homomorphism SL(2, Z) → SL(2, Z/NZ) given by reducing the entries modulo N. This induces a homomorphism on the modular group PSL(2, Z) → PSL(2, Z/NZ). The kernel of this homomorphism is called the principal congruence subgroup of level N, denoted Γ(N). We have the following short exact sequence: 1 → Γ(N) → Γ → PSL(2, Z/NZ) → 1. Being the kernel of a homomorphism, Γ(N) is a normal subgroup of the modular group Γ. The group Γ(N) is given as the set of all modular transformations z ↦ (az + b)/(cz + d) for which a ≡ d ≡ ±1 (mod N) and b ≡ c ≡ 0 (mod N). It is easy to show that the trace of a matrix representing an element of Γ(N) cannot be −1, 0, or 1, so these subgroups are torsion-free groups. (There are other torsion-free subgroups.) The principal congruence subgroup of level 2, Γ(2), is also called the modular group Λ. Since PSL(2, Z/2Z) is isomorphic to the symmetric group S₃, Λ is a subgroup of index 6. The group Λ consists of all modular transformations for which a and d are odd and b and c are even. Another important family of congruence subgroups are the modular groups Γ₀(N), defined as the set of all modular transformations for which c ≡ 0 (mod N), or equivalently, as the subgroup whose matrices become upper triangular upon reduction modulo N. Note that Γ(N) is a subgroup of Γ₀(N). The modular curves associated with these groups are an aspect of monstrous moonshine – for a prime number p, the modular curve of the normalizer of Γ₀(p) is genus zero if and only if p divides the order of the monster group, or equivalently, if p is a supersingular prime.
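Membership in these congruence subgroups amounts to reducing matrix entries modulo N, as in the following illustrative Python sketch (the ± in the Γ(N) test reflects the identification of a matrix with its negative in the modular group):

# Membership tests for a matrix m = [[a, b], [c, d]] in SL(2, Z).
def in_gamma(m, N):
    """Principal congruence subgroup Γ(N): m ≡ ±I (mod N)."""
    (a, b), (c, d) = m
    return any((a - s) % N == 0 and (d - s) % N == 0
               and b % N == 0 and c % N == 0 for s in (1, -1))

def in_gamma0(m, N):
    """Γ0(N): lower-left entry divisible by N (upper triangular mod N)."""
    return m[1][0] % N == 0

m = [[1, 2], [2, 5]]     # determinant 1*5 - 2*2 = 1
print(in_gamma(m, 2))    # True: a and d odd, b and c even
print(in_gamma0(m, 2))   # True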
Dyadic monoid One important subset of the modular group is the dyadic monoid, which is the monoid of all strings of the form ST^a ST^b ST^c ⋯ for positive integers a, b, c, …. This monoid occurs naturally in the study of fractal curves, and describes the self-similarity symmetries of the Cantor function, Minkowski's question mark function, and the Koch snowflake, each being a special case of the general de Rham curve. The monoid also has higher-dimensional linear representations; for example, the N = 3 representation can be understood to describe the self-symmetry of the blancmange curve. Maps of the torus The group GL(2, Z) is the group of linear maps preserving the standard lattice Z², and SL(2, Z) is the group of orientation-preserving maps preserving this lattice; they thus descend to self-homeomorphisms of the torus (SL mapping to orientation-preserving maps), and in fact map isomorphically to the (extended) mapping class group of the torus, meaning that every self-homeomorphism of the torus is isotopic to a map of this form. The algebraic properties of a matrix as an element of GL(2, Z) correspond to the dynamics of the induced map of the torus. Hecke groups The modular group can be generalized to the Hecke groups, named for Erich Hecke, and defined as follows. The Hecke group H_q with q ≥ 3 is the discrete group generated by z ↦ −1/z and z ↦ z + λ_q, where λ_q = 2 cos(π/q). For small values of q, one has: λ₃ = 1, λ₄ = √2, λ₅ = (1 + √5)/2, λ₆ = √3. The modular group Γ is isomorphic to H₃, and they share properties and applications – for example, just as one has the free product of cyclic groups Γ ≅ Z₂ ∗ Z₃, more generally one has H_q ≅ Z₂ ∗ Z_q, which corresponds to the triangle group (2, q, ∞). There is similarly a notion of principal congruence subgroups associated to principal ideals in Z[λ_q]. History The modular group and its subgroups were first studied in detail by Richard Dedekind and by Felix Klein as part of his Erlangen programme in the 1870s. However, the closely related elliptic functions were studied by Joseph Louis Lagrange in 1785, and further results on elliptic functions were published by Carl Gustav Jakob Jacobi and Niels Henrik Abel in 1827. See also Bianchi group Classical modular curve Fuchsian group j-invariant Kleinian group Mapping class group Minkowski's question-mark function Möbius transformation Modular curve Modular form Kuṭṭaka Poincaré half-plane model Uniform tilings in hyperbolic plane References Group theory Analytic number theory Modular forms
Modular group
[ "Mathematics" ]
2,683
[ "Analytic number theory", "Group theory", "Fields of abstract algebra", "Modular forms", "Number theory" ]