Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
1,557,416
https://en.wikipedia.org/wiki/Case-hardening
Case-hardening or carburization is the process of introducing carbon to the surface of a low-carbon iron, or more commonly a low-carbon steel object, in order to harden the surface. Iron which has a carbon content greater than ~0.02% is known as steel. Steel which has a carbon content greater than ~0.25% can be direct-hardened by heating it above its critical (austenitizing) temperature and then quickly cooling, often by immersing in water or oil, a process known as quenching. Hardening is desirable for metal components because it gives increased strength and wear resistance, the tradeoff being that hardened steel is generally more brittle and less malleable than when it is in a softer state. In order to produce a hard skin on steels which have less than ~0.2% carbon, carbon can be introduced into the surface by heating the steel in the presence of some carbon-rich substance such as powdered charcoal or hydrocarbon gas. This causes carbon to diffuse into the surface of the steel. The depth of this high-carbon layer depends on the exposure time, but 0.5 mm is a typical case depth. Once this has been done, the steel must be heated and quenched to harden this higher-carbon 'skin'. Below this skin, the steel core will remain soft due to its low carbon content. History Early iron smelting made use of bloomeries, which converted iron ore into metallic iron by heating it in a furnace which burnt wood and charcoal. Because the temperatures that could be achieved by this method were generally below the melting point of iron, the iron was not truly melted, but instead converted into a spongy metallic iron/slag matrix. This matrix then required re-heating and hammering to extract as much of the slag as possible, in order to produce a low-carbon malleable wrought iron which could then be forged into tools, etc. Due to its low carbon content, wrought iron is quite soft, so something like a knife blade could not be kept very sharp; it would blunt quickly and bend easily. As smelting techniques improved, higher furnace temperatures could be achieved which were sufficient to fully melt iron. However, in the process, the iron picked up carbon from the charcoal or coke used to heat it. This resulted in molten iron with a carbon content of around 3%, which was termed cast iron. This liquid iron could be cast into complex shapes, but due to its high carbon content, it was very brittle, not at all malleable, and totally unsuitable for something like a knife blade. Further processing was required to remove the excess carbon from cast iron and create malleable wrought iron (the ultimate developments of this being the Bessemer converter and the Siemens process). After the removal of almost all carbon from cast iron, the result was a metal that was very malleable and ductile but not very hard, nor capable of being hardened by heating and quenching. This led to the introduction of case hardening. The resulting case-hardened product combines much of the malleability and toughness of a low-carbon steel core with the hardness and resilience of the outer high-carbon steel skin. The traditional method of applying the carbon to the surface of the iron involved packing the iron in a mixture of carbon-rich material such as ground bone and charcoal, or a combination of leather, hooves, salt and urine, all inside a well-sealed box (the "case"). This carburizing package is then heated to a high temperature—but still under the melting point of the iron—and left at that temperature for a length of time.
The longer the package is held at the high temperature, the deeper the carbon will diffuse into the surface. Different depths of hardening are desirable for different purposes: sharp tools need deep hardening to allow grinding and resharpening without exposing the soft core, while machine parts like gears might need only shallow hardening for increased wear resistance. The resulting case-hardened part may show distinct surface discoloration if the carbon material is mixed organic matter as described above. The steel darkens significantly and shows a mottled pattern of black, blue, and purple caused by the various compounds formed from impurities in the bone and charcoal. This oxide surface works similarly to bluing, providing a degree of corrosion resistance, as well as an attractive finish. Case colouring refers to this pattern and is commonly encountered as a decorative finish on firearms. Case-hardened steel combines extreme hardness and extreme toughness, a combination not readily matched by homogeneous alloys, since hard homogeneous steels tend to be brittle, especially those steels whose hardness relies on carbon content alone. Alloy steels containing nickel, chromium, or molybdenum can have very high hardness, strength, or elongation values, but at a greater cost than a case-hardened item with a low-carbon core. Chemistry Carbon itself is solid at case-hardening temperatures and so is immobile. Transport to the surface of the steel takes place as gaseous carbon monoxide, generated by the breakdown of the carburising compound and the oxygen packed into the sealed box. This takes place with pure carbon, but too slowly to be workable. Although oxygen is required for this process, it is re-circulated through the CO cycle and so the process can be carried out inside a sealed box (the "case"). The sealing is necessary to stop the CO either leaking out or being oxidised to CO2 by excess outside air. Adding an easily decomposed carbonate "energiser" such as barium carbonate, which breaks down to BaO + CO2, encourages the reaction C (from the donor) + CO2 ⇌ 2 CO, increasing the overall abundance of CO and the activity of the carburising compound. It is commonly believed that case-hardening was done with bone, but this is misleading. Although bone was used, the main carbon donor was hoof and horn. Bone contains some carbonates but is mainly calcium phosphate (as hydroxylapatite). This does not have the beneficial effect of encouraging CO production, and it can also introduce phosphorus as an impurity into the steel alloy. Modern use Both carbon and alloy steels are suitable for case-hardening; typically mild steels are used, with low carbon content, usually less than 0.3% (see plain-carbon steel for more information). These mild steels are not normally hardenable due to the low quantity of carbon, so the surface of the steel is chemically altered to increase the hardenability. Case-hardened steel is formed by diffusing carbon (carburization), nitrogen (nitriding) or boron (boriding) into the outer layer of the steel at high temperature, and then heat treating the surface layer to the desired hardness. The term case-hardening is derived from the practicalities of the carburization process itself, which is essentially the same as the ancient process. The steel work piece is placed inside a case packed tight with a carbon-based case-hardening compound. This is collectively known as a carburizing pack. The pack is put inside a hot furnace for a variable length of time.
Time and temperature determine how deep into the surface the hardening extends. However, the depth of hardening is ultimately limited by the inability of carbon to diffuse deeply into solid steel, and a typical depth of surface hardening with this method is up to 1.5 mm. Other techniques are also used in modern carburizing, such as heating in a carbon-rich atmosphere. Small items may be case-hardened by repeated heating with a torch and quenching in a carbon-rich medium, such as the commercial products Kasenit / Casenite or "Cherry Red". Older formulations of these compounds contain potentially toxic cyanide compounds, while more recent types such as Cherry Red do not. Processes Flame or induction hardening Flame and induction hardening are processes in which the surface of the steel is heated very rapidly to high temperatures (by direct application of an oxy-gas flame, or by induction heating) then cooled rapidly, generally using water; this creates a "case" of martensite on the surface. A carbon content of 0.3–0.6 wt% C is needed for this type of hardening. Unlike the other methods, flame or induction hardening does not change the chemical composition of the material. Because these are merely localized heat-treatment processes, they are typically only useful on high-carbon steels that will respond sufficiently to quench hardening. Typical uses are for the shackle of a lock, where the outer layer is hardened to be file resistant, and mechanical gears, where hard gear mesh surfaces are needed to maintain a long service life while toughness is required to maintain durability and resistance to catastrophic failure. Flame hardening uses direct impingement of an oxy-gas flame onto a defined surface area. The result of the hardening process is controlled by four factors: the design of the flame head, the duration of heating, the target temperature to be reached, and the composition of the metal being treated. Carburizing Carburizing is a process used to case-harden steel with a carbon content between 0.1 and 0.3 wt% C. In this process iron is introduced to a carbon-rich environment at elevated temperatures for a certain amount of time, and then quenched so that the carbon is locked in the structure; one of the simpler procedures is to repeatedly heat a part with an acetylene torch set with a fuel-rich flame and quench it in a carbon-rich fluid such as oil. Carburization is a diffusion-controlled process, so the longer the steel is held in the carbon-rich environment, the greater the carbon penetration will be and the higher the carbon content. The carburized section will have a carbon content high enough that it can be hardened again through flame or induction hardening. It is possible to carburize only a portion of a part, either by protecting the rest by a process such as copper plating, or by applying a carburizing medium to only a section of the part. The carbon can come from a solid, liquid or gaseous source; if it comes from a solid source the process is called pack carburizing. Packing low-carbon steel parts with a carbonaceous material and heating for some time diffuses carbon into the outer layers. A heating period of a few hours might form a high-carbon layer about one millimeter thick. Liquid carburizing involves placing parts in a bath of a molten carbon-containing material, often a metal cyanide; gas carburizing involves placing the parts in a furnace maintained with a methane-rich interior. Nitriding Nitriding heats the steel part in an atmosphere of ammonia gas and dissociated ammonia.
The time the part spends in this environment dictates the depth of the case. The hardness is achieved by the formation of nitrides. Nitride-forming elements must be present for this method to work; these elements include chromium, molybdenum, and aluminum. The advantage of this process is that it causes little distortion, so the part can be case-hardened after being quenched, tempered and machined. No quenching is done after nitriding. Cyaniding Cyaniding is a case-hardening process that is fast and efficient; it is mainly used on low-carbon steels. The part is heated in a bath of sodium cyanide and then is quenched and rinsed, in water or oil, to remove any residual cyanide. The overall chemistry is: 2NaCN + O2 → 2NaCNO; 2NaCNO + O2 → Na2CO3 + CO + N2; 2CO → CO2 + C. This process produces a thin, hard shell that is harder than the one produced by carburizing, and can be completed in 20 to 30 minutes compared to several hours, so the parts have less opportunity to become distorted. It is typically used on small parts such as bolts, nuts, screws and small gears. The major drawback of cyaniding is that cyanide salts are poisonous. Carbonitriding Carbonitriding is similar to cyaniding except that a gaseous atmosphere of ammonia and hydrocarbons is used instead of sodium cyanide. The temperature to which the part is heated depends on whether or not it is to be quenched afterwards. Ferritic nitrocarburizing Ferritic nitrocarburizing diffuses mostly nitrogen and some carbon into the case of a workpiece below the critical temperature. Below the critical temperature the workpiece's microstructure does not convert to an austenitic phase, but stays in the ferritic phase, which is why it is called ferritic nitrocarburization. Applications Parts that are subject to high pressures and sharp impacts are still commonly case-hardened. Examples include firing pins and rifle bolt faces, or engine camshafts. In these cases, the surfaces requiring the hardness may be hardened selectively, leaving the bulk of the part in its original tough state. Firearms were a common item case-hardened in the past, as they required precision machining best done on low-carbon alloys, yet needed the hardness and wear resistance of a higher-carbon alloy. Many modern replicas of older firearms, particularly single-action revolvers, are still made with case-hardened frames, or with case coloring, which simulates the mottled pattern left by traditional charcoal and bone case-hardening. Another common application of case-hardening is on screws, particularly self-drilling screws. In order for the screws to be able to drill, cut and tap into other materials like steel, the drill point and the forming threads must be harder than the material(s) into which they are drilling. However, if the whole screw is uniformly hard, it will become very brittle and break easily. This is overcome by ensuring that only the surface is hardened and the core remains relatively softer and thus less brittle. For screws and fasteners, case-hardening is achieved by a simple heat treatment consisting of heating and then quenching. For theft prevention, lock shackles and chains are often case-hardened to resist cutting, whilst remaining less brittle inside to resist impact. As case-hardened components are difficult to machine, they are generally shaped before hardening.
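Because carburizing is diffusion-controlled, case depth is often approximated as growing with the square root of soak time at a given temperature. The sketch below illustrates that scaling; it is not taken from this article, and the Arrhenius constants are rough illustrative assumptions rather than measured values for any particular steel.

```python
import math

def case_depth_mm(temp_c: float, hours: float) -> float:
    """Rough case-depth estimate from sqrt(D * t) scaling.

    D0 and Q are illustrative assumptions for carbon diffusing in
    austenite, not data from any specific source.
    """
    R = 8.314            # gas constant, J/(mol*K)
    D0 = 2.0e-5          # assumed pre-exponential factor, m^2/s
    Q = 140_000.0        # assumed activation energy, J/mol
    T = temp_c + 273.15
    D = D0 * math.exp(-Q / (R * T))          # Arrhenius diffusivity, m^2/s
    return 1000.0 * math.sqrt(D * hours * 3600.0)

# Doubling the soak time deepens the case only by a factor of sqrt(2):
print(round(case_depth_mm(925, 4), 2))   # ~0.5 mm
print(round(case_depth_mm(925, 8), 2))   # ~0.7 mm
```

The square-root behaviour is why long soaks are needed for the deep cases that sharp tools require, while shallow wear-resistant cases form quickly.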
See also Differential hardening Diffusion hardening Quench polish quench Shot peening Surface engineering Von Stahel und Eysen References External links Case Hardening Surface Hardening of Steels Case Hardening Steel and Metal Metal heat treatments
Case-hardening
[ "Chemistry" ]
3,025
[ "Metallurgical processes", "Metal heat treatments" ]
1,557,503
https://en.wikipedia.org/wiki/Wake-on-ring
Wake-on-Ring (WOR) or Wake-on-Modem (WOM) is a specification that allows supported computers and devices to "wake up" or turn on from a sleeping, hibernating or "soft off" state (e.g. ACPI state G1 or G2), and begin operation. The basic premise is that a special signal is sent over phone lines to the computer through its dial-up modem, telling it to fully power on and begin operation. Common uses were archive databases and BBSes, though hobbyist use was also significant. Fax machines use a similar system, in which they are mostly idle until receiving an incoming fax signal, which spurs operation. This style of remote operation has mostly been supplanted by Wake-on-LAN, which is newer but works in much the same way. Additional resources "Wake on Modem" entry from Smart Computing Encyclopedia Networking standards BIOS Unified Extensible Firmware Interface Remote control
Wake-on-ring
[ "Technology", "Engineering" ]
207
[ "Networking standards", "Computing stubs", "Computer standards", "Computer networks engineering" ]
1,557,562
https://en.wikipedia.org/wiki/Propositional%20variable
In mathematical logic, a propositional variable (also called a sentence letter, sentential variable, or sentential letter) is an input variable (that can either be true or false) of a truth function. Propositional variables are the basic building blocks of propositional formulas, used in propositional logic and higher-order logics. Uses Formulas in logic are typically built up recursively from some propositional variables, some number of logical connectives, and some logical quantifiers. Propositional variables are the atomic formulas of propositional logic, and are often denoted using capital roman letters such as P, Q and R. Example In a given propositional logic, a formula can be defined as follows: Every propositional variable is a formula. Given a formula X, the negation ¬X is a formula. Given two formulas X and Y, and a binary connective b (such as the logical conjunction ∧), the expression (X b Y) is a formula. (Note the parentheses.) Through this construction, all of the formulas of propositional logic can be built up from propositional variables as a basic unit. Propositional variables should not be confused with metavariables, which appear in the typical axioms of propositional calculus; the latter effectively range over well-formed formulae, and are often denoted using lower-case greek letters such as φ, ψ and χ. Predicate logic Predicate letters with individual constants such as a and b attached to them (giving Pa or aRb), rather than object variables such as x and y (as in Px or xRy), are propositional constants. These propositional constants are atomic propositions, not containing propositional operators. The internal structure of such atomic propositions involves predicate letters such as P and Q, in association with bound individual variables (e.g., x, y) or individual constants such as a and b (singular terms from a domain of discourse D), ultimately taking a form such as Pa or aRb (or, with parentheses, P(a) and R(a, b)). Propositional logic is sometimes called zeroth-order logic because it does not consider the internal structure of atomic sentences, in contrast with first-order logic, which analyzes that internal structure. See also Boolean algebra (logic) Boolean data type Boolean domain Boolean function Logical value Predicate variable Propositional logic References Bibliography Smullyan, Raymond M. First-Order Logic. 1968. Dover edition, 1995. Chapter 1.1: Formulas of Propositional Logic. Propositional calculus Concepts in logic Logic symbols
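The recursive definition above ("every propositional variable is a formula; ¬X is a formula; (X b Y) is a formula") maps directly onto a small abstract syntax tree with an evaluator. Here is a minimal sketch; the class names and evaluator are illustrative, not drawn from the article.

```python
from dataclasses import dataclass

# Mirrors the recursive definition: a propositional variable is a
# formula; the negation of a formula is a formula; two formulas joined
# by a binary connective (here, conjunction) form a formula.

@dataclass
class Var:
    name: str

@dataclass
class Not:
    operand: "Formula"

@dataclass
class And:
    left: "Formula"
    right: "Formula"

Formula = Var | Not | And

def evaluate(f: Formula, assignment: dict[str, bool]) -> bool:
    """Truth value of a formula under an assignment to its variables."""
    match f:
        case Var(name):
            return assignment[name]
        case Not(x):
            return not evaluate(x, assignment)
        case And(x, y):
            return evaluate(x, assignment) and evaluate(y, assignment)
    raise TypeError("not a formula")

# (P AND NOT Q) under P=True, Q=False evaluates to True:
print(evaluate(And(Var("P"), Not(Var("Q"))), {"P": True, "Q": False}))
```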
Propositional variable
[ "Mathematics" ]
526
[ "Symbols", "Mathematical symbols", "Logic symbols" ]
1,557,574
https://en.wikipedia.org/wiki/Conservation%20genetics
Conservation genetics is an interdisciplinary subfield of population genetics that aims to understand the dynamics of genes in a population for the purpose of natural resource management, conservation of genetic diversity, and the prevention of species extinction. Scientists involved in conservation genetics come from a variety of fields including population genetics, research in natural resource management, molecular ecology, molecular biology, evolutionary biology, and systematics. The genetic diversity within species is one of the three fundamental components of biodiversity (along with species diversity and ecosystem diversity), so it is an important consideration in the wider field of conservation biology. Genetic diversity Genetic diversity is the total amount of genetic variability within a species. It can be measured in several ways, including: observed heterozygosity, expected heterozygosity, the mean number of alleles per locus, the percentage of loci that are polymorphic, and estimated effective population size. Genetic diversity on the population level is a crucial focus for conservation genetics as it influences both the health of individuals and the long-term survival of populations: decreased genetic diversity has been associated with reduced average fitness of individuals, such as high juvenile mortality, reduced immunity, diminished population growth, and ultimately, higher extinction risk. Heterozygosity, a fundamental measurement of genetic diversity in population genetics, plays an important role in determining the chance of a population surviving environmental change, novel pathogens not previously encountered, as well as the average fitness within a population over successive generations. Heterozygosity is also deeply connected, in population genetics theory, to population size (which itself clearly has a fundamental importance to conservation). All things being equal, small populations will be less heterozygous, across their whole genomes, than comparable but larger populations. This lower heterozygosity (i.e. low genetic diversity) renders small populations more susceptible to the challenges mentioned above. In a small population, over successive generations and without gene flow, the probability of mating with close relatives becomes very high, leading to inbreeding depression, a reduction in the average fitness of individuals within a population. The reduced fitness of the offspring of closely related individuals is fundamentally tied to the concept of heterozygosity, as the offspring of these kinds of pairings are, by necessity, less heterozygous (more homozygous) across their whole genomes than outbred individuals. A diploid individual with the same maternal and paternal grandfather, for example, will have a much higher chance of being homozygous at any loci inherited from the paternal copies of each of their parents' genomes than would an individual with unrelated maternal and paternal grandfathers (each diploid individual inherits one copy of their genome from their mother and one from their father). High homozygosity (low heterozygosity) reduces fitness because it exposes the phenotypic effects of recessive alleles at homozygous sites.
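Expected heterozygosity, one of the measures listed above, is commonly computed from allele frequencies as He = 1 − Σ pᵢ². A small illustrative sketch follows; the allele frequencies are invented for the example.

```python
def expected_heterozygosity(allele_freqs: list[float]) -> float:
    """Hardy-Weinberg expected heterozygosity at a single locus:
    He = 1 - sum of squared allele frequencies."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

# Hypothetical locus with three alleles at frequencies 0.5, 0.3, 0.2:
print(expected_heterozygosity([0.5, 0.3, 0.2]))  # 0.62
```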
Selection can favour the maintenance of alleles which reduce the fitness of homozygotes, the textbook example being the sickle-cell beta-globin allele, which is maintained at high frequencies in populations where malaria is endemic due to the highly adaptive heterozygous phenotype (resistance to the malarial parasite Plasmodium falciparum). Low genetic diversity also reduces the opportunities for chromosomal crossover during meiosis to create new combinations of alleles on chromosomes, effectively increasing the average length of unrecombined tracts of chromosomes inherited from parents. This in turn reduces the efficacy of selection, across successive generations, to remove fitness-reducing alleles and promote fitness-enhancing alleles in a population. A simple hypothetical example would be two adjacent genes, A and B, on the same chromosome in an individual. If the allele at A promotes fitness "one point", while the allele at B reduces fitness "one point", but the two genes are inherited together, then selection cannot favour the allele at A while penalising the allele at B; the fitness balance is "zero points". Recombination can swap out alternative alleles at A and B, allowing selection to promote the optimal alleles to the optimal frequencies in the population, but only if there are alternative alleles to choose between. The fundamental connection between genetic diversity and population size in population genetics theory can be clearly seen in the classic population genetics measure of genetic diversity, the Watterson estimator, in which genetic diversity is measured as a function of effective population size and mutation rate. Given the relationship between population size, mutation rate, and genetic diversity, it is clearly important to recognise populations at risk of losing genetic diversity before problems arise as a result of that loss. Once lost, genetic diversity can only be restored by mutation and gene flow. If a species is already on the brink of extinction there will likely be no populations to use to restore diversity by gene flow, and any given population will be small; diversity will therefore accumulate in that population by mutation much more slowly than it would in a comparable but bigger population (since there are fewer individuals whose genomes are mutating in a smaller population than in a bigger one). Contributors to extinction include: inbreeding and inbreeding depression; the accumulation of deleterious mutations; a decrease in the frequency of heterozygotes in a population (heterozygosity), which decreases a species' ability to evolve to deal with change in the environment; outbreeding depression; fragmented populations; taxonomic uncertainties, which can lead to a reprioritization of conservation efforts; genetic drift as the main evolutionary process, instead of natural selection; management units within species; and hybridization with allochthonous species, with the progressive substitution of the initial endemic species. Techniques Specific genetic techniques are used to assess the genomes of a species regarding specific conservation issues as well as general population structure. This analysis can be done in two ways, with current DNA of individuals or historic DNA.
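For reference, the Watterson estimator mentioned above has a standard closed form (textbook population-genetics notation, not reproduced from this article):

\[
\hat{\theta}_W = \frac{S}{\sum_{i=1}^{n-1} \frac{1}{i}},
\qquad
\operatorname{E}\!\left[\hat{\theta}_W\right] = \theta = 4 N_e \mu \quad \text{(diploids)},
\]

where S is the number of segregating sites observed in a sample of n sequences, N_e is the effective population size, and μ is the per-generation mutation rate; a smaller N_e therefore directly implies lower expected diversity. The laboratory techniques that follow are, in effect, ways of measuring quantities like S.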
Techniques for analysing the differences between individuals and populations include alloenzymes, restriction fragment length polymorphisms, amplified fragment length polymorphisms, random amplification of polymorphic DNA, single-strand conformation polymorphism, minisatellites, microsatellites, single-nucleotide polymorphisms, and DNA sequencing. These different techniques focus on different variable areas of the genomes within animals and plants. The specific information that is required determines which techniques are used and which parts of the genome are analysed. For example, mitochondrial DNA in animals has a high substitution rate, which makes it useful for identifying differences between individuals. However, it is only inherited in the female line, and the mitochondrial genome is relatively small. In plants, the mitochondrial DNA has very high rates of structural mutations, so it is rarely used for genetic markers; the chloroplast genome can be used instead. Other sites in the genome that are subject to high mutation rates, such as the major histocompatibility complex and the microsatellites and minisatellites, are also frequently used. These techniques can provide information on long-term conservation of genetic diversity and expound demographic and ecological matters such as taxonomy. Another technique is using historic DNA for genetic analysis. Historic DNA is important because it allows geneticists to understand how species reacted to changes in conditions in the past. This is a key to understanding the reactions of similar species in the future. Techniques using historic DNA include looking at preserved remains found in museums and caves. Museums are used because there is a wide range of species that are available to scientists all over the world. Evidence found in caves provides a longer perspective and does not disturb the animals. Another technique that relies on the specific genetics of an individual is noninvasive monitoring, which uses DNA extracted from organic material that an individual leaves behind, such as a feather. Environmental DNA (eDNA) can be extracted from soil, water, and air. Organisms deposit tissue cells into the environment, and the degradation of these cells results in DNA being released into the environment. This too avoids disrupting the animals and can provide information about the sex, movement, kinship and diet of an individual. Other, more general techniques can be used to correct genetic factors that lead to extinction and risk of extinction. For example, multiple steps can be taken to minimize inbreeding and increase genetic variation. Increasing heterozygosity through immigration, increasing the generational interval through cryopreservation or breeding from older animals, and increasing the effective population size through equalization of family size all help minimize inbreeding and its effects. Deleterious alleles arise through mutation; however, certain recessive ones can become more prevalent due to inbreeding. Deleterious mutations that arise from inbreeding can be removed by purging, or natural selection. Populations raised in captivity with the intent of being reintroduced in the wild suffer from adaptations to captivity.
Inbreeding depression, loss of genetic diversity, and genetic adaptation to captivity are disadvantageous in the wild, and many of these issues can be dealt with through the aforementioned techniques aimed at increasing heterozygosity. In addition, creating a captive environment that closely resembles the wild and fragmenting the populations so there is less response to selection also help reduce adaptation to captivity. Solutions to minimize the factors that lead to extinction and risk of extinction often overlap because the factors themselves overlap. For example, deleterious mutations are added to populations through mutation; however, the deleterious mutations conservation biologists are concerned with are the ones brought about by inbreeding, because those are the ones that can be taken care of by reducing inbreeding. Here the techniques to reduce inbreeding also help decrease the accumulation of deleterious mutations. Applications These techniques have wide-ranging applications. One example is in defining species and subspecies of salmonids. Hybridization is an especially important issue in salmonids, and this has wide-ranging conservation, political, social and economic implications. A more specific example is the cutthroat trout. In analyses of its mtDNA and alloenzymes, hybridization between native and non-native species has been shown to be one of the major factors contributing to the decline in its populations. This has led to efforts to remove some hybridized populations so native populations could breed more readily. Cases like these impact everything from the economy of local fishermen to larger companies, such as timber companies. Defining species and subspecies has conservation implications in mammals, too. For example, the northern white rhino and southern white rhino were previously mistakenly identified as the same species given their morphological similarities, but recent mtDNA analyses showed that the two are genetically distinct. The northern white rhino population has since dwindled to near-extinction due to a poaching crisis, and the prior assumption that it could freely breed with the southern population has been revealed to be a misguided approach in conservation efforts. More recent applications include using forensic genetic identification to identify species in cases of poaching. Wildlife DNA registers are used to regulate the trade of protected species and to detect species laundering and poaching. Conservation genetics techniques can be used alongside a variety of scientific disciplines. For example, landscape genetics has been used in conjunction with conservation genetics to identify corridors and population dispersal barriers, giving insight into conservation management. Implications New technology in conservation genetics has many implications for the future of conservation biology. At the molecular level, new technologies are advancing. Some of these techniques include the analysis of minisatellites and the MHC. These molecular techniques have wider effects, from clarifying taxonomic relationships, as in the previous example, to determining the best individuals to reintroduce to a population for recovery by determining kinship. These effects then have consequences that reach even further. Conservation of species has implications for humans in the economic, social, and political realms.
In the biological realm, increased genotypic diversity has been shown to help ecosystem recovery, as seen in a community of grasses which was able to resist disturbance from grazing geese through greater genotypic diversity. Because species diversity increases ecosystem function, increasing biodiversity through new conservation genetic techniques has wider-reaching effects than before. A short list of topics a conservation geneticist may research includes: Phylogenetic classification of species, subspecies, geographic races, and populations, and measures of phylogenetic diversity and uniqueness. Identifying hybrid species, hybridization in natural populations, and assessing the history and extent of introgression between species. Population genetic structure of natural and managed populations, including identification of Evolutionarily Significant Units (ESUs) and management units for conservation. Assessing genetic variation within a species or population, including small or endangered populations, and estimates such as effective population size (Ne). Measuring the impact of inbreeding and outbreeding depression, and the relationship between heterozygosity and measures of fitness (see Fisher's fundamental theorem of natural selection). Evidence of disrupted mate choice and reproductive strategy in disturbed populations. Forensic applications, especially for the control of trade in endangered species. Practical methods for monitoring and maximizing genetic diversity during captive breeding programs and re-introduction schemes, including mathematical models and case studies. Conservation issues related to the introduction of genetically modified organisms. The interaction between environmental contaminants and the biology and health of an organism, including changes in mutation rates and adaptation to local changes in the environment (e.g. industrial melanism). New techniques for noninvasive genotyping (see noninvasive genotyping for conservation). Monitoring genetic variability in populations and assessing fitness-related genes among populations. See also Animal genetic resources Forest genetic resources The State of the World's Animal Genetic Resources for Food and Agriculture Notes References External links What is Conservation Genetics? Science Genetics Blackwell - synergy UTM Departments UWYO PNAS Science ESF Conservation biology Applied genetics Population genetics Rare breed conservation
Conservation genetics
[ "Biology" ]
2,832
[ "Conservation biology" ]
1,557,599
https://en.wikipedia.org/wiki/Mike%20Lesk
Michael E. Lesk (born 1945) is an American computer scientist. Biography In the 1960s, Michael Lesk worked for the SMART Information Retrieval System project, writing much of its retrieval code and doing many of the retrieval experiments, while also obtaining a BA degree in Physics and Chemistry from Harvard College in 1964 and a PhD from Harvard University in Chemical Physics in 1969. From 1970 to 1984, Lesk worked at Bell Labs in the group that built Unix. Lesk wrote Unix tools for word processing (tbl, refer, and the standard ms macro package, all for troff), for compiling (Lex), and for networking (uucp). He also wrote the Portable I/O Library (the predecessor to stdio.h in C) and contributed significantly to the development of the C language preprocessor. In 1984, he left to work for Bellcore, where he managed the computer science research group. There, Lesk worked on specific information systems applications, mostly involving geography (a system for driving directions) and dictionaries (a system for disambiguating words in context). In the 1990s, Lesk worked on a large chemical information system, the CORE project, with Cornell, the Online Computer Library Center, the American Chemical Society, and the Chemical Abstracts Service. From 1998 to 2002, Lesk headed the National Science Foundation's Division of Information and Intelligent Systems, where he oversaw Phase 2 of the NSF's Digital Library Initiative. He was a professor on the faculty of the Library and Information Science Department, School of Communication & Information, Rutgers University, from 2003 to 2023. Lesk received the Flame award for lifetime achievement from Usenix in 1994, was named a Fellow of the ACM in 1996, and was elected to the National Academy of Engineering in 2005. He has authored a number of books. See also Lesk algorithm Bibliography Selected books by Michael Lesk: Practical Digital Libraries: Books, Bytes, and Bucks, 1997. Understanding Digital Libraries, 2nd ed., December 2004. References External links Michael Lesk personal website 1945 births Living people Harvard College alumni American computer programmers Scientists at Bell Labs Rutgers University faculty Unix people Members of the United States National Academy of Engineering 1996 fellows of the Association for Computing Machinery Troff Computational linguistics researchers Data miners
Mike Lesk
[ "Mathematics" ]
467
[ "Troff", "Mathematical markup languages" ]
1,557,627
https://en.wikipedia.org/wiki/Cross-sectional%20study
In medical research, epidemiology, social science, and biology, a cross-sectional study (also known as a cross-sectional analysis, transverse study, or prevalence study) is a type of observational study that analyzes data from a population, or a representative subset, at a specific point in time—that is, cross-sectional data. In economics, cross-sectional studies typically involve the use of cross-sectional regression, in order to sort out the existence and magnitude of causal effects of one independent variable upon a dependent variable of interest at a given point in time. They differ from time series analysis, in which the behavior of one or more economic aggregates is traced through time. In medical research, cross-sectional studies differ from case-control studies in that they aim to provide data on the entire population under study, whereas case-control studies typically include only individuals who have developed a specific condition and compare them with a matched sample, often a tiny minority, of the rest of the population. Cross-sectional studies are descriptive studies (neither longitudinal nor experimental). Unlike case-control studies, they can be used to describe not only the odds ratio, but also absolute risks and relative risks from prevalences (sometimes called the prevalence risk ratio, or PRR). They may be used to describe some feature of the population, such as the prevalence of an illness, but cannot prove cause and effect. Longitudinal studies differ from both in making a series of observations more than once on members of the study population over a period of time. Healthcare Cross-sectional studies involve data collected at a defined time. They are often used to assess the prevalence of acute or chronic conditions, but cannot be used to answer questions about the causes of disease or the results of intervention. Cross-sectional data cannot be used to infer causality because temporality is not known. They may also be described as censuses. Cross-sectional studies may involve special data collection, including questions about the past, but they often rely on data originally collected for other purposes. They are moderately expensive, and are not suitable for the study of rare diseases. Difficulty in recalling past events may also contribute to bias. Advantages The use of routinely collected data allows large cross-sectional studies to be made at little or no expense. This is a major advantage over other forms of epidemiological study. A natural progression has been suggested from cheap cross-sectional studies of routinely collected data which suggest hypotheses, to case-control studies testing them more specifically, then to cohort studies and trials which cost much more and take much longer, but may give stronger evidence. In a cross-sectional survey, a specific group is looked at to see if an activity, say alcohol consumption, is related to the health effect being investigated, say cirrhosis of the liver. If alcohol use is correlated with cirrhosis of the liver, this would support the hypothesis that alcohol use may be associated with cirrhosis. Disadvantages Routine data may not be designed to answer the specific question. Routinely collected data do not normally describe which variable is the cause and which is the effect. Cross-sectional studies using data originally collected for other purposes are often unable to include data on confounding factors, other variables that affect the relationship between the putative cause and effect.
For example, data only on present alcohol consumption and cirrhosis would not allow the role of past alcohol use, or of other causes, to be explored. Cross-sectional studies are very susceptible to recall bias. Most case-control studies collect specifically designed data on all participants, including data fields designed to allow the hypothesis of interest to be tested. However, in issues where strong personal feelings may be involved, specific questions may be a source of bias. For example, past alcohol consumption may be incorrectly reported by an individual wishing to reduce their personal feelings of guilt. Such bias may be less in routinely collected statistics, or effectively eliminated if the observations are made by third parties, for example taxation records of alcohol by area. In addition, there may be a cohort effect, in which differences arising from social and environmental influences are treated as developmental changes due to ageing. Since such differences track the divisions between generations and ethnic groups (a group of people experiencing a common historical event is affected by a common influence), it is difficult to establish the causal role of the event itself. Weaknesses of aggregated data Cross-sectional studies can contain individual-level data (one record per individual, for example, in national health surveys). However, in modern epidemiology it may be impossible to survey the entire population of interest, so cross-sectional studies often involve secondary analysis of data collected for another purpose. In many such cases, no individual records are available to the researcher, and group-level information must be used. Major sources of such data are often large institutions like the Census Bureau or the Centers for Disease Control in the United States. Recent census data are not provided on individuals; in the UK, for example, individual census data are released only after a century. Instead, data are aggregated, usually by administrative area. Inferences about individuals based on aggregate data are weakened by the ecological fallacy. Also consider the potential for committing the "atomistic fallacy", where assumptions about aggregated counts are made based on the aggregation of individual-level data (such as averaging census tracts to calculate a county average). For example, it might be true that there is no correlation between infant mortality and family income at the city level, while still being true that there is a strong relationship between infant mortality and family income at the individual level. All aggregate statistics are subject to compositional effects, so that what matters is not only the individual-level relationship between income and infant mortality, but also the proportions of low-, middle-, and high-income individuals in each city. Because case-control studies are usually based on individual-level data, they do not have this problem. Economics In economics, cross-sectional analysis has the advantage of avoiding various complicating aspects of the use of data drawn from various points in time, such as serial correlation of residuals. It also has the advantage that the data analysis itself does not need an assumption that the nature of the relationships between variables is stable over time, though this comes at the cost of requiring caution if the results for one time period are to be assumed valid at some different point in time.
An example of cross-sectional analysis in economics is the regression of money demand—the amounts that various people hold in highly liquid financial assets—at a particular time upon their income, total financial wealth, and various demographic factors. Each data point is for a particular individual or family, and the regression is conducted on a statistical sample drawn at one point in time from the entire population of individuals or families. In contrast, an intertemporal analysis of money demand would use data on an entire country's holdings of money at each of various points in time, and would regress that on contemporaneous (or near-contemporaneous) income, total financial wealth, and some measure of interest rates. The cross-sectional study has the advantage that it can investigate the effects of various demographic factors (age, for example) on individual differences; but it has the disadvantage that it cannot find the effect of interest rates on money demand, because in the cross-sectional study at a particular point in time all observed units are faced with the same current level of interest rates. See also Longitudinal study References Sources Epidemiology for the Uninitiated by Coggon, Rose, and Barker, Chapter 8, "Case-control and cross-sectional studies", BMJ (British Medical Journal) Publishing, 1997 Research Methods Knowledge Base by William M. K. Trochim, Web Center for Social Research Methods, copyright 2006 Cross-Sectional Design by Michelle A. Saint-Germain External links Study Design Tutorial Cornell University College of Veterinary Medicine Epidemiology Observational study Nursing research Mathematical and quantitative methods (economics) Research methods Social research
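A minimal sketch of the kind of money-demand regression described above, using ordinary least squares on a synthetic cross-section (all variable names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # one observation per family, all at the same point in time

# Synthetic cross-sectional data (purely illustrative).
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)
wealth = rng.lognormal(mean=11.0, sigma=0.8, size=n)
age = rng.uniform(20.0, 80.0, size=n)          # a demographic factor
money = 0.3 * income + 0.05 * wealth + 10.0 * age + rng.normal(0, 2000, n)

# OLS regression of money demand on income, wealth, and age.
X = np.column_stack([np.ones(n), income, wealth, age])
beta, *_ = np.linalg.lstsq(X, money, rcond=None)
print(dict(zip(["const", "income", "wealth", "age"], beta.round(3))))

# Note: an interest-rate coefficient is not estimable here, since every
# observation in a single cross-section faces the same interest rate.
```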
Cross-sectional study
[ "Environmental_science" ]
1,634
[ "Epidemiology", "Environmental social science" ]
1,557,634
https://en.wikipedia.org/wiki/Propositional%20formula
In propositional logic, a propositional formula is a type of syntactic formula which is well formed. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula. A propositional formula is constructed from simple propositions, such as "five is greater than three", or propositional variables such as p and q, using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example: (p AND NOT q) IMPLIES (p OR q). In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "2 + 2" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance. Propositions For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", and "... IS EQUIVALENT TO ...". The linking semicolon ";" and the connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas). For example, the assertion "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ). Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular object of sensation, e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there, behind the rocks."). Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous. Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off", "Tomorrow is Friday". For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted. Relationship between propositional and predicate formulas The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions". It breaks a simple sentence down into two parts: (i) its subject (the object (singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure "___|predicate", and the predicate in turn is generalized to all things with that property.
Example: "This blue pig has wings" becomes two sentences in the propositional calculus: "This pig has wings" AND "This pig is blue", whose internal structure is not considered. In contrast, in the predicate calculus, the first sentence breaks into "this pig" as the subject, and "has wings" as the predicate. Thus it asserts that object "this pig" is a member of the class (set, collection) of "winged things". The second sentence asserts that object "this pig" has an attribute "blue" and thus is a member of the class of "blue things". One might choose to write the two sentences connected with AND as: p|W AND p|B The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", p is either found to be a member of this domain or not. Thus there is a relationship W (wingedness) between p (pig) and { T, F }, W(p) evaluates to { T, F } where { T, F } is the set of the Boolean values "true" and "false". Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So one now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.: ( B(p) AND W(p) ) evaluates to { T, F } In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. called logical quantifiers are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the formal validity of the following statement: "All blue pigs have wings but some pigs have no wings, hence some pigs are not blue". Identity Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain a "theory" of IDENTITY. Some authors refer to "predicate logic with identity" to emphasize this extension. See more about this below. An algebra of propositions, the propositional calculus An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &, ∨, =, ≡, ∧, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra. For a well-formed sequence of symbols in the algebra —a formula— to have some usefulness outside the algebra the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated. When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and evaluation-methods is usually called the propositional calculus or the sentential calculus. 
While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and NOT). Usefulness of propositional formulas Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument sound? "Given that consciousness is sufficient for an artificial intelligence and only conscious entities can pass the Turing test, before we can conclude that a robot is an artificial intelligence the robot must pass the Turing test." Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs. Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables. For example, one might write down a truth table for how binary addition should behave given the addition of variables "b" and "a" and "carry_in" "ci", and the results "carry_out" "co" and "sum" Σ. Example: in row 5 of such a table, ( (b+a) + ci ) = ( (1+0) + 1 ) = the number 2. Written as a binary number this is 10, i.e. "co" = 1 and Σ = 0, as shown in the two right-most columns of the table. Propositional variables The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named p, q, or P, Q, etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = p (here the symbol = means " ... is assigned the variable named ...") or "I only go to the movies on Monday" = q. Truth-value assignments, formula evaluations Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences. Truth values in rhetoric, philosophy and mathematics The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: analytic—true no matter what (e.g. tautology), and synthetic—derived from experience and thereby susceptible to confirmation by third parties (the verification theory of meaning). Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, consider my utterance "That cow is blue!" Is this statement a TRUTH? Truly I said it. And maybe I am seeing a blue cow—unless I am lying, my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue", and an ability to match the templates against the object of sensation (if indeed there is one). Truth values in engineering Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments.
In their quest for robustness, engineers prefer to pull known objects from a small library—objects that have well-defined, predictable behaviors even in large combinations (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN }, etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN = 0) by use of a comparator. Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves—door, switches, wires, circuit board, etc.) might reveal that, on the circuit board, "node 22" goes to +0 volts when the contacts of switch "SW_D" are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed"). The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS. Propositional connectives Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional connectives. Examples of connectives include: The unary negation connective: if φ is a formula, then ¬φ is a formula. The classical binary connectives ∧, ∨, → and ↔: thus, for example, if φ and ψ are formulas, so is (φ → ψ). Other binary connectives, such as NAND, NOR, and XOR. The ternary connective IF ... THEN ... ELSE ... Constant 0-ary connectives ⊤ and ⊥ (alternately, constants { T, F }, { 1, 0 }, etc.). The "theory-extension" connective EQUALS (alternately, IDENTITY, or the sign " = " as distinguished from the logical connective ↔). Connectives of rhetoric, philosophy and mathematics The following are the connectives common to rhetoric, philosophy and mathematics, together with their truth tables. The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T" and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g. the assertion "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be). The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of these are shown in the table. Engineering connectives In general, the engineering connectives are just the same as the mathematics connectives except that they tend to evaluate with "1" = "T" and "0" = "F". This is done for the purposes of analysis/minimization and synthesis of formulas by use of the notion of minterms and Karnaugh maps (see below).
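The binary-addition synthesis example above corresponds to the standard full-adder formulas, which can be read straight off such a truth table. These are the usual textbook minimizations, not formulas given in this text:

```python
def full_adder(a: int, b: int, ci: int) -> tuple[int, int]:
    """One-bit full adder synthesized from its truth table:
    sum = a XOR b XOR ci; carry_out = majority(a, b, ci)."""
    s = a ^ b ^ ci
    co = (a & b) | (a & ci) | (b & ci)
    return co, s

# The row discussed above: b=1, a=0, ci=1 gives (1+0)+1 = 2 = binary 10,
# i.e. carry_out = 1 and sum = 0.
print(full_adder(0, 1, 1))  # (1, 0)
```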
Engineers also use the words logical product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a). CASE connective: IF ... THEN ... ELSE ... The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory and computation theory and is the connective responsible for conditional gotos (jumps, branches). From this one connective all other connectives can be constructed (see more below). Although " IF c THEN b ELSE a " sounds like an implication it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of two alternatives "a" or "b" (hence the name switch statement in the C programming language). The following three propositions are equivalent (as indicated by the logical equivalence sign ≡ ): ( IF 'counter is zero' THEN 'go to instruction b' ELSE 'go to instruction a' ) ≡ ( (c → b) & (~c → a) ) ≡ " ( IF 'counter is zero' THEN 'go to instruction b' ) AND ( IF 'it is NOT the case that counter is zero' THEN 'go to instruction a' ) " ≡ ( (c & b) ∨ (~c & a) ) ≡ " ( 'counter is zero' AND 'go to instruction b' ) OR ( 'it is NOT the case that counter is zero' AND 'go to instruction a' ) " Thus IF ... THEN ... ELSE—unlike implication—does not evaluate to an ambiguous "TRUTH" when the first proposition is false, i.e. c = F in (c → b). For example, most people would reject the following compound proposition as a nonsensical non sequitur because the second sentence is not connected in meaning to the first. Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' " evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises in the east' evaluates as a TRUTH. In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication. The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two "objects" (the two alternatives b and a), and it selects between them exhaustively and unambiguously. In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a) ). The two formulas are equivalent as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually exclusive, outcomes. Electrical engineers call the CASE operator a multiplexer. IDENTITY and evaluation The first table of this section stars (***) the entry "logical equivalence" to note the fact that "logical equivalence" is not the same thing as "identity". For example, most would agree that the assertion "That cow is blue" is identical to the assertion "That cow is blue". On the other hand, logical equivalence sometimes appears in speech as in this example: " 'The sun is shining' means 'I'm biking' ". Translated into a propositional formula the words become: "IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'": "IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as (s ↔ b).
As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the left, the use of the IDENTITY sign = is appropriate: ((s → b) & (b → s)) = (s ↔ b) Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in Principia Mathematica. For more about the philosophy of the notion of IDENTITY see Leibniz's law. As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, "logic" is insufficient for mathematics and the deductive sciences. In fact the sign comes into the propositional calculus when a formula is to be evaluated. In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1, p2, p3, ... }) and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens). The result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called "the truth values" {T, F} (or {1, 0}, etc.) relative to the other symbols. For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any well-formed formulas (wffs) A and B in his "formal statement calculus" L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p1, p2, p3 in a wff is assigned an arbitrary truth value { T, F }. The two definitions (i) and (ii) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words " v(A) does not mean v(~A)". Definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of definition (i). In particular (ii) assigns the value F (or a meaning of "F") to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula. Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system). More complex formulas As shown above, the CASE (IF c THEN b ELSE a) connective is constructed either from the 2-argument connectives IF ... THEN ... and AND, or from OR and AND, together in each case with the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n) and OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas. Electrical engineers use drawn symbols and connect them with lines that stand for the mathematical acts of substitution and replacement.
They then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of "combinatorial logic" (i.e. connectives without feedback) such as "decoders", "encoders", "multifunction gates", "majority logic", "binary adders", "arithmetic logic units", etc. Definitions A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The symbolism =Df follows the convention of Reichenbach. Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement. Definition of a new variable: (c & d) =Df s OR: ~(~a & ~b) =Df (a ∨ b) IMPLICATION: (~a ∨ b) =Df (a → b) XOR: (~a & b) ∨ (a & ~b) =Df (a ⊕ b) LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =Df ( a ≡ b ) Axiom and definition schemas The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or "schemata"), that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below. Example: In the definition (~a ∨ b) =Df (a → b), other variable-symbols such as "SW2" and "CON1" might be used, i.e. formally: a =Df SW2, b =Df CON1, so we would have as an instance of the definition schema (~SW2 ∨ CON1) =Df (SW2 → CON1) Substitution versus replacement Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be replaced in all instances throughout the overall formula. Example: Begin with (c & d) ∨ (p & ~(c & ~d)), and let (q1 & ~q2) ≡ d. Now wherever variable "d" occurs, substitute (q1 & ~q2): (c & (q1 & ~q2)) ∨ (p & ~(c & ~(q1 & ~q2))) Replacement: (i) the formula to be replaced must be within a tautology, i.e. logically equivalent (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution it is permissible for the replacement to occur only in one place (i.e. for one formula). Example: Use this set of formula schemas/equivalences: ( (a ∨ 0) ≡ a ), ( (a & ~a) ≡ 0 ), ( (~a ∨ b) =Df (a → b) ), ( ~(~a) ≡ a ). For instance, in ( a ∨ (b & ~b) ) the sub-formula (b & ~b) may be replaced, in that one place, by 0, and then ( a ∨ 0 ) by a. Inductive definition The classical presentation of propositional logic (see Enderton 2002) uses the connectives ¬, ∧, ∨, →, ↔. The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that: Each propositional variable in the set is a formula, (¬φ) is a formula whenever φ is, and (φ □ ψ) is a formula whenever φ and ψ are formulas and □ is one of the binary connectives ∧, ∨, →, ↔. This inductive definition can be easily extended to cover additional connectives. The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let XV denote the set of all strings from an alphabet including symbols in V, left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula building operation on the strings of XV: given a string z, the operation E¬ returns (¬z); given strings y and z, the operation E∧ returns (y ∧ z). There are similar operations E∨, E→, and E↔ corresponding to the other binary connectives.
The set of formulas over V is defined to be the smallest subset of XV containing V and closed under all the formula building operations. Parsing formulas The following "laws" of the propositional calculus are used to "reduce" complex formulas. The "laws" can be verified easily with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2ⁿ combinations of truth-values for its n distinct variables will result in a column of 1's (T's) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its left-hand and right-hand formulas are equivalent (or identical) they can be substituted for one another. Example: The following truth table is De Morgan's law for the behavior of NOT over OR: ~(a ∨ b) ≡ (~a & ~b). To the left of the principal connective ≡ (yellow column labelled "taut") the formula ~(b ∨ a) evaluates to (1, 0, 0, 0) under the label "P". On the right of "taut" the formula (~(b) & ~(a)) also evaluates to (1, 0, 0, 0) under the label "Q". As the two columns have equivalent evaluations, the logical equivalence ≡ under "taut" evaluates to (1, 1, 1, 1), i.e. P ≡ Q. Thus either formula can be substituted for the other if it appears in a larger formula. Enterprising readers might challenge themselves to invent an "axiomatic system" that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below. If used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be well-formed formulas and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance, ( p ∨ 0 ) ≡ ( 0 ∨ p ), and in another instance ( 1 ∨ q ) ≡ ( q ∨ 1 ), etc. Connective seniority (symbol rank) In general, to avoid confusion during analysis and evaluation of propositional formulas, one can make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To "well-form" a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness: ≡ (LOGICAL EQUIVALENCE) → (IMPLICATION) & (AND) ∨ (OR) ~ (NOT) ∀x (FOR ALL x) ∃x (THERE EXISTS AN x) = (IDENTITY) + (arithmetic sum) * (arithmetic multiply) ' (arithmetic successor).
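The claim above, that each law can be "verified easily with truth tables", is mechanical enough to automate. Here is a minimal Python sketch (an illustration, not the article's own apparatus) that checks a law by evaluating it over all 2ⁿ rows; De Morgan's law for NOT over OR, the example just given, serves as the test case.

from itertools import product

def tautology(law, nvars):
    # a law is a tautology when it evaluates true on every row
    return all(law(*row) for row in product([False, True], repeat=nvars))

# ~(a v b) ≡ (~a & ~b): the biconditional is modelled with ==
de_morgan = lambda a, b: (not (a or b)) == ((not a) and (not b))
print(tautology(de_morgan, 2))   # True

Any of the commutative, associative, distributive or absorption laws listed below can be checked the same way.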
Using this seniority ranking, a formula can be parsed—but because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory: Example: " d & c ∨ w " rewritten is ( (d & c) ∨ w ) Example: " a & a → b ≡ a & ~a ∨ b " rewritten (rigorously) is ≡ has seniority: ( ( a & a → b ) ≡ ( a & ~a ∨ b ) ) → has seniority: ( ( a & (a → b) ) ≡ ( a & ~a ∨ b ) ) & has seniority both sides: ( ( (a) & (a → b) ) ≡ ( (a) & (~a ∨ b) ) ) ~ has seniority: ( ( (a) & (a → b) ) ≡ ( (a) & (~(a) ∨ b) ) ) check 8 ( -parentheses and 8 ) -parentheses: ( ( (a) & (a → b) ) ≡ ( (a) & (~(a) ∨ b) ) ) Example: d & c ∨ p & ~(c & ~d) ≡ c & d ∨ p & c ∨ p & ~d rewritten is ( ( (d & c) ∨ ( p & ~( c & ~(d) ) ) ) ≡ ( (c & d) ∨ (p & c) ∨ (p & ~(d)) ) ) Commutative and associative laws Both AND and OR obey the commutative law and associative law: Commutative law for OR: ( a ∨ b ) ≡ ( b ∨ a ) Commutative law for AND: ( a & b ) ≡ ( b & a ) Associative law for OR: (( a ∨ b ) ∨ c ) ≡ ( a ∨ (b ∨ c) ) Associative law for AND: (( a & b ) & c ) ≡ ( a & (b & c) ) Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g. NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example: ( (c & d) ∨ (p & c) ∨ (p & ~d) ) above should be written ( ((c & d) ∨ (p & c)) ∨ (p & ~(d)) ) or possibly ( (c & d) ∨ ( (p & c) ∨ (p & ~(d)) ) ) However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate. Omitting parentheses with regards to a single-variable NOT: While ~(a) where a is a single variable is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b). Distributive laws OR distributes over AND and AND distributes over OR. NOT does not distribute over AND or OR. See below about De Morgan's law: Distributive law for OR: ( c ∨ ( a & b) ) ≡ ( (c ∨ a) & (c ∨ b) ) Distributive law for AND: ( c & ( a ∨ b) ) ≡ ( (c & a) ∨ (c & b) ) De Morgan's laws NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table): De Morgan's law for OR: ¬(a ∨ b) ≡ (¬a & ¬b) De Morgan's law for AND: ¬(a & b) ≡ (¬a ∨ ¬b) Laws of absorption Absorption, in particular the first one, causes the "laws" of logic to differ from the "laws" of arithmetic: Absorption (idempotency) for OR: (a ∨ a) ≡ a Absorption (idempotency) for AND: (a & a) ≡ a Laws of evaluation: Identity, nullity, and complement The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes "0", i.e. it means the same thing as the symbol "0". In some "systems" this will be an axiom (definition) perhaps shown as ( (a & ~(a)) =Df 0 ); in other systems, it may be derived in the truth table below: Commutation of equality: (a = b) ≡ (b = a) Identity for OR: (a ∨ 0) = a or (a ∨ F) = a Identity for AND: (a & 1) = a or (a & T) = a Nullity for OR: (a ∨ 1) = 1 or (a ∨ T) = T Nullity for AND: (a & 0) = 0 or (a & F) = F Complement for OR: (a ∨ ~a) = 1 or (a ∨ ~a) = T, law of excluded middle Complement for AND: (a & ~a) = 0 or (a & ~a) = F, law of contradiction Double negative (involution): ¬(¬a) ≡ a Well-formed formulas (wffs) A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives.
When formulas are written in infix notation, as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether. The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form: <formula> ::= <propositional variable> | ( ¬ <formula> ) | ( <formula> ∧ <formula> ) | ( <formula> ∨ <formula> ) | ( <formula> → <formula> ) | ( <formula> ↔ <formula> ) It can be shown that any expression matched by the grammar has an equal number of left and right parentheses, and any nonempty initial segment of a formula has more left than right parentheses. This fact can be used to give an algorithm for parsing formulas. For example, suppose that an expression x begins with ( ¬. Starting after the second symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used to generate a recursive descent parser for formulas. Example of parenthesis counting: This method locates as "1" the principal connective, the connective under which the overall evaluation of the formula occurs within the outermost parentheses (which are often omitted). It also locates the innermost connective, where one would begin evaluation of the formula without the use of a truth table, e.g. at "level 6". Well-formed formulas versus valid formulas in inferences The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: "The formula that represents the inference evaluates to "truth" beneath its principal connective, no matter what truth-values are assigned to its variables", i.e. the formula is a tautology. Quite possibly a formula will be well-formed but not valid. Another way of saying this is: "Being well-formed is necessary for a formula to be valid but it is not sufficient." The only way to find out if it is both well-formed and valid is to submit it to verification with a truth table or by use of the "laws": Example 1: What does one make of the following difficult-to-follow assertion? Is it valid? "If it's sunny, but if the frog is croaking then it's not sunny, then it's the same as saying that the frog isn't croaking." Convert this to a propositional formula as follows: " (a AND (IF b THEN NOT-a)) IS EQUIVALENT TO NOT-b ", where " a " represents "it's sunny" and " b " represents "the frog is croaking": ( ( (a) & ( (b) → ~(a) ) ) ≡ ~(b) ) This is well-formed, but is it valid? In other words, when evaluated will this yield a tautology (all T) beneath the logical-equivalence symbol ≡ ? The answer is NO, it is not valid. However, if reconstructed as an implication then the argument is valid: "Saying it's sunny, but if the frog is croaking then it's not sunny, implies that the frog isn't croaking." Other circumstances may be preventing the frog from croaking: perhaps a crane ate it. Example 2 (from Reichenbach via Bertrand Russell): "If pigs have wings, some winged animals are good to eat. Some winged animals are good to eat, so pigs have wings."
( ( ((a) → (b)) & (b) ) → (a) ) is well formed, but an invalid argument, as shown by the red evaluation under the principal implication: Reduced sets of connectives A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including { ∧, ¬ }, { ∨, ¬ }, and { →, ¬ }. There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively. Some pairs are not complete, for example { ∧, ∨ }. The stroke (NAND) The binary connective corresponding to NAND is called the Sheffer stroke, and written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence: ~p ≡ p|p p → q ≡ p|~q p ∨ q ≡ ~p|~q p & q ≡ ~(p|q) In particular, the zero-ary connectives ⊤ (representing truth) and ⊥ (representing falsity) can be expressed using the stroke. IF ... THEN ... ELSE This connective together with { 0, 1 } (or { F, T } or { ⊥, ⊤ }) forms a complete set. In the following the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d (c, b, a): (c, 0, 1) ≡ ~c (c, b, 1) ≡ (c → b) (c, c, a) ≡ (c ∨ a) (c, b, c) ≡ (c & b) Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed; below the proof is its truth-table verification. (Note: (c → b) is defined to be (~c ∨ b).) Begin with the reduced form: ( (c & b) ∨ (~c & a) ) Substitute "1" for a: ( (c & b) ∨ (~c & 1) ) Identity (~c & 1) = ~c: ( (c & b) ∨ (~c) ) Law of commutation for OR: ( (~c) ∨ (c & b) ) Distribute "~c ∨" over (c & b): ( ((~c) ∨ c) & ((~c) ∨ b) ) Law of excluded middle ( ((~c) ∨ c) = 1 ): ( (1) & ((~c) ∨ b) ) Distribute "(1) &" over ((~c) ∨ b): ( ((1) & (~c)) ∨ ((1) & b) ) Commutativity and Identity ( (1 & ~c) = (~c & 1) = ~c, and (1 & b) = (b & 1) = b ): ( ~c ∨ b ) ( ~c ∨ b ) is defined as c → b Q.E.D. In the following truth table the column labelled "taut" for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under "taut" are 1's, the equivalence indeed represents a tautology. Normal forms An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form. Reduction to normal form Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or fewer). Some sophisticated tabular methods exist for more complex circuits with multiple outputs but these are beyond the scope of this article; for more see Quine–McCluskey algorithm. Literal, term and alterm In electrical engineering, a variable x or its negation ~(x) can be referred to as a literal. A string of literals connected by ANDs is called a term. A string of literals connected by OR is called an alterm. Typically the literal ~(x) is abbreviated ~x.
Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication. Examples a, b, c, d are variables. ((( a & ~(b) ) & ~(c)) & d) is a term. This can be abbreviated as (a & ~b & ~c & d), or a~b~cd. p, q, r, s are variables. (((p ∨ ~(q) ) ∨ r) ∨ ~(s) ) is an alterm. This can be abbreviated as (p ∨ ~q ∨ r ∨ ~s). Minterms In the same way that a 2ⁿ-row truth table displays the evaluation of a propositional formula for all 2ⁿ possible values of its variables, n variables produce a 2ⁿ-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produce 2³ = 8 rows and 8 Karnaugh squares; 4 variables produce 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm. Any propositional formula can be reduced to the "logical sum" (OR) of the active (i.e. "1"- or "T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals. In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits "cba", in other words: Example: cba₂ = c*2² + b*2¹ + a*2⁰. For cba = (c=1, b=0, a=1): 101₂ = 1*2² + 0*2¹ + 1*2⁰ = 5₁₀. This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three- and four-dimensional hypercubes called Hasse diagrams where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing). When working with Karnaugh maps one must always keep in mind that the top edge wraps around to the bottom edge, and the left edge wraps around to the right edge—the Karnaugh diagram is really a three- or four- or n-dimensional flattened object. Reduction by use of the map method (Veitch, Karnaugh) Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d), into numbers. The method proceeds as follows: Produce the formula's truth table Produce the formula's truth table. Number its rows using the binary-equivalents of the variables (usually just sequentially 0 through 2ⁿ-1) for n variables. Technically, the propositional function has been reduced to its (unminimized) disjunctive normal form: each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized) disjunctive normal form. Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in disjunctive normal form is: ( (~p & d & c) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = q However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6). Create the formula's Karnaugh map Use the values of the formula (e.g. "q") found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of "d" for "don't care" appear in the table, this adds flexibility during the reduction phase.
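The first two steps just described can be made concrete. The following Python sketch (an illustration, not part of the original article) builds the truth table for the worked example q = ((c & d) ∨ (p & ~(c & ~d))) and collects its active minterms, numbering each row by the binary digits p, d, c; the result matches the labels #3, #7, #6 and #4 used in the reduction below.

from itertools import product

def q(p, d, c):
    return (c and d) or (p and not (c and not d))

active = [4 * p + 2 * d + c
          for p, d, c in product((0, 1), repeat=3)
          if q(p, d, c)]
print(sorted(active))   # [3, 4, 6, 7]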
Reduce minterms Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number of terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical; even the edges represent abutting squares) lose one literal; four squares in a 4 x 1 rectangle (horizontal or vertical) or 2 x 2 square (even the four corners represent abutting squares) lose two literals; eight squares in a rectangle lose 3 literals; etc. (One seeks out the largest squares or rectangles and ignores the smaller squares or rectangles contained totally within them.) This process continues until all abutting squares are accounted for, at which point the propositional formula is said to be minimized. For example, squares #3 and #7 abut; these two abutting squares can lose one literal (e.g. "p" from squares #3 and #7). Example: The map method usually is done by inspection. The following example expands the algebraic method to show the "trick" behind the combining of terms on a Karnaugh map: Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced. Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by the associative and distributive laws the variables to disappear can be paired, and then "disappeared" with the law of excluded middle (x ∨ ~x) = 1. The following uses brackets [ and ] only to keep track of the terms; they have no special significance: Put the formula in disjunctive normal form with the formula to be reduced: q = ( (~p & d & c) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = ( #3 ∨ #7 ∨ #6 ∨ #4 ) Idempotency (absorption) [ (A ∨ A) = A ]: ( #3 ∨ [ #7 ∨ #7 ] ∨ [ #6 ∨ #6 ] ∨ #4 ) Associative law (x ∨ (y ∨ z)) = ( (x ∨ y) ∨ z ): ( [ #3 ∨ #7 ] ∨ [ #7 ∨ #6 ] ∨ [ #6 ∨ #4 ] ) = [ (~p & d & c) ∨ (p & d & c) ] ∨ [ (p & d & c) ∨ (p & d & ~c) ] ∨ [ (p & d & ~c) ∨ (p & ~d & ~c) ] Distributive law ( (x & y) ∨ (x & z) ) = ( x & (y ∨ z) ): ( [ (d & c) & (~p ∨ p) ] ∨ [ (p & d) & (c ∨ ~c) ] ∨ [ (p & ~c) & (d ∨ ~d) ] ) Commutative law and law of excluded middle (x ∨ ~x) = (~x ∨ x) = 1: ( [ (d & c) & 1 ] ∨ [ (p & d) & 1 ] ∨ [ (p & ~c) & 1 ] ) Law of identity for AND ( x & 1 ) = x, leading to the reduced form of the formula: q = ( (d & c) ∨ (p & d) ∨ (p & ~c) ) Verify reduction with a truth table Impredicative propositions Given the following examples-as-definitions, what does one make of the subsequent reasoning: (1) "This sentence is simple." (2) "This sentence is complex, and it is conjoined by AND." Then assign the variable "s" to the left-most sentence "This sentence is simple". Define "compound" c = "not simple" ~s, and assign c = ~s to "This sentence is compound"; assign "j" to "It [this sentence] is conjoined by AND". The second sentence can be expressed as: ( NOT(s) AND j ) If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. "This sentence is complex" is a FALSEHOOD (it is simple, by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition—that is, when an object m has a property P, but the object m is defined in terms of property P. The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions but at the same time to be on the lookout for them, because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback. Propositional formula with "feedback" The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (in either axiomatic or truth-table systems of objects and relations) that forbids this from happening. The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's "definition" depends on itself "q" as well as on "s" and the OR connective; this definition of q is thus impredicative. Either of two conditions can result: oscillation or memory. It helps to think of the formula as a black box. Without knowledge of what is going on "inside" the formula-"box", from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the "hidden" variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent inconsistency goes away. To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines, Macintosh computers, etc.). Oscillation In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: q = ~(p) with p = q. Analysis of an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both the p=1 and p=0 cases: when p=1, q=0, and this cannot be because p=q; ditto for when p=0 and q=1. Oscillation with delay: If a delay (ideal or non-ideal) is inserted in the abstract formula between p and q then p will oscillate between 1 and 0: 101010...101... ad infinitum. If either the delay or the NOT is not abstract (i.e. not ideal), the type of analysis to be used will depend upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering. Analysis requires a delay to be inserted and then the loop cut between the delay and the input "p". The delay must be viewed as a kind of proposition that has "qd" (q-delayed) as output for "q" as input. This new proposition adds another column to the truth table. The inconsistency is now between "qd" and "p" as shown in red, with two stable states resulting: Memory Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of "delay", this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = qd (q-delayed). A truth table reveals the rows where inconsistencies occur between p = qd at the input and q at the output.
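Before completing that analysis, the effect of the delay can be made concrete with a short simulation. In the Python sketch below (an illustration; the time-step loop is the assumed model of the delay, not the article's own notation), the formula's output q is fed back to p one step later. The first run is the NOT oscillator described above; the second is the OR formula (p ∨ s) = q with p = q, which "flips" once and then holds.

def run(formula, s_values, q0=0):
    q = q0
    trace = []
    for s in s_values:
        q = formula(q, s)     # p takes the previous q: that is the delay
        trace.append(q)
    return trace

print(run(lambda p, s: int(not p), [0] * 8))          # 1,0,1,0,... oscillation
print(run(lambda p, s: p | s, [0, 0, 1, 0, 0, 0]))    # 0,0,1,1,1,1 memory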
After "breaking" the feed-back, the truth table construction proceeds in the conventional manner. But afterwards, in every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted (i.e. p=0 together with q=1, or p=1 and q=0); when the "line" is "remade" both are rendered impossible by the Law of contradiction ~(p & ~p)). Rows revealing inconsistencies are either considered transient states or just eliminated as inconsistent and hence "impossible". Once-flip memory About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output "q" feeds back into "p". Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once when "set" by s=1. Thereafter, output "q" will sustain "q" in the "flipped" condition (state q=1). This behavior, now time-dependent, is shown by the state diagram to the right of the once-flip. Flip-flop memory The next simplest case is the "set-reset" flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when "r"=1. And additional complication occurs if both set=1 and reset=1. In this formula, the set=1 forces the output q=1 so when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 ⇒ s=0 & r=1 ⇒ r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predicable. Clocked flip-flop memory The formula known as "clocked flip-flop" memory ("c" is the "clock" and "d" is the "data") is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at output "q". As long as c=0, d can change value without causing q to change. Examples ( ( c & d ) ∨ ( p & ( ~( c & ~( d ) ) ) ) = q, but now let p = q: ( ( c & d ) ∨ ( q & ( ~( c & ~( d ) ) ) ) = q The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions. Historical development Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: "Whatever is, is.", (2) The law of noncontradiction: "Nothing can both be and not be", and (3) The law of excluded middle: "Everything must be or not be." Example: Here O is an expression about an object's BEING or QUALITY: Law of Identity: O = O Law of contradiction: ~(O & ~(O)) Law of excluded middle: (O ∨ ~(O)) The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite "universe of discourse") -- the members of which can be investigated one after another for the presence or absence of the assertion—then the law is considered intuitionistically appropriate. Thus an assertion such as: "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram. 
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms, John Locke's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham's work (1827) resulted in the notion of "quantification of the predicate" (nowadays symbolized as ∀, "for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan "inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847" (Grattan-Guinness and Bornet 1997:xxviii). About his contribution Grattan-Guinness and Bornet comment: "Boole's principal single innovation was [the] law [ xⁿ = x ] for logic: it stated that the mental acts of choosing the property x and choosing x again and again is the same as choosing x once... As consequence of it he formed the equations x•(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of contradiction and the law of excluded middle" (p. xxviiff). For Boole "1" was the universe of discourse and "0" was nothing. Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so daunting that it had little influence except on one person: Bertrand Russell. First as the student of Alfred North Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1902) around the problem of an antinomy that he discovered in Frege's treatment (cf Russell's paradox). Russell's work led to a collaboration with Whitehead that, in the year 1910, produced the first volume of Principia Mathematica (PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → (def. *1.01: ~p ∨ q), then AND (def. *3.01: ~(~p ∨ ~q) ), then EQUIVALENCE p ←→ q (*4.01: (p → q) & ( q → p ) ). Henry M. Sheffer (1913) and Jean Nicod demonstrate that only one connective, the "stroke" |, is sufficient to express all propositional formulas. Emil Post (1921) develops the truth-table method of analysis in his "Introduction to a general theory of elementary propositions". He notes Nicod's stroke |. Whitehead and Russell add an introduction to their 1927 re-publication of PM adding, in part, a favorable treatment of the "stroke". Computation and switching logic: William Eccles and F. W. Jordan (1919) describe a "trigger relay" made from a vacuum tube. George Stibitz (1937) invents the binary adder using mechanical relays. He builds this on his kitchen table. Example: Given binary bits aᵢ and bᵢ and carry-in c_inᵢ, their summation Σᵢ and carry-out c_outᵢ are: ( ( aᵢ XOR bᵢ ) XOR c_inᵢ ) = Σᵢ and ( ( aᵢ & bᵢ ) ∨ ( c_inᵢ & ( aᵢ XOR bᵢ ) ) ) = c_outᵢ. Alan Turing builds a multiplier using relays (1937–1938). He has to hand-wind his own relay coils to do this. Textbooks about "switching circuits" appear in the early 1950s. Willard Quine 1952 and 1955, E. W. Veitch 1952, and M. Karnaugh (1953) develop map-methods for simplifying propositional functions. George H. Mealy (1955) and Edward F. Moore (1956) address the theory of sequential (i.e. switching-circuit) "machines". E. J. McCluskey and H.
Shorr develop a method for simplifying propositional (switching) circuits (1962). Footnotes Citations References Bender, Edward A. and Williamson, S. Gill, 2005, A Short Course in Discrete Mathematics, Dover Publications, Mineola NY. This text is used in a "lower division two-quarter [computer science] course" at UC San Diego. Enderton, H. B., 2002, A Mathematical Introduction to Logic. Harcourt/Academic Press. Goodstein, R. L., (Pergamon Press 1963), 1966, (Dover edition 2007), Boolean Algebra, Dover Publications, Inc., Mineola, New York. Emphasis on the notion of "algebra of classes" with set-theoretic symbols such as ∩, ∪, ' (NOT), ⊂ (IMPLIES). Later Goodstein replaces these with &, ∨, ¬, → (respectively) in his treatment of "Sentence Logic" pp. 76–93. Grattan-Guinness, Ivor and Gérard Bornet, 1997, George Boole: Selected Manuscripts on Logic and its Philosophy, Birkhäuser Verlag, Basel (Boston). Hamilton, A. G., 1978, Logic for Mathematicians, Cambridge University Press, Cambridge UK. McCluskey, E. J., 1965, Introduction to the Theory of Switching Circuits, McGraw-Hill Book Company, New York. No ISBN. Library of Congress Catalog Card Number 65-17394. McCluskey was a student of Willard Quine and developed some notable theorems with Quine and on his own. For those interested in the history, the book contains a wealth of references. Minsky, Marvin L., 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. No ISBN. Library of Congress Catalog Card Number 67-12342. Useful especially for computability, plus good sources. Robbin, Joel W., 1969, 1997, Mathematical Logic: A First Course, Dover Publications, Inc., Mineola, New York (pbk.). Suppes, Patrick, 1957 (1999 Dover edition), Introduction to Logic, Dover Publications, Inc., Mineola, New York. This book is in print and readily available. On his page 204 in a footnote he references his set of axioms to E. V. Huntington, "Sets of Independent Postulates for the Algebra of Logic", Transactions of the American Mathematical Society, Vol. 5 (1904) pp. 288–309. Tarski, Alfred, 1941 (1995 Dover edition), Introduction to Logic and to the Methodology of Deductive Sciences, Dover Publications, Inc., Mineola, New York. This book is in print and readily available. van Heijenoort, Jean, 1967, 3rd printing with emendations 1976, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, Massachusetts (pbk.). Translation/reprints of Frege (1879), Russell's letter to Frege (1902) and Frege's letter to Russell (1902), Richard's paradox (1905), and Post (1921) can be found here. Whitehead, Alfred North and Russell, Bertrand, 1927 2nd edition, paperback edition to *53 1962, Principia Mathematica, Cambridge University Press, no ISBN. In the years between the first edition (1910–1913) and the 2nd edition of 1927, H. M. Sheffer (1913) and Jean Nicod (no year cited) brought to Russell's and Whitehead's attention that what they considered their primitive propositions (connectives) could be reduced to a single one, |, nowadays known as the "stroke" or NAND (NOT-AND). Russell and Whitehead discuss this in their "Introduction to the Second Edition" and make the definitions as discussed above. Wickes, William E., 1968, Logic Design with Integrated Circuits, John Wiley & Sons, Inc., New York. No ISBN. Library of Congress Catalog Card Number: 68-21185. Tight presentation of engineering's analysis and synthesis methods; references McCluskey 1965. Unlike Suppes, Wickes' presentation of "Boolean algebra" starts with a set of postulates of a truth-table nature and then derives the customary theorems from them (p. 18ff). External links Propositional calculus Boolean algebra Statements Syntax (logic) Propositions Logical expressions
Propositional formula
[ "Mathematics" ]
15,189
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic", "Logical expressions" ]
1,557,669
https://en.wikipedia.org/wiki/Rundlet
The rundlet is an archaic unit of wine cask capacity once used in Britain, equivalent to about 68 litres. It was defined as 18 wine gallons—one of several gallons then in use—before the adoption of the imperial system in 1824; afterwards it was defined as 15 imperial gallons, the imperial gallon having become the universal English base unit of volume in the British realm. References Units of volume Wine terminology British wine
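The approximate equality of the two definitions is easy to check. In the Python sketch below (an illustration; the 231-cubic-inch wine gallon and the 4.54609-litre imperial gallon are conversion factors supplied here, not figures from the article), both versions of the rundlet come out near 68 litres:

wine_gallon_litres = 231 * 2.54 ** 3 / 1000    # 231 cubic inches, ~3.785 L
print(18 * wine_gallon_litres)                 # pre-1824 rundlet: ~68.1 L
print(15 * 4.54609)                            # imperial rundlet: ~68.2 L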
Rundlet
[ "Mathematics" ]
87
[ "Units of volume", "Quantity", "Units of measurement" ]
1,557,738
https://en.wikipedia.org/wiki/Tun%20%28unit%29
The tun is an English unit of liquid volume (not weight), used for measuring wine, oil or honey. Typically a large vat or vessel, most often holding 252 wine gallons, but occasionally other sizes (e.g. 256, 240 and 208 gallons) were also used. The modern tun is about 954 litres. The word tun is etymologically related to the word ton for the unit of mass, the mass of a tun of wine being approximately one long ton (2,240 pounds, or about 1,016 kg). The spellings "tun" and "ton" were sometimes used interchangeably. History Originally, the tun was defined as 256 wine gallons; this is the basis for the name of the quarter of 64 corn gallons. At some time before the 15th century, it was reduced to 252 wine gallons, so as to be evenly divisible by other small integers, including seven. In one Early Modern English example from 1507, a tun is defined as 240 gallons. With the adoption of the Queen Anne wine gallon of 231 cubic inches in 1706, the tun approximated the volume of a cylinder with both diameter and height of 42 inches. These were adopted as the standard US liquid gallon and tun. When the imperial system was introduced in 1824, the tun was redefined in the UK and its colonies as 210 imperial gallons. The imperial tun remained evenly divisible by small integers. There was also little change in the actual value of the tun. Standard tuns of wine came to serve as a measure of a ship's capacity. Definitions In the US customary system, the tun (symbol: US tu) is defined as 252 US fluid gallons (about 954 litres). In the imperial system, the tun is defined as 210 imperial gallons (about 955 litres). Conversions Both the imperial and US tuns were subdivided into smaller units: a tun is 2 pipes or butts, 3 puncheons, 4 hogsheads, 6 tierces, 8 barrels, or 14 rundlets. Explanatory notes See also Tonelada References Customary units of measurement Units of volume
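Two of the figures above can be checked with a few lines of Python (an illustrative sketch; the inch-to-centimetre and imperial-gallon conversion factors are supplied here, not taken from the article): 252 gallons of 231 cubic inches come to the quoted ~954 litres, the 42-inch cylinder comes out almost exactly the same, and the 210-imperial-gallon tun barely differs.

import math

in3_to_litres = 2.54 ** 3 / 1000
print(252 * 231 * in3_to_litres)               # ~953.9 L
print(math.pi * 21 ** 2 * 42 * in3_to_litres)  # 42 in cylinder: ~953.6 L
print(210 * 4.54609)                           # imperial tun: ~954.7 L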
Tun (unit)
[ "Mathematics" ]
393
[ "Units of volume", "Quantity", "Customary units of measurement", "Units of measurement" ]
1,557,763
https://en.wikipedia.org/wiki/Butt%20%28unit%29
The butt is an obsolete English measure of liquid volume equalling two hogsheads, its exact size varying between definitions. Equivalents A butt approximately equated to 108 gallons for ale or 126 gallons (about 477 litres) for wine (also known as a pipe), although the Oxford English Dictionary notes that "these standards were not always precisely adhered to". The butt is one in a series of English wine cask units, being half of a tun. See also English wine cask units § Pipe or butt References Units of volume
Butt (unit)
[ "Mathematics" ]
102
[ "Units of volume", "Quantity", "Units of measurement" ]
1,557,789
https://en.wikipedia.org/wiki/Longeron
In engineering, a longeron or stringer is a load-bearing component of a framework. The term is commonly used in connection with aircraft fuselages and automobile chassis. Longerons are used in conjunction with stringers to form structural frameworks. Aircraft In an aircraft fuselage, stringers are attached to formers (also called frames) and run in the longitudinal direction of the aircraft. They are primarily responsible for transferring the aerodynamic loads acting on the skin onto the frames and formers. In the wings or horizontal stabilizer, longerons run spanwise (from wing root to wing tip) and attach between the ribs. The primary function here also is to transfer the bending loads acting on the wings onto the ribs and spar. The terms "longeron" and "stringer" are sometimes used interchangeably. Historically, though, there is a subtle difference between the two terms. If the longitudinal members in a fuselage are few in number (usually 4 to 8) and run all along the fuselage length, then they are called "longerons". The longeron system also requires that the fuselage frames be closely spaced. If the longitudinal members are numerous (usually 50 to 100) and are placed just between two formers/frames, then they are called "stringers". In the stringer system the longitudinal members are smaller and the frames are spaced further apart. Generally, longerons are of larger cross-section when compared to stringers. On large modern aircraft the stringer system is more common because it is more weight-efficient, despite being more complex to construct and analyze. Some aircraft use a combination of both stringers and longerons. Longerons often carry larger loads than stringers and also help to transfer skin loads to internal structure. Longerons nearly always attach to frames or ribs. Stringers are usually not attached to anything but the skin, where they carry a portion of the fuselage bending moment through axial loading. It is not uncommon to have a mixture of longerons and stringers in the same major structural component. Space launch vehicles Stringers are also used in the construction of some launch vehicle propellant tanks. For example, the Falcon 9 launch vehicle uses stringers in the kerosene (RP-1) tanks, but not in the liquid oxygen tanks, on both the first and second stages. References Aircraft components Mechanical engineering
Longeron
[ "Physics", "Engineering" ]
482
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,558,020
https://en.wikipedia.org/wiki/Experix
Experix is an open-source command interpreter designed for operating laboratory equipment, especially data acquisition devices, and processing, displaying and storing the data from them. It is currently usable only under Linux on the x86 architecture, but is still under development, and users are welcome to participate in extending and improving it. Experix is radically different from most commercial data acquisition programs, for example LabVIEW, which model a measurement and control application as a network of operational units represented graphically as boxes with connections that stand for data flow. In these systems an application is created by manipulating these symbols on the screen, and then it is used by clicking buttons and filling dialog boxes in a GUI environment. Experix, in contrast, represents the application as a series of operations generally taking place one after another. It processes a command line in a sequential way, and numbers, operators, functions and commands in the command line consume and create objects on a stack. These objects include integers and floating-point numbers in several sizes, complex and polar numbers, multi-dimensional arrays made from any of the numerical types, several kinds of strings, and pointers to functions, commands and variables (which can be numbers, arrays and strings). A function, command or operator requires certain types of objects on the stack and puts objects on the stack, and may also change values in stack objects and variables, draw graphs, order operations in device drivers, and read and write files. Experix is released under the GNU GPL. Syntax A command line can have practically arbitrary length, and is a series of tokens. For example, .01 1000 ]+ \c .sin * graph/yK function1= would create an array of 1000 double-precision values representing the function j*0.01*sin(j*0.01) for j from 0 to 999; draw a graph of that using black points on a yellow background; and copy that array into a variable called function1. This sample of command tokens will give an idea of the range of capabilities that experix has.
123e4 puts the double-precision number 1.23*10^6 on the stack #x5a1 puts the integer 0x5a1 on the stack + adds the object in stack level 1 to the object in level 2; what that means exactly depends on what those objects are: add two numbers, or add a number to each member of an array, or add corresponding members of two arrays .cos replaces the number in stack level 1, or each member of the array in stack level 1, with cosine (that number) ;c puts the speed of light on the stack :BS sets the bit designated by stack level 1 (integer) in the integer in stack level 2; does arrays too ] makes an array from numbers on the stack ]+ makes a ramp array using the increment value in level 2 and number of elements in level 1 [= sets the value of an array element [s extracts a subspace of an array as specified by stack arguments %v gets the smaller value of two stack objects \/D (that's backslash,slash,D) decodes the option characters which follow the name of a command in a command string ?\/ displays the help file about the command-option operators ??fft starts a virtual terminal session with a text editor loaded with the help file about the Fourier transform function &path runs the commands from the file specified by "path" $9: command string label number 9 >=$9 if the value in stack level 1 is greater than that in level 2, command execution branches to label $9: >D makes a deferred command <D runs a deferred command ,3s sets command string local variable number 3 | concatenates two strings or arrays ) makes a complex number from two numbers or a complex array from two arrays .>3Y makes an unsigned 1-byte number or array with values from stack level 3, which may be in any numerical type \d drops stack level 1 \;3u gets the number of data units in the object in stack level 3 >0A makes a truth value: 1 (true) if all members of the array in stack level 1 are greater than 0; 0 (false) otherwise def creates variables and commands graph draws graphs exec runs programs and collects the standard output and standard error in files that can be displayed or edited by special help operators file transfers data between files and stack objects xcd performs operations on experix device files Experix provides hardware operations by way of a command-line interface to device drivers. An experix driver has a 'read' entry point which functions more like an ioctl. It copies the integer array that experix has prepared, finds in it an operation code and supporting information, performs the operation and returns results to the array. The driver maintains a control page which experix maps with read-only permission, and a number of data pages which are mapped with read-write permission. The xcd function performs this memory mapping and creates command variables that represent the data pages. These variables can then be used in command strings to perform data display and analysis. A data acquisition device driver has an interrupt handler which uses data from the output pages and stores data in the input pages. At designated index values it sends the new data signal to experix. The xcd function is used to bind the signal to an experix command string. Then that command is executed whenever the new data signal comes. A device handler command might update variables, perform analysis functions, draw graphs and issue warnings. It runs atomically, which means it uses a separate stack and runs uninterrupted between two tokens in whatever user command happens to be in progress. Documentation Documentation is extensive. 
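For readers unfamiliar with stack languages, the following Python sketch (explicitly not experix code; numpy stands in for experix's array objects, and the token meanings are taken from the worked example above) re-creates the effect of .01 1000 ]+ \c .sin * :

import numpy as np

stack = []
stack.append(0.01)                   # .01  push the increment
stack.append(1000)                   # 1000 push the element count
n = stack.pop(); inc = stack.pop()
stack.append(inc * np.arange(n))     # ]+   make the ramp array 0, .01, .02, ...
stack.append(stack[-1])              # \c   copy the top of the stack
stack.append(np.sin(stack.pop()))    # .sin sine of the top of the stack
b = stack.pop(); a = stack.pop()
stack.append(a * b)                  # *    leaves j*0.01*sin(j*0.01)
print(stack[0][:4])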
The keywords.doc file describes functions and data structures in the program. Commands, functions and operators are described in help files, which are accessed in experix by the help operators. Help files are ordinary text files with terminal escape sequences to provide color highlights. System commands such as cat and grep will show these files with their colorization, and the editor that experix uses for the two-question-mark help operator is nano (from the GNU project) with the escape sequence extension. The source files for nano that were changed to provide this extension are available on the experix website. Experix users are encouraged to correct and improve help files as they work. Limitations Currently, experix is only available for Linux and, due to assembly language code, only on the x86 architecture. At present the only graphics support for experix is with the open-source SVGALib. It is possible to have an experix graphics session in one virtual terminal and text or X sessions in others, and to switch between them with the usual virtual-terminal switching keys. Graphics operations are done by a server process, and experix sends commands and data to this process through a fifo. It has an execution thread that uses readline (an open-source library from the GNU project) to obtain command lines and place them in the execution queue. Another thread translates the standard output (i.e. the echo from readline) into graphics server commands. Experix can also run in a text screen or X-term without using svgalib at all. There is some assembly code and other matters to attend to before it can run on architectures other than x86. The range of device drivers and applications available now is extremely limited. It runs as root, which is a considerable security hazard on a networked computer. It should be possible to run experix without root privileges, but this has not yet been done. External links Official Site References Free science software Free mathematics software
Experix
[ "Mathematics" ]
1,590
[ "Free mathematics software", "Mathematical software" ]
1,558,159
https://en.wikipedia.org/wiki/Charrette
A charrette, often Anglicized to charette or charet and sometimes called a design charrette, is an intense period of design or planning activity. The word charrette may refer to any collaborative process by which a group of designers draft a solution to a design problem, and in a broader sense can be applied to the development of public policy through dialogue between decision-makers and stakeholders. In a design setting, whilst the structure of a charrette depends on the problem and individuals in the group, charrettes often take place in multiple sessions in which the group divides into sub-groups. Each sub-group then presents its work to the full group as material for further dialogue. Such charrettes serve as a way of quickly generating a design solution while integrating the aptitudes and interests of a diverse group of people. The general idea of a charrette is to create an innovative atmosphere in which a diverse group of stakeholders can collaborate to "generate visions for the future". The term was introduced to many in the Northeast US by Charrette (1969-2009), a popular art and architecture supply store chain.

Origin
The word charrette is French for 'cart' or 'chariot'. Its use in the sense of design and planning arose in the 19th century at the École des Beaux-Arts in Paris, where it was not unusual at the end of a term for teams of student architects to work right up until a deadline, when a charrette would be wheeled among them to collect up their scale models and other work for review. The furious continuation of their work to apply the finishing touches came to be referred to as working en charrette, 'in the cart'. Émile Zola depicted such a scene of feverish activity, a nuit de charrette ('charrette night'), in L'Œuvre (serialized 1885, published 1886), his fictionalized account of his friendship with Paul Cézanne. The term evolved into the current design-related usage in conjunction with working right up until a deadline.

Examples
Charrettes take place in many disciplines, including land use planning and urban planning. In planning, the charrette has become a technique for consulting with all stakeholders. This type of charrette (sometimes called an enquiry by design) typically involves intense and possibly multi-day meetings, involving municipal officials, developers, and residents. A successful charrette promotes joint ownership of solutions and attempts to defuse typical confrontational attitudes between residents and developers. Charrettes tend to involve small groups; however, the participating residents may not represent all the residents nor have the moral authority to represent them. Residents who do participate get early input into the planning process. For developers and municipal officials, charrettes achieve community involvement and may satisfy consultation criteria, with the objective of avoiding costly legal battles. Other uses of the term "charrette" occur within an academic or professional setting, whereas urban planners invite the general public to their planning charrettes. Thus most people (unless they happen to be design students) encounter the term "charrette" in an urban-planning context. In fields of design such as architecture, landscape architecture, industrial design, interior design, interaction design, or graphic design, the term charrette may refer to an intense period of work by one person or a group of people prior to a deadline. The period of a charrette typically involves both focused and sustained effort.
The word "charrette" may also be used as a verb, as in, for example, "I am charretting" or "I am on charrette [or: en charrette]," simply meaning I am working long nights, intensively toward a deadline. An example of a charrette occurred in Florida in 1973 when the future residents of the Miccosukee Land Co-op in Tallahassee traveled by auto caravan to Orlando and spent the weekend at the offices of the King Helie Planning Group of Orlando (sleeping on the floor) working with its staff to develop the community's land use plans; features desired by individual members and acceptable to the group included a perfectly circular lot, a huge treehouse lot, and streets named after Beatles songs (such as "The Long and Winding Road". "Penny Lane", "Abbey Road"). A more recent example, from New College of Florida, is their Master Plan Design Charrettes that took place over a week in 2005 involving students, alumni, administrators, professors, area residents, and local government staff members as well as architects, designers, and planners from Moule & Polyzoides, The Folsom Group, the Florida House Institute for Sustainable Development, Hall Planning & Engineering, and Biohabitats in a process to make long-range suggestions for the campus layout, landscaping, architecture, and transportation corridors of the master plan for its campus. In some cases, a charrette may be held on a recurring basis, such as the annual charrette held by the Landscape Architecture and Environmental Planning department at Utah State University. Each February, the faculty choose a site in partnership with communities and groups throughout Utah, and hold an intense five-day design charrette focusing on particular issues in that community or region. The charrette begins with a field visit, followed by all-day work sessions accompanied by project stakeholders and volunteer landscape architects and other professionals, and overseen by senior and graduate level students. The final work is then presented to the community. Charrettes such as these offer students and professionals the opportunity to work together in a close setting on real-world design scenarios, and often provide communities with tens of thousands of dollars of design work for free. The Schools of Architecture at Rice University and at the University of Virginia call the last week before the end of classes Charrette. At the final deadline time (assigned by the school), all students must put their "pencils down" and stop working. Students then present their work to fellow students and faculty in a critiqued presentation. Many municipalities around the world develop long-term city plans or visions through multiple charrettes - both communal and professional. Notable successes on the west coast of Canada include the city of Vancouver, British Columbia , as well as the District of Tofino. Tofino won an Award of Excellence in Planning after a successful multi-day charrette. As dramatised for the film The Best of Enemies (2019), in 1971 a charette was used to address inter-racial tensions in order to facilitate school desegregation in the city of Durham, North Carolina. See also Barn raising Shturmovshchina Talkoot Workshop External links Online Compendium of Free Information for the Community Based Urban Design Process CharretteCenter.net The Neighborhood Charrette Handbook University of Louisville's Sustainable Urban Neighborhoods Program (SUN) A Handbook for Planning and Conducting Charrettes for High Performance Projects U.S. 
A Handbook for Planning and Conducting Charrettes for High Performance Projects, U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy
"Public Involvement Techniques for Transportation Decision-Making: Charrettes", US Dept of Transportation
Charrette
[ "Engineering" ]
1,476
[ "Design" ]
1,558,208
https://en.wikipedia.org/wiki/Agricultural%20wastewater%20treatment
Agricultural wastewater treatment is a farm management agenda for controlling pollution from confined animal operations and from surface runoff that may be contaminated by chemicals in fertilizer, pesticides, animal slurry, crop residues or irrigation water. Agricultural wastewater treatment is required for continuous confined animal operations like milk and egg production. It may be performed in plants using mechanized treatment units similar to those used for industrial wastewater. Where land is available for ponds, settling basins and facultative lagoons may have lower operational costs under the seasonal use conditions created by breeding or harvest cycles. Animal slurries are usually treated by containment in anaerobic lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes.

Nonpoint source pollution includes sediment runoff, nutrient runoff and pesticides. Point source pollution includes animal wastes, silage liquor, milking parlour (dairy farming) wastes, slaughtering waste, vegetable washing water and firewater. Many farms generate nonpoint source pollution from surface runoff which is not controlled through a treatment plant. Farmers can install erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing, crop mulching, crop rotation, planting perennial crops and installing riparian buffers. Farmers can also develop and implement nutrient management plans to reduce excess application of nutrients and reduce the potential for nutrient pollution. To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.

Nonpoint source pollution
Nonpoint source pollution from farms is caused by surface runoff from fields during rain storms. Agricultural runoff is a major source of pollution, in some cases the only source, in many watersheds.

Sediment runoff
Soil washed off fields is the largest source of agricultural pollution in the United States. Excess sediment causes high levels of turbidity in water bodies, which can inhibit growth of aquatic plants, clog fish gills and smother animal larvae. Farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include:
contour ploughing
crop mulching
crop rotation
planting perennial crops
installing riparian buffers.

Nutrient runoff
Nitrogen and phosphorus are key pollutants found in runoff, and they are applied to farmland in several ways, such as in the form of commercial fertilizer, animal manure, or municipal or industrial wastewater (effluent) or sludge. These chemicals may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition. Farmers can develop and implement nutrient management plans to mitigate impacts on water quality by:
mapping and documenting fields, crop types, soil types and water bodies
developing realistic crop yield projections
conducting soil tests and nutrient analyses of manures and/or sludges applied
identifying other significant nutrient sources (e.g., irrigation water)
evaluating significant field features such as highly erodible soils, subsurface drains, and shallow aquifers
applying fertilizers, manures, and/or sludges based on realistic yield goals and using precision agriculture techniques.
Pesticides
Pesticides are widely used by farmers to control plant pests and enhance production, but chemical pesticides can also cause water quality problems. Pesticides may appear in surface water due to:
direct application (e.g. aerial spraying or broadcasting over water bodies)
runoff during rain storms
aerial drift (from adjacent fields).
Some pesticides have also been detected in groundwater. Farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality. There are few safe ways of disposing of pesticide surpluses other than through containment in well managed landfills or by incineration. In some parts of the world, spraying on land is a permitted method of disposal.

Point source pollution and treatment steps
Farms with large livestock and poultry operations, such as factory farms, can be a major source of point source wastewater. In the United States, these facilities are called concentrated animal feeding operations or confined animal feeding operations and are subject to increasing government regulation. Antibiotic-resistant bacteria have been found to infiltrate the water cycle from farms. Raising animals accounts for 73% of antibiotic use globally, and wastewater treatment facilities can transfer antibiotic-resistant bacteria to humans.

Animal wastes
The constituents of animal wastewater typically include:
strong organic content — much stronger than human sewage
high solids concentration
high nitrate and phosphorus content
antibiotics
synthetic hormones
often high concentrations of parasites and their eggs
spores of Cryptosporidium (a protozoan) resistant to drinking water treatment processes
spores of Giardia
human-pathogenic bacteria such as Brucella and Salmonella.
Animal wastes from cattle can be produced as solid or semisolid manure or as a liquid slurry. The production of slurry is especially common in housed dairy cattle.

Treatment
Whilst solid manure heaps outdoors can give rise to polluting wastewaters from runoff, this type of waste is usually relatively easy to treat by containment and/or covering of the heap. Animal slurries require special handling and are usually treated by containment in lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes, as are anaerobic lagoons. Excessive application, application to sodden land or an insufficient land area can result in direct runoff to watercourses, with the potential for causing severe pollution. Application of slurries to land overlying aquifers can result in direct contamination or, more commonly, elevation of nitrogen levels as nitrite or nitrate. The disposal of any wastewater containing animal waste upstream of a drinking water intake can pose serious health problems to those drinking the water because of the highly resistant spores present in many animals that are capable of causing disabling disease in humans. This risk exists even for very low-level seepage via shallow surface drains or from rainfall run-off. Some animal slurries are treated by mixing with straw and composted at high temperature to produce a bacteriologically sterile and friable manure for soil improvement.
Piggery waste
Piggery waste is comparable to other animal wastes and is processed as for general animal waste, except that many piggery wastes contain elevated levels of copper that can be toxic in the natural environment. The liquid fraction of the waste is frequently separated off and re-used in the piggery to avoid the prohibitively expensive cost of disposing of copper-rich liquid. Ascarid worms and their eggs are also common in piggery waste and can infect humans if wastewater treatment is ineffective.

Silage liquor
Fresh or wilted grass or other green crops can be made into a semi-fermented product called silage which can be stored and used as winter forage for cattle and sheep. The production of silage often involves the use of an acid conditioner such as sulfuric acid or formic acid. The process of silage making frequently produces a yellow-brown, strongly smelling liquid which is very rich in simple sugars, alcohol, short-chain organic acids and silage conditioner. This liquor is one of the most polluting organic substances known. The volume of silage liquor produced is generally in proportion to the moisture content of the ensiled material.

Treatment
Silage liquor is best dealt with through prevention, by wilting crops well before silage making. Any silage liquor that is produced can be used as part of the feed for pigs. The most effective treatment is by containment in a slurry lagoon and subsequent spreading on land following substantial dilution with slurry. Containment of silage liquor on its own can cause structural problems in concrete pits because of its acidic nature.

Milking parlour (dairy farming) wastes
Although milk is an important food product, its presence in wastewaters is highly polluting because of its organic strength, which can lead to very rapid de-oxygenation of receiving waters. Milking parlour wastes also contain large volumes of wash-down water, some animal waste, and cleaning and disinfection chemicals.

Treatment
Milking parlour wastes are often treated in admixture with human sewage in a local sewage treatment plant. This ensures that disinfectants and cleaning agents are sufficiently diluted and amenable to treatment. Running milking wastewaters into a farm slurry lagoon is a possible option, although this tends to consume lagoon capacity very quickly. Land spreading is also a treatment option.

Slaughtering waste
Wastewater from slaughtering activities is similar to milking parlour waste (see above), although considerably stronger in its organic composition and therefore potentially much more polluting.

Treatment
As for milking parlour waste (see above).

Vegetable washing water
Washing of vegetables produces large volumes of water contaminated by soil and vegetable pieces. Low levels of pesticides used to treat the vegetables may also be present, together with moderate levels of disinfectants such as chlorine.

Treatment
Most vegetable washing waters are extensively recycled, with the solids removed by settlement and filtration. The recovered soil can be returned to the land.

Firewater
Although few farms plan for fires, fires are nevertheless more common on farms than on many other industrial premises. Stores of pesticides, herbicides, fuel oil for farm machinery and fertilizers can all help promote fire, and all can be present in environmentally lethal quantities in firewater from fire fighting at farms.
Treatment
All farm environmental management plans should allow for containment of substantial quantities of firewater and for its subsequent recovery and disposal by specialist disposal companies. The concentration and mixture of contaminants in firewater make it unsuited to any treatment method available on the farm. Even land spreading has produced severe taste and odour problems for downstream water supply companies in the past.

See also
Agricultural waste
Agricultural surface runoff
Dark fermentation
Sustainable agriculture

External links
Electronic Field Office Technical Guide - U.S. NRCS - Detailed soil conservation guides tailored to individual states/counties.
Agricultural wastewater treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
2,075
[ "Environmental engineering", "Waste treatment technology", "Water treatment", "Water pollution" ]
1,558,218
https://en.wikipedia.org/wiki/Industrial%20wastewater%20treatment
Industrial wastewater treatment describes the processes used for treating wastewater that is produced by industries as an undesirable by-product. After treatment, the treated industrial wastewater (or effluent) may be reused or released to a sanitary sewer or to a surface water in the environment. Some industrial facilities generate wastewater that can be treated in sewage treatment plants. Most industrial processes, such as petroleum refineries and chemical and petrochemical plants, have their own specialized facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. This applies to industries that generate wastewater with high concentrations of organic matter (e.g. oil and grease), toxic pollutants (e.g. heavy metals, volatile organic compounds) or nutrients such as ammonia. Some industries install a pre-treatment system to remove some pollutants (e.g., toxic compounds), and then discharge the partially treated wastewater to the municipal sewer system.

Most industries produce some wastewater. Recent trends have been to minimize such production or to recycle treated wastewater within the production process. Some industries have been successful at redesigning their manufacturing processes to reduce or eliminate pollutants. Sources of industrial wastewater include battery manufacturing, chemical manufacturing, electric power plants, the food industry, the iron and steel industry, metal working, mines and quarries, the nuclear industry, oil and gas extraction, petroleum refining and petrochemicals, pharmaceutical manufacturing, the pulp and paper industry, smelters, textile mills, industrial oil contamination, water treatment and wood preserving. Treatment processes include brine treatment, solids removal (e.g. chemical precipitation, filtration), oils and grease removal, removal of biodegradable organics, removal of other organics, removal of acids and alkalis, and removal of toxic materials.

Types
Industrial facilities may generate the following industrial wastewater flows:
Manufacturing process wastestreams, which can include conventional pollutants (i.e. controllable with secondary treatment systems), toxic pollutants (e.g. solvents, heavy metals), and other harmful compounds such as nutrients
Non-process wastestreams: boiler blowdown and cooling water, which produce thermal pollution and other pollutants
Industrial site drainage, generated by manufacturing facilities, service industries, and energy and mining sites
Wastestreams from the energy and mining sectors: acid mine drainage, produced water from oil and gas extraction, radionuclides
Wastestreams that are by-products of treatment or cooling processes: backwashing (water treatment), brine.

Contaminants

Industrial sectors
The specific pollutants generated and the resultant effluent concentrations can vary widely among the industrial sectors.

Battery manufacturing
Battery manufacturers specialize in fabricating small devices for electronics and portable equipment (e.g., power tools), or larger, high-powered units for cars, trucks and other motorized vehicles. Pollutants generated at manufacturing plants include cadmium, chromium, cobalt, copper, cyanide, iron, lead, manganese, mercury, nickel, silver, zinc, and oil and grease.

Centralized waste treatment
A centralized waste treatment (CWT) facility processes liquid or solid industrial wastes generated by off-site manufacturing facilities.
A manufacturer may send its wastes to a CWT plant, rather than perform treatment on site, due to constraints such as limited land availability, difficulty in designing and operating an on-site system, or limitations imposed by environmental regulations and permits. A manufacturer may also determine that using a CWT is more cost-effective than treating the waste itself; this is often the case where the manufacturer is a small business. CWT plants often receive wastes from a wide variety of manufacturers, including chemical plants and metal fabrication and finishing shops, as well as used oil and petroleum products from various manufacturing sectors. The wastes may be classified as hazardous, have high pollutant concentrations or otherwise be difficult to treat. In 2000 the U.S. Environmental Protection Agency published wastewater regulations for CWT facilities in the US.

Chemical manufacturing

Organic chemicals manufacturing
The specific pollutants discharged by organic chemical manufacturers vary widely from plant to plant, depending on the types of products manufactured, such as bulk organic chemicals, resins, pesticides, plastics, or synthetic fibers. Some of the organic compounds that may be discharged are benzene, chloroform, naphthalene, phenols, toluene and vinyl chloride. Biochemical oxygen demand (BOD), which is a gross measurement of a range of organic pollutants, may be used to gauge the effectiveness of a biological wastewater treatment system, and is used as a regulatory parameter in some discharge permits. Metal pollutant discharges may include chromium, copper, lead, nickel and zinc.

Inorganic chemicals manufacturing
The inorganic chemicals sector covers a wide variety of products and processes, although an individual plant may produce a narrow range of products and pollutants. Products include aluminum compounds; calcium carbide and calcium chloride; hydrofluoric acid; potassium compounds; borax; chrome and fluorine-based compounds; and cadmium and zinc-based compounds. The pollutants discharged vary by product sector and individual plant, and may include arsenic, chlorine, cyanide, fluoride, and heavy metals such as chromium, copper, iron, lead, mercury, nickel and zinc.

Electric power plants
Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Many of these plants discharge wastewater with significant levels of metals such as lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites). Wastewater streams include flue-gas desulfurization wastewater and water from fly ash, bottom ash and flue-gas mercury control systems. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream. Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. Technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet recent EPA discharge limits.
The treatment approach is similar for other highly scaling industrial wastewaters.

Food industry
Wastewater generated from agricultural and food processing operations has distinctive characteristics that set it apart from common municipal wastewater managed by public or private sewage treatment plants throughout the world: it is biodegradable and non-toxic, but has high biochemical oxygen demand (BOD) and suspended solids (SS). The constituents of food and agriculture wastewater are often difficult to predict, due to the differences in BOD and pH in effluents from vegetable, fruit, and meat products and due to the seasonal nature of food processing and post-harvesting. Processing of food from raw materials requires large volumes of high grade water. Vegetable washing generates water with high loads of particulate matter and some dissolved organic matter. It may also contain surfactants and pesticides. Aquaculture facilities (fish farms) often discharge large amounts of nitrogen and phosphorus, as well as suspended solids. Some facilities use drugs and pesticides, which may be present in the wastewater. Dairy processing plants generate conventional pollutants (BOD, SS). Animal slaughter and processing produces organic waste from body fluids, such as blood, and gut contents. Pollutants generated include BOD, SS, coliform bacteria, oil and grease, organic nitrogen and ammonia. Processing food for sale produces wastes generated from cooking which are often rich in plant organic material and may also contain salt, flavourings, colouring material and acids or alkali. Large quantities of fats, oil and grease ("FOG") may also be present, which in sufficient concentrations can clog sewer lines. Some municipalities require restaurants and food processing businesses to use grease interceptors and regulate the disposal of FOG in the sewer system. Food processing activities such as plant cleaning, material conveying, bottling, and product washing create wastewater. Many food processing facilities require on-site treatment before operational wastewater can be land applied or discharged to a waterway or a sewer system. High suspended solids levels of organic particles increase BOD and can result in significant sewer surcharge fees. Sedimentation, wedge wire screening, or rotating belt filtration (microscreening) are commonly used methods to reduce suspended organic solids loading prior to discharge.

Glass manufacturing
Glass manufacturing wastes vary with the type of glass manufactured, which includes fiberglass, plate glass, rolled glass, and glass containers, among others. The wastewater discharged by glass plants may include ammonia, BOD, chemical oxygen demand (COD), fluoride, lead, oil, phenol, and/or phosphorus. The discharges may also be highly acidic (low pH) or alkaline (high pH).

Iron and steel industry
The production of iron from its ores involves powerful reduction reactions in blast furnaces. Cooling waters are inevitably contaminated with products, especially ammonia and cyanide. Production of coke from coal in coking plants also requires water cooling and the use of water in by-products separation. Contamination of waste streams includes gasification products such as benzene, naphthalene, anthracene, cyanide, ammonia, phenols and cresols, together with a range of more complex organic compounds known collectively as polycyclic aromatic hydrocarbons (PAH).
The conversion of iron or steel into sheet, wire or rods requires hot and cold mechanical transformation stages frequently employing water as a lubricant and coolant. Contaminants include hydraulic oils, tallow and particulate solids. Final treatment of iron and steel products before onward sale into manufacturing includes pickling in strong mineral acid to remove rust and prepare the surface for tin or chromium plating or for other surface treatments such as galvanisation or painting. The two acids commonly used are hydrochloric acid and sulfuric acid. Wastewaters include acidic rinse waters together with waste acid. Although many plants operate acid recovery plants (particularly those using hydrochloric acid), where the mineral acid is boiled away from the iron salts, there remains a large volume of highly acidic ferrous sulfate or ferrous chloride to be disposed of. Many steel industry wastewaters are contaminated by hydraulic oil, also known as soluble oil.

Metal working
Many industries perform work on metal feedstocks (e.g. sheet metal, ingots) as they fabricate their final products. The industries include automobile, truck and aircraft manufacturing; tools and hardware manufacturing; electronic equipment and office machines; ships and boats; appliances and other household products; and stationary industrial equipment (e.g. compressors, pumps, boilers). Typical processes conducted at these plants include grinding, machining, coating and painting, chemical etching and milling, solvent degreasing, electroplating and anodizing. Wastewater generated from these industries may contain heavy metals (common heavy metal pollutants from these industries include cadmium, chromium, copper, lead, nickel, silver and zinc), cyanide, various chemical solvents, oil, and grease.

Mines and quarries
The principal wastewaters associated with mines and quarries are slurries of rock particles in water. These arise from rainfall washing exposed surfaces and haul roads and also from rock washing and grading processes. Volumes of water can be very high, especially rainfall-related arisings on large sites. Some specialized separation operations, such as coal washing to separate coal from native rock using density gradients, can produce wastewater contaminated by fine particulate haematite and surfactants. Oils and hydraulic oils are also common contaminants. Wastewater from metal mines and ore recovery plants is inevitably contaminated by the minerals present in the native rock formations. Following crushing and extraction of the desirable materials, undesirable materials may enter the wastewater stream. For metal mines, this can include unwanted metals such as zinc and other materials such as arsenic. Extraction of high value metals such as gold and silver may generate slimes containing very fine particles, where physical removal of contaminants becomes particularly difficult. Additionally, the geologic formations that harbour economically valuable metals such as copper and gold very often consist of sulphide-type ores. The processing entails grinding the rock into fine particles and then extracting the desired metal(s), with the leftover rock being known as tailings. These tailings contain a combination of not only undesirable leftover metals, but also sulphide components which eventually form sulphuric acid upon the exposure to air and water that inevitably occurs when the tailings are disposed of in large impoundments.
The resulting acid mine drainage, which is often rich in heavy metals (because acids dissolve metals), is one of the many environmental impacts of mining.

Nuclear industry
Waste from the nuclear and radio-chemicals industry is dealt with as radioactive waste. Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (an alga) in simulated wastewater. The study claims that S. spinosus has a highly selective biosorption capacity for strontium, suggesting that it may be appropriate for use on nuclear wastewater.

Oil and gas extraction
Oil and gas well operations generate produced water, which may contain oils, toxic metals (e.g. arsenic, cadmium, chromium, mercury, lead), salts, organic chemicals and solids. Some produced water contains traces of naturally occurring radioactive material. Offshore oil and gas platforms also generate deck drainage, domestic waste and sanitary waste. During the drilling process, well sites typically discharge drill cuttings and drilling mud (drilling fluid).

Petroleum refining and petrochemicals
Pollutants discharged at petroleum refineries and petrochemical plants include conventional pollutants (BOD, oil and grease, suspended solids), ammonia, chromium, phenols and sulfides.

Pharmaceutical manufacturing
Pharmaceutical plants typically generate a variety of process wastewaters, including solvents, spent acid and caustic solutions, water from chemical reactions, product wash water, condensed steam, blowdown from air pollution control scrubbers, and equipment washwater. Non-process wastewaters typically include cooling water and site runoff. Pollutants generated by the industry include acetone, ammonia, benzene, BOD, chloroform, cyanide, ethanol, ethyl acetate, isopropanol, methylene chloride, methanol, phenol and toluene. Treatment technologies used include advanced biological treatment (e.g. activated sludge with nitrification), multimedia filtration, cyanide destruction (e.g. hydrolysis), steam stripping and wastewater recycling.

Pulp and paper industry
Effluent from the pulp and paper industry is generally high in suspended solids and BOD. Plants that bleach wood pulp for paper making may generate chloroform, dioxins (including 2,3,7,8-TCDD), furans, phenols and chemical oxygen demand (COD). Stand-alone paper mills using imported pulp may only require simple primary treatment, such as sedimentation or dissolved air flotation. Increased BOD or COD loadings, as well as organic pollutants, may require biological treatment such as activated sludge or upflow anaerobic sludge blanket reactors. For mills with high inorganic loadings like salt, tertiary treatments may be required, either general membrane treatments like ultrafiltration or reverse osmosis, or treatments to remove specific contaminants, such as nutrients.

Smelters
The pollutants discharged by nonferrous smelters vary with the base metal ore. Bauxite smelters generate phenols but typically use settling basins and evaporation to manage these wastes, with no need to routinely discharge wastewater. Aluminum smelters typically discharge fluoride, benzo(a)pyrene, antimony and nickel, as well as aluminum. Copper smelters typically generate cadmium, lead, zinc, arsenic and nickel, in addition to copper, in their wastewater. Lead smelters discharge lead and zinc. Nickel and cobalt smelters discharge ammonia and copper in addition to the base metals. Zinc smelters discharge arsenic, cadmium, copper, lead, selenium and zinc.
Typical treatment processes used in the industry are chemical precipitation, sedimentation and filtration.

Textile mills
Textile mills, including carpet manufacturers, generate wastewater from a wide variety of processes, including cleaning and finishing, yarn manufacturing and fabric finishing (such as bleaching, dyeing, resin treatment, waterproofing and flameproofing). Pollutants generated by textile mills include BOD, SS, oil and grease, sulfide, phenols and chromium. Insecticide residues in fleeces are a particular problem in treating waters generated in wool processing. Animal fats may be present in the wastewater, which, if not contaminated, can be recovered for the production of tallow or for further rendering. Textile dyeing plants generate wastewater that contains synthetic dyestuffs (e.g., reactive dyes, acid dyes, basic dyes, disperse dyes, vat dyes, sulphur dyes, mordant dyes, direct dyes, ingrain dyes, solvent dyes, pigment dyes) and natural dyestuffs, gum thickener (guar) and various wetting agents, pH buffers and dye retardants or accelerators. Following treatment with polymer-based flocculants and settling agents, typical monitoring parameters include BOD, COD, color (ADMI), sulfide, oil and grease, phenol, TSS and heavy metals (chromium, zinc, lead, copper).

Industrial oil contamination
Industrial applications where oil enters the wastewater stream may include vehicle wash bays, workshops, fuel storage depots, transport hubs and power generation. Often the wastewater is discharged into local sewer or trade waste systems and must meet local environmental specifications. Typical contaminants can include solvents, detergents, grit, lubricants and hydrocarbons.

Water treatment
Many industries need to treat water to obtain very high quality water for their processes. This might include pure chemical synthesis or boiler feed water. Also, some water treatment processes produce organic and mineral sludges from filtration and sedimentation which require treatment. Ion exchange using natural or synthetic resins removes calcium, magnesium and carbonate ions from water, typically replacing them with sodium, chloride, hydroxyl and/or other ions. Regeneration of ion-exchange columns with strong acids and alkalis produces a wastewater rich in hardness ions, which are readily precipitated out, especially when in admixture with other wastewater constituents.

Wood preserving
Wood preserving plants generate conventional and toxic pollutants, including arsenic, COD, copper, chromium, abnormally high or low pH, phenols, suspended solids, and oil and grease.

Treatment methods
The various types of contamination of wastewater require a variety of strategies for removal. Most industrial processes, such as petroleum refineries and chemical and petrochemical plants, have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. Constructed wetlands are being used in an increasing number of cases, as they provide high-quality and productive on-site treatment. Other industrial processes that produce large volumes of wastewater, such as paper and pulp production, have created environmental concern, leading to the development of processes to recycle water within plants before it must be treated and disposed of.
An industrial wastewater treatment plant may include one or more of the following rather than the conventional treatment sequence of sewage treatment plants:
An API oil-water separator, for removing separate-phase oil from wastewater.
A clarifier, for removing solids from wastewater.
A roughing filter, to reduce the biochemical oxygen demand of wastewater.
A carbon filtration plant, to remove toxic dissolved organic compounds from wastewater.
An advanced electrodialysis reversal (EDR) system with ion-exchange membranes.

Brine treatment
Brine treatment involves removing dissolved salt ions from the waste stream. Although similarities to seawater or brackish water desalination exist, industrial brines may contain unusual combinations of dissolved ions, such as hardness ions or other metals, necessitating specific processes and equipment. Brine treatment systems are typically optimized to either reduce the volume of the final discharge for more economic disposal (as disposal costs are often based on volume) or maximize the recovery of fresh water or salts. Brine treatment systems may also be optimized to reduce electricity consumption, chemical usage, or physical footprint.
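A simple mass balance shows why volume reduction drives the economics; the recovery value used below is an assumed, illustrative figure, not data from any particular plant. If a fraction $R$ of the feed is recovered as fresh water and essentially all of the salt reports to the concentrate, then

$$ V_{\mathrm{concentrate}} = (1 - R)\,V_{\mathrm{feed}}, \qquad c_{\mathrm{concentrate}} \approx \frac{c_{\mathrm{feed}}}{1 - R} $$

so at $R = 0.95$ a 100 m³ feed leaves only 5 m³ of concentrate to dispose of, at roughly twenty times the feed salinity.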
Brine treatment is commonly encountered when treating cooling tower blowdown, produced water from steam-assisted gravity drainage (SAGD), produced water from natural gas extraction such as coal seam gas, frac flowback water, acid mine or acid rock drainage, reverse osmosis reject, chlor-alkali wastewater, pulp and paper mill effluent, and waste streams from food and beverage processing. Brine treatment technologies may include: membrane filtration processes, such as reverse osmosis; ion-exchange processes such as electrodialysis or weak acid cation exchange; or evaporation processes, such as brine concentrators and crystallizers employing mechanical vapour recompression and steam. Due to ever-stricter discharge standards, advanced oxidation processes are increasingly used for the treatment of brine; notable examples such as Fenton's oxidation and ozonation have been employed to degrade recalcitrant compounds in brine from industrial plants.

Reverse osmosis may not be viable for brine treatment, due to the potential for fouling caused by hardness salts or organic contaminants, or damage to the reverse osmosis membranes from hydrocarbons. Evaporation processes are the most widespread for brine treatment, as they enable the highest degree of concentration, as high as solid salt. They also produce the highest-purity effluent, even distillate quality. Evaporation processes are also more tolerant of organics, hydrocarbons, or hardness salts. However, energy consumption is high, and corrosion may be an issue because the fluid being handled is concentrated salt water. As a result, evaporation systems typically employ titanium or duplex stainless steel materials.

Brine management
Brine management examines the broader context of brine treatment and may include consideration of government policy and regulations, corporate sustainability, environmental impact, recycling, handling and transport, containment, centralized compared to on-site treatment, avoidance and reduction, technologies, and economics. Brine management shares some issues with leachate management and more general waste management. In recent years brine management has become more prominent, due to the global push for zero liquid discharge (ZLD) and minimal liquid discharge (MLD). In ZLD/MLD techniques, a closed water cycle is used to minimize discharges from a system so that water can be reused. This concept has been gaining traction due to increasing water discharges and recent advances in membrane technology. There have also been greater efforts to recover materials from brines, especially from mining, geothermal wastewater or desalination brines. Various studies demonstrate the viability of extracting valuable materials such as sodium bicarbonate, sodium chloride and metals such as rubidium, caesium and lithium. The concept of ZLD/MLD thus encompasses the downstream management of wastewater brines, both to reduce discharges and to derive valuable products from them.

Solids removal
Most solids can be removed using simple sedimentation techniques, with the solids recovered as slurry or sludge. Very fine solids and solids with densities close to the density of water pose special problems. In such cases filtration or ultrafiltration may be required, although flocculation may also be used, with alum salts or the addition of polyelectrolytes. Wastewater from industrial food processing often requires on-site treatment before it can be discharged, to prevent or reduce sewer surcharge fees. The type of industry and specific operational practices determine what types of wastewater are generated and what type of treatment is required. Reducing solids such as waste product, organic materials, and sand is often a goal of industrial wastewater treatment. Some common ways to reduce solids include primary sedimentation (clarification), dissolved air flotation (DAF), belt filtration (microscreening), and drum screening.

Oils and grease removal
The effective removal of oils and grease depends on the characteristics of the oil in terms of its suspension state and droplet size, which will in turn affect the choice of separator technology. Oil in industrial wastewater may be free light oil, heavy oil (which tends to sink), or emulsified oil, often referred to as soluble oil. Emulsified or soluble oils will typically require "cracking" to free the oil from its emulsion; in most cases this is achieved by lowering the pH of the water matrix. Most separator technologies have an optimum range of oil droplet sizes that can be effectively treated; each separator technology has its own performance curve outlining optimum performance based on oil droplet size. The most common separators are gravity tanks or pits, API oil-water separators or plate packs, chemical treatment via dissolved air flotation, centrifuges, media filters and hydrocyclones. Analyzing the oily water to determine droplet size can be performed with a video particle analyser.

API oil-water separators

Hydrocyclone
Hydrocyclone separators operate by spinning the wastewater that enters the cyclone chamber under extreme centrifugal forces, more than 1000 times the force of gravity. This force causes the water and the oil droplets (or solid particles) to separate. The separated material is discharged from one end of the cyclone, while treated water is discharged through the opposite end for further treatment, filtration or discharge. Hydrocyclones can be utilised in a variety of contexts, from solid-liquid separation to oil-water separation.
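The "more than 1000 times gravity" figure can be sanity-checked with the centripetal-acceleration formula; the tangential speed and chamber radius below are assumed, illustrative values rather than the specification of any particular unit:

$$ a = \frac{v_t^{2}}{r} = \frac{(10\ \mathrm{m/s})^{2}}{0.010\ \mathrm{m}} = 1.0 \times 10^{4}\ \mathrm{m/s^{2}} \approx 1000\,g $$

A droplet circulating at about 10 m/s in a chamber of 10 mm radius therefore already experiences roughly a thousand times gravitational acceleration, which is what makes the oil and water phases separate so quickly.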
Removal of biodegradable organics
Biodegradable organic material of plant or animal origin can usually be treated using extended conventional sewage treatment processes such as activated sludge or a trickling filter. Problems can arise if the wastewater is excessively diluted with washing water or is highly concentrated, such as undiluted blood or milk. The presence of cleaning agents, disinfectants, pesticides, or antibiotics can have detrimental impacts on treatment processes.

Activated sludge process

Trickling filter process
A trickling filter consists of a bed of rocks, gravel, slag, peat moss, or plastic media over which wastewater flows downward and contacts a layer (or film) of microbial slime covering the bed media. Aerobic conditions are maintained by forced air flowing through the bed or by natural convection of air. The process involves adsorption of organic compounds in the wastewater by the microbial slime layer and diffusion of air into the slime layer to provide the oxygen required for the biochemical oxidation of the organic compounds. The end products include carbon dioxide gas, water and other products of the oxidation. As the slime layer thickens, it becomes difficult for air to penetrate the layer and an inner anaerobic layer is formed.
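The overall oxidation carried out by the slime layer can be summarized with a stand-in substrate; glucose below is an illustrative choice, not a claim about the actual composition of any wastewater:

$$ \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} $$

On this stoichiometry each gram of substrate consumes about 192/180 ≈ 1.07 g of oxygen, which is why oxygen demand (BOD) is a convenient proxy for the biodegradable organic load.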
Removal of other organics
Synthetic organic materials, including solvents, paints, pharmaceuticals, pesticides, products from coke production and so forth, can be very difficult to treat. Treatment methods are often specific to the material being treated. Methods include advanced oxidation processing, distillation, adsorption, ozonation, vitrification, incineration, chemical immobilisation or landfill disposal. Some materials, such as some detergents, may be capable of biological degradation, and in such cases a modified form of wastewater treatment can be used.

Removal of acids and alkalis
Acids and alkalis can usually be neutralised under controlled conditions. Neutralisation frequently produces a precipitate that will require treatment as a solid residue that may also be toxic. In some cases, gases may be evolved, requiring treatment of the gas stream. Some other forms of treatment are usually required following neutralisation. Waste streams rich in hardness ions, as from de-ionisation processes, can readily lose the hardness ions in a buildup of precipitated calcium and magnesium salts. This precipitation process can cause severe furring of pipes and can, in extreme cases, cause the blockage of disposal pipes. A 1-metre diameter industrial marine discharge pipe serving a major chemicals complex was blocked by such salts in the 1970s. Treatment is by concentration of de-ionisation waste waters and disposal to landfill or by careful pH management of the released wastewater.

Removal of toxic materials
Toxic materials, including many organic materials, metals (such as zinc, silver, cadmium and thallium), acids, alkalis and non-metallic elements (such as arsenic or selenium), are generally resistant to biological processes unless very dilute. Metals can often be precipitated out by changing the pH or by treatment with other chemicals. Many, however, are resistant to treatment or mitigation and may require concentration followed by landfilling or recycling. Dissolved organics can be incinerated within the wastewater by the advanced oxidation process.

Smart capsules
Molecular encapsulation is a technology that has the potential to provide a system for the recyclable removal of lead and other ions from polluted sources. Nano-, micro- and milli-capsules, with sizes in the ranges 10 nm–1 μm, 1 μm–1 mm and >1 mm, respectively, are particles that have an active reagent (core) surrounded by a carrier (shell). There are three types of capsule under investigation: alginate-based capsules, carbon nanotubes, and polymer-swelling capsules. These capsules provide a possible means for the remediation of contaminated water.

Removal of thermal pollution
To remove heat from wastewater generated by power plants or manufacturing plants, and thus to reduce thermal pollution, the following technologies are used:
cooling ponds, engineered bodies of water designed for cooling by evaporation, convection, and radiation
cooling towers, which transfer waste heat to the atmosphere through evaporation or heat transfer
cogeneration, a process where waste heat is recycled for domestic or industrial heating purposes.

Other disposal methods
Some facilities, such as oil and gas wells, may be permitted to pump their wastewater underground through injection wells. However, wastewater injection has been linked to induced seismicity.

Costs and trade waste charges
Economies of scale may favor a situation where industrial wastewater (with pre-treatment or without treatment) is discharged to the sewer and then treated at a large municipal sewage treatment plant. Typically, trade waste charges are applied in that case. Alternatively, it might be more economical to treat industrial wastewater fully on the site where it is generated and then discharge the treated wastewater to a suitable surface water body. Pre-treating wastewater to reduce the concentrations of the pollutants used to set user fees effectively reduces the treatment charges collected by municipal sewage treatment plants. Industrial wastewater plants may also reduce raw water costs by converting selected wastewaters to reclaimed water used for different purposes.

Society and culture

Global goals
The international community has defined the treatment of industrial wastewater as an important part of sustainable development by including it in Sustainable Development Goal 6. Target 6.3 of this goal is to "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally". One of the indicators for this target is the "proportion of domestic and industrial wastewater flows safely treated".

See also
Best management practice for water pollution (BMP)
List of waste water treatment technologies
Purified water (for industrial use)
Water purification (for drinking water)

External links
Water Environment Federation - Professional society
Industrial Wastewater Treatment Technology Database - EPA
Industrial wastewater treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
6,740
[ "Water treatment", "Water pollution", "Sewerage", "Environmental engineering", "Waste treatment technology" ]
1,558,266
https://en.wikipedia.org/wiki/Acetylacetone
Acetylacetone is an organic compound with the chemical formula CH3COCH2COCH3 (the keto form). It is classified as a 1,3-diketone. It exists in equilibrium with a tautomer, CH3C(O)CH=C(OH)CH3 (the enol form). The mixture is a colorless liquid. These tautomers interconvert so rapidly under most conditions that they are treated as a single compound in most applications. Acetylacetone is a building block for the synthesis of many coordination complexes as well as heterocyclic compounds.

Properties

Tautomerism
The keto and enol tautomers of acetylacetone coexist in solution. The enol form has C2v symmetry, meaning the hydrogen atom is shared equally between the two oxygen atoms. In the gas phase, the equilibrium constant, K(keto→enol), is 11.7, favoring the enol form. The two tautomeric forms can be distinguished by NMR spectroscopy, IR spectroscopy and other methods. The equilibrium constant tends to be high in nonpolar solvents; when K(keto→enol) is equal to or greater than 1, the enol form is favoured. The keto form becomes more favourable in polar, hydrogen-bonding solvents, such as water. The enol form is a vinylogous analogue of a carboxylic acid.
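With the gas-phase constant quoted above, the tautomer ratio converts directly into a mole fraction; this is a one-line worked example using only the stated value of K:

$$ x_{\mathrm{enol}} = \frac{K}{1 + K} = \frac{11.7}{1 + 11.7} \approx 0.92 $$

i.e. roughly 92% of gas-phase acetylacetone is present as the enol at equilibrium.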
In some cases the chelate effect is so strong that no added base is needed to form the complex.

Biodegradation

The enzyme acetylacetone dioxygenase cleaves a central carbon-carbon bond of acetylacetone, producing acetate and 2-oxopropanal. The enzyme is iron(II)-dependent, but it has also been shown to bind zinc. Acetylacetone degradation has been characterized in the bacterium Acinetobacter johnsonii.

References

External links

Diketones Chelating agents Ligands 3-Hydroxypropenals Enols
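As a brief worked illustration of the equilibrium figures quoted above (a back-of-the-envelope calculation, assuming ideal dilute-solution behaviour and taking the gas-phase K and the aqueous pKa at face value):

\[
x_{\text{enol}} = \frac{K}{1+K} = \frac{11.7}{12.7} \approx 0.92,
\qquad
\frac{[\text{acac}^-]}{[\text{Hacac}]} = 10^{\,\text{pH}-\text{p}K_\text{a}} = 10^{7-9.0} = 0.01 .
\]

That is, roughly 92% of gas-phase acetylacetone is the enol tautomer, while in neutral water (pH 7) only about 1% of the compound exists as the acetylacetonate anion, which is why a base is normally added when forming metal complexes.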
Acetylacetone
[ "Chemistry" ]
1,047
[ "Enols", "Ligands", "Coordination chemistry", "Functional groups", "Chelating agents", "Process chemicals" ]
1,558,322
https://en.wikipedia.org/wiki/Sexual%20frustration
Sexual frustration is a sense of dissatisfaction stemming from a discrepancy between a person's desired and achieved sexual activity. It may result from physical, mental, emotional, social, financial, religious or spiritual barriers. It can derive from displeasure during sex due to issues such as anorgasmia, anaphrodisia, premature ejaculation, delayed ejaculation or erectile dysfunction. A sense of incompatibility or discrepancy in libido between partners may be involved. It may also relate to broader existential frustration. Historical methods of dealing with sexual frustration have included fasting and the taking of libido suppressants such as anaphrodisiacs (food supplements) or antaphrodisiacs (medicinal supplements). It can also affect the sexually active, especially hypersexual people. It is a natural stage of development during puberty. Ways to cope with sexual frustration include engaging in solo sex, meditating, exercising, exploring new techniques, discussing and being open with one's partner about sexual frustrations, or seeking professional assistance through a sex therapist.

Adolescents

Adolescents may experience sexual frustration due to a variety of factors, including societal expectations, hormonal changes, and the complexities of navigating relationships. For some adolescents, sexting serves as an outlet for sexual exploration within a virtual space, particularly for those not yet ready for physical sexual activity.

Menopause

During menopause, individuals may experience reduced sexual desire and activity. However, engaging in sex remains important for many older people. Couples in their 50s or older expect ongoing sexual involvement, with an emphasis on traditional intercourse over other forms. Common sexual dysfunctions, like ejaculatory issues in males and genital atrophy in females, pose challenges. Lack of awareness about these changes may hinder communication with partners, potentially leading to sexual frustration and abstinence.

Other groups

Autism

People with autism spectrum disorder (ASD) may face sexual frustration far more than most other people due to challenges in social interaction, communication difficulties, and sensory sensitivities associated with ASD. These individuals often struggle to interpret social cues and establish meaningful relationships, leading to a sense of isolation. Sensory sensitivities can also contribute to discomfort in intimate situations. Additionally, the lack of tailored resources and support for sexuality education exacerbates their frustration.

Sexual frustration's impact on aggression and crime

Sexual frustration has been identified as a factor contributing to immoral conduct throughout history. However, it is not prominently addressed in major criminological theories. This historical oversight can be attributed to misguided perspectives stemming from misconceptions that disregard female sexual frustration, misrepresent male biology, and fail to consider psychological and qualitative dimensions, including the option of masturbation. Sexual frustration extends beyond individuals facing involuntary celibacy; it also affects those engaged in sexual activity. The frustration arising from unmet sexual desires, unavailability of partners, and unsatisfactory sexual experiences appears to heighten the risks of aggression, violence, and criminal tendencies associated with the pursuit of relief, power, revenge, and displaced frustration.
While sexual frustration alone is not adequate to fully explain aggression, violence, or crime, recognizing its impact on behavior remains crucial. See also Edging (sexual practice) Erotic sexual denial Orgastic potency Sexual abstinence Sexual tension Incel References Interpersonal conflict Human sexuality
Sexual frustration
[ "Biology" ]
712
[ "Human sexuality", "Behavior", "Human behavior", "Sexuality" ]
1,558,400
https://en.wikipedia.org/wiki/HD%20107146
HD 107146 is a star in the constellation Coma Berenices that is located about from Earth. Its apparent magnitude of 7.028 makes the star too faint to be seen with the unaided eye. The physical properties of this star are similar to those of the Sun, including the stellar classification G2V, making it a solar analog. Its mass is about 109% of the solar mass, and its radius is about 99% of the Sun's. It is a young star, with an age between 80 and 200 Myr. The axis of rotation is estimated to be inclined degrees to the line of sight, and the star completes a rotation in a relatively brief 3.5 days.

Circumstellar disc

In 2003, astronomers recognized the excess infrared and submillimeter emission indicative of circumstellar dust, the first time such a debris disk had been noted around a star with a spectral type similar to the Sun's, though much younger in age. In 2004 the Hubble Space Telescope detected a spatially resolved disk surrounding the star. The star's circumstellar disc has dimensions of approximately . The dusty ring is cool, with a temperature of , and has a dust mass of 0.250 and nearly no gas. Analysis of the debris disk in the far-infrared and submillimeter wavelengths, carried out using the Hubble Space Telescope, suggests the presence of small grains in the disk. The disk appears to be slightly elongated, forming an ellipse with its minor axis at a position angle of ; working under the assumption that the disk is in fact circular gives it an inclination of from the plane of the sky. An analysis published in 2009 suggests the possible presence of a planet at a separation of 45-75 AU, within the wide gap centered at 75.4 AU which may be carved by the planet, but no planet with mass exceeding 1-2 was observed in the gap.

Gallery

References

Coma Berenices 107146 G-type main-sequence stars Circumstellar disks 060074 BD+17 246 Solar analogs J12190650+1632541 Hypothetical planetary systems
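As a rough consistency check on the rotation figures quoted above (a back-of-the-envelope calculation, taking the 0.99 solar radius and the 3.5-day period at face value and assuming rigid rotation):

\[
v_{\text{eq}} = \frac{2\pi R}{P} \approx \frac{2\pi \times 0.99 \times 6.957\times 10^{5}\ \text{km}}{3.5 \times 86400\ \text{s}} \approx 14\ \text{km/s},
\]

an equatorial rotation speed roughly seven times the Sun's (about 2 km/s), consistent with the star's youth, since solar-type stars spin down with age.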
HD 107146
[ "Astronomy" ]
440
[ "Coma Berenices", "Constellations" ]
1,558,454
https://en.wikipedia.org/wiki/Phosalone
Phosalone is an organophosphate chemical commonly used as an insecticide and acaricide. It was developed by Rhône-Poulenc in France, but the EU removed it from pesticide registration in December 2006. The median lethal dose for oral exposure in rats is 85 mg/kg, and that for dermal exposure is 390 mg/kg. It is a weak acetylcholinesterase inhibitor. It is absorbed not only orally and by inhalation but also through the skin, and it causes toxic symptoms peculiar to organophosphorus compounds, such as miosis, hypersalivation, hyperhidrosis, chest pressure, pulmonary edema and fecal incontinence. It is flammable and decomposes into toxic gases such as phosphorus oxides, sulfur oxides and nitrogen oxides. It is especially harmful to aquatic life.

References

External links

Phosalone Fact Sheet EPA Webpage on Phosolone Acetylcholinesterase inhibitors Organophosphate insecticides Chloroarenes Carbamates Benzoxazoles Phosphorodithioates Ethyl esters
Phosalone
[ "Chemistry" ]
223
[ "Functional groups", "Phosphorodithioates", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
1,558,470
https://en.wikipedia.org/wiki/Asa%E1%B9%83khyeya
An asaṃkhyeya () is a Buddhist name for the number 10^140, or alternatively for the number as it is described in the Avatamsaka Sutra. The value of the number differs depending upon the translation: it is in the translation of Buddhabhadra, in that of Shikshananda, and in that of Thomas Cleary, who may have made an error in calculation. In these religious traditions, the word has the meaning of 'incalculable'. Asaṃkhyeya is a Sanskrit word that appears often in the Buddhist texts. For example, Shakyamuni Buddha is said to have practiced for 3 great asaṃkhyeya kalpas before becoming a Buddha.

See also

History of large numbers

References

Buddhism and science Large integers
Asaṃkhyeya
[ "Mathematics" ]
152
[ "Mathematical objects", "Numbers", "Number stubs" ]
1,558,612
https://en.wikipedia.org/wiki/Nickel%28II%29%20chloride
Nickel(II) chloride (or just nickel chloride) is the chemical compound NiCl2. The anhydrous salt is yellow, but the more familiar hydrate NiCl2·6H2O is green. Nickel(II) chloride, in various forms, is the most important source of nickel for chemical synthesis. The nickel chlorides are deliquescent, absorbing moisture from the air to form a solution. Nickel salts have been shown to be carcinogenic to the lungs and nasal passages in cases of long-term inhalation exposure.

Production and syntheses

Large-scale production and uses of nickel chloride are associated with the purification of nickel from its ores. It is generated by extraction with hydrochloric acid from nickel matte and from residues obtained by roasting and refining nickel-containing ores. Electrolysis of nickel chloride solutions is used in the production of nickel metal. Other significant routes to nickel chloride arise from the processing of ore concentrates, such as various reactions involving copper chlorides:

Laboratory routes

Nickel chloride is not usually prepared in the laboratory because it is inexpensive and has a long shelf-life. The yellowish dihydrate, NiCl2·2H2O, is produced by heating the hexahydrate between 66 and 133 °C. The hydrates convert to the anhydrous form upon heating in thionyl chloride or by heating under a stream of HCl gas. Simply heating the hydrates does not afford the anhydrous dichloride. The dehydration is accompanied by a color change from green to yellow. If a pure compound free of cobalt is needed, nickel chloride can be obtained by cautiously heating hexaamminenickel chloride:

[Ni(NH3)6]Cl2 → NiCl2 + 6 NH3 (175–200 °C)

Structure of NiCl2 and its hydrates

NiCl2 adopts the CdCl2 structure. In this motif, each Ni2+ center is coordinated to six Cl− centers, and each chloride is bonded to three Ni(II) centers. In NiCl2 the Ni-Cl bonds have "ionic character". Yellow NiBr2 and black NiI2 adopt similar structures, but with a different packing of the halides, adopting the CdI2 motif. In contrast, NiCl2·6H2O consists of separated trans-[NiCl2(H2O)4] molecules linked more weakly to adjacent water molecules. Only four of the six water molecules in the formula are bound to the nickel; the remaining two are water of crystallization, so the formula of nickel(II) chloride hexahydrate is [NiCl2(H2O)4]·2H2O. Cobalt(II) chloride hexahydrate has a similar structure. The hexahydrate occurs in nature as the very rare mineral nickelbischofite. The dihydrate NiCl2·2H2O adopts a structure intermediate between the hexahydrate and the anhydrous forms. It consists of infinite chains of NiCl2, wherein both chloride centers are bridging ligands. The trans sites on the octahedral centers are occupied by aquo ligands. A tetrahydrate NiCl2·4H2O is also known.

Reactions

Nickel(II) chloride solutions are acidic, with a pH of around 4, due to the hydrolysis of the Ni2+ ion.

Coordination complexes

Most of the reactions ascribed to "nickel chloride" involve the hexahydrate, although specialized reactions require the anhydrous form. Reactions starting from NiCl2·6H2O can be used to form a variety of nickel coordination complexes because the H2O ligands are rapidly displaced by ammonia, amines, thioethers, thiolates, and organophosphines. In some derivatives, the chloride remains within the coordination sphere, whereas chloride is displaced with highly basic ligands.
Illustrative complexes include:

NiCl2 is the precursor to the acetylacetonate complexes Ni(acac)2(H2O)2 and the benzene-soluble (Ni(acac)2)3, which is a precursor to Ni(1,5-cyclooctadiene)2, an important reagent in organonickel chemistry.

In the presence of water scavengers, hydrated nickel(II) chloride reacts with dimethoxyethane (dme) to form the molecular complex NiCl2(dme)2. The dme ligands in this complex are labile.

Applications in organic synthesis

NiCl2 and its hydrate are occasionally useful in organic synthesis:

As a mild Lewis acid, e.g. for the regioselective isomerization of dienols.

In combination with CrCl2 for the coupling of an aldehyde and a vinylic iodide to give allylic alcohols.

For selective reductions in the presence of LiAlH4, e.g. for the conversion of alkenes to alkanes.

As a precursor to Brown's P-1 and P-2 nickel boride catalysts through reaction with NaBH4.

As a precursor to finely divided Ni by reduction with Zn, for the reduction of aldehydes, alkenes, and nitroaromatic compounds. This reagent also promotes homo-coupling reactions, that is, 2 RX → R-R where R = aryl, vinyl.

As a catalyst for making dialkyl arylphosphonates from phosphites and an aryl iodide, ArI: ArI + P(OEt)3 → ArP(O)(OEt)2 + EtI

NiCl2-dme (or NiCl2-glyme) is used owing to its increased solubility in comparison with the hexahydrate.

Safety

Nickel(II) chloride is irritating upon ingestion, inhalation, skin contact, and eye contact. Prolonged inhalation exposure to nickel and its compounds has been linked to increased cancer risk to the lungs and nasal passages.

References

External links

NIOSH Pocket Guide to Chemical Hazards Nickel compounds Chlorides Metal halides IARC Group 1 carcinogens Coordination complexes
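As a small worked figure for the dehydration described under Laboratory routes (a back-of-the-envelope check using standard atomic weights, not taken from the source):

\[
\frac{M(\text{NiCl}_2)}{M(\text{NiCl}_2\cdot 6\text{H}_2\text{O})} = \frac{129.6\ \text{g/mol}}{237.7\ \text{g/mol}} \approx 0.545,
\]

so fully dehydrating the green hexahydrate to the yellow anhydrous salt should lose roughly 45% of the starting mass, a convenient gravimetric indication that dehydration has gone to completion.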
Nickel(II) chloride
[ "Chemistry" ]
1,323
[ "Chlorides", "Inorganic compounds", "Coordination complexes", "Coordination chemistry", "Salts", "Metal halides" ]
1,558,629
https://en.wikipedia.org/wiki/Iron%28II%29%20chloride
Iron(II) chloride, also known as ferrous chloride, is the chemical compound of formula FeCl2. It is a paramagnetic solid with a high melting point. The compound is white, but typical samples are often off-white. FeCl2 crystallizes from water as the greenish tetrahydrate, which is the form most commonly encountered in commerce and the laboratory. There is also a dihydrate. The compound is highly soluble in water, giving pale green solutions.

Production

Hydrated forms of ferrous chloride are generated by treatment of wastes from steel production with hydrochloric acid. Such solutions are designated "spent acid" or "pickle liquor", especially when the hydrochloric acid is not completely consumed:

Fe + 2 HCl → FeCl2 + H2

The production of ferric chloride involves the use of ferrous chloride. Ferrous chloride is also a byproduct of titanium production, since some titanium ores contain iron.

Anhydrous FeCl2

Ferrous chloride is prepared by addition of iron powder to a solution of hydrochloric acid in methanol. This reaction gives the methanol solvate of the dichloride, which upon heating in a vacuum at about 160 °C converts to anhydrous FeCl2. The net reaction is shown:

Fe + 2 HCl → FeCl2 + H2

FeBr2 and FeI2 can be prepared analogously. An alternative synthesis of anhydrous ferrous chloride is the reduction of FeCl3 with chlorobenzene:

2 FeCl3 + C6H5Cl → 2 FeCl2 + C6H4Cl2 + HCl

For the preparation of ferrocene, ferrous chloride is generated in situ by comproportionation of FeCl3 with iron powder in tetrahydrofuran (THF). Ferric chloride decomposes to ferrous chloride at high temperatures.

Hydrates

The dihydrate, FeCl2(H2O)2, crystallizes from concentrated hydrochloric acid. The dihydrate is a coordination polymer. Each Fe center is coordinated to four doubly bridging chloride ligands. The octahedron is completed by a pair of mutually trans aquo ligands.

Reactions

FeCl2 and its hydrates form complexes with many ligands. For example, solutions of the hydrates react with two molar equivalents of [(C2H5)4N]Cl to give the salt [(C2H5)4N]2[FeCl4]. The anhydrous FeCl2, which is soluble in THF, is a standard precursor in organometallic synthesis. FeCl2 is used to generate NHC complexes in situ for cross-coupling reactions.

Applications

Unlike the related ferrous sulfate and ferric chloride, ferrous chloride has few commercial applications. Aside from use in the laboratory synthesis of iron complexes, ferrous chloride serves as a coagulation and flocculation agent in wastewater treatment, especially for wastes containing chromate or sulfides. It is used for odor control in wastewater treatment. It is used as a precursor to make various grades of hematite that can be used in a variety of pigments. It is the precursor to hydrated iron(III) oxides that are magnetic pigments. FeCl2 finds some use as a reagent in organic synthesis.

Natural occurrence

Lawrencite, (Fe,Ni)Cl2, is the natural counterpart, and a typically (though rarely occurring) meteoritic mineral. The natural form of the dihydrate is rokühnite, a very rare mineral. Related, but more complex (in particular, basic or hydrated) minerals are hibbingite, droninoite and kuliginite.

References

See also

Iron(III) chloride Iron(II) sulfate Chlorides Iron(II) compounds Metal halides
Iron(II) chloride
[ "Chemistry" ]
840
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
1,558,634
https://en.wikipedia.org/wiki/GPS%20drawing
GPS drawing, also known as GPS art, is a method of drawing where an artist uses a Global Positioning System (GPS) device and follows a pre-planned route to create a large-scale picture or pattern. The .GPX data file recorded during the drawing process is then visualised, usually by overlaying it as a line on a map of the area. Artists usually run or cycle the route, while cars, vans, boats and aeroplanes are utilized to create larger pieces. The first known GPS drawing was made by Reid Stowe in 1999. "Voyage of the Turtle" is an ocean-sized drawing with a 5,500-mile circumference, made in the Atlantic using a sailboat. The GPS data was recorded in logbooks and was therefore very low resolution. In 2000, after the US military's GPS satellite signals were opened up to the public, artists Jeremy Wood and Hugh Pryor were able to use a newly available GPS tracker to record their movements. To display their drawings, Hugh Pryor wrote a computer program which converted the GPX data into a single line to be shown on screen or turned into an image file. With these tools in place, GPS drawing was able to develop as a distinct art form.

Planning

GPS artists can spend many hours finding a certain image or text hidden in a map, or can sometimes simply see an existing image in a map due to pareidolia. In many cities and towns the road layout and landscape restrict the routes available, so artists have to find creative ways to show their pictures or characters. In cities with a strong grid pattern, 8-bit-style or pixelated images can be created of almost any object or shape. Many artists will create paper or digital maps of their route to follow on their journey. Several websites have arisen (including RouteDoodle.com and GPSArtify.com) to aid in the planning process.

Artistic style

There are many approaches to GPS drawing which an artist can choose depending on their means of travel and the landscape around them.

Roads, trails, and paths only

One style uses only pre-existing roads, paths, trails, etc. This can make it more challenging to find a route and plan the artwork. Working on pre-existing routes can make navigation easier, and the artwork is more likely to reflect the original plan. This is how the majority of GPS drawings are made.

Freehand

In freehand GPS drawing, an artist creates a shape on open ground, air, or water without following existing paths. This means the artist has to watch their progress in real time on their GPS device. Artists can run or cycle over open ground such as parks, fields, and car parks. Artists in cars and other motor vehicles can draw shapes on large open areas such as deserts, airfields, and beaches. Almost all artworks created by aircraft and watercraft use this technique, as they are not restricted by human and physical geography. Freehand GPS drawing opens unlimited possibilities, but without waypoints and existing routes it is very easy to lose track of one's progress and make mistakes.

Connect the dots

By pausing the GPS device and restarting it at a different location, an artist is able to draw straight lines across the map, in a similar way to a connect-the-dots puzzle. This means the artist can draw over the built environment and over physical barriers such as rivers and hills.

Adding extra images

Some artists add extra images or lines to the map after they have created the route. They can simply add googly eyes to an animal or face, or go further and add lines and other features which help viewers see what they have drawn.
Other times an artist will show a photo or other image alongside their drawing if it is not clear at first glance what has been drawn.

Other methods

Artists can collaborate with each other or with members of the public to create larger images, visualisations, collages and even GPS animations from multiple GPX files or route images. GPS devices can also be given to people, or attached to vehicles, which are tracked as they go about normal life or take part in specific activities, and the GPX data is then visualised. In the freestyle method of GPS drawing, the path followed by the GPS receiver is random or semi-random, following a set of pre-determined rules.

Burbing

Burbing, a term derived from the word suburb, is the practice of cycling every road in a suburb and tracking this on GPS to create an intricate pattern. One of the first examples of burbing was created by cyclist Christian Lloyd in 2014. Burbing became a more widespread trend during the COVID-19 pandemic, when wider travel was restricted.

Display

Most people use a route-mapping app or other service to display their drawing online and to share it on social media. Popular apps include Strava, Map My Run, and Garmin. Many artists also import their route into Google Maps, OpenStreetMap, Viewranger, and other map services before capturing the image to display and share. This gives the artist the option of expanding and cropping the image, orienting it another way, or tilting the map to add perspective. Some artists use false-color maps with contrasting colors for their route to create vivid images. Artist Jeremy Wood often displays his drawings without showing any map underneath. He is able to do this because the drawings are so detailed that the shape of the built environment or landscape is visible in the lines themselves. One work, "Traverse Me", maps out the University of Warwick campus and includes the map title, other text and images, a compass, scale, and date signature. It was made by walking 238 miles over 17 days.

Examples and artists

In 1999, Reid Stowe was probably the first artist to employ waypoints on a GPS-verified journey in order to render a large-scale art object. This work of GPS art, representing a baby sea turtle (1900 miles long and 1400 miles wide, with a perimeter of 5,500 miles), was performed with a two-masted schooner during the Voyage of the Sea Turtle. He made two more large GPS-verified drawings on his 1000-day voyage. The idea was first implemented on land by artists Hugh Pryor and Jeremy Wood, whose work includes a 13-mile-wide fish in Oxfordshire, spiders with legs 21 miles long in Port Meadow, Oxford, and "the world's biggest 'IF'", with a total length of 537 km; the height of the drawing in typographic units is 319,334,400 points (typical computer fonts at standard resolutions are between 8 and 12 points). The largest text written using a GPS device was "PEACE on Earth (60,794.07km)", created in 2015 by Yassan by travelling around the entire globe by plane. Yassan also made headlines by proposing to his girlfriend with "Marry Me", a 7,163.7 km route covering most of Japan. In 2018 artist Nathan Rae created a #WeLoveManchester piece as part of the commemorations of the Manchester Arena bombing. One of the most prolific GPS artists is the artist known as WallyGPX who, as of October 2018, had created over 500 pieces of GPS art. He uses pencil and paper to plan routes around his home city of Baltimore, which he then creates by bicycle.
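Since every technique above ends with a recorded .GPX track, a minimal sketch of the visualisation step may help. This is only an illustration, not any artist's actual tool; the file name route.gpx is a placeholder, and the GPX 1.1 namespace URI is the standard one. It reads the track points out of the GPX XML and draws the route as a bare line, in the style of displaying a drawing without a base map:

```python
# Read the track points from a .GPX file (a plain XML format) and draw
# the recorded route as a single line.
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_track(path):
    """Return the (longitude, latitude) pair of every track point."""
    root = ET.parse(path).getroot()
    return [(float(p.get("lon")), float(p.get("lat")))
            for p in root.iterfind(".//gpx:trkpt", NS)]

if __name__ == "__main__":
    points = read_track("route.gpx")          # placeholder file name
    lons, lats = zip(*points)
    plt.plot(lons, lats, linewidth=2)
    plt.gca().set_aspect("equal")             # keep the drawing's proportions
    plt.axis("off")                           # the line alone, no base map
    plt.show()
```

Note that treating raw longitude/latitude as flat x/y coordinates is only a rough approximation; away from the equator a proper map projection would be needed to preserve the drawing's true proportions.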
References External links GPSDrawing.com GPS Visualizer - utility for making maps from raw data gpsdrawing.info gpsart.info - GPS ART Guide in Japan www.strav.art/ RouteDoodle.com - GPS art making tool Drawing Visual arts genres
GPS drawing
[ "Technology", "Engineering" ]
1,527
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
1,558,827
https://en.wikipedia.org/wiki/Tagged%20queuing
Tagged queuing is a method for allowing a hardware device or controller to process commands received from a device driver out of order. It requires that the device driver attach a tag to each command, which the controller or device can later use to identify the response to that command. Tagged queuing can speed up processing considerably if a controller serves devices of very different speeds, such as a SCSI controller serving a mix of CD-ROM drives and high-speed disks. In such cases, if a request to fetch data from the CD-ROM is shortly followed by a request to read from the disk, the controller does not have to wait for the CD-ROM to fetch the data; it can instead instruct the disk to fetch its data and return the value to the device driver while the CD-ROM is probably still seeking.

References

Computer peripherals
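A toy sketch of the idea described above may make it concrete: the driver attaches a fresh tag to each command it issues, so completions can arrive in any order and still be matched to the right request. The device names, commands, and completion order here are invented for illustration; a real SCSI driver would, of course, exchange these messages with hardware.

```python
# Minimal model of tagged queuing: out-of-order completions are matched
# to their originating commands by tag.
import itertools

class TaggedQueueDriver:
    def __init__(self):
        self._next_tag = itertools.count()  # source of unique tags
        self._pending = {}                  # tag -> outstanding command

    def issue(self, device, command):
        """Send a command and remember it under a fresh tag."""
        tag = next(self._next_tag)
        self._pending[tag] = (device, command)
        print(f"issued tag={tag}: {device} {command}")
        return tag

    def complete(self, tag, result):
        """Handle a completion, in whatever order it arrives."""
        device, command = self._pending.pop(tag)
        print(f"done   tag={tag}: {device} {command} -> {result}")

driver = TaggedQueueDriver()
slow = driver.issue("cdrom0", "READ sector 100")    # slow device
fast = driver.issue("disk0", "READ sector 2048")    # fast device
# The fast disk finishes first even though it was asked second; the tag
# tells the driver which outstanding request this completion answers.
driver.complete(fast, "data...")
driver.complete(slow, "data...")
```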
Tagged queuing
[ "Technology" ]
172
[ "Computer peripherals", "Computing stubs", "Components" ]
1,558,886
https://en.wikipedia.org/wiki/Ingalls%20Building
The Ingalls Building, built in 1903 in Cincinnati, Ohio, is the world's first reinforced concrete skyscraper. The 16-story building was designed by the Cincinnati architectural firm Elzner & Anderson and was named for its primary financial investor, Melville E. Ingalls. The building was considered a daring engineering feat at the time, but its success contributed to the acceptance of concrete construction in high-rise buildings in the United States. It was converted to a hotel, the Courtyard by Marriott Cincinnati Downtown, in 2021. The Ingalls building is bordered by East 4th Street and Vine Street in the Cincinnati Central Business District. Overcoming skepticism Prior to 1902, the tallest reinforced concrete structure in the world was only six stories high. Since concrete possesses very low tensile (pulling) strength, many people from both the public and the engineering community believed that a concrete tower as tall as the plan for the Ingalls Building would collapse under wind loads or even its own weight. When the building was completed and the supports removed, one reporter allegedly stayed awake through the night in order to be the first to report on the building's demise. Ingalls and engineer Henry N. Hooper were convinced, however, that Ernest L. Ransome's system of casting twisted steel bars inside of concrete slabs as reinforcement (patented in 1884) and casting slab, beams and joists as a unit would allow them to create a rigid structure. The architects also prized the cost savings and fireproofing advantages of concrete over steel frame construction. Finally, after two years of convincing, city officials issued Ingalls a building permit and the work began. Construction Hooper designed a monolithic "concrete box of eight-inch [200 mm] walls, with concrete floors and roof, concrete beams, concrete columns, concrete stairs -- no steel. It consists merely of bars embedded in concrete, with the ends interlaced." (Ali) The amount of concrete produced during construction—100 cubic yards (76 m3) in each ten-hour shift—was limited by the rate at which the builders could place it. An extra wet mix was used to ensure complete contact with the rebars and uniform density in the columns. Floor slabs were poured without joints at the rate of three stories per month. Columns measured 30 by 34 inches (760 by 860 mm) for the first ten floors and 12 inches (300 mm) square for the rest. Three sets of forms were used, rotating from the bottom to the top of the building when the concrete had cured. Completed in eight months, the finished building measures 50 by 100 feet (15 by 30 m) at its base and 210 feet (64 m) tall. The exterior concrete walls are eight inches thick (200 mm) in unbroken slabs 16 feet (5 m) square with a veneer 4 to 6 inches (100 to 150 mm) thick. The Beaux Arts Classical exterior is covered on the first three stories with white marble, on the next eleven stories with glazed gray brick, and on the top floor and cornice with glazed white terra cotta. Later history Still in use today, the building was designated a National Historic Civil Engineering Landmark in 1974 by the American Society of Civil Engineers. In 1975, it was added to the National Register of Historic Places. The building was purchased on January 17, 2013, by CLA OH LLC (an affiliate of Claremont Group, a New York City-based real estate development firm) from CapCar Realty 1.1 LLC, for $1.45 million. 
In November 2013, Claremont Group CEO Perry Chopra disclosed his intentions to convert the office building into 40 to 50 condos, with ground-floor retail. However, in April 2015, a real estate broker announced that the building was again for sale, after Claremont Group decided not to execute the condominium project. It was converted to a hotel, the Courtyard by Marriott Cincinnati Downtown, in 2021. See also List of historic civil engineering landmarks References External links Concrete Tower Scrapes Skies (February, 1999). Engineering News-Record Ingalls Building. Ingalls Building. American Society of Civil Engineers Skyscraper office buildings in Cincinnati Concrete pioneers National Register of Historic Places in Cincinnati Historic Civil Engineering Landmarks 1903 establishments in Ohio Buildings and structures completed in 1903
Ingalls Building
[ "Engineering" ]
861
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
1,559,005
https://en.wikipedia.org/wiki/Intel%208259
The Intel 8259 is a programmable interrupt controller (PIC) designed for the Intel 8085 and 8086 microprocessors. The initial part was the 8259; a later version with an A suffix was upward compatible and usable with the 8086 or 8088 processor. The 8259 combines multiple interrupt input sources into a single interrupt output to the host microprocessor, extending the interrupt levels available in a system beyond the one or two levels found on the processor chip. The 8259A was the interrupt controller for the ISA bus in the original IBM PC and IBM PC AT. The 8259 was introduced as part of Intel's MCS 85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained by the PC/XT introduced in 1983. A second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC architecture since the latter's introduction in symmetric multiprocessor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC architecture; however, while no longer a separate chip, the 8259A interface is still provided by the Platform Controller Hub or southbridge on modern x86 motherboards.

Functional description

The main signal pins on an 8259 are as follows: eight interrupt request input lines named IRQ0 through IRQ7, an interrupt request output line named INTR, an interrupt acknowledgment line named INTA, and D0 through D7 for communicating the interrupt level or vector offset. Other connections include CAS0 through CAS2 for cascading between 8259s. Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs. 8259s are cascaded by connecting the INT line of a slave 8259 to an IRQ line of the master 8259. End of interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR. A non-specific EOI resets the highest-priority IRQ level in the ISR. Auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged. Edge and level interrupt trigger modes are supported by the 8259A. Fixed-priority and rotating-priority modes are supported. The 8259 may be configured to work with an 8080/8085 or an 8086/8088. On the 8086/8088, the interrupt controller provides an interrupt number on the data bus when an interrupt occurs. The interrupt cycle of the 8080/8085 issues three bytes on the data bus (corresponding to a CALL instruction in the 8080/8085 instruction set). The 8259A provides additional functionality compared to the 8259 (in particular buffered mode and level-triggered mode) and is upward compatible with it.

Programming considerations

DOS and Windows

Programming an 8259 in conjunction with DOS and Microsoft Windows has introduced a number of confusing issues for the sake of backwards compatibility, which extends as far back as the original PC introduced in 1981. The first issue is more or less the root of the second issue. DOS device drivers are expected to send a non-specific EOI to the 8259s when they finish servicing their device. This prevents the use of any of the 8259's other EOI modes in DOS, and excludes the differentiation between device interrupts rerouted from the master 8259 to the slave 8259. The second issue deals with the use of IRQ2 and IRQ9 from the introduction of a slave 8259 in the PC/AT. The slave 8259's INT output is connected to the master's IR2. The IRQ2 line of the ISA bus, originally connected to this IR2, was rerouted to IR1 of the slave. Thus the old IRQ2 line now generates IRQ9 in the CPU.
To allow backwards compatibility with DOS device drivers that still set up for IRQ2, the BIOS installs a handler for IRQ9 that redirects interrupts to the original IRQ2 handler. On the PC, the BIOS (and thus also DOS) traditionally maps the master 8259 interrupt requests (IRQ0-IRQ7) to interrupt vector offset 8 (INT08-INT0F) and the slave 8259 (in the PC/AT and later) interrupt requests (IRQ8-IRQ15) to interrupt vector offset 112 (INT70-INT77). This was done despite the first 32 interrupt vectors (INT00-INT1F) being reserved by the processor for internal exceptions, a reservation the design of the PC ignored. Because of the reserved exception vectors, most other operating systems map (at least the master) 8259 IRQs, if used on a platform, to another interrupt vector base offset.

Other operating systems

Since most other operating systems allow for changes in device driver expectations, other 8259 modes of operation, such as auto-EOI, may be used. This is especially important for modern x86 hardware, in which a significant amount of time may be spent on I/O address space delays when communicating with the 8259s. This also allows a number of other optimizations in synchronization, such as critical sections, in a multiprocessor x86 system with 8259s.

Edge and level triggered modes

Since the ISA bus does not support level-triggered interrupts, level-triggered mode may not be used for interrupts connected to ISA devices. This means that on PC/XT, PC/AT, and compatible systems the 8259 must be programmed for edge-triggered mode. On MCA systems, devices use level-triggered interrupts and the interrupt controller is hardwired to always work in level-triggered mode. On newer EISA, PCI, and later systems the Edge/Level Control Registers (ELCRs) control the mode per IRQ line, effectively making the mode of the 8259 irrelevant for such systems with ISA buses. The ELCRs are programmed by the BIOS at system startup for correct operation. They are located at 0x4d0 and 0x4d1 in the x86 I/O address space and are 8 bits wide, each bit corresponding to an IRQ from the 8259s. When a bit is set, the IRQ is in level-triggered mode; otherwise, the IRQ is in edge-triggered mode.

Spurious interrupts

The 8259 generates spurious interrupts in response to a number of conditions. The first is an IRQ line being deasserted before it is acknowledged. This may occur due to noise on the IRQ lines. In edge-triggered mode, the noise must maintain the line in the low state for 100 ns. When the noise diminishes, a pull-up resistor returns the IRQ line to high, thus generating a false interrupt. In level-triggered mode, the noise may cause a high signal level on the system's INTR line. If the system sends an acknowledgment request, the 8259 has nothing to resolve and thus sends an IRQ7 in response. This first case will generate spurious IRQ7's. A similar case can occur when the unmasking of the 8259 and the deassertion of the IRQ input are not properly synchronized. In many systems, the IRQ input is deasserted by an I/O write, and the processor does not wait until the write reaches the I/O device. If the processor continues and unmasks the 8259 IRQ before the IRQ input is deasserted, the 8259 will assert INTR again. By the time the processor recognizes this INTR and issues an acknowledgment to read the IRQ from the 8259, the IRQ input may be deasserted, and the 8259 returns a spurious IRQ7.
The second condition occurs when the master 8259's IR2 input is active high while the slave 8259's IRQ lines are inactive on the falling edge of an interrupt acknowledgment. This second case will generate spurious IRQ15's, but is rare.

PC/XT and PC/AT

The PC/XT ISA system had one 8259 controller, while PC/AT and later systems had two 8259 controllers, master and slave. IRQ0 through IRQ7 are the master 8259's interrupt lines, while IRQ8 through IRQ15 are the slave 8259's interrupt lines. The labels on the pins of an 8259 are IR0 through IR7; IRQ0 through IRQ15 are the names of the ISA bus lines to which the 8259s are attached.

Variants

See also

Advanced Programmable Interrupt Controller (APIC) IF (x86 flag) Interrupt handler Interrupt latency Non-maskable interrupt (NMI)

References

Gilluwe, Frank van. The Undocumented PC. A-W Developers Press, 1997. McGivern, Joseph. Interrupt-Driven PC System Design. Annabooks, 1998. IBM Personal System/2 Hardware Interface Technical Reference – Architectures. IBM, 1990. IBM Publication 84F8933

External links

8259A Programmable Interrupt Controller Intel chipsets IBM PC compatibles Input/output integrated circuits Interrupts
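A small illustrative sketch of two conventions discussed above, the traditional BIOS vector mapping and the ELCR trigger-mode encoding, written as pure bit arithmetic. Actually reading ports 0x4D0/0x4D1 requires privileged, platform-specific I/O access, so the sample register values below are invented:

```python
# Decode the PC-style IRQ -> vector mapping and ELCR trigger-mode bits.

def irq_vector(irq):
    """BIOS/DOS mapping: IRQ0-7 -> INT 08h-0Fh, IRQ8-15 -> INT 70h-77h."""
    if not 0 <= irq <= 15:
        raise ValueError("IRQ out of range")
    return 0x08 + irq if irq < 8 else 0x70 + (irq - 8)

def is_level_triggered(irq, elcr0, elcr1):
    """ELCR convention: a set bit means the IRQ is level-triggered."""
    reg = elcr0 if irq < 8 else elcr1      # 0x4D0 covers IRQ0-7, 0x4D1 covers IRQ8-15
    return bool(reg & (1 << (irq & 7)))

# Hypothetical machine where IRQ9 and IRQ11 (often PCI) are level-triggered
# and everything else is edge-triggered.
elcr0, elcr1 = 0b00000000, 0b00001010
for irq in (0, 9, 11):
    mode = "level" if is_level_triggered(irq, elcr0, elcr1) else "edge"
    print(f"IRQ{irq}: vector {irq_vector(irq):#04x}, {mode}-triggered")
```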
Intel 8259
[ "Technology" ]
1,936
[ "Interrupts", "Events (computing)" ]
1,559,092
https://en.wikipedia.org/wiki/Pericallis%20%C3%97%20hybrida
Pericallis × hybrida, known as cineraria, florist's cineraria or common ragwort is a flowering plant in the family Asteraceae. It originated as a hybrid between Pericallis cruenta and P. lanata, both natives of the Canary Islands. The hybrid was first developed in the British royal gardens in 1777. It was originally known as Cineraria × hybrida, but the genus Cineraria is now restricted to a group of South African species, with the Canary Island species being transferred to the genus Pericallis; some botanists also treat it in a broad view of the large and widespread genus Senecio. Some varieties are sold under the trade name Senetti. Cultivation and uses Florist's cinerarias can be raised freely from seeds. For spring flowering the seeds are sown in mid spring in well-drained pots or pans, in soil of three parts loam to two parts leaf mould, with one-sixth sand; cover the seed thinly with fine soil, and press the surface firm. When the seedlings are large enough to handle, prick them out in pans or pots of similar soil, and when more advanced pot them singly in 10 cm pots, using soil a trifle less sandy. They should be grown in shallow frames facing the north. If so situated that the sun shines upon the plants in the middle of the day, they must be slightly shaded; give plenty of air, and never allow them to get dry. When well established with roots, shift them into 15 cm pots, which should be liberally supplied with manure water as they get filled with roots. In winter remove to a pit or house, where a little heat can be supplied whenever there is a risk of their getting frozen. They should stand on a moist bottom, but must not be subjected to cold draughts. When the flowering stems appear, give manure water at every alternate watering. Seeds sown in early spring, and grown on in this way, will be in flower by Christmas if kept in a temperature of from 5° to 7 °C at night, with a little more warmth in the day. Those sown in April and May will follow them during the early spring months, the latter set of plants being subjected to a temperature of 4° to 5 °C during the night. If grown much warmer than this, the chrysanthemum leaf miner may damage the leaves, tunnelling its way between the upper and lower surfaces and making whitish irregular markings all over. Such affected leaves must be picked off and burned. Aphids are also a major pest. References External links Florist's Cineraria information from Virginia Tech Florist's Cineraria at Botany hybrida Hybrid plants
Pericallis × hybrida
[ "Biology" ]
571
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
1,559,185
https://en.wikipedia.org/wiki/Discrete%20trial%20training
Discrete trial training (DTT) is a technique used by practitioners of applied behavior analysis (ABA) that was developed by Ivar Lovaas at the University of California, Los Angeles (UCLA). DTT uses mass instruction and reinforcers that create clear contingencies to shape new skills. Often employed as an early intensive behavioral intervention (EIBI) for up to 25–40 hours per week for children with autism, the technique relies on the use of prompts, modeling, and positive reinforcement strategies to facilitate the child's learning. It previously used aversives to punish unwanted behaviors. DTT has also been referred to as the "Lovaas/UCLA model", "rapid motor imitation antecedent", "listener responding", "errorless learning", and "mass trials".

Technique

Discrete trial training (DTT) is a process whereby an activity is divided into smaller distinct sub-tasks, and each of these is repeated continuously until a person is proficient. The trainer rewards successful completion and uses errorless correction procedures when the subject completes a sub-task unsuccessfully, so as to condition them into mastering the process. When proficiency is gained in each sub-task, the sub-tasks are re-combined into the whole activity: in this way proficiency at complex activities can be taught. DTT is carried out at a table with a one-on-one therapist-to-student ratio. Intervention can start when a child is as young as two years old and can last from two to six years. Progression through the goals of the program is determined individually and does not depend on how long the client has been in the program. The first year seeks to reduce self-stimulating/self-regulatory ("stimming") behavior (including stimming that poses no inherent harm), teach listener responding, eye contact, and rapid fine and gross motor imitation, as well as to establish playing with toys in what the therapist considers the "correct" way, and to integrate the family into the treatment protocol. The second year teaches early expressive language and abstract linguistic skills. The third year strives to include the individual's community in the treatment to optimize "mainstreaming" by focusing on peer interaction, basic socializing skills, basic social rules, emotional expression and variation, in addition to observational learning and pre-academic skills, such as reading, writing, and arithmetic. Rarely is the technique implemented for the first time with adults. DTT is typically performed five to seven days a week, with each session lasting from five to eight hours, totaling an average of 30–40 hours per week. Sessions are divided into trials with intermittent breaks, and the therapist is positioned directly across the table from the student receiving treatment. Each trial is composed of the therapist giving an instruction (i.e., "Look at me", "Do this", "Point to", etc.), in reference to an object, color, simple imitative gesture, etc., which is followed by a prompt (verbal, gestural, physical, etc.). The concept is centered on shaping the child to respond correctly to the instructions throughout the trials. Should the child fail to respond to an instruction, the therapist uses either a "partial prompt" (a simple nudge or touch on the hand or arm) or a "full prompt" to help the child successfully complete the task. Correct responses are reinforced with a reward, and the prompts are discontinued as the child begins to master each skill.
The intervention is often used in conjunction with the Picture Exchange Communication System (PECS), as it primes the child for an easy transition between treatment types. The PECS program is another common intervention technique used with individuals with autism. As many as 25% of autistic individuals have no functional speech. The program teaches spontaneous social communication through symbols or pictures by relying on ABA techniques. PECS operates on a similar premise to DTT in that it uses systematic chaining to teach the individual to pair the concept of expressive speech with an object. It is structured in a similar fashion to DTT, in that each session begins with a preferred-reinforcer survey to ascertain what would most motivate the child and effectively facilitate learning.

Effectiveness

Limited research shows DTT to be effective in enhancing spoken language and academic and adaptive skills; many studies have low-quality research designs, and larger sample sizes are needed.

Society and culture

In media

A 1965 article in Life magazine entitled Screams, Slaps and Love had a lasting impact on public attitudes towards Lovaas's therapy. Giving little thought to how their work might be portrayed, Lovaas and parent advocate Bernie Rimland, M.D., were surprised when the magazine article appeared, since it focussed on text and selected images showing the use of aversives, including a close-up of a child being slapped. Even after the use of aversives had been largely discontinued, the article continued to have an effect, galvanizing public concerns about behavior modification techniques.

United States cost

In April 2002, treatment cost in the U.S. was about US$4,200 per month ($50,000 annually) per child. The 20–40 hours per week intensity of the program, often conducted at home, may place additional stress on already challenged families.

History

Discrete trial training is rooted in the hypothesis of Charles Ferster that autism was caused in part by a person's inability to react appropriately to "social reinforcers", such as praise or criticism. Lovaas's early work concentrated on showing that it was possible to strengthen autistic people's responses to these social reinforcers, but he found that these improvements were not associated with any general improvement in overall behavior. In a 1987 paper, psychologists Frank Gresham and Donald MacMillan described a number of weaknesses in Lovaas's research and judged that it would be better to call the evidence for his interventions "promising" rather than "compelling". Lovaas's original technique used aversives such as striking, shouting, and electrical shocks to punish undesired behaviors. By 1979, Lovaas had abandoned the use of aversives, and in 2012 the use of electric shocks was described as being inconsistent with contemporary practice.

See also

Professional practice of behavior analysis

References

External links

Lovaas Institute for Early Intervention Autism Therapy Center Treatment of autism Behaviorism Behavior modification
Discrete trial training
[ "Biology" ]
1,307
[ "Behavior modification", "Human behavior", "Behavior", "Behaviorism" ]
1,559,221
https://en.wikipedia.org/wiki/Construction%20grammar
Construction grammar (often abbreviated CxG) is a family of theories within the field of cognitive linguistics which posit that constructions, or learned pairings of linguistic patterns with meanings, are the fundamental building blocks of human language. Constructions include words (aardvark, avocado), morphemes (anti-, -ing), fixed expressions and idioms (by and large, jog X's memory), and abstract grammatical rules such as the passive voice (The cat was hit by a car) or the ditransitive (Mary gave Alex the ball). Any linguistic pattern is considered to be a construction as long as some aspect of its form or its meaning cannot be predicted from its component parts, or from other constructions that are recognized to exist. In construction grammar, every utterance is understood to be a combination of multiple different constructions, which together specify its precise meaning and form. Advocates of construction grammar argue that language and culture are not designed by people, but are 'emergent' or automatically constructed in a process which is comparable to natural selection in species or the formation of natural constructions such as nests made by social insects. Constructions correspond to replicators or memes in memetics and other cultural replicator theories. It is argued that construction grammar is not an original model of cultural evolution, but in essence the same as memetics. Construction grammar is associated with concepts from cognitive linguistics that aim to show in various ways how human rational and creative behaviour is automatic and not planned.

History

Construction grammar was first developed in the 1980s by linguists such as Charles Fillmore, Paul Kay, and George Lakoff, in order to analyze idioms and fixed expressions. Lakoff's 1977 paper "Linguistic Gestalts" put forward an early version of CxG, arguing that the meaning of an expression was not simply a function of the meanings of its parts. Instead, he suggested, constructions themselves must have meanings. Another early study was "There-Constructions", which appeared as Case Study 3 in George Lakoff's Women, Fire, and Dangerous Things. It argued that the meaning of the whole was not a function of the meanings of the parts, that odd grammatical properties of Deictic There-constructions followed from the pragmatic meaning of the construction, and that variations on the central construction could be seen as simple extensions using form-meaning pairs of the central construction. Fillmore et al.'s (1988) paper on the English let alone construction was a second classic. These two papers propelled cognitive linguists into the study of CxG. Since the late 1990s there has been a shift towards a general preference for the usage-based model. The shift towards the usage-based approach in construction grammar has inspired the development of several corpus-based methodologies of constructional analysis (for example, collostructional analysis).

Concepts

One of the most distinctive features of CxG is its use of multi-word expressions and phrasal patterns as the building blocks of syntactic analysis. One example is the Correlative Conditional construction, found in the proverbial expression The bigger they come, the harder they fall. Construction grammarians point out that this is not merely a fixed phrase; the Correlative Conditional is a general pattern (The Xer, the Yer) with "slots" that can be filled by almost any comparative phrase (e.g. The more you think about it, the less you understand).
Advocates of CxG argue that these kinds of idiosyncratic patterns are more common than is often recognized, and that they are best understood as multi-word, partially filled constructions. Construction grammar rejects the idea that there is a sharp dichotomy between lexical items, which are arbitrary and specific, and grammatical rules, which are completely general. Instead, CxG posits that there are linguistic patterns at every level of generality and specificity: from individual words, to partially filled constructions (e.g. drive X crazy), to fully abstract rules (e.g. subject–auxiliary inversion). All of these patterns are recognized as constructions. In contrast to theories that posit an innate universal grammar for all languages, construction grammar holds that speakers learn constructions inductively as they are exposed to them, using general cognitive processes. It is argued that children pay close attention to each utterance they hear, and gradually make generalizations based on the utterances they have heard. Because constructions are learned, they are expected to vary considerably across different languages.

Grammatical construction

In construction grammar, as in general semiotics, the grammatical construction is a pairing of form and content. The formal aspect of a construction is typically described as a syntactic template, but the form covers more than just syntax, as it also involves phonological aspects, such as prosody and intonation. The content covers semantic as well as pragmatic meaning. The semantic meaning of a grammatical construction is made up of conceptual structures postulated in cognitive semantics: image-schemas, frames, conceptual metaphors, conceptual metonymies, prototypes of various kinds, mental spaces, and bindings across these (called "blends"). Pragmatics then becomes the cognitive semantics of communication—the modern version of the old Ross-Lakoff performative hypothesis from the 1960s. The form and content are symbolically linked in the sense advocated by Langacker. Thus a construction is treated like a sign in which all structural aspects are integrated parts and not distributed over different modules, as they are in the componential model. Consequently, not only constructions that are lexically fixed, like many idioms, but also more abstract ones like argument-structure schemata are pairings of form and conventionalized meaning. For instance, the ditransitive schema [S V IO DO] is said to express the semantic content X CAUSES Y TO RECEIVE Z, just like kill means X CAUSES Y TO DIE. In construction grammar, a grammatical construction, regardless of its formal or semantic complexity and make-up, is a pairing of form and meaning. Thus words and word classes may be regarded as instances of constructions. Indeed, construction grammarians argue that all pairings of form and meaning are constructions, including phrase structures, idioms, words and even morphemes.

Syntax–lexicon continuum

Unlike the componential model, construction grammar denies any strict distinction between the two and proposes a syntax–lexicon continuum. The argument goes that words and complex constructions are both pairs of form and meaning and differ only in internal symbolic complexity.
Instead of being discrete modules subject to very different processes, they form the extremes of a continuum (from regular to idiosyncratic): syntax > subcategorization frame > idiom > morphology > syntactic category > word/lexicon (these are the traditional terms; construction grammars use a different terminology).

Grammar as an inventory of constructions

In construction grammar, the grammar of a language is made up of taxonomic networks of families of constructions, which are based on the same principles as those of the conceptual categories known from cognitive linguistics, such as inheritance, prototypicality, extensions, and multiple parenting. Four different models are proposed in relation to how information is stored in the taxonomies:

Full-entry model

In the full-entry model, information is stored redundantly at all relevant levels in the taxonomy, which means that it operates, if at all, with minimal generalization.

Usage-based model

The usage-based model is based on inductive learning, meaning that linguistic knowledge is acquired in a bottom-up manner through use. It allows for redundancy and generalizations, because the language user generalizes over recurring experiences of use.

Default inheritance model

According to the default inheritance model, each network has a default central form-meaning pairing from which all instances inherit their features. It thus operates with a fairly high level of generalization, but also allows for some redundancy in that it recognizes extensions of different types.

Complete inheritance model

In the complete inheritance model, information is stored only once, at the most superordinate level of the network. Instances at all other levels inherit features from the superordinate item. Complete inheritance does not allow for redundancy in the networks.

Principle of no synonymy

Because construction grammar does not operate with surface derivations from underlying structures, it adheres to functionalist linguist Dwight Bolinger's principle of no synonymy, on which Adele Goldberg elaborates in her book. This means that construction grammarians argue, for instance, that active and passive versions of the same proposition are not derived from an underlying structure, but are instances of two different constructions. As constructions are pairings of form and meaning, active and passive versions of the same proposition are not synonymous, but display differences in content: in this case the pragmatic content.

Some construction grammars

As mentioned above, construction grammar is a "family" of theories rather than one unified theory. There are a number of formalized construction grammar frameworks. Some of these are:

Berkeley Construction Grammar

Berkeley Construction Grammar (BCG; formerly also called simply Construction Grammar in upper case) focuses on the formal aspects of constructions and makes use of a unification-based framework for the description of syntax, not unlike head-driven phrase structure grammar. Its proponents/developers include Charles Fillmore, Paul Kay, Laura Michaelis, and to a certain extent Ivan Sag. Immanent within BCG works like Fillmore and Kay 1995 and Michaelis and Ruppenhofer 2001 is the notion that phrasal representations—embedding relations—should not be used to represent the combinatoric properties of lexemes or lexeme classes.
As an example of the notion just mentioned, BCG abandons the traditional practice of using non-branching domination (NP over N' over N) to describe undetermined nominals that function as NPs, instead introducing a determination construction that requires ('asks for') a non-maximal nominal sister and a lexical 'maximality' feature for which plural and mass nouns are unmarked. BCG also offers a unification-based representation of 'argument structure' patterns as abstract verbal lexeme entries ('linking constructions'). These linking constructions include transitive, oblique goal and passive constructions. These constructions describe classes of verbs that combine with phrasal constructions like the VP construction but contain no phrasal information in themselves.

Sign Based Construction Grammar

In the mid-2000s, several of the developers of BCG, including Charles Fillmore, Paul Kay, Ivan Sag and Laura Michaelis, collaborated in an effort to improve the formal rigor of BCG and clarify its representational conventions. The result was Sign Based Construction Grammar (SBCG). SBCG is based on a multiple-inheritance hierarchy of typed feature structures. The most important type of feature structure in SBCG is the sign, with subtypes word, lexeme and phrase. The inclusion of phrase within the canon of signs marks a major departure from traditional syntactic thinking. In SBCG, phrasal signs are licensed by correspondence to the mother of some licit construct of the grammar. A construct is a local tree with signs at its nodes. Combinatorial constructions define classes of constructs. Lexical class constructions describe combinatoric and other properties common to a group of lexemes. Combinatorial constructions include both inflectional and derivational constructions. SBCG is both formal and generative; while cognitive-functional grammarians have often opposed their standards and practices to those of formal, generative grammarians, there is in fact no incompatibility between a formal, generative approach and a rich, broad-coverage, functionally based grammar. It simply happens that many formal, generative theories are descriptively inadequate grammars. SBCG is generative in a way that prevailing syntax-centered theories are not: its mechanisms are intended to represent all of the patterns of a given language, including idiomatic ones; there is no 'core' grammar in SBCG. SBCG is a licensing-based theory, as opposed to one that freely generates syntactic combinations and uses general principles to bar illicit ones: a word, lexeme or phrase is well formed if and only if it is described by a lexeme or construction. Recent SBCG works have expanded on the lexicalist model of idiomatically combining expressions sketched out in Sag 2012.

Goldbergian/Lakovian construction grammar

The type of construction grammar associated with linguists like Goldberg and Lakoff looks mainly at the external relations of constructions and the structure of constructional networks. In terms of form and function, this type of construction grammar puts psychological plausibility as its highest desideratum. It emphasizes experimental results and parallels with general cognitive psychology. It also draws on certain principles of cognitive linguistics. In the Goldbergian strand, constructions interact with each other in a network via four inheritance relations: polysemy link, subpart link, metaphorical extension, and instance link.

Cognitive grammar

Sometimes, Ronald Langacker's cognitive grammar framework is described as a type of construction grammar.
Cognitive grammar deals mainly with the semantic content of constructions, and its central argument is that conceptual semantics is primary to the degree that form mirrors, or is motivated by, content. Langacker argues that even abstract grammatical units like part-of-speech classes are semantically motivated and involve certain conceptualizations.

Radical construction grammar

William A. Croft's radical construction grammar is designed for typological purposes and takes into account cross-linguistic factors. It deals mainly with the internal structure of constructions. Radical construction grammar is totally non-reductionist, and Croft argues that constructions are not derived from their parts, but that the parts are derived from the constructions they appear in. Thus, in radical construction grammar, constructions are likened to Gestalts. Radical construction grammar rejects the idea that syntactic categories, roles, and relations are universal and argues that they are not only language-specific, but also construction-specific. Thus, there are no universals that make reference to formal categories, since formal categories are language- and construction-specific. The only universals are to be found in the patterns concerning the mapping of meaning onto form. Radical construction grammar rejects the notion of syntactic relations altogether and replaces them with semantic relations. Like Goldbergian/Lakovian construction grammar and cognitive grammar, radical construction grammar is closely related to cognitive linguistics, and like cognitive grammar, radical construction grammar appears to be based on the idea that form is semantically motivated.

Embodied construction grammar

Embodied construction grammar (ECG), which is being developed by the Neural Theory of Language (NTL) group at ICSI, UC Berkeley, and the University of Hawaii, particularly including Benjamin Bergen and Nancy Chang, adopts the basic constructionist definition of a grammatical construction, but emphasizes the relation of constructional semantic content to embodiment and sensorimotor experiences. A central claim is that the content of all linguistic signs involves mental simulations and is ultimately dependent on basic image schemas of the kind advocated by Mark Johnson and George Lakoff, and so ECG aligns itself with cognitive linguistics. Like Berkeley Construction Grammar, embodied construction grammar makes use of a unification-based model of representation. A non-technical introduction to the NTL theory behind embodied construction grammar, as well as the theory itself and a variety of applications, can be found in Jerome Feldman's From Molecule to Metaphor: A Neural Theory of Language (MIT Press, 2006).

Fluid construction grammar

Fluid construction grammar (FCG) was designed by Luc Steels and his collaborators for doing experiments on the origins and evolution of language. FCG is a fully operational and computationally implemented formalism for construction grammars and proposes a uniform mechanism for parsing and production. Moreover, it has been demonstrated through robotic experiments that FCG grammars can be grounded in embodiment and sensorimotor experiences. FCG integrates many notions from contemporary computational linguistics such as feature structures and unification-based language processing. Constructions are considered bidirectional and hence usable both for parsing and production. Processing is flexible in the sense that it can even cope with partially ungrammatical or incomplete sentences.
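The bidirectionality claim can be illustrated with a deliberately tiny sketch in which one inventory of form-meaning pairings serves both production (meaning to form) and parsing (form to meaning). Nothing below is actual FCG code; the templates and meaning strings are invented for the example:

```python
# Toy bidirectional constructions; "X" and "Y" are open slots (illustrative only).
CONSTRUCTIONS = [
    {"form": ["X", "kicked", "the", "bucket"], "meaning": "die(X)"},
    {"form": ["X", "drove", "Y", "crazy"],     "meaning": "cause(X, crazy(Y))"},
]

def produce(meaning_template):
    """Meaning -> form: pick the construction whose meaning matches."""
    for c in CONSTRUCTIONS:
        if c["meaning"] == meaning_template:
            return c["form"]

def parse(tokens):
    """Form -> meaning: match tokens against each template; slots bind any word."""
    for c in CONSTRUCTIONS:
        if len(tokens) == len(c["form"]) and all(
            slot in ("X", "Y") or slot == tok
            for slot, tok in zip(c["form"], tokens)
        ):
            return c["meaning"]

print(produce("die(X)"))                          # ['X', 'kicked', 'the', 'bucket']
print(parse(["Fred", "drove", "Mary", "crazy"]))  # cause(X, crazy(Y))
```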
FCG is called 'fluid' because it acknowledges the premise that language users constantly change and update their grammars. The research on FCG is conducted at Sony CSL Paris and the AI Lab at the Vrije Universiteit Brussel.

Implemented construction grammar

Most of the above approaches to construction grammar have not been implemented as computational models for large-scale practical use in Natural Language Processing frameworks, but interest in construction grammar has been shown by more traditional computational linguists as a contrast to the current boom in more opaque deep learning models. This is largely due to the representational convenience of CxG models and their potential to integrate with current tokenizers as a perceptual layer for further processing in neurally inspired models. Approaches to integrating construction grammar with existing Natural Language Processing frameworks have included hand-built feature sets and templates, with computational models used to identify their prevalence in text collections; some suggestions for more emergent models have also been proposed, e.g. at the 2023 Georgetown University Roundtable on Linguistics.

Criticism

Esa Itkonen, who defends humanistic linguistics and opposes Darwinian linguistics, questions the originality of the work of Adele Goldberg, Michael Tomasello, Gilles Fauconnier, William Croft and George Lakoff. According to Itkonen, construction grammarians have appropriated old ideas in linguistics, adding some false claims. For example, construction type and conceptual blending correspond to analogy and blend, respectively, in the works of William Dwight Whitney, Leonard Bloomfield, Charles Hockett, and others. At the same time, the claim made by construction grammarians that their research represents a continuation of Saussurean linguistics has been considered misleading. German philologist Elisabeth Leiss regards construction grammar as a regression, linking it with the 19th-century social Darwinism of August Schleicher.

There is a dispute between the advocates of construction grammar and memetics, an evolutionary approach which adheres to the Darwinian view of language and culture. Advocates of construction grammar argue that memetics takes the perspective of intelligent design to cultural evolution while construction grammar rejects human free will in language construction; but, according to memetician Susan Blackmore, this makes construction grammar the same as memetics.

Lastly, the most basic syntactic patterns of English, namely the core grammatical relations subject–verb, verb–object and verb–indirect object, are counter-evidence for the very concept of constructions as pairings of linguistic patterns with meanings. Instead of the postulated form-meaning pairing, core grammatical relations possess a wide variability of semantics, exhibiting a neutralization of semantic distinctions. For instance, in a detailed discussion of the dissociation of grammatical case-roles from semantics, Talmy Givón lists the multiple semantic roles of subjects and direct objects in English. As these phenomena are well-established, some linguists propose that core grammatical relations be excluded from CxG as they are not constructions, leaving the theory to be a model merely of idioms or infrequently used, minor patterns.
As the pairing of a syntactic construction and its prototypical meaning is said to be learned in early childhood, children should initially learn the basic constructions with their prototypical semantics, that is, 'agent of action' for the subject in the SV relation, 'affected object of agent's action' for the direct object term in VO, and 'recipient in transfer of possession of object' for the indirect object in VI. Anat Ninio examined the speech of a large sample of young English-speaking children and found that they do not in fact learn the syntactic patterns with the prototypical semantics claimed to be associated with them, or with any single semantics. The major reason is that such pairings are not consistently modelled for them in parental speech. Examining the maternal speech addressed to the children, Ninio also found that the pattern of subjects, direct objects and indirect objects in mothers’ speech does not provide the required prototypical semantics for the construction to be established. Adele Goldberg and her associates had previously reported similar negative results concerning the pattern of direct objects in parental speech. These findings are a blow to the CxG theory that relies on a learned association of form and prototypical meaning in order to set up the constructions said to form the basic units of syntax.

See also

Anankastic conditional
Construction Morphology
Snowclone

References

Further reading

Bergen, Benjamin and Nancy Chang. Embodied Construction Grammar in Simulation-Based Language Understanding. In press. J.-O. Östman and M. Fried (eds.). Construction Grammar(s): Cognitive and Cross-Language Dimensions. John Benjamins.
Croft, William A. (2001). Radical Construction Grammar: Syntactic Theory in Typological Perspective. Oxford: Oxford University Press.
Croft, William A. and D. Alan Cruse (2004). Cognitive Linguistics. Cambridge: Cambridge University Press.
Feldman, Jerome A. (2006). From Molecule to Metaphor: A Neural Theory of Language. Cambridge: MIT Press.
Fillmore, Charles, Paul Kay and Catherine O'Connor (1988). Regularity and Idiomaticity in Grammatical Constructions: The Case of let alone. Language 64: 501–38.
Goldberg, Adele (1995). Constructions: A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press.
Goldberg, Adele (2006). Constructions at Work: The Nature of Generalization in Language. Oxford: Oxford University Press.
Hilpert, Martin (2014). Construction Grammar and its Application to English. Edinburgh: Edinburgh University Press.
Lakoff, George (1987). Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Langacker, Ronald (1987, 1991). Foundations of Cognitive Grammar. 2 vols. Stanford: Stanford University Press.
Michaelis, Laura A. and Knud Lambrecht (1996). Toward a Construction-Based Model of Language Function: The Case of Nominal Extraposition. Language 72: 215–247.
Michaelis, Laura A. and Josef Ruppenhofer (2001). Beyond Alternations: A Construction-Based Account of the Applicative Construction in German. Stanford: CSLI Publications.
Michaelis, Laura A. (2004). Type Shifting in Construction Grammar: An Integrated Approach to Aspectual Coercion. Cognitive Linguistics 15: 1–67.
De Beule, Joachim and Luc Steels (2005). Hierarchy in Fluid Construction Grammar. Lecture Notes in Artificial Intelligence (LNCS/LNAI) 3698, pages 1–15. Berlin: Springer.
Steels, Luc and Joachim De Beule (2006). Unify and merge in fluid construction grammars. In: Vogt, P., Sugita, Y., Tuci, E. and Nehaniv, C. (eds.), Symbol Grounding and Beyond: Proceedings of the Third International Workshop on the Emergence and Evolution of Linguistic Communication, EELC 2006, Rome, Italy, September 30–October 1, 2006. Lecture Notes in Computer Science (LNCS/LNAI) Vol. 4211. Berlin: Springer-Verlag, pp. 197–223.

External links

Construction Grammar
Fluid Construction Grammar
VUB Artificial Intelligence Laboratory
Sony CSL Paris
NTL Project
https://en.wikipedia.org/wiki/Isoxathion
Isoxathion is a chemical compound with the molecular formula C13H16NO4PS. It is an insecticide, specifically an isoxazole organothiophosphate insecticide.

References

External links

Data sheet
https://en.wikipedia.org/wiki/Shoot%2C%20shovel%2C%20and%20shut%20up
Shoot, shovel, and shut up, also known as the 3-S treatment, refers to a method for dealing with unwanted or unwelcome animals primarily in rural areas. There have been reports of the frequently illegal triple-step procedure being used to dispatch mischievous pets, endangered species, and even sick livestock. Individuals often engage in this practice as a means to protect property or pets from predatory species that are protected by law, especially if other measures to protect their animals are unfeasible. For instance, eagles, a protected species, have been known to occasionally attack and kill young livestock on ranches. Similarly, there have been multiple incidents where hawks have attacked and killed small farm poultry and pets. Farmers and pet owners caught killing such animals have been prosecuted, regardless of their reasons for doing so. When applied to marauding dogs, the implication is that the offending canine will be killed by firearm and, as far as the owner is concerned, disappear with no apparent clues because of the reticence of the person employing the method. The phrase was used in this sense in Living Well on Practically Nothing by Edward H. Romney, who pointed out that while one might get away with using the 3-S treatment in rural areas, suburban neighborhoods have different norms. This practice is more common in rural areas, where there is less population to witness illegal activities. Pittsburgh Tribune-Review columnist Ralph R. Reiland wrote an essay called "Shoot, Shovel & Shut Up," describing landowners' reactions to finding red-cockaded woodpeckers on their property. Under the Endangered Species Act, landowners who have a population of such birds on their property may be subject to restrictions on building and other land uses that would interfere with the animals' habitat. Therefore, it was considered prudent to eliminate the birds before the government noticed their presence. The Fall 2001 issue of the Sierra Citizen notes, "'Shoot, shovel and shut up' is the mantra of many in the so-called 'property rights' movement ... It refers to the practice of killing and burying evidence of any plants or animals that might be threatened or endangered." The property rights movement argues that the Endangered Species Act should be amended to compensate property owners for protecting endangered species, rather than making an endangered species a financial drain on the owners, and that the current act actually hastens the decline of some endangered species when listed by causing property owners to "shoot, shovel, and shut up" to avoid expected losses. Jim Robbins writes in Wolves Across the Border that Mon Teigen, director of the Montana Stockgrowers Association, "believes labyrinthine federal endangered species regulations may lead a few ranchers to control wolves with the three-S method". In 2005, after a court ruled that ranchers could not shoot wolves caught attacking livestock, the Associated Press reported that "Sharon Beck, an Eastern Oregon rancher and former president of the Oregon Cattlemen's Association, said the ruling leaves ranchers little recourse but to break the law -- known around the West as 'shoot, shovel and shut up' -- when wolves move into their areas". The phrase has also been used in reference to mad cow disease. More than 30 countries banned beef imports from Canada after one of Albertan farmer Marwyn Peaster's cattle tested positive for the illness. 
Alberta Premier Ralph Klein, in frustration over the situation, said that any "self-respecting rancher would have shot, shovelled and shut up".

References
https://en.wikipedia.org/wiki/Guo%20Shoujing
Guo Shoujing (1231–1316), courtesy name Ruosi, was a Chinese astronomer, hydraulic engineer, mathematician, and politician of the Yuan dynasty. The later Johann Adam Schall von Bell (1591–1666) was so impressed with the preserved astronomical instruments of Guo that he called him "the Tycho Brahe of China." Jamal ad-Din cooperated with him.

Early life

In 1231, in Xingtai, Hebei province, China, Guo Shoujing was born into a poor family. He was raised primarily by his paternal grandfather, Guo Yong, who was famous throughout China for his expertise in a wide variety of topics, ranging from the study of the Five Classics to astronomy, mathematics, and hydraulics. Guo Shoujing was a child prodigy, showing exceptional intellectual promise. By his teens, he obtained a blueprint for a water clock which his grandfather was working on, and realized its principles of operation. He improved the design of a type of water clock called a lotus clepsydra, a water clock with a bowl shaped like a lotus flower on the top into which the water dripped. After he had mastered the construction of such water clocks, he began to study mathematics at the age of 16. From mathematics, he began to understand hydraulics, as well as astronomy.

Career

At 20, Guo became a hydraulic engineer. In 1251, as a government official, he helped repair a bridge over the Dahuoquan River. Kublai Khan realized the importance of hydraulic engineering, irrigation, and water transport, which he believed could help alleviate uprisings within the empire, and sent Liu Bingzhong and his student Guo to look at these aspects in the area between Dadu (now Beijing or Peking) and the Yellow River. To provide Dadu with a new supply of water, Guo had a 30 km channel built to bring water from the Baifu spring in the Shenshan Mountain to Dadu, which required connecting the water supply across different river basins and building canals with sluices to control the water level. The Grand Canal, which had linked the river systems of the Yangtze, the Huai, and the Huang since the early 7th century, was repaired and extended to Dadu in 1292–93 with the use of corvée (unpaid labor). After the success of this project, Kublai Khan sent Guo off to manage similar projects in other parts of the empire. He became the chief advisor of hydraulics, mathematics, and astronomy for Kublai Khan.

Guo began to construct astronomical observation devices. He has been credited with inventing the gnomon, the square table, the abridged or simplified armilla, and a water-powered armillary sphere called the Ling Long Yi. The gnomon is used to measure the angle of the sun and determine the seasons, and is the basis of the sundial, but Guo Shoujing revised this device to become much more accurate and improved the ability to tell time more precisely. The square table was used to measure the azimuth of celestial bodies by the equal altitude method and could also be used as a protractor. The abridged or simplified armilla was used to measure the angle of the sun, as well as the position of any celestial body. The Ling Long Yi is similar to an abridged armilla except larger, more complex, and more accurate. Kublai Khan, after observing Guo's mastery of astronomy, ordered that he, Zhang, and Wang Xun make a more accurate calendar. They built 27 observatories throughout China in order to gain thorough observations for their calculations. In 1280, Guo completed the calendar, calculating a year to be 365.2425 days, just 26 seconds off the year's current measurement.
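The quoted accuracy is easy to check. Taking the modern mean tropical year as 365.24219 days (an assumed reference value for this illustration), the discrepancy of Guo's 365.2425-day year is:

```python
guo_year = 365.2425      # days, Shoushi calendar value (1280)
modern_year = 365.24219  # days, modern mean tropical year (assumption)

diff_seconds = (guo_year - modern_year) * 86_400  # 86,400 seconds per day
print(f"{diff_seconds:.0f} seconds")  # ~27 seconds, in line with the quoted figure
```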
In 1283, Guo was promoted to director of the Observatory in Beijing and, in 1292, he became the head of the Water Works Bureau. Throughout his life he also did extensive work with spherical trigonometry. After Kublai Khan's death, Guo continued to be an advisor to Kublai's successors, working on hydraulics and astronomy.

Personal life

Death

His year of death is variously reported as 1314 or 1316.

Analysis of his contributions

Guo Shoujing was a major influence in the development of science in China. The tools he invented for astronomy allowed him to calculate an accurate length for the year, which allowed Chinese culture to set up a whole new system of exact dates and times, allowing for increasingly accurate recording of history and a sense of continuity throughout the country. The calendar stabilized Chinese culture, allowing subsequent dynasties to rule more effectively. Through his work in astronomy, he was also able to more accurately establish the location of celestial bodies and the angles of the Sun relative to Earth. He invented a tool which could be used as an astrological compass, helping people find north using the stars instead of magnets.

Within the field of hydraulics, even at a young age, Guo was revolutionizing old inventions. His work on clocks, irrigation, reservoirs, and equilibrium stations within other machines allowed for more effective or accurate results. The water clocks he perfected through his work in hydraulics allowed for an extremely accurate reading of the time. For irrigation, he provided hydraulic systems which distributed water equally and swiftly, which allowed communities to trade more effectively, and therefore prosper. His most memorable engineering feat is the man-made Kunming Lake in Beijing, which provided water for all of the surrounding area of Beijing and allowed for the best grain transport system in the country. His work with other such reservoirs allowed people in inner China access to water for planting, drinking, and trading.

Guo's work in mathematics was regarded as the most advanced in China for 400 years. Guo worked on spherical trigonometry, using a system of approximation to find arc lengths and angles. He took pi to be equal to 3, and the resulting sequence of equations nevertheless yielded an answer more accurate than the same calculation would have produced with pi taken as 3.1415. As people began to build on his work, the authenticity of his work was questioned. Some believe that he took Middle Eastern mathematical and theoretical ideas and used them as his own, taking all the credit. However, he never left China, which would have made it more difficult for him to access others' ideas. Otherwise, Guo was highly regarded throughout history, by many cultures, as a precursor of the Gregorian calendar as well as the man who perfected irrigation techniques in the new millennium. Many historians regard him as the most prominent Chinese astronomer, engineer, and mathematician of all time. His calendar would be used for the next 363 years, the longest period during which a calendar would be used in Chinese history. He also used mathematical functions in his work relating to spherical trigonometry, building upon the knowledge of Shen Kuo's (1031–1095) earlier work in trigonometry.
It is debated amongst scholars whether his work in trigonometry was based entirely on the work of Shen, or whether it was partially influenced by Islamic mathematics, which was largely accepted at Kublai's court. Sal Restivo asserts that Guo Shoujing's work in trigonometry was directly influenced by Shen's work. An important work in trigonometry in China would not be printed again until the collaborative efforts of Xu Guangqi and his Italian Jesuit associate Matteo Ricci in 1607, during the late Ming Dynasty.

Influence

Guo Shoujing was cited by Tang Shunzhi 唐順之 (1507–1560) as an example of solid practical scholarship, anticipating the rise of the Changzhou School of Thought and the spread of "evidential learning". The asteroid 2012 Guo Shou-Jing is named after him, as is the Large Sky Area Multi-Object Fibre Spectroscopic Telescope near Beijing.

See also

History of Beijing

References

Citations

Sources

Asiapac Editorial (2004). Origins of Chinese Science and Technology. Translated by Yang Liping and Y.N. Han. Singapore: Asiapac Books Pte. Ltd.
Engelfriet, Peter M. (1998). Euclid in China: The Genesis of the First Translation of Euclid's Elements in 1607 & Its Reception Up to 1723. Leiden: Koninklijke Brill.
Ho, Peng Yoke (2000). Li, Qi, and Shu: An Introduction to Science and Civilization in China. Mineola: Dover Publications.
Needham, Joseph (1986). Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth. Taipei: Caves Books, Ltd.
Restivo, Sal (1992). Mathematics in Society and History: Sociological Inquiries. Dordrecht: Kluwer Academic Publishers.
O'Connor, J. J., and E. F. Robertson. "Guo Shoujing." School of Mathematics and Statistics. Dec. 2003. University of St. Andrews, Scotland. 7 Dec. 2008 <http://www-history.mcs.st-andrews.ac.uk/Biographies/Guo_Shoujing.html>.
"China." Encyclopædia Britannica. 2008. Encyclopædia Britannica Online School Edition. 24 Nov. 2008 <http://school.eb.com/eb/article-71727>.
Kleeman, Terry, and Tracy Barrett, eds. (2005). The Ancient Chinese World. New York, NY: Oxford University Press.
Shea, Marilyn. "Guo Shoujing - 郭守敬." China Experience. May 2007. University of Maine at Farmington. 15 Nov. 2008 <http://hua.umf.maine.edu/China/astronomy/tianpage/0018Guo_Shoujing6603w.html>.
"China." Encyclopædia Britannica. 2008. Encyclopædia Britannica Online School Edition. 24 Nov. 2008 <http://school.eb.com/eb/article-71735>.

External links

Article on the Shoushi calendar from the National University of Singapore Culture story site
Guo Shoujing at the University of Maine
Article about Guo Shoujing by J J O'Connor and E F Robertson at St Andrews University
Biography of Guo Shoujing
https://en.wikipedia.org/wiki/Mean%20motion
In orbital mechanics, mean motion (represented by n) is the angular speed required for a body to complete one orbit, assuming constant speed in a circular orbit which completes in the same time as the variable speed, elliptical orbit of the actual body. The concept applies equally well to a small body revolving about a large, massive primary body or to two relatively same-sized bodies revolving about a common center of mass. While nominally a mean, and theoretically so in the case of two-body motion, in practice the mean motion is not typically an average over time for the orbits of real bodies, which only approximate the two-body assumption. It is rather the instantaneous value which satisfies the above conditions as calculated from the current gravitational and geometric circumstances of the body's constantly-changing, perturbed orbit.

Mean motion is used as an approximation of the actual orbital speed in making an initial calculation of the body's position in its orbit, for instance, from a set of orbital elements. This mean position is refined by Kepler's equation to produce the true position.

Definition

Define the orbital period (the time period for the body to complete one orbit) as P, with dimension of time. The mean motion is simply one revolution divided by this time, or

n = 2π/P,

with dimensions of radians per unit time, degrees per unit time or revolutions per unit time.

The value of mean motion depends on the circumstances of the particular gravitating system. In systems with more mass, bodies will orbit faster, in accordance with Newton's law of universal gravitation. Likewise, bodies closer together will also orbit faster.

Mean motion and Kepler's laws

Kepler's 3rd law of planetary motion states, the square of the periodic time is proportional to the cube of the mean distance, or

P² ∝ a³,

where a is the semi-major axis or mean distance, and P is the orbital period as above. The constant of proportionality is given by

P²/a³ = 4π²/μ,

where μ is the standard gravitational parameter, a constant for any particular gravitational system.

If the mean motion is given in units of radians per unit of time, we can combine it into the above definition of Kepler's 3rd law,

(2π/n)² = (4π²/μ) a³,

and reducing,

n² a³ = μ,

which is another definition of Kepler's 3rd law. μ, the constant of proportionality, is a gravitational parameter defined by the masses of the bodies in question and by the Newtonian constant of gravitation, G (see below). Therefore, n is also defined

n = √(μ/a³).

Expanding mean motion by expanding μ,

n = √( G(M + m)/a³ ),

where M is typically the mass of the primary body of the system and m is the mass of a smaller body. This is the complete gravitational definition of mean motion in a two-body system. Often in celestial mechanics, the primary body is much larger than any of the secondary bodies of the system, that is, M ≫ m. It is under these circumstances that m becomes unimportant and Kepler's 3rd law is approximately constant for all of the smaller bodies.

Kepler's 2nd law of planetary motion states, a line joining a planet and the Sun sweeps out equal areas in equal times, or

dA/dt = constant,

for a two-body orbit, where dA/dt is the time rate of change of the area swept. Letting t = P, the orbital period, the area swept is the entire area of the ellipse, A = πab, where a is the semi-major axis and b is the semi-minor axis of the ellipse. Hence,

dA/dt = πab/P.

Multiplying this equation by 2,

2(dA/dt) = 2πab/P.

From the above definition, mean motion n = 2π/P. Substituting,

2(dA/dt) = n ab,

and mean motion is also

n = (2/ab)(dA/dt),

which is itself constant as a, b, and dA/dt are all constant in two-body motion.
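The relation n = √(μ/a³) can be exercised numerically. This short sketch uses an assumed value for Earth's gravitational parameter and an illustrative low-Earth-orbit semi-major axis:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, GM of Earth (assumed standard value)

def mean_motion(a, mu=MU_EARTH):
    """Mean motion n = sqrt(mu / a^3) in rad/s, for semi-major axis a in meters."""
    return math.sqrt(mu / a**3)

a = 6_778_000.0                    # ~400 km altitude circular orbit (illustrative)
n = mean_motion(a)
period_min = 2 * math.pi / n / 60  # from n = 2*pi/P
rev_per_day = 86_400 * n / (2 * math.pi)
print(f"n = {n:.6f} rad/s, P = {period_min:.1f} min, {rev_per_day:.2f} rev/day")
# ~92.6 minutes and ~15.6 revolutions per day, typical of low Earth orbit
```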
Mean motion and the constants of the motion

Because of the nature of two-body motion in a conservative gravitational field, two aspects of the motion do not change: the angular momentum and the mechanical energy.

The first constant, called specific angular momentum, can be defined as

h = 2(dA/dt),

and substituting in the above equation, mean motion is also

n = h/(ab).

The second constant, called specific mechanical energy, can be defined as

ξ = −μ/(2a).

Rearranging and multiplying by 1/a²,

−2ξ/a² = μ/a³.

From above, the square of mean motion n² = μ/a³. Substituting and rearranging, mean motion can also be expressed,

n = (1/a)√(−2ξ),

where the −2 shows that ξ must be defined as a negative number, as is customary in celestial mechanics and astrodynamics.

Mean motion and the gravitational constants

Two gravitational constants are commonly used in Solar System celestial mechanics: G, the Newtonian constant of gravitation and k, the Gaussian gravitational constant. From the above definitions, mean motion is

n = √( G(M + m)/a³ ).

By normalizing parts of this equation and making some assumptions, it can be simplified, revealing the relation between the mean motion and the constants.

Setting the mass of the Sun to unity, M = 1. The masses of the planets are all much smaller, m ≪ M. Therefore, for any particular planet,

n ≈ √(G/a³),

and also taking the semi-major axis as one astronomical unit,

n ≈ √G.

The Gaussian gravitational constant k = √G, therefore, under the same conditions as above, for any particular planet

n ≈ k/√(a³),

and again taking the semi-major axis as one astronomical unit,

n ≈ k.

Mean motion and mean anomaly

Mean motion also represents the rate of change of mean anomaly, and hence can also be calculated,

n = (M₁ − M₀)/Δt,

where M₁ and M₀ are the mean anomalies at particular points in time, and Δt (≡ t₁ − t₀) is the time elapsed between the two. M₀ is referred to as the mean anomaly at epoch t₀, and Δt is the time since epoch.

Formulae

For Earth satellite orbital parameters, the mean motion is typically measured in revolutions per day. In that case,

n = (d/2π) √( G(M + m)/a³ ),

where d is the quantity of time in a day, G is the gravitational constant, M and m are the masses of the orbiting bodies, and a is the length of the semi-major axis. To convert from radians per unit time to revolutions per day, consider the following: from above, mean motion in radians per unit time is

n = √( G(M + m)/a³ ),

therefore the mean motion in revolutions per day is

n = d/P,

where P is the orbital period, as above.

See also

Gaussian gravitational constant
Kepler orbit
Mean anomaly
Mean longitude
Mean motion resonance
Orbital elements

Notes

References

External links

Glossary entry mean motion at the US Naval Observatory's Astronomical Almanac Online
https://en.wikipedia.org/wiki/The%20Hype%20About%20Hydrogen
The Hype About Hydrogen: Fact and Fiction in the Race to Save the Climate is a book by Joseph J. Romm, published in 2004 by Island Press and updated in 2005. The book has been translated into German as Der Wasserstoff-Boom. Romm is an expert on clean energy, advanced vehicles, energy security, and greenhouse gas mitigation. Over 200 publications, including Scientific American, Forbes magazine and The New York Times, have cited this book. The book was named one of the best science and technology books of 2004 by Library Journal.

The thrust of the book is that hydrogen is not economically feasible to use for transportation, nor will its use reduce global warming, because of the greenhouse gases generated during production and transportation of hydrogen, the low energy content per volume and weight of the container, the cost of the fuel cells, and the cost of the infrastructure for refueling. The author argues that a major effort to introduce hydrogen cars before 2030 would actually undermine efforts to reduce emissions of heat-trapping greenhouse gases such as carbon dioxide.

Description of the book

The Hype about Hydrogen contends that global warming and U.S. reliance on foreign fuel imports cannot be solved by the hypothetical hydrogen economy that has been advanced as a possible solution to these problems, and that "neither government policy nor business investment should be based on the belief that hydrogen cars will have meaningful commercial success in the near or medium term."

The book explains how fuel cells work and compares different types. It then reviews the difficulties in marketing fuel cells for applications other than transportation and argues that these are in fact easier and more likely to happen sooner than transportation applications. The history of hydrogen and its methods of production are then described. The book discusses steam methane reforming, the most common and cost-effective method of hydrogen production, which involves reacting natural gas with water and emits large amounts of CO2 (a greenhouse gas). As of 2019, 98% of hydrogen was produced either by this method or by methods with even greater greenhouse emissions (like coal gasification), which Romm attributes to the inefficiency of alternative methods such as electrolysis. The monetary costs of hydrogen fueling infrastructure for the U.S. are then estimated at half a trillion U.S. dollars, and the book describes additional energy and environment costs to liquefy and compress hydrogen for use in fueling stations.

The book goes on to discuss the hypothetical evolution of the cost of vehicles with fuel cells and with hydrogen-powered internal combustion engines, as well as possible adoption strategies. It then reviews the issue of the greenhouse effect and offers four reasons why hydrogen would not be useful in reducing greenhouse gas emissions:

Internal combustion engines continue to improve in efficiency.
Since hydrogen is likely to be made from combustion of fossil fuels, it produces CO2 and other greenhouse gases as part of the fuel cycle.
Fuel cells are likely to be much more expensive than competing technologies.
Fuels used to make hydrogen could achieve larger reductions in greenhouse gas emissions if used to replace the least efficient of the electric power plants.

The book then describes pilot projects in Iceland and California.
In its conclusion, the book states that hydrogen will not be widely available as a transportation fuel for a long time, and describes other strategies, including energy conservation techniques, to combat global warming.

Critical reception

The Hype about Hydrogen was named one of the best science and technology books of 2004 by Library Journal. The New York Review of Books stated that the book gives "the most direct answers" to the question on the promise of a near-term hydrogen economy, calling Romm "a hydrogen realist". The environmental community newsletter TerraGreen agrees with Romm in the claim that "the car of the near future is the hybrid vehicle", and cites the book's good reception by Toyota's advanced technologies group. The San Diego Union-Tribune's 2004 review noted that Romm's "clear logic" reaches conclusions similar to an authoritative study issued by the National Academy of Sciences. Three UC Davis scientists who also reviewed the book agreed on its basic premises, but claimed that Romm had made selective use of sources, for example, citing the highest cost estimates, adopting extremely high estimates of efficiency for advanced gasoline vehicles, and giving weight to controversial non-peer-reviewed studies.

Romm and Prof. Andrew A. Frank co-authored an article, "Hybrid Vehicles Gain Traction", published in the April 2006 issue of Scientific American, in which they argue that hybrid cars that can be plugged into the electric grid (plug-in hybrid electric vehicles), rather than hydrogen fuel cell vehicles, will soon become standard in the automobile industry.

See also

Hydrogen vehicle
List of books about energy issues
Plug-in hybrid electric vehicle
Who Killed the Electric Car?
Hell and High Water

References

External links

Online excerpts from the book
Does a Hydrogen Economy Make Sense?
2007 Toronto Star article on hydrogen vehicles discussing Romm's views
https://en.wikipedia.org/wiki/Spacecraft%20flight%20dynamics
Spacecraft flight dynamics is the application of mechanical dynamics to model how the external forces acting on a space vehicle or spacecraft determine its flight path. These forces are primarily of three types: propulsive force provided by the vehicle's engines; gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or other body, such as Mars or Venus).

The principles of flight dynamics are used to model a vehicle's powered flight during launch from the Earth; a spacecraft's orbital flight; maneuvers to change orbit; translunar and interplanetary flight; launch from and landing on a celestial body, with or without an atmosphere; entry through the atmosphere of the Earth or other celestial body; and attitude control. They are generally programmed into a vehicle's inertial navigation systems, and monitored on the ground by a member of the flight controller team known in NASA as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.

Flight dynamics depends on the disciplines of propulsion, aerodynamics, and astrodynamics (orbital mechanics and celestial mechanics). It cannot be reduced to simply attitude control; real spacecraft do not have steering wheels or tillers like airplanes or ships. Unlike the way fictional spaceships are portrayed, a spacecraft actually does not bank to turn in outer space, where its flight path depends strictly on the gravitational forces acting on it and the propulsive maneuvers applied.

Basic principles

A space vehicle's flight is determined by application of Newton's second law of motion:

F = ma,

where F is the vector sum of all forces exerted on the vehicle, m is its current mass, and a is the acceleration vector, the instantaneous rate of change of velocity (v), which in turn is the instantaneous rate of change of displacement. Solving for a, acceleration equals the force sum divided by mass. Acceleration is integrated over time to get velocity, and velocity is in turn integrated to get position.

Flight dynamics calculations are handled by computerized guidance systems aboard the vehicle; the status of the flight dynamics is monitored on the ground during powered maneuvers by a member of the flight controller team known in NASA's Human Spaceflight Center as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.

For powered atmospheric flight, the three main forces which act on a vehicle are propulsive force, aerodynamic force, and gravitation. Other external forces such as centrifugal force, Coriolis force, and solar radiation pressure are generally insignificant due to the relatively short time of powered flight and small size of spacecraft, and may generally be neglected in simplified performance calculations.
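The integration chain just described (force to acceleration to velocity to position) can be sketched with a simple fixed-step Euler loop. This is an illustration, not flight software; the thrust and mass values are made up:

```python
# Minimal one-dimensional sketch of integrating a = F/m to velocity and position.
def integrate_flight(force, mass, v0=0.0, x0=0.0, dt=0.1, steps=100):
    """force(t) in N and mass(t) in kg; returns final velocity (m/s) and position (m)."""
    v, x = v0, x0
    for i in range(steps):
        t = i * dt
        a = force(t) / mass(t)   # Newton's second law, solved for acceleration
        v += a * dt              # integrate acceleration to velocity
        x += v * dt              # integrate velocity to position
    return v, x

# Constant 10 kN thrust on a 1,000 kg vehicle for 10 s (illustrative values only)
print(integrate_flight(force=lambda t: 10_000.0, mass=lambda t: 1_000.0))
```

Real guidance systems use higher-order integrators and three-dimensional state vectors, but the structure is the same.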
Propulsion

The thrust of a rocket engine, in the general case of operation in an atmosphere, is approximated by:

F = ṁ v_e = ṁ v_e,opt + A_e (p_e − p_amb),

where:

ṁ is the exhaust gas mass flow;
v_e is the effective exhaust velocity (sometimes otherwise denoted as c in publications);
v_e,opt is the effective jet velocity when p_amb = p_e;
A_e is the flow area at nozzle exit plane (or the plane where the jet leaves the nozzle if separated flow);
p_e is the static pressure at nozzle exit plane; and
p_amb is the ambient (or atmospheric) pressure.

The effective exhaust velocity of the rocket propellant is proportional to the vacuum specific impulse and affected by the atmospheric pressure:

v_e = g₀ I_sp,vac − (A_e p_amb)/ṁ,

where:

I_sp has units of seconds; and
g₀ is the gravitational acceleration at the surface of the Earth.

The specific impulse relates the delta-v capacity to the quantity of propellant consumed according to the Tsiolkovsky rocket equation:

Δv = v_e ln(m₀/m₁),

where:

m₀ is the initial total mass, including propellant, in kg (or lb);
m₁ is the final total mass in kg (or lb);
v_e is the effective exhaust velocity in m/s (or ft/s); and
Δv is the delta-v in m/s (or ft/s).

Aerodynamic force

Aerodynamic forces, present near a body with a significant atmosphere such as Earth, Mars or Venus, are analyzed as: lift, defined as the force component perpendicular to the direction of flight (not necessarily upward to balance gravity, as for an airplane); and drag, the component parallel to, and in the opposite direction of flight. Lift and drag are modeled as the products of a coefficient times dynamic pressure acting on a reference area:

L = C_L q A_ref
D = C_D q A_ref

where:

C_L is roughly linear with α, the angle of attack between the vehicle axis and the direction of flight (up to a limiting value), and is 0 at α = 0 for an axisymmetric body;
C_D varies with α²;
C_L and C_D vary with Reynolds number and Mach number;
q, the dynamic pressure, is equal to ½ρv², where ρ is atmospheric density, modeled for Earth as a function of altitude in the International Standard Atmosphere (using an assumed temperature distribution, hydrostatic pressure variation, and the ideal gas law); and
A_ref is a characteristic area of the vehicle, such as cross-sectional area at the maximum diameter.

Gravitation

The gravitational force that a celestial body exerts on a space vehicle is modeled with the body and vehicle taken as point masses; the bodies (Earth, Moon, etc.) are simplified as spheres; and the mass of the vehicle is much smaller than the mass of the body so that its effect on the gravitational acceleration can be neglected. Therefore the gravitational force is calculated by:

W = mg,

where:

W is the gravitational force (weight);
m is the space vehicle's mass;
r is the radial distance of the vehicle to the planet's center;
r₀ is the radial distance from the planet's surface to its center; and
g₀ is the gravitational acceleration at the surface of the planet.

g is the gravitational acceleration at altitude, which varies with the inverse square of the radial distance to the planet's center:

g = g₀ (r₀/r)².
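The rocket equation above lends itself to a quick numerical example; the specific impulse and masses below are assumptions for illustration, not taken from any particular vehicle:

```python
import math

def delta_v(isp_s, m0, m1, g0=9.80665):
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0/m1), with v_e = g0 * Isp."""
    return g0 * isp_s * math.log(m0 / m1)

# A 300 s specific-impulse stage burning from 50 t down to 20 t:
print(f"{delta_v(isp_s=300.0, m0=50_000.0, m1=20_000.0):.0f} m/s")  # ~2,696 m/s
```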
Powered flight

The equations of motion used to describe powered flight of a vehicle during launch can be as complex as six degrees of freedom for in-flight calculations, or as simple as two degrees of freedom for preliminary performance estimates. In-flight calculations will take perturbation factors into account such as the Earth's oblateness and non-uniform mass distribution, and gravitational forces of all nearby bodies, including the Moon, Sun, and other planets. Preliminary estimates can make some simplifying assumptions: a spherical, uniform planet; the vehicle can be represented as a point mass; solution of the flight path presents a two-body problem; and the local flight path lies in a single plane, with reasonably small loss of accuracy.

The general case of a launch from Earth must take engine thrust, aerodynamic forces, and gravity into account. The acceleration equation can be reduced from vector to scalar form by resolving it into its tangential (speed v) and angular (flight path angle θ relative to local vertical) time rate-of-change components relative to the launch pad. The two equations thus become:

dv/dt = (F cos α)/m − D/m − g cos θ
dθ/dt = (F sin α)/(mv) + L/(mv) + (g/v − v/r) sin θ

where:

F is the engine thrust;
α is the angle of attack;
m is the vehicle's mass;
D is the vehicle's aerodynamic drag;
L is its aerodynamic lift;
r is the radial distance to the planet's center; and
g is the gravitational acceleration at altitude.

Mass decreases as propellant is consumed and rocket stages, engines or tanks are shed (if applicable).

The planet-fixed values of v and θ at any time in the flight are then determined by numerical integration of the two rate equations from time zero (when both v and θ are 0):

v = ∫₀ᵗ (dv/dt) dt
θ = ∫₀ᵗ (dθ/dt) dt

Finite element analysis can be used to integrate the equations, by breaking the flight into small time increments.

For most launch vehicles, relatively small levels of lift are generated, and a gravity turn is employed, depending mostly on the third term of the angle rate equation. At the moment of liftoff, when angle and velocity are both zero, the theta-dot equation is mathematically indeterminate and cannot be evaluated until velocity becomes non-zero shortly after liftoff. But notice at this condition, the only force which can cause the vehicle to pitch over is the engine thrust acting at a non-zero angle of attack (first term) and perhaps a slight amount of lift (second term), until a non-zero pitch angle is attained. In the gravity turn, pitch-over is initiated by applying an increasing angle of attack (by means of gimbaled engine thrust), followed by a gradual decrease in angle of attack through the remainder of the flight.

Once velocity and flight path angle are known, altitude and downrange distance are computed as:

h = ∫₀ᵗ v cos θ dt
s = ∫₀ᵗ (r₀/r) v sin θ dt

The planet-fixed values of v and θ are converted to space-fixed (inertial) values with the following conversions:

v_s = √( v² + 2 v ω r cos φ sin θ sin A_z + (ω r cos φ)² )
θ_s = arccos( (v cos θ)/v_s )

where ω is the planet's rotational rate in radians per second, φ is the launch site latitude, and A_z is the launch azimuth angle.

Final v_s, θ_s and r must match the requirements of the target orbit as determined by orbital mechanics (see Orbital flight, below), where final v_s is usually the required periapsis (or circular) velocity, and final θ_s is 90 degrees. A powered descent analysis would use the same procedure, with reverse boundary conditions.

Orbital flight

Orbital mechanics are used to calculate flight in orbit about a central body. For sufficiently high orbits (generally at least in the case of Earth), aerodynamic force may be assumed to be negligible for relatively short term missions (though a small amount of drag may be present, which results in decay of orbital energy over longer periods of time). When the central body's mass is much larger than the spacecraft, and other bodies are sufficiently far away, the solution of orbital trajectories can be treated as a two-body problem. This can be shown to result in the trajectory being ideally a conic section (circle, ellipse, parabola or hyperbola) with the central body located at one focus.
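The conic-section result can be turned into a quick classification test using the specific orbital energy ε = v²/2 − μ/r, whose sign distinguishes ellipse, parabola, and hyperbola. The following sketch is illustrative, with an assumed value for Earth's gravitational parameter:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, GM of Earth (assumed value)

def classify_trajectory(r, v, mu=MU_EARTH):
    """Classify a two-body trajectory from radius r (m) and speed v (m/s)."""
    eps = v * v / 2 - mu / r  # specific orbital energy
    if eps < 0:
        kind = "ellipse"
    elif eps > 0:
        kind = "hyperbola"
    else:
        kind = "parabola"     # exact equality is a measure-zero case
    a = math.inf if eps == 0 else -mu / (2 * eps)  # semi-major axis (negative for hyperbola)
    return kind, a

# ~400 km circular LEO: v = sqrt(mu/r) gives a bound (circular/elliptical) orbit
r = 6_778_000.0
print(classify_trajectory(r, math.sqrt(MU_EARTH / r)))  # ('ellipse', ~6.778e6 m)
print(classify_trajectory(r, 12_000.0))                 # hyperbolic escape trajectory
```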
Orbital trajectories are either circles or ellipses; the parabolic trajectory represents first escape of the vehicle from the central body's gravitational field. Hyperbolic trajectories are escape trajectories with excess velocity, and will be covered under Interplanetary flight below.

Elliptical orbits are characterized by three elements. The semi-major axis a is the average of the radius at apoapsis and periapsis:

a = (r_a + r_p)/2

The eccentricity e can then be calculated for an ellipse, knowing the apses:

e = (r_a − r_p)/(r_a + r_p)

The time period for a complete orbit is dependent only on the semi-major axis, and is independent of eccentricity:

P = 2π √(a³/μ),

where μ is the standard gravitational parameter of the central body.

The orientation of the orbit in space is specified by three angles:

The inclination i of the orbital plane with respect to the fundamental plane (this is usually a planet or moon's equatorial plane, or in the case of a solar orbit, the Earth's orbital plane around the Sun, known as the ecliptic). Positive inclination is northward, while negative inclination is southward.
The longitude of the ascending node Ω, measured in the fundamental plane counter-clockwise looking southward, from a reference direction (usually the vernal equinox) to the line where the spacecraft crosses this plane from south to north. (If inclination is zero, this angle is undefined and taken as 0.)
The argument of periapsis ω, measured in the orbital plane counter-clockwise looking southward, from the ascending node to the periapsis. If the inclination is 0, there is no ascending node, so ω is measured from the reference direction. For a circular orbit, there is no periapsis, so ω is taken as 0.

The orbital plane is ideally constant, but is usually subject to small perturbations caused by planetary oblateness and the presence of other bodies.

The spacecraft's position in orbit is specified by the true anomaly, ν, an angle measured from the periapsis, or for a circular orbit, from the ascending node or reference direction. The semi-latus rectum, or radius at 90 degrees from periapsis, is:

p = a(1 − e²)

The radius at any position in flight is:

r = p/(1 + e cos ν)

and the velocity at that position is:

v = √( μ(2/r − 1/a) )

Types of orbit

Circular

For a circular orbit, r_a = r_p = a, and eccentricity is 0. Circular velocity at a given radius is:

v_c = √(μ/r)

Elliptical

For an elliptical orbit, e is greater than 0 but less than 1. The periapsis velocity is:

v_p = √( (μ/a)(r_a/r_p) )

and the apoapsis velocity is:

v_a = √( (μ/a)(r_p/r_a) )

The limiting condition is a parabolic escape orbit, when e = 1 and r_a becomes infinite. Escape velocity at periapsis is then

v_esc = √(2μ/r_p)

Flight path angle

The specific angular momentum of any conic orbit, h, is constant, and is equal to the product of radius and velocity at periapsis. At any other point in the orbit, it is equal to:

h = r v cos φ,

where φ is the flight path angle measured from the local horizontal (perpendicular to r). This allows the calculation of φ at any point in the orbit, knowing radius and velocity:

cos φ = h/(r v)

Note that flight path angle is a constant 0 degrees (90 degrees from local vertical) for a circular orbit.

True anomaly as a function of time

It can be shown that the angular momentum equation given above also relates the rate of change in true anomaly to r, v, and φ, thus the true anomaly can be found as a function of time since periapsis passage by integration:

ν = ∫₀ᵗ (h/r²) dt

Conversely, the time required to reach a given anomaly is:

t = ∫₀^ν (r²/h) dν

Orbital maneuvers

Once in orbit, a spacecraft may fire rocket engines to make in-plane changes to a different altitude or type of orbit, or to change its orbital plane.
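The elliptical-orbit relations above can be collected into a small calculator; the gravitational parameter and the transfer-orbit radii below are assumed values for illustration:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2 (assumed value)

def orbit_properties(rp, ra, mu=MU_EARTH):
    """Period and apsis speeds of an elliptical orbit from periapsis/apoapsis radii (m)."""
    a = (rp + ra) / 2                 # semi-major axis
    e = (ra - rp) / (ra + rp)         # eccentricity
    period = 2 * math.pi * math.sqrt(a**3 / mu)
    vp = math.sqrt(mu / a * ra / rp)  # periapsis speed
    va = math.sqrt(mu / a * rp / ra)  # apoapsis speed
    return a, e, period, vp, va

# A roughly 200 km x 35,786 km geostationary transfer orbit (illustrative radii)
a, e, P, vp, va = orbit_properties(rp=6_578_000.0, ra=42_164_000.0)
print(f"e = {e:.3f}, P = {P/3600:.2f} h, vp = {vp:.0f} m/s, va = {va:.0f} m/s")
# e ~ 0.73, P ~ 10.5 h, vp ~ 10,240 m/s, va ~ 1,600 m/s
```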
Such maneuvers require changes in the craft's velocity, and the classical rocket equation is used to calculate the propellant requirements for a given delta-v. A delta-v budget will add up all the propellant requirements, or determine the total delta-v available from a given amount of propellant, for the mission. Most on-orbit maneuvers can be modeled as impulsive, that is as a near-instantaneous change in velocity, with minimal loss of accuracy.

In-plane changes

Orbit circularization

An elliptical orbit is most easily converted to a circular orbit at the periapsis or apoapsis by applying a single engine burn with a delta-v equal to the difference between the desired orbit's circular velocity and the current orbit's periapsis or apoapsis velocity. To circularize at periapsis, a retrograde burn is made:

Δv = v_p − √(μ/r_p)

To circularize at apoapsis, a posigrade burn is made:

Δv = √(μ/r_a) − v_a

Altitude change by Hohmann transfer

A Hohmann transfer orbit is the simplest maneuver which can be used to move a spacecraft from one altitude to another. Two burns are required: the first to send the craft into the elliptical transfer orbit, and a second to circularize the target orbit. To raise a circular orbit at r₁, the first posigrade burn raises velocity to the transfer orbit's periapsis velocity:

Δv₁ = √(μ/r₁) ( √(2r₂/(r₁ + r₂)) − 1 )

The second posigrade burn, made at apoapsis, raises velocity to the target orbit's velocity:

Δv₂ = √(μ/r₂) ( 1 − √(2r₁/(r₁ + r₂)) )

A maneuver to lower the orbit is the mirror image of the raise maneuver; both burns are made retrograde.

Altitude change by bi-elliptic transfer

A slightly more complicated altitude change maneuver is the bi-elliptic transfer, which consists of two half-elliptic orbits; the first, posigrade burn sends the spacecraft into an arbitrarily high apoapsis r_b chosen at some point away from the central body. At this point a second burn modifies the periapsis to match the radius of the final desired orbit, where a third, retrograde burn is performed to inject the spacecraft into the desired orbit. While this takes a longer transfer time, a bi-elliptic transfer can require less total propellant than the Hohmann transfer when the ratio of initial and target orbit radii is 12 or greater.

Burn 1 (posigrade):

Δv₁ = √( 2μ r_b/(r₁(r₁ + r_b)) ) − √(μ/r₁)

Burn 2 (posigrade or retrograde), to match periapsis to the target orbit's altitude:

Δv₂ = √( 2μ r₂/(r_b(r₂ + r_b)) ) − √( 2μ r₁/(r_b(r₁ + r_b)) )

Burn 3 (retrograde):

Δv₃ = √( 2μ r_b/(r₂(r₂ + r_b)) ) − √(μ/r₂)

Change of plane

Plane change maneuvers can be performed alone or in conjunction with other orbit adjustments. For a pure rotation plane change maneuver, consisting only of a change in the inclination of the orbit, the specific angular momentum, h, of the initial and final orbits are equal in magnitude but not in direction. Therefore, the change in specific angular momentum can be written as:

Δh = 2h sin(Δi/2),

where h is the specific angular momentum before the plane change, and Δi is the desired change in the inclination angle. From this it can be shown that the required delta-v is:

Δv = Δh/r

From the definition of h, this can also be written as:

Δv = 2v cos φ sin(Δi/2),

where v is the magnitude of velocity before plane change and φ is the flight path angle. Using the small-angle approximation, this becomes:

Δv ≈ v Δi cos φ

The total delta-v for a combined maneuver can be calculated by a vector addition of the pure rotation delta-v and the delta-v for the other planned orbital change.

Translunar flight

Vehicles sent on lunar or planetary missions are generally not launched by direct injection to departure trajectory, but first put into a low Earth parking orbit; this allows the flexibility of a bigger launch window and more time for checking that the vehicle is in proper condition for the flight.
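As a worked example of the Hohmann relations above, the following sketch computes both burns for an illustrative raise from low Earth orbit to geostationary radius (all values assumed):

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2 (assumed value)

def hohmann(r1, r2, mu=MU_EARTH):
    """Delta-v of both burns for a Hohmann transfer between circular orbits (radii in m)."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # enter transfer ellipse
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize at target
    return dv1, dv2

# ~400 km LEO to geostationary radius (illustrative radii)
dv1, dv2 = hohmann(6_778_000.0, 42_164_000.0)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s, total: {dv1 + dv2:.0f} m/s")
# roughly 2,400 + 1,460 = ~3,850 m/s
```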
Translunar flight
Vehicles sent on lunar or planetary missions are generally not launched by direct injection to departure trajectory, but first put into a low Earth parking orbit; this allows the flexibility of a bigger launch window and more time for checking that the vehicle is in proper condition for the flight. Escape velocity is not required for flight to the Moon; rather the vehicle's apogee is raised high enough to take it through a point where it enters the Moon's gravitational sphere of influence (SOI). This is defined as the distance from a satellite at which its gravitational pull on a spacecraft equals that of its central body, which is

r_SOI = D (m_s / m_c)^(2/5)

where D is the mean distance from the satellite to the central body, and m_c and m_s are the masses of the central body and satellite, respectively. This value is approximately 66,000 kilometers for Earth's Moon. An accurate solution of the trajectory requires treatment as a three-body problem, but a preliminary estimate may be made using a patched conic approximation of orbits around the Earth and Moon, patched at the SOI point and taking into account the fact that the Moon is a revolving frame of reference around the Earth.

Translunar injection
This must be timed so that the Moon will be in position to capture the vehicle, and might be modeled to a first approximation as a Hohmann transfer. However, the rocket burn duration is usually long enough, and occurs during a sufficient change in flight path angle, that this is not very accurate. It must be modeled as a non-impulsive maneuver, requiring numerical integration of the accelerations due to propulsive thrust and gravity to obtain velocity and flight path angle:

dv/dt = (F / m) cos α − g sin φ
v dφ/dt = (F / m) sin α − (g − v² / r) cos φ

where: F is the engine thrust; α is the angle of attack; m is the vehicle's mass; r is the radial distance to the planet's center; and g is the gravitational acceleration, which varies with the inverse square of the radial distance:

g = g₀ (r₀ / r)²

Altitude h, downrange distance s, and radial distance r from the center of the Earth are then computed as:

h = ∫ v sin φ dt
s = ∫ (r₀ / r) v cos φ dt
r = r₀ + h

Mid-course corrections
A simple lunar trajectory stays in one plane, resulting in lunar flyby or orbit within a small range of inclination to the Moon's equator. This also permits a "free return", in which the spacecraft would return to the appropriate position for reentry into the Earth's atmosphere if it were not injected into lunar orbit. Relatively small velocity changes are usually required to correct for trajectory errors. Such a trajectory was used for the Apollo 8, Apollo 10, Apollo 11, and Apollo 12 crewed lunar missions. Greater flexibility in lunar orbital or landing site coverage (at greater angles of lunar inclination) can be obtained by performing a plane change maneuver mid-flight; however, this takes away the free-return option, as the new plane would take the spacecraft's emergency return trajectory away from the Earth's atmospheric re-entry point, and leave the spacecraft in a high Earth orbit. This type of trajectory was used for the last five Apollo missions (13 through 17).

Lunar orbit insertion
In the Apollo program, the retrograde lunar orbit insertion burn was performed at an altitude of approximately 110 kilometers on the far side of the Moon. This became the pericynthion of the initial orbits, with an apocynthion on the order of 300 kilometers. The delta v was approximately 900 meters per second. Two orbits later, the orbit was circularized at approximately 110 kilometers. For each mission, the flight dynamics officer prepared 10 lunar orbit insertion solutions so that the one with the optimum (minimum) fuel burn which best met the mission requirements could be chosen; this was uploaded to the spacecraft computer and had to be executed and monitored by the astronauts on the lunar far side, while they were out of radio contact with Earth.
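The sphere-of-influence rule used above for the Moon (and below for interplanetary patch points) is easy to evaluate numerically. A small sketch, not from the original article; the mass and distance values are standard reference constants supplied here for illustration:

```python
def soi_radius(D, m_satellite, m_central):
    """Laplace sphere-of-influence radius: r_SOI = D * (m_s / m_c)**(2/5)."""
    return D * (m_satellite / m_central) ** 0.4

# Moon about Earth (masses in kg, mean distance in km)
print(f"Moon SOI:  {soi_radius(384_400, 7.342e22, 5.972e24):,.0f} km")
# Earth about Sun
print(f"Earth SOI: {soi_radius(149_600_000, 5.972e24, 1.989e30):,.0f} km")
```

The first result comes out near 66,000 km, and the second near 920,000 km, matching the figures quoted in the text.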
Interplanetary flight
In order to completely leave one planet's gravitational field to reach another, a hyperbolic trajectory relative to the departure planet is necessary, with excess velocity added to (or subtracted from) the departure planet's orbital velocity around the Sun. The desired heliocentric transfer orbit to a superior planet will have its perihelion at the departure planet, requiring the hyperbolic excess velocity to be applied in the posigrade direction, when the spacecraft is away from the Sun. To an inferior planet destination, aphelion will be at the departure planet, and the excess velocity is applied in the retrograde direction when the spacecraft is toward the Sun. For accurate mission calculations, the orbital elements of the planets must be obtained from an ephemeris, such as that published by NASA's Jet Propulsion Laboratory.

Simplifying assumptions
For the purpose of preliminary mission analysis and feasibility studies, certain simplified assumptions may be made to enable delta-v calculation with very small error:
All the planets' orbits except Mercury have very small eccentricity, and therefore may be assumed to be circular at a constant orbital speed and mean distance from the Sun.
All the planets' orbits (except Mercury) are nearly coplanar, with very small inclination to the ecliptic (3.39 degrees or less; Mercury's inclination is 7.00 degrees).
The perturbing effects of the other planets' gravity are negligible.
The spacecraft will spend most of its flight time under only the gravitational influence of the Sun, except for brief periods when it is in the sphere of influence of the departure and destination planets.

Since interplanetary spacecraft spend a large period of time in heliocentric orbit between the planets, which are at relatively large distances away from each other, the patched-conic approximation is much more accurate for interplanetary trajectories than for translunar trajectories. The patch point between the hyperbolic trajectory relative to the departure planet and the heliocentric transfer orbit occurs at the planet's sphere of influence radius relative to the Sun, as defined above under Translunar flight. Given the Sun's mass ratio of 333,432 times that of Earth and a mean Earth–Sun distance of 149.5 million kilometers, the Earth's sphere of influence radius works out to roughly 1,000,000 kilometers.

Heliocentric transfer orbit
The transfer orbit required to carry the spacecraft from the departure planet's orbit to the destination planet is chosen among several options:
A Hohmann transfer orbit requires the least possible propellant and delta-v; this is half of an elliptical orbit with aphelion and perihelion tangential to both planets' orbits, with the longest outbound flight time equal to half the period of the ellipse. This is known as a conjunction-class mission. There is no "free return" option, because if the spacecraft does not enter orbit around the destination planet and instead completes the transfer orbit, the departure planet will not be in its original position. Using another Hohmann transfer to return requires a significant loiter time at the destination planet, resulting in a very long total round-trip mission time. Science fiction writer Arthur C. Clarke wrote in his 1951 book The Exploration of Space that an Earth-to-Mars round trip would require 259 days outbound and another 259 days inbound, with a 425-day stay at Mars.
Increasing the departure apsis speed (and thus the semi-major axis) results in a trajectory which crosses the destination planet's orbit non-tangentially before reaching the opposite apsis, increasing delta-v but cutting the outbound transit time below the maximum.
A gravity assist maneuver, sometimes known as a "slingshot maneuver" or Crocco mission after its 1956 proposer Gaetano Crocco, results in an opposition-class mission with a much shorter dwell time at the destination. This is accomplished by swinging past another planet, using its gravity to alter the orbit. A round trip to Mars, for example, can be significantly shortened from the 943 days required for the conjunction mission, to under a year, by swinging past Venus on return to the Earth.

Hyperbolic departure
The required hyperbolic excess velocity v∞ (sometimes called characteristic velocity) is the difference between the transfer orbit's departure speed and the departure planet's heliocentric orbital speed. Once this is determined, the injection velocity relative to the departure planet at periapsis is:

v_p = √(v∞² + 2μ / r_p)

The excess velocity vector for a hyperbola is displaced from the periapsis tangent by a characteristic angle, therefore the periapsis injection burn must lead the planetary departure point by the same angle. The geometric equation for eccentricity of an ellipse cannot be used for a hyperbola. But the eccentricity can be calculated from dynamics formulations as:

e = √(1 + 2 ε h² / μ²)

where h is the specific angular momentum as given above in the Orbital flight section, calculated at the periapsis:

h = r_p v_p

and ε is the specific energy:

ε = v² / 2 − μ / r

Also, the equations for r and v given in Orbital flight depend on the semi-major axis, and thus are unusable for an escape trajectory. But setting radius at periapsis equal to the r equation at zero anomaly gives an alternate expression for the semi-latus rectum:

p = r_p (1 + e)

which gives a more general equation for radius versus anomaly which is usable at any eccentricity:

r = p / (1 + e cos ν)

Substituting the alternate expression for p also gives an alternate expression for a (which is defined for a hyperbola, but no longer represents the semi-major axis):

a = p / (1 − e²)

This gives an equation for velocity versus radius which is likewise usable at any eccentricity:

v = √(μ (2/r − 1/a))

The equations for flight path angle and anomaly versus time given in Orbital flight are also usable for hyperbolic trajectories.

Launch windows
There is a great deal of variation with time of the velocity change required for a mission, because of the constantly varying relative positions of the planets. Therefore, optimum launch windows are often chosen from the results of porkchop plots that show contours of characteristic energy (v∞²) plotted versus departure and arrival time.
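The hyperbolic-departure relation above connects the characteristic energy of a porkchop plot to the burn actually flown from a parking orbit. A minimal sketch, not from the original article; the C3 value and parking-orbit altitude are illustrative stand-ins for a Mars-class departure:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2

def injection_speed(v_inf, r_p, mu=MU_EARTH):
    """Periapsis speed for a hyperbolic departure: v_p = sqrt(v_inf^2 + 2*mu/r_p)."""
    return math.sqrt(v_inf**2 + 2.0 * mu / r_p)

# Assume C3 = v_inf^2 of ~9 km^2/s^2 (v_inf = 3 km/s) from a 300 km parking orbit
r_p = 6.371e6 + 300e3
v_inf = 3000.0
v_p = injection_speed(v_inf, r_p)
v_circ = math.sqrt(MU_EARTH / r_p)
print(f"injection speed: {v_p:.0f} m/s, burn from circular orbit: {v_p - v_circ:.0f} m/s")
```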
Atmospheric entry
Controlled entry, descent, and landing of a vehicle are achieved by shedding the excess kinetic energy through aerodynamic heating from drag, which requires some means of heat shielding, and/or retrograde thrust. Terminal descent is usually achieved by means of parachutes and/or air brakes.

Attitude control
Since spacecraft spend most of their flight time coasting unpowered through the vacuum of space, they are unlike aircraft in that their flight trajectory is not determined by their attitude (orientation), except during atmospheric flight to control the forces of lift and drag, and during powered flight to align the thrust vector. Nonetheless, attitude control is often maintained in unpowered flight to keep the spacecraft in a fixed orientation for purposes of astronomical observation, communications, or for solar power generation; or to place it into a controlled spin for passive thermal control, or to create artificial gravity inside the craft.

Attitude control is maintained with respect to an inertial frame of reference or another entity (the celestial sphere, certain fields, nearby objects, etc.). The attitude of a craft is described by angles relative to three mutually perpendicular axes of rotation, referred to as roll, pitch, and yaw. Orientation can be determined by calibration using an external guidance system, such as determining the angles to a reference star or the Sun, then internally monitored using an inertial system of mechanical or optical gyroscopes. Orientation is a vector quantity described by three angles for the instantaneous direction, and the instantaneous rates of rotation about all three axes. The aspect of control implies both awareness of the instantaneous orientation and rotation rates, and the ability to change the rotation rates to assume a new orientation using either a reaction control system or other means.

Newton's second law, applied to rotational rather than linear motion, becomes:

τ_x = I_x α_x

where τ_x is the net torque about an axis of rotation exerted on the vehicle, I_x is its moment of inertia about that axis (a physical property that combines the mass and its distribution around the axis), and α_x is the angular acceleration about that axis in radians per second per second. Therefore, the acceleration rate in degrees per second per second is

α = (180 / π) (τ_x / I_x)

Analogous to linear motion, the angular rotation rate ω (degrees per second) is obtained by integrating α over time:

ω = ∫₀ᵗ α dt

and the angular rotation θ is the time integral of the rate:

θ = ∫₀ᵗ ω dt

The three principal moments of inertia I_x, I_y, and I_z about the roll, pitch and yaw axes, are determined through the vehicle's center of mass.

The control torque for a launch vehicle is sometimes provided aerodynamically by movable fins, and usually by mounting the engines on gimbals to vector the thrust around the center of mass. Torque is frequently applied to spacecraft, operating absent aerodynamic forces, by a reaction control system, a set of thrusters located about the vehicle. The thrusters are fired, either manually or under automatic guidance control, in short bursts to achieve the desired rate of rotation, and then fired in the opposite direction to halt rotation at the desired position. The torque about a specific axis is:

τ = r F

where r is the thruster's distance from the center of mass, and F is the thrust of an individual thruster (only the component of F perpendicular to r is included.)

For situations where propellant consumption may be a problem (such as long-duration satellites or space stations), alternative means may be used to provide the control torque, such as reaction wheels or control moment gyroscopes.

Notes

References
Sidi, M.J. "Spacecraft Dynamics & Control." Cambridge, 1997.
Thomson, W.T. "Introduction to Space Dynamics." Dover, 1961.
Wertz, J.R. "Spacecraft Attitude Determination and Control." Kluwer, 1978.
Wiesel, W.E. "Spaceflight Dynamics." McGraw-Hill, 1997.

Astrodynamics Spaceflight concepts
Spacecraft flight dynamics
[ "Engineering" ]
6,169
[ "Astrodynamics", "Aerospace engineering" ]
1,559,937
https://en.wikipedia.org/wiki/Fernery
A fernery is a specialized garden for the cultivation and display of ferns. In many countries, ferneries are indoors or at least sheltered or kept in a shadehouse to provide a moist environment, filtered light and protection from frost and other extremes; on the other hand, some ferns native to arid regions require protection from rain and humid conditions, and grow best in full sun. In mild climates, ferneries are often outside and have an array of different species that grow under similar conditions.

In 1855, parts of England were gripped by 'pteridomania' (the fern craze). This term was coined by Charles Kingsley, clergyman, naturalist (and later author of The Water Babies). It involved both British and exotic varieties being collected and displayed; many associated structures were constructed and paraphernalia was used to maintain the collections.

In 1859, the Fernery at Tatton Park Gardens beside Tatton Hall was built to a design by George Stokes, Joseph Paxton's assistant and son-in-law, to the west of the conservatory to house tree ferns from New Zealand and a collection of other ferns. The Fernery also appeared in the TV miniseries Brideshead Revisited.

In 1874, the fernery in Benmore Botanic Garden (part of the Royal Botanic Garden Edinburgh) was built by James Duncan (a plant collector and sugar refiner). This was a large and expensive project, since the fernery was housed in a heated conservatory. In 1992, it was listed by Historic Scotland for its architectural and botanical value and has been described by the Royal Commission on the Ancient and Historical Monuments of Scotland as "extremely rare and unique in its design".

In 1903, Hever Castle in Kent was acquired and restored by the American millionaire William Waldorf Astor, who used it as a family residence. He added the Italian Garden (including a fernery) to display his collection of statuary and ornaments.

References

Types of garden Ferns
Fernery
[ "Engineering", "Biology" ]
392
[ "Architecture stubs", "Ferns", "Plants", "Architecture" ]
1,560,035
https://en.wikipedia.org/wiki/Secure%20voice
Secure voice (alternatively secure speech or ciphony) is a term in cryptography for the encryption of voice communication over a range of communication types such as radio, telephone or IP.

History
The implementation of voice encryption dates back to World War II, when secure communication was paramount to the US armed forces. During that time, noise was simply added to a voice signal to prevent enemies from listening to the conversations. Noise was added by playing a record of noise in sync with the voice signal, and when the voice signal reached the receiver, the noise signal was subtracted out, leaving the original voice signal. In order to subtract out the noise, the receiver needed to have exactly the same noise signal, and the noise records were only made in pairs: one for the transmitter and one for the receiver. Having only two copies of records made it impossible for the wrong receiver to decrypt the signal. To implement the system, the army contracted Bell Laboratories, which developed a system called SIGSALY. With SIGSALY, ten channels were used to sample the voice frequency spectrum from 250 Hz to 3 kHz, and two channels were allocated to sample voice pitch and background hiss. In the time of SIGSALY, the transistor had not been developed and the digital sampling was done by circuits using the model 2051 Thyratron vacuum tube. Each SIGSALY terminal used 40 racks of equipment weighing 55 tons, and filled a large room. This equipment included radio transmitters and receivers and large phonograph turntables. The voice was keyed to two vinyl phonograph records that contained a frequency-shift keying (FSK) audio tone. The records were played on large precise turntables in sync with the voice transmission.

From the introduction of voice encryption to today, encryption techniques have evolved drastically. Digital technology has effectively replaced old analog methods of voice encryption, and by using complex algorithms, voice encryption has become much more secure and efficient. One relatively modern voice encryption method is sub-band coding. With sub-band coding, the voice signal is split into multiple frequency bands, using multiple bandpass filters that cover specific frequency ranges of interest. The output signals from the bandpass filters are then lowpass-translated to reduce the bandwidth, which reduces the sampling rate. The lowpass signals are then quantized and encoded using special techniques such as pulse-code modulation (PCM). After the encoding stage, the signals are multiplexed and sent out along the communication network. When the signal reaches the receiver, the inverse operations are applied to the signal to get it back to its original state.

A speech scrambling system was developed at Bell Laboratories in the 1970s by Subhash Kak and Nikil Jayant. In this system, permutation matrices were used to scramble coded representations (such as pulse-code modulation and variants) of the speech data.

Motorola developed a voice encryption system called Digital Voice Protection (DVP) as part of their first generation of voice encryption techniques. DVP uses a self-synchronizing encryption technique known as cipher feedback (CFB). The extremely high number of possible keys associated with the early DVP algorithm makes the algorithm very robust and gives a high level of security. As with other symmetric keyed encryption systems, the encryption key is required to decrypt the signal with a special decryption algorithm.
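The sub-band pipeline described above (band-split, per-band quantization, synthesis) can be sketched in a few lines. This is a toy illustration only, not any fielded codec: it uses an artificial two-tone test signal in place of speech, splits into just two bands, and omits the decimation and multiplexing stages a real coder would include.

```python
import numpy as np
from scipy import signal

fs = 8000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)  # stand-in for speech

# Analysis: split into a low and a high band at 2 kHz
b_lo, a_lo = signal.butter(6, 2000, btype="low", fs=fs)
b_hi, a_hi = signal.butter(6, 2000, btype="high", fs=fs)
low = signal.filtfilt(b_lo, a_lo, x)
high = signal.filtfilt(b_hi, a_hi, x)

def quantize(band, bits):
    """Uniform PCM quantization of one sub-band to the given bit depth."""
    scale = np.max(np.abs(band))
    scale = scale if scale > 0 else 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(band / scale * levels) / levels * scale

# Spend fewer bits on the perceptually less important high band
low_q = quantize(low, bits=8)
high_q = quantize(high, bits=4)

# Synthesis: sum the decoded bands and measure the damage
x_hat = low_q + high_q
snr = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat) ** 2))
print(f"reconstruction SNR: {snr:.1f} dB")
```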
Digital
A digital secure voice system usually includes two components: a digitizer to convert between speech and digital signals, and an encryption system to provide confidentiality. It is difficult in practice to send the encrypted signal over the same voiceband communication circuits used to transmit unencrypted voice, e.g. analog telephone lines or mobile radios, due to bandwidth expansion. This has led to the use of voice coders (vocoders) to achieve tight bandwidth compression of the speech signals. NSA's STU-III, KY-57 and SCIP are examples of systems that operate over existing voice circuits. The STE system, by contrast, requires wide bandwidth ISDN lines for its normal mode of operation. For encrypting GSM and VoIP, which are natively digital, the standard protocol ZRTP could be used as an end-to-end encryption technology.

Secure voice's robustness greatly benefits from having the voice data compressed into very low bit-rates by a special component for speech coding, voice compression or voice coding (also known as a vocoder). The older secure voice compression standards include CVSD, CELP, LPC-10e and MELP; the latest standard is the state-of-the-art MELPe algorithm.

Digital methods using voice compression: MELP or MELPe
The MELPe or enhanced-MELP (Mixed Excitation Linear Prediction) is a United States Department of Defense speech coding standard used mainly in military applications and satellite communications, secure voice, and secure radio devices. Its development was led and supported by the NSA and NATO. The US government's MELPe secure voice standard is also known as MIL-STD-3005, and NATO's MELPe secure voice standard is also known as STANAG-4591.

The initial MELP was invented by Alan McCree around 1995. That initial speech coder was standardized in 1997 and was known as MIL-STD-3005. It surpassed other candidate vocoders in the US DoD competition, including: (a) Frequency Selective Harmonic Coder (FSHC), (b) Advanced Multi-Band Excitation (AMBE), (c) Enhanced Multiband Excitation (EMBE), (d) Sinusoid Transform Coder (STC), and (e) Subband LPC Coder (SBC). Due to its lower complexity than the Waveform Interpolative (WI) coder, the MELP vocoder won the DoD competition and was selected for MIL-STD-3005.

Between 1998 and 2001, a new MELP-based vocoder was created at half the rate (i.e. 1200 bit/s) and substantial enhancements were added to MIL-STD-3005 by SignalCom (later acquired by Microsoft), AT&T Corporation, and Compandent, which included (a) an additional new vocoder at half the rate (i.e. 1200 bit/s), (b) substantially improved encoding (analysis), (c) substantially improved decoding (synthesis), (d) noise preprocessing for removing background noise, (e) transcoding between the 2400 bit/s and 1200 bit/s bitstreams, and (f) a new postfilter. This fairly significant development was aimed at creating a new coder at half the rate that would be interoperable with the old MELP standard. This enhanced-MELP (also known as MELPe) was adopted as the new MIL-STD-3005 in 2001 in the form of annexes and supplements made to the original MIL-STD-3005, enabling the same quality as the old 2400 bit/s MELP at half the rate. One of the greatest advantages of the new 2400 bit/s MELPe is that it shares the same bit format as MELP, and hence can interoperate with legacy MELP systems, but delivers better quality at both ends. MELPe provides much better quality than all older military standards, especially in noisy environments such as battlefields, vehicles and aircraft.
In 2002, following extensive competition and testing, the 2400 and 1200 bit/s US DoD MELPe was also adopted as a NATO standard, known as STANAG-4591. As part of the NATO testing for the new standard, MELPe was tested against other candidates such as France's HSX (Harmonic Stochastic eXcitation) and Turkey's SB-LPC (Split-Band Linear Predictive Coding), as well as the old secure voice standards such as FS1015 LPC-10e (2.4 kbit/s), FS1016 CELP (4.8 kbit/s) and CVSD (16 kbit/s). MELPe won the NATO competition as well, surpassing the quality of all other candidates and of all the old secure voice standards (CVSD, CELP and LPC-10e). The NATO competition concluded that MELPe substantially improved performance (in terms of speech quality, intelligibility, and noise immunity), while reducing throughput requirements. The NATO testing also included interoperability tests, used over 200 hours of speech data, and was conducted by three test laboratories worldwide.

As part of MELPe-based projects performed for the NSA and NATO, Compandent Inc. provided the NSA and NATO with a special test-bed platform known as the MELCODER device, which provided the golden reference for real-time implementation of MELPe. The low-cost FLEXI-232 Data Terminal Equipment (DTE) made by Compandent, based on the MELCODER golden reference, is widely used for evaluating and testing MELPe in real time over various channels, networks, and field conditions.

In 2005, a new 600 bit/s rate MELPe variation by Thales Group (France) was added (without the extensive competition and testing performed for the 2400/1200 bit/s MELPe) to the NATO standard STANAG-4591, and there are further efforts to lower the bitrates to 300 bit/s and even 150 bit/s. In 2010, Lincoln Labs, Compandent, BBN, and General Dynamics also developed a 300 bit/s MELP device for DARPA. Its quality was better than the 600 bit/s MELPe, but its delay was longer.

See also
Scrambler MELPe MELP Cryptography Pseudorandom noise SIGSALY SCIP Secure telephone Secure Terminal Equipment VINSON VoIP VPN NSA encryption systems ZRTP Fishbowl (secure phone)

References

Cryptography Secure communication Speech codecs
Secure voice
[ "Mathematics", "Engineering" ]
2,121
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
1,560,094
https://en.wikipedia.org/wiki/Waloddi%20Weibull
Ernst Hjalmar Waloddi Weibull (18 June 1887 – 12 October 1979) was a Swedish civil engineer, materials scientist, and applied mathematician. The Weibull distribution is named after him.

Education and career
Weibull joined the Swedish Coast Guard in 1905 as a midshipman. He moved up the ranks, with promotion to sublieutenant in 1907, captain in 1916 and major in 1940. While in the coast guard he took courses at the Royal Institute of Technology. In 1924 he graduated and became a full professor. He obtained his doctorate from the University of Uppsala in 1932. He was employed in Swedish and German industry as a consulting engineer. In 1914, while on expeditions to the Mediterranean, the Caribbean and the Pacific Ocean on the research ship Albatross, Weibull wrote his first paper, on the propagation of explosive waves. He developed the technique of using explosive charges to determine the type of ocean bottom sediments and their thickness. The same technique is still used today in offshore oil exploration.

Research contributions
In 1939 he published his paper on the Weibull distribution in probability theory and statistics. In 1941 he received a personal research professorship in Engineering Physics at the Royal Institute of Technology in Stockholm from the arms producer Bofors. Weibull published many papers on the strength of materials, fatigue, rupture in solids, bearings, and, of course, the Weibull distribution, as well as one book on fatigue analysis in 1961. Twenty-seven of these papers were reports to the US Air Force at Wilbur Wright Field on Weibull analysis. In 1951 he presented his paper on the Weibull distribution to the American Society of Mechanical Engineers (ASME), using seven case studies.

Legacy
The American Society of Mechanical Engineers awarded Weibull their gold medal in 1972. The Great Gold Medal from the Royal Swedish Academy of Engineering Sciences was personally presented to him by King Carl XVI Gustaf of Sweden in 1978.

Personal life
Weibull came from a family that had strong ties to Scania. He was a cousin of the historian brothers Lauritz, Carl Gustaf and Curt Weibull. Weibull died on 12 October 1979 in Annecy, France.

References

External links
Weibull Distribution A photograph of Weibull on the Portraits of Statisticians page.

1887 births 1979 deaths Swedish statisticians Swedish mechanical engineers Academic staff of the KTH Royal Institute of Technology Uppsala University alumni 20th-century Swedish engineers ASME Medal recipients People from Kristianstad Municipality Swedish materials scientists Materials scientists and engineers Swedish civil engineers KTH Royal Institute of Technology alumni
Waloddi Weibull
[ "Materials_science", "Engineering" ]
520
[ "Materials scientists and engineers", "Materials science" ]
1,560,117
https://en.wikipedia.org/wiki/Delta%20bond
In chemistry, a delta bond (δ bond) is a covalent chemical bond, in which four lobes of an atomic orbital on one atom overlap four lobes of an atomic orbital on another atom. This overlap leads to the formation of a bonding molecular orbital with two nodal planes which contain the internuclear axis and go through both atoms. The Greek letter δ in their name refers to d orbitals, since the orbital symmetry of the δ bond is the same as that of the usual (4-lobed) type of d orbital when seen down the bond axis. This type of bonding is observed in atoms that have occupied d orbitals with low enough energy to participate in covalent bonding, for example, in organometallic species of transition metals. Some rhenium, molybdenum, technetium, and chromium compounds contain a quadruple bond, consisting of one σ bond, two π bonds and one δ bond. The orbital symmetry of the δ bonding orbital is different from that of a π antibonding orbital, which has one nodal plane containing the internuclear axis and a second nodal plane perpendicular to this axis between the atoms. The δ notation was introduced by Robert Mulliken in 1931. The first compound identified as having a δ bond was potassium octachlorodirhenate(III). In 1965, F. A. Cotton reported that there was δ-bonding as part of the rhenium–rhenium quadruple bond in the [Re2Cl8]2− ion. Another example of a δ bond is proposed in cyclobutadieneiron tricarbonyl between an iron d orbital and the four p orbitals of the attached cyclobutadiene molecule. References Chemical bonding
Delta bond
[ "Physics", "Chemistry", "Materials_science" ]
365
[ "Chemical bonding", "Condensed matter physics", "nan" ]
1,560,138
https://en.wikipedia.org/wiki/CA%20Harvest%20Software%20Change%20Manager
CA Harvest Software Change Manager (originally known as CCC/Harvest) is a software tool for the configuration management (revision control, SCM, etc.) of source code and other software development assets.

History
The first CCC (an acronym for 'Change and Configuration Control') product was released in the early 1970s and was designed as a project for a Defense Department contractor in Santa Barbara, CA. (The company at the time was Hughes Aircraft, now Santa Barbara Research Center for Raytheon.) It became the first commercially available CM tool. CCC was designed to manage all the components that went into an aircraft engine, and since the same engine was used by both the U.S. Air Force and U.S. Navy (for the F-14 Tomcat and F-15 Eagle), it required robust and reliable parallel development.

The first version of CCC/Harvest was commercially developed by Softool Corporation, a CM-focused software company founded in 1977 in Goleta, CA. Other CCC tools included CCC/Manager, CCC/DM Turnkey and CCC/QuickTrak. Softool was acquired in late 1995 by Platinum Technology, which was in turn acquired in May 1999 by Computer Associates (now known as CA Technologies), who added CCC/Harvest to their AllFusion suite. In 2002, the 'CCC' part of the name was dropped and 'Change Manager' was added, so it became known as AllFusion Harvest Change Manager. Later this was changed to CA Harvest Software Change Manager.

Distinguishing features
Change Packages: Harvest can provide both version control and change management. The developer makes changes in Harvest against a change package (creating a "change set"). The change package(s) will initially consist of a number of files that the developer has either created or amended. This is the version control component of Harvest.
Life Cycles: Once the developer is satisfied with his/her changes, the changes progress through a pre-defined life cycle (i.e. into a number of sequential TEST stages and finally into PRODUCTION). At all stages of this "life cycle", the package must have approvals from the appropriate users or user groups. These approvals are recorded permanently for audit purposes. For example, a test manager may have to approve packages prior to moving to the TEST stage, and the production change management team may have to approve packages prior to moving to the PROD state.
Projects (Environments): Central to Harvest's philosophy is the concept of a Harvest "project". Projects are fully customizable according to an application's, organization's, or team's needs. The term project refers to the entire control framework in Harvest and includes:
A branch or separate line of development where changes can be isolated (the version control component)
The definition of processes and how changes progress through the promotional life-cycle
Access control for processes and files

References

External links
CMCrossroads open CA SCM forum CA SCM customer talks about upgrading to CA SCM r12 Hudson Plugin for CA SCM Trinem whitepaper discusses configuration management and Harvest CA SCM & Citrix CA SCM & Apache's ANT pureSCM highlights CA SCM

Configuration management Proprietary version control systems CA Technologies
CA Harvest Software Change Manager
[ "Engineering" ]
661
[ "Systems engineering", "Configuration management" ]
1,560,204
https://en.wikipedia.org/wiki/Proof%20of%20work
Proof of work (PoW) is a form of cryptographic proof in which one party (the prover) proves to others (the verifiers) that a certain amount of a specific computational effort has been expended. Verifiers can subsequently confirm this expenditure with minimal effort on their part. The concept was first proposed by Moni Naor and Cynthia Dwork in 1993 as a way to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from a service requester, usually meaning processing time by a computer. The term "proof of work" was first coined and formalized in a 1999 paper by Markus Jakobsson and Ari Juels. The concept was adapted to digital tokens by Hal Finney in 2004 through the idea of "reusable proof of work" using the 160-bit secure hash algorithm 1 (SHA-1). Proof of work was later popularized by Bitcoin as a foundation for consensus in a permissionless decentralized network, in which miners compete to append blocks and mine new currency, each miner experiencing a success probability proportional to the computational effort expended. PoW and PoS (proof of stake) remain the two best known Sybil deterrence mechanisms. In the context of cryptocurrencies they are the most common mechanisms.

A key feature of proof-of-work schemes is their asymmetry: the work – the computation – must be moderately hard (yet feasible) on the prover or requester side but easy to check for the verifier or service provider. This idea is also known as a CPU cost function, client puzzle, computational puzzle, or CPU pricing function. Another common feature is built-in incentive structures that reward allocating computational capacity to the network with value in the form of cryptocurrency. The purpose of proof-of-work algorithms is not proving that certain work was carried out or that a computational puzzle was "solved", but deterring manipulation of data by establishing large energy and hardware-control requirements to be able to do so. Proof-of-work systems have been criticized by environmentalists for their energy consumption.

Background
The concept of proof of work has its roots in early research on combating spam and preventing denial-of-service attacks. One of the earliest implementations of PoW was Hashcash, created by British cryptographer Adam Back in 1997. It was designed as an anti-spam mechanism that required email senders to perform a small computational task, effectively proving that they expended resources (in the form of CPU time) before sending an email. This task was trivial for legitimate users but would impose a significant cost on spammers attempting to send bulk messages. Hashcash's system was based on the concept of finding a hash value that met certain criteria, a task that required computational effort and thus served as a "proof of work". The idea was that by making it computationally expensive to send large volumes of email, spamming would be reduced. One popular system, used in Hashcash, uses partial hash inversions to prove that computation was done, as a goodwill token to send an e-mail.
For instance, the following header represents about 2^52 hash computations to send a message to calvin@comics.net on January 19, 2038:

X-Hashcash: 1:52:380119:calvin@comics.net:::9B760005E92F0DAE

It is verified with a single computation by checking that the SHA-1 hash of the stamp (omit the header name X-Hashcash: including the colon and any amount of whitespace following it up to the digit '1') begins with 52 binary zeros, that is 13 hexadecimal zeros:

0000000000000756af69e2ffbdb930261873cd71

Whether PoW systems can actually solve a particular denial-of-service issue such as the spam problem is subject to debate; the system must make sending spam emails obtrusively unproductive for the spammer, but should also not prevent legitimate users from sending their messages. In other words, a genuine user should not encounter any difficulties when sending an email, but an email spammer would have to expend a considerable amount of computing power to send out many emails at once. Proof-of-work systems are being used by other, more complex cryptographic systems such as bitcoin, which uses a system similar to Hashcash.

Variants
There are two classes of proof-of-work protocols.
Challenge–response protocols assume a direct interactive link between the requester (client) and the provider (server). The provider chooses a challenge, say an item in a set with a property, the requester finds the relevant response in the set, which is sent back and checked by the provider. As the challenge is chosen on the spot by the provider, its difficulty can be adapted to its current load. The work on the requester side may be bounded if the challenge-response protocol has a known solution (chosen by the provider), or is known to exist within a bounded search space.
Solution–verification protocols do not assume such a link: as a result, the problem must be self-imposed before a solution is sought by the requester, and the provider must check both the problem choice and the found solution. Most such schemes are unbounded probabilistic iterative procedures such as Hashcash.

Known-solution protocols tend to have slightly lower variance than unbounded probabilistic protocols because the variance of a rectangular distribution is lower than the variance of a Poisson distribution (with the same mean). A generic technique for reducing variance is to use multiple independent sub-challenges, as the average of multiple samples will have a lower variance. There are also fixed-cost functions such as the time-lock puzzle.

Moreover, the underlying functions used by these schemes may be:
CPU-bound, where the computation runs at the speed of the processor, which greatly varies in time, as well as from high-end server to low-end portable devices.
Memory-bound, where the computation speed is bound by main memory accesses (either latency or bandwidth), the performance of which is expected to be less sensitive to hardware evolution.
Network-bound, if the client must perform few computations, but must collect some tokens from remote servers before querying the final service provider. In this sense, the work is not actually performed by the requester, but it incurs delays anyway because of the latency to get the required tokens.

Finally, some PoW systems offer shortcut computations that allow participants who know a secret, typically a private key, to generate cheap PoWs. The rationale is that mailing-list holders may generate stamps for every recipient without incurring a high cost. Whether such a feature is desirable depends on the usage scenario.
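To make the stamp mechanics concrete, here is a minimal sketch of hashcash-style minting and verification. It is not the actual Hashcash implementation: the stamp layout is simplified (the random-seed field is omitted) and the difficulty is lowered to 20 bits so the example runs in seconds.

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        for shift in range(7, -1, -1):   # count zeros within the first nonzero byte
            if byte >> shift:
                break
            bits += 1
        return bits
    return bits

def mint(resource: str, difficulty: int = 20) -> str:
    """Brute-force a counter until SHA-1(stamp) has `difficulty` leading zero bits."""
    for counter in count():
        stamp = f"1:{difficulty}:380119:{resource}:::{counter:x}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= difficulty:
            return stamp

def verify(stamp: str) -> bool:
    """Verification is a single hash, cheap compared with minting."""
    difficulty = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= difficulty

s = mint("calvin@comics.net", difficulty=20)   # ~2^20 hashes on average
print(s, verify(s))
```

Minting loops over roughly 2^20 hashes on average, while verification is a single SHA-1 call: the asymmetry discussed above.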
List of proof-of-work functions
Here is a list of known proof-of-work functions:
Integer square root modulo a large prime
Weaken Fiat–Shamir signatures
Ong–Schnorr–Shamir signature broken by Pollard
Partial hash inversion (the 1999 paper formalizing the idea of a proof of work also introduces "the dependent idea of a bread pudding protocol", a "re-usable proof-of-work" (RPoW) system)
Hash sequences
Puzzles
Diffie-Hellman–based puzzle
Moderate
Mbound
Hokkaido
Cuckoo Cycle
Merkle tree–based
Guided tour puzzle protocol

Proof of useful work (PoUW)
At the IACR conference Crypto 2022, researchers presented a paper describing Ofelimos, a blockchain protocol with a consensus mechanism based on "proof of useful work" (PoUW). Rather than miners consuming energy in solving complex, but essentially useless, puzzles to validate transactions, Ofelimos achieves consensus while simultaneously providing a decentralized optimization problem solver. The protocol is built around Doubly Parallel Local Search (DPLS), a local search algorithm that is used as the PoUW component. The paper gives an example that implements a variant of WalkSAT, a local search algorithm, to solve Boolean problems.

Bitcoin-type proof of work
In 2009, the bitcoin network went online. Bitcoin is a proof-of-work digital currency that, like Finney's RPoW, is also based on the Hashcash PoW. But in bitcoin, double-spend protection is provided by a decentralized P2P protocol for tracking transfers of coins, rather than the hardware trusted computing function used by RPoW. Bitcoin has better trustworthiness because it is protected by computation. Bitcoins are "mined" using the Hashcash proof-of-work function by individual miners and verified by the decentralized nodes in the P2P bitcoin network. The difficulty is periodically adjusted to keep the block time around a target time.

Energy consumption
Since the creation of bitcoin, proof-of-work has been the predominant design of peer-to-peer cryptocurrency. Studies have estimated the total energy consumption of cryptocurrency mining. The PoW mechanism requires a vast amount of computing resources, which consume a significant amount of electricity. 2018 estimates from the University of Cambridge equate bitcoin's energy consumption to that of Switzerland.

History modification
Each block that is added to the blockchain, starting with the block containing a given transaction, is called a confirmation of that transaction. Ideally, merchants and services that receive payment in the cryptocurrency should wait for at least one confirmation to be distributed over the network, before assuming that the payment was done. The more confirmations that the merchant waits for, the more difficult it is for an attacker to successfully reverse the transaction in a blockchain—unless the attacker controls more than half the total network power, in which case it is called a 51% attack.

ASICs and mining pools
Within the bitcoin community there are groups working together in mining pools. Some miners use application-specific integrated circuits (ASICs) for PoW. This trend toward mining pools and specialized ASICs has made mining some cryptocurrencies economically infeasible for most players without access to the latest ASICs, nearby sources of inexpensive energy, or other special advantages. Some PoWs claim to be ASIC-resistant, i.e. to limit the efficiency gain that an ASIC can have over commodity hardware, like a GPU, to be well under an order of magnitude.
ASIC resistance has the advantage of keeping mining economically feasible on commodity hardware, but also contributes to the corresponding risk that an attacker can briefly rent access to a large amount of unspecialized commodity processing power to launch a 51% attack against a cryptocurrency.

Environmental concerns
Miners compete to solve crypto challenges on the bitcoin blockchain, and their solutions must be agreed upon by all nodes and reach consensus. The solutions are then used to validate transactions, add blocks and generate new bitcoins. Miners are rewarded for solving these puzzles and successfully adding new blocks. However, the bitcoin-style mining process is very energy intensive because the proof of work is shaped like a lottery mechanism. The underlying computational work has no other use but to provide security to the network, which provides open access and has to work in adversarial conditions. Miners have to use a lot of energy to add a new block containing a transaction to the blockchain. The energy used in this competition is what fundamentally gives bitcoin its level of security and resistance to attacks. Miners must also invest in computing hardware, which requires large spaces, as a fixed cost.

In January 2022, Vice-Chair of the European Securities and Markets Authority Erik Thedéen called on the EU to ban the proof of work model in favor of the proof of stake model due to its lower energy consumption. In November 2022, the state of New York enacted a two-year moratorium on cryptocurrency mining that does not completely use renewable energy as a power source. Existing mining companies will be grandfathered in to continue mining without the use of renewable energy, but they will not be allowed to expand or renew permits with the state. No new mining companies that do not completely use renewable energy will be allowed to begin mining.

See also
bitcoin Bitmessage Cryptocurrency Proof of authority Proof of burn Proof of personhood Proof of space Proof of stake Proof of elapsed time Consensus (computer science)

Notes
On most Unix systems this can be verified with: echo -n 1:52:380119:calvin@comics.net:::9B760005E92F0DAE | openssl sha1

References

External links
Bit gold. Describes a complete money system (including generation, storage, assay, and transfer) based on proof of work functions and the machine architecture problem raised by the use of these functions.
Merkle Proof Standardised Format for Simplified Payment Verification (SPV).

Cryptography Cryptocurrencies Energy consumption
Proof of work
[ "Mathematics", "Engineering" ]
2,718
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
1,560,271
https://en.wikipedia.org/wiki/Pyridinium%20chlorochromate
Pyridinium chlorochromate (PCC) is a yellow-orange salt with the formula [C5H5NH]+[CrO3Cl]−. It is a reagent in organic synthesis used primarily for oxidation of alcohols to form carbonyls. A variety of related compounds are known with similar reactivity. PCC offers the advantage of the selective oxidation of alcohols to aldehydes or ketones, whereas many other reagents are less selective.

Structure and preparation
PCC consists of a pyridinium cation, [C5H5NH]+, and a tetrahedral chlorochromate anion, [CrO3Cl]−. Related salts are also known, such as 1-butylpyridinium chlorochromate, [C5H5N(C4H9)][CrO3Cl], and potassium chlorochromate. PCC is commercially available. Discovered by accident, the reagent was originally prepared via addition of pyridine into a cold solution of chromium trioxide in concentrated hydrochloric acid:

C5H5N + HCl + CrO3 → [C5H5NH][CrO3Cl]

In one alternative method, formation of toxic chromyl chloride (CrO2Cl2) fumes during the making of the aforementioned solution was minimized by simply changing the order of addition: a cold solution of pyridine in concentrated hydrochloric acid was added to solid chromium trioxide under stirring.

Uses

Oxidation of alcohols
PCC is used as an oxidant. In particular, it has proven to be highly effective in oxidizing primary and secondary alcohols to aldehydes and ketones, respectively. The reagent is more selective than the related Jones reagent: unlike aqueous oxidants such as acidified potassium permanganate, PCC gives little over-oxidation to carboxylic acids, as long as water is not present in the reaction mixture. A typical PCC oxidation involves addition of an alcohol to a suspension of PCC in dichloromethane. The general reaction is:

2 [C5H5NH][CrO3Cl] + 3 R2CHOH → 2 [C5H5NH]Cl + Cr2O3 + 3 R2C=O + 3 H2O

For example, the triterpene lupeol was oxidized to lupenone.

Babler oxidation
With tertiary alcohols, the chromate ester formed from PCC can isomerize via a [3,3]-sigmatropic reaction and, following oxidation, yield an enone, in a reaction known as the Babler oxidation. This type of oxidative transposition reaction has been synthetically utilized, e.g. for the synthesis of morphine. Using other common oxidants in the place of PCC usually leads to dehydration, because such alcohols cannot be oxidized directly.

Other reactions
PCC also converts suitable unsaturated alcohols and aldehydes to cyclohexenones. This pathway, an oxidative cationic cyclization, is illustrated by the conversion of (−)-citronellol to (−)-pulegone. PCC also effects allylic oxidations, for example, in the conversion of dihydrofurans to furanones.

Related reagents
Other more convenient or less toxic reagents for oxidizing alcohols include dimethyl sulfoxide, which is used in Swern and Pfitzner–Moffatt oxidations, and hypervalent iodine compounds, such as the Dess–Martin periodinane.

Safety
One disadvantage to the use of PCC is its toxicity, which it shares with other hexavalent chromium compounds.

See also
Oxidation with chromium(VI)-amine complexes

References

Further reading

External links
IARC Monographs Supplement 7, Chromium and Chromium Compounds History of PCC National Pollutant Inventory, Chromium(VI) Compounds Fact Sheets

Chromates Oxidizing agents Pyridinium compounds IARC Group 1 carcinogens
Pyridinium chlorochromate
[ "Chemistry" ]
884
[ "Chromates", "Redox", "Oxidizing agents", "Salts" ]
1,560,475
https://en.wikipedia.org/wiki/Blondi
Blondi (1941 – 29 April 1945) was Adolf Hitler's German Shepherd, a gift as a puppy from Martin Bormann in 1941. Hitler kept Blondi even after his move into the Führerbunker located underneath the garden of the Reich Chancellery on 16 January 1945. Hitler was very fond of Blondi, keeping her by his side and allowing her to sleep in his bed while in the bunker. According to Hitler's secretary Traudl Junge, this affection was not shared by Eva Braun, Hitler's companion, who preferred her two Scottish Terrier dogs named Negus and Stasi. Blondi played a role in Nazi propaganda by portraying Hitler as an animal lover. Dogs like Blondi were coveted for being close to the wolf, and became very fashionable during the Nazi era. On 29 April 1945, one day before his death, Hitler expressed doubts about the cyanide capsules he had received through Heinrich Himmler's SS. To verify the capsules' potency, Hitler ordered SS physician Werner Haase to test one on Blondi, who died as a result.

Blondi's puppies
In March or in early April (likely 4 April) 1945, she had a litter of five puppies with Gerdy Troost's German Shepherd, Harras. Adolf Hitler named one of the puppies "Wulf", his favorite nickname and the meaning of his own first name, Adolf ("noble wolf"), and he began to train her. One of Blondi's puppies was reserved for Eva Braun's sister Gretl. Eva sent Gretl a letter containing a photo of Blondi and three of her puppies, Gretl's being indicated with an arrow.

Other dogs
During his military service in World War I, Hitler rescued a stray white Fox Terrier named Fuchsl. Hitler had great affection for the dog, and when he was not on duty at the front, he would spend much of his free time playing with it in the barracks and teaching it tricks. Hitler was profoundly distraught when his unit had to move and the dog was lost in August 1917. He had previously been given a German Shepherd named "Prinz" in 1921, during his years of poverty, but he had been forced to lodge the dog elsewhere. However, the dog managed to escape and return to him. Hitler, who adored the loyalty and obedience of the dog, thereafter developed a great liking for the breed. He also owned a German Shepherd called "Muckl". Before Blondi, Hitler had two German Shepherd dogs, a mother (born 1926) and daughter (born ca. 1930) – both named Blonda. In some photos taken during the 1930s the younger Blonda is incorrectly labeled as Blondi (in most cases photograph inscriptions were written later). In May 1942, Hitler bought another young German Shepherd "from a minor official in the post office in Ingolstadt" to keep Blondi company. He called her Bella. According to Traudl Junge, Eva Braun was very fond of her two Scottish Terrier dogs named Negus and Stasi. She usually kept them away from Blondi.

Death of Blondi and other dogs
During the course of 29 April 1945, Hitler learned of the death of his ally Benito Mussolini at the hands of Italian partisans on 28 April. This, along with the fact that the Soviet Red Army was closing in on his location, strengthened Hitler in his resolve not to allow himself or his wife to be captured. That afternoon, Hitler expressed doubts about the cyanide capsules he had received through Heinrich Himmler's SS. By this point, Hitler regarded Himmler as a traitor. To verify the capsules' contents, Hitler had SS physician Werner Haase summoned to the Führerbunker that afternoon to test one on his dog Blondi. A cyanide capsule was crushed in the mouth of the dog, which died as a result.
Hitler was expressionless as he viewed the dog's corpse, but afterward he was reported to be completely inconsolable. According to a report commissioned by Joseph Stalin and based on eyewitness accounts, Hitler's dog-handler, Feldwebel Fritz Tornow, took Blondi's pups and shot them in the garden of the bunker complex on 30 April 1945, after Hitler and Eva Braun had committed suicide that same day. He also killed Eva Braun's two dogs, Gerda Christian's dogs, and his own dachshund. Tornow was later captured by the Allies. Erna Flegel, who met Hitler and worked at the emergency casualty station in the Reich Chancellery, stated in 2005 that Blondi's death had affected the people in the bunker more than Eva Braun's suicide.

After the battle in Berlin ended on 2 May 1945, the remains of Hitler, Braun, and two dogs (thought to be Blondi and her offspring Wulf) were discovered in a shell crater by a unit of SMERSH, the Soviet counter-intelligence agency. The dog thought to be Blondi was exhumed and photographed by the Soviets.

See also
List of individual dogs

Notes

References

Sources

External links

1941 animal births 1945 animal deaths Individual dogs in politics German shepherds Adolf Hitler Individual animals in Germany History of animal testing
Blondi
[ "Chemistry" ]
1,088
[ "Animal testing", "History of animal testing" ]
1,560,502
https://en.wikipedia.org/wiki/83%20Leonis
83 Leonis, abbreviated 83 Leo, is a binary star system approximately 59 light-years away in the constellation of Leo (the Lion). The primary star of the system is a cool orange subgiant star, while the secondary star is an orange dwarf star. The two stars are separated by at least 515 astronomical units from each other. Both stars are presumed to be cooler than the Sun. The primary star is also known as HD 99491 and the secondary star as HD 99492. In 2005, an exoplanet was confirmed to be orbiting the secondary star within the system.

Stellar system
The primary component, 83 Leonis A, is a 6th magnitude star. It is not visible to the unaided eye, but easily visible with small binoculars. The star is classified as a subgiant, meaning that it has ceased fusing hydrogen in its core and started to evolve into a red giant. The secondary component, 83 Leonis B, is an 8th magnitude orange dwarf, somewhat less massive (0.88 solar masses), smaller and cooler than the Sun. It is visible only with binoculars or better equipment. Components A and B share common proper motion, which confirms them as a physical pair. The projected separation between the stars is 515 AU, but the true separation may be much greater. There is yet another component, of magnitude 14.4, listed in the Washington Double Star Catalog. However, this star is moving in a different direction and is therefore not a true member of the 83 Leonis system.

Planetary system
The planet 83 Leonis Bb was discovered in January 2005 by the California and Carnegie Planet Search team, who use the radial velocity method to detect planets. The planet's minimum mass is less than half the mass of Saturn. It orbits very close to the star, completing one orbit in about 17 days. In 2010, a second planet, 83 Leonis Bc, was claimed, but was found to be a false positive in 2016. However, in 2023 a different second planet was discovered, also given the designation "c".

See also
16 Cygni 30 Arietis List of stars in Leo

References

External links

Binary stars Leonis, 83 099491 055846 4414 0429 Leo (constellation) K-type main-sequence stars K-type subgiants Planetary systems with two confirmed planets BD+03 2502/3
83 Leonis
[ "Astronomy" ]
485
[ "Leo (constellation)", "Constellations" ]
1,560,725
https://en.wikipedia.org/wiki/Ralph%20Ernest%20Powers
Ralph Ernest Powers (April 27, 1875 – January 31, 1952) was an American amateur mathematician who worked on prime numbers. He is credited with discovering the Mersenne primes M89 = 2^89 − 1 and M107 = 2^107 − 1, in 1911 and 1914 respectively. In 1934 he verified that the Mersenne number is composite.

Life
Powers was born in Fountain, Colorado Territory. Details of his life are little known, though he appears to have been an employee of the Denver and Rio Grande Western Railroad. Soon after Powers announced the discovery of M107, the Frenchman E. Fauquembergue claimed that he had discovered it earlier, but many of Fauquembergue's other claims were later demonstrated to be erroneous; thus, many prefer recognizing Powers as the discoverer, including the well-known Internet resource the PrimePages. After his own discoveries of Mersenne primes in 1911 and 1914, no Mersenne primes were discovered until Raphael M. Robinson used a computer to find the next two, on January 30, 1952, the night before Powers's death.

Works
"The Tenth Perfect Number", American Mathematical Monthly, Vol. 18 (1911), pp. 195–7
"On Mersenne's Numbers", Proceedings of the London Mathematical Society, Vol. 13 (1914), p. xxxix
"A Mersenne Prime", Bulletin of the American Mathematical Society, Vol. 20, No. 10 (1914), p. 531
"Certain composite Mersenne's numbers", Proceedings of the London Mathematical Society, Vol. 15 (1916), p. xxii
(with D. H. Lehmer) "On Factoring Large Numbers", Bulletin of the American Mathematical Society, Vol. 37, No. 10 (1931), pp. 770–76
"Note on a Mersenne Number", Bulletin of the American Mathematical Society, Vol. 40, No. 12 (1934), p. 883

See also
Continued fraction factorization

References

External links
The Prime Pages website Mersenne and Fermat Numbers (Robinson); brief treatment of Powers The Tenth Perfect Number, an article by Powers announcing the primality of M89

1875 births 1952 deaths Number theorists People from Fountain, Colorado
Ralph Ernest Powers
[ "Mathematics" ]
455
[ "Number theorists", "Number theory" ]
1,560,803
https://en.wikipedia.org/wiki/Mobile%20QoS
Quality of service (QoS) mechanisms control the performance, reliability and usability of a telecommunications service. Mobile cellular service providers may offer mobile QoS to customers just as fixed-line PSTN service providers and Internet service providers may offer QoS. QoS mechanisms are always provided for circuit switched services, and are essential for non-elastic services, for example streaming multimedia. QoS is also essential in networks dominated by such services, which is the case in today's mobile communication networks.

Mobility adds complication to the QoS mechanisms, for several reasons:
A phone call or other session may be interrupted after a handover, if the new base station is overloaded.
Unpredictable handovers make it impossible to give an absolute QoS guarantee during a session initiation phase.
The pricing structure is often based on a per-minute or per-megabyte fee rather than a flat rate, and may be different for different content services.

A crucial part of QoS in mobile communications is the grade of service, involving outage probability (the probability that the mobile station is outside the service coverage area, or affected by co-channel interference, i.e. crosstalk), blocking probability (the probability that the required level of QoS cannot be offered) and scheduling starvation. These performance measures are affected by mechanisms such as mobility management, radio resource management, admission control, fair scheduling, channel-dependent scheduling etc.

Factors affecting QoS

Many factors affect the quality of service of a mobile network. QoS should be judged mainly from the customer's point of view, that is, QoS as experienced by the user. There are standard metrics of QoS to the user that can be measured to rate the QoS. These metrics are: coverage, accessibility (which includes GoS), and audio quality. For coverage, the strength of the signal is measured using test equipment, and this can be used to estimate the size of the cell. Accessibility is about determining the ability of the network to handle successful calls from mobile-to-fixed networks and from mobile-to-mobile networks. Audio quality involves monitoring a successful call for a period of time for the clarity of the communication channel. All these indicators are used by the telecommunications industry to rate the quality of service of a network.

Measurement of QoS

The QoS in industry is also measured from the perspective of an expert (e.g. a teletraffic engineer). This involves assessing the network to see if it delivers the quality that the network planner has been required to target. Certain tools and methods (protocol analysers, drive tests and Operation and Maintenance measurements) are used for this QoS measurement:
Protocol analysers are connected to BTSs, BSCs, and MSCs for a period of time to check for problems in the cellular network. When a problem is discovered the staff can record it and it can be analysed.
Drive tests allow the mobile network to be tested by a team of people who take the role of users and take the QoS measures discussed above to rate the QoS of the network. This test does not cover the entire network, so it is always a statistical sample.
In the Operation and Maintenance Centres (OMCs), counters are used in the system for various events, which provide the network operator with information on the state and quality of the network.

Finally, customer complaints are a vital source of feedback on the QoS, and must not be ignored.
Cellular GoS

In general, grade of service (GoS) is measured by looking at traffic carried and traffic offered, and calculating the traffic blocked and lost. The proportion of lost calls is the measure of GoS. For cellular circuit groups an acceptable GoS is 0.02. This means that two users of the circuit group out of a hundred will encounter a call refusal during the busy hour at the end of the planning period. The grade of service standard is thus the acceptable level of traffic that the network can lose. GoS is calculated from the Erlang-B formula, as a function of the number of channels required for the offered traffic intensity.

Cellular audio quality

The audio quality of a cellular network depends on, among other factors, the modulation scheme (e.g., FSK, QPSK) in use, matching to the channel characteristics, and the processing of the received signal at the receiver using DSPs.

References

Teletraffic
Wireless networking
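Since the Cellular GoS section stops at naming the Erlang-B formula, here is a minimal Python sketch of how it is typically applied: the standard recursion for the blocking probability, plus a helper that dimensions a circuit group for the 0.02 GoS standard mentioned above. The 10-erlang traffic figure is an illustrative assumption.

```python
def erlang_b(channels: int, offered_erlangs: float) -> float:
    """Blocking probability B(N, A) via the numerically stable Erlang-B recursion."""
    b = 1.0  # B(0, A) = 1
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def channels_for_gos(offered_erlangs: float, target_gos: float = 0.02) -> int:
    """Smallest channel count whose blocking probability meets the target GoS."""
    n = 1
    while erlang_b(n, offered_erlangs) > target_gos:
        n += 1
    return n

# Example: a cell offered 10 erlangs needs 17 channels to meet the 0.02 standard.
print(channels_for_gos(10.0))
```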
Mobile QoS
[ "Technology", "Engineering" ]
901
[ "Wireless networking", "Computer networks engineering" ]
1,561,053
https://en.wikipedia.org/wiki/Variance%20swap
A variance swap is an over-the-counter financial derivative that allows one to speculate on or hedge risks associated with the magnitude of movement, i.e. volatility, of some underlying product, like an exchange rate, interest rate, or stock index.

One leg of the swap will pay an amount based upon the realized variance of the price changes of the underlying product. Conventionally, these price changes will be daily log returns, based upon the most commonly used closing price. The other leg of the swap will pay a fixed amount, which is the strike, quoted at the deal's inception. Thus the net payoff to the counterparties will be the difference between these two and will be settled in cash at the expiration of the deal, though some cash payments will likely be made along the way by one or the other counterparty to maintain agreed upon margin.

Structure and features

The features of a variance swap include:
the variance strike
the realized variance
the vega notional: Like other swaps, the payoff is determined based on a notional amount that is never exchanged. However, in the case of a variance swap, the notional amount is specified in terms of vega, to convert the payoff into dollar terms.

The payoff of a variance swap is given as follows:

$N_{\mathrm{var}}\left(\sigma^2_{\mathrm{realised}} - K_{\mathrm{var}}\right)$

where:
$N_{\mathrm{var}}$ = variance notional (a.k.a. variance units),
$\sigma^2_{\mathrm{realised}}$ = annualised realised variance, and
$K_{\mathrm{var}}$ = variance strike.

The annualised realised variance is calculated based on a prespecified set of sampling points over the period. It does not always coincide with the classic statistical definition of variance, as the contract terms may not subtract the mean. For example, suppose that there are $n+1$ observed prices $S_{t_0}, S_{t_1}, \ldots, S_{t_n}$ where $0 \le t_{i-1} < t_i \le T$ for $i = 1$ to $n$. Define $R_i = \ln(S_{t_i}/S_{t_{i-1}})$, the natural log returns. Then

$\sigma^2_{\mathrm{realised}} = \frac{A}{n}\sum_{i=1}^{n} R_i^2$

where $A$ is an annualisation factor normally chosen to be approximately the number of sampling points in a year (commonly 252) and $T$ is the swap's contract life, defined by the number $n$ of sampling points. It can be seen that subtracting the mean return will decrease the realised variance. If this is done, it is common to use $n-1$ as the divisor rather than $n$, corresponding to an unbiased estimate of the sample variance.

It is market practice to determine the number of contract units as follows:

$N_{\mathrm{var}} = \frac{N_{\mathrm{vol}}}{2\sqrt{K_{\mathrm{var}}}}$

where $N_{\mathrm{vol}}$ is the corresponding vega notional for a volatility swap. This makes the payoff of a variance swap comparable to that of a volatility swap, another less popular instrument used to trade volatility.

Pricing and valuation

The variance swap may be hedged and hence priced using a portfolio of European call and put options with weights inversely proportional to the square of strike. Any volatility smile model which prices vanilla options can therefore be used to price the variance swap. For example, using the Heston model, a closed-form solution can be derived for the fair variance swap rate. Care must be taken with the behaviour of the smile model in the wings as this can have a disproportionate effect on the price.

We can derive the payoff of a variance swap using Itô's Lemma. We first assume that the underlying stock is described as follows:

$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t$

Applying Itô's formula, we get:

$d(\ln S_t) = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t$

so that

$\frac{dS_t}{S_t} - d(\ln S_t) = \frac{\sigma^2}{2}\,dt$

Taking integrals, the total variance is:

$\frac{1}{T}\int_0^T \sigma^2\,dt = \frac{2}{T}\left(\int_0^T \frac{dS_t}{S_t} - \ln\frac{S_T}{S_0}\right)$

We can see that the total variance consists of a continuously rebalanced hedge holding $1/S_t$ units of the underlying, and a short position in a log contract.
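To make the contract conventions above concrete, here is a minimal Python sketch that computes the realised variance and settlement amount from a series of closing prices. The closes, strike and vega notional are invented illustrative numbers, not market data.

```python
import math

closes = [100.0, 101.2, 100.5, 102.3, 101.8, 103.0]  # hypothetical daily closing prices

A = 252                      # annualisation factor (sampling points per year)
n = len(closes) - 1          # number of log returns
returns = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]

# Contract convention from the text: squared log returns, no mean subtraction, divisor n.
realised_var = (A / n) * sum(r * r for r in returns)

K_vol = 0.20                 # strike quoted in volatility terms (20%)
K_var = K_vol ** 2           # variance strike
N_vol = 100_000.0            # vega notional
N_var = N_vol / (2 * K_vol)  # variance notional, per the market-practice conversion above

payoff = N_var * (realised_var - K_var)  # cash settlement to the long variance party
print(realised_var, payoff)
```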
Using a static replication argument, i.e., that any twice continuously differentiable contract can be replicated using a bond, a future and infinitely many puts and calls, we can show that a short log contract position is equal to being short a futures contract and long a collection of puts and calls:

$-\ln\frac{S_T}{S^*} = -\frac{S_T - S^*}{S^*} + \int_0^{S^*}\frac{(K - S_T)^+}{K^2}\,dK + \int_{S^*}^{\infty}\frac{(S_T - K)^+}{K^2}\,dK$

Taking expectations and setting the value of the variance swap equal to zero, we can rearrange the formula to solve for the fair variance swap strike:

$K_{\mathrm{var}} = \frac{2}{T}\left(rT - \left(\frac{S_0}{S^*}e^{rT} - 1\right) - \ln\frac{S^*}{S_0} + e^{rT}\int_0^{S^*}\frac{P(K)}{K^2}\,dK + e^{rT}\int_{S^*}^{\infty}\frac{C(K)}{K^2}\,dK\right)$

where:
$S_0$ is the initial price of the underlying security,
$S^* > 0$ is an arbitrary cutoff, and
$K$ is the strike of each option in the collection of options used.

Often the cutoff is chosen to be the current forward price $S^* = F_0 = S_0 e^{rT}$, in which case the fair variance swap strike can be written in the simpler form:

$K_{\mathrm{var}} = \frac{2e^{rT}}{T}\left(\int_0^{F_0}\frac{P(K)}{K^2}\,dK + \int_{F_0}^{\infty}\frac{C(K)}{K^2}\,dK\right)$

Analytically pricing variance swaps with discrete-sampling

One might find the discretely sampled realized variance, say $\sigma^2_{\mathrm{realised}}$ as defined earlier, more practical in valuing the variance strike since, in reality, we are only able to observe the underlying price discretely in time. This is even more persuasive since there is an assertion that $\sigma^2_{\mathrm{realised}}$ converges in probability to the continuously sampled variance as the number of price observations increases.

Suppose that in the risk-neutral world with a martingale measure $\mathbb{Q}$, the underlying asset price $S_t$ solves the following SDE:

$\frac{dS_t}{S_t} = r(t)\,dt + \sigma(t)\,dW_t, \qquad 0 \le t \le T$

where:
$T$ imposes the swap contract expiry date,
$r(t)$ is the (time-dependent) risk-free interest rate,
$\sigma(t)$ is the (time-dependent) price volatility, and
$W_t$ is a Brownian motion under the filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{Q})$, where $\mathbb{F} = (\mathcal{F}_t)_{0 \le t \le T}$ is the natural filtration of $W_t$.

Given the payoff $N_{\mathrm{var}}(\sigma^2_{\mathrm{realised}} - K_{\mathrm{var}})$ at expiry as defined above, the value of the variance swap at time $t_0$, denoted by $V_{t_0}$, is

$V_{t_0} = e^{-\int_{t_0}^{T} r(s)\,ds}\,\mathbb{E}^{\mathbb{Q}}\!\left[N_{\mathrm{var}}\left(\sigma^2_{\mathrm{realised}} - K_{\mathrm{var}}\right) \,\middle|\, \mathcal{F}_{t_0}\right]

To avoid an arbitrage opportunity, there should be no cost to enter a swap contract, meaning that $V_{t_0}$ is zero. Thus, the value of the fair variance strike is simply expressed by

$K_{\mathrm{var}} = \mathbb{E}^{\mathbb{Q}}\!\left[\sigma^2_{\mathrm{realised}} \,\middle|\, \mathcal{F}_{t_0}\right]$

which remains to be calculated either by finding its closed-form formula or by utilizing numerical methods, like Monte Carlo methods.

Uses

Many traders find variance swaps interesting or useful for their purity. An alternative way of speculating on volatility is with an option, but if one only has interest in volatility risk, this strategy will require constant delta hedging, so that the direction risk of the underlying security is approximately removed. What is more, a replicating portfolio of a variance swap would require an entire strip of options, which would be very costly to execute. Finally, one might often find the need to be regularly rolling this entire strip of options so that it remains centered on the current price of the underlying security.

The advantage of variance swaps is that they provide pure exposure to the volatility of the underlying price, as opposed to call and put options which may carry directional risk (delta). The profit and loss from a variance swap depends directly on the difference between realized and implied volatility.

Another aspect that some speculators may find interesting is that the quoted strike is determined by the implied volatility smile in the options market, whereas the ultimate payout will be based upon actual realized variance. Historically, implied variance has been above realized variance, a phenomenon known as the variance risk premium, creating an opportunity for volatility arbitrage, in this case known as the rolling short variance trade. For the same reason, these swaps can be used to hedge options on realized variance.
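As a sanity check on the fair-strike identity derived in the discrete-sampling section, the short Monte Carlo sketch below estimates $\mathbb{E}^{\mathbb{Q}}[\sigma^2_{\mathrm{realised}}]$ under constant-parameter Black–Scholes dynamics, where the estimate should come out close to $\sigma^2$. All parameter values are illustrative assumptions.

```python
import math
import random

S0, r, sigma, T = 100.0, 0.01, 0.20, 1.0   # assumed flat-parameter risk-neutral dynamics
n_steps, n_paths, A = 252, 20_000, 252
dt = T / n_steps

def realised_variance() -> float:
    """Simulate one GBM path and return its annualised realised variance."""
    s, acc = S0, 0.0
    for _ in range(n_steps):
        z = random.gauss(0.0, 1.0)
        s_next = s * math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        acc += math.log(s_next / s) ** 2
        s = s_next
    return (A / n_steps) * acc

k_var_estimate = sum(realised_variance() for _ in range(n_paths)) / n_paths
print(k_var_estimate, sigma ** 2)  # the two numbers should agree to within about 1%
```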
Related instruments

Closely related strategies include straddle, volatility swap, correlation swap, gamma swap, conditional variance swap, corridor variance swap, forward-start variance swap, option on realized variance and correlation trading.

References

Derivatives (finance)
Swaps (finance)
Mathematical finance
Financial economics
Banking
Variance swap
[ "Mathematics" ]
1,416
[ "Applied mathematics", "Mathematical finance" ]
1,561,160
https://en.wikipedia.org/wiki/Devarda%27s%20alloy
Devarda's alloy (CAS # 8049-11-4) is an alloy of aluminium (44% – 46%), copper (49% – 51%) and zinc (4% – 6%).

Devarda's alloy is used as a reducing agent in analytical chemistry for the determination of nitrates after their reduction to ammonia under alkaline conditions. It is named for the Italian chemist Arturo Devarda (1859–1944), who synthesised it at the end of the 19th century to develop a new method to analyze nitrate in Chile saltpeter.

It was often used in the quantitative or qualitative analysis of nitrates in agriculture and soil science before the development of ion chromatography, the predominant analysis method largely adopted worldwide today.

General mechanism

When a solution of nitrate ions is mixed with aqueous sodium hydroxide, adding Devarda's alloy and heating the mixture gently liberates ammonia gas. After conversion to ammonia, the total nitrogen is then determined by the Kjeldahl method.

The reduction of nitrate by Devarda's alloy is given by the following equation:

3 NO3− + 8 Al + 5 OH− + 18 H2O → 3 NH3 + 8 [Al(OH)4]−

Distinction between NO3− and NO2− with spot tests

To distinguish between nitrate and nitrite, dilute HCl must be added to the nitrate. The brown ring test can also be used.

Similarity with the Marsh test

Devarda's alloy is a reducing agent that was commonly used in wet analytical chemistry to produce so-called nascent hydrogen under alkaline conditions in situ. In the Marsh test, used for arsenic determination, hydrogen is generated by contacting zinc powder with hydrochloric acid. So, hydrogen can be conveniently produced at low or high pH, according to the volatility of the species to be detected. Acid conditions in the Marsh test promote the fast escape of the arsine gas (AsH3), while in hyperalkaline solution, the degassing of the reduced ammonia (NH3) is greatly facilitated.

Long-debated question of the nascent hydrogen

Since the mid-19th century the existence of true nascent hydrogen has repeatedly been challenged. It was assumed by the supporters of this theory that, before two hydrogen atoms can recombine into a more stable H2 molecule, the labile H· free radicals are more reactive than molecular H2, a relatively weak reductant in the absence of a metal catalyst. Nascent hydrogen was supposed to be responsible for the reduction of arsenate or nitrate to arsine or ammonia respectively. Nowadays, isotopic evidence has closed the nascent hydrogen debate; it is presently considered to be a Gedanken artifact of romanticism.

See also
Kjeldahl method
Nitrate test
Nitrite test
Nascent hydrogen
Marsh test
Detection of:
Arsine
Stibine
Raney nickel, prepared from an Al, Ni alloy also dissolved in concentrated NaOH
Zinc-copper couple

References

External links
Devarda's alloy - Substance Summary (SID 24856330) (PubChem Substance)
Safety data for Devarda's alloy
Sigma Aldrich: Devarda's alloy purum, for preparative purposes, grit
GFS Chemicals: Devarda's alloy, reagent (ACS)
National Pollutant Inventory - Copper and compounds fact sheet

Further reading

Aluminium alloys
Copper alloys
Name reactions
Hydrogen
Reducing agents
Aluminium–copper alloys
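Because the balanced equation above gives a 1:1 nitrogen balance between nitrate reduced and ammonia liberated, converting a Kjeldahl result back to nitrate is simple stoichiometry. A hypothetical worked example in Python follows; the recovered ammonia mass is an assumed figure, not a real measurement.

```python
M_NH3 = 17.031   # molar mass of ammonia, g/mol
M_NO3 = 62.004   # molar mass of the nitrate ion, g/mol

nh3_recovered_g = 0.085              # assumed mass of NH3 captured by distillation
moles_nh3 = nh3_recovered_g / M_NH3
moles_no3 = moles_nh3                # 1:1 nitrogen balance: 3 NO3- yield 3 NH3
print(moles_no3 * M_NO3)             # ~0.309 g of nitrate in the original sample
```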
Devarda's alloy
[ "Chemistry" ]
705
[ "Redox", "Copper alloys", "Aluminium alloys", "Name reactions", "Reducing agents", "Alloys" ]
1,561,268
https://en.wikipedia.org/wiki/Synthetic%20membrane
An artificial membrane, or synthetic membrane, is a synthetically created membrane which is usually intended for separation purposes in the laboratory or in industry. Synthetic membranes have been successfully used for small and large-scale industrial processes since the middle of the twentieth century. A wide variety of synthetic membranes is known. They can be produced from organic materials such as polymers and liquids, as well as inorganic materials. Most commercially utilized synthetic membranes in industry are made of polymeric structures. They can be classified based on their surface chemistry, bulk structure, morphology, and production method. The chemical and physical properties of synthetic membranes and separated particles, as well as the separation driving force, define a particular membrane separation process. The most commonly used driving forces of a membrane process in industry are pressure and concentration gradient. The respective membrane process is therefore known as filtration. Synthetic membranes utilized in a separation process can be of different geometry and flow configurations. They can also be categorized based on their application and separation regime. The best known synthetic membrane separation processes include water purification, reverse osmosis, dehydrogenation of natural gas, removal of cell particles by microfiltration and ultrafiltration, removal of microorganisms from dairy products, and dialysis.

Membrane types and structure

A synthetic membrane can be fabricated from a large number of different materials. It can be made from organic or inorganic materials including solids such as metals, ceramics, homogeneous films, polymers, heterogeneous solids (polymeric mixtures, mixed glasses), and liquids. Ceramic membranes are produced from inorganic materials such as aluminium oxides, silicon carbide, and zirconium oxide. Ceramic membranes are very resistant to the action of aggressive media (acids, strong solvents). They are very stable chemically, thermally, and mechanically, and biologically inert. Even though ceramic membranes have a high weight and substantial production costs, they are ecologically friendly and have a long working life. Ceramic membranes are generally made as monolithic shapes of tubular capillaries.

Liquid membranes

Liquid membranes refer to synthetic membranes made of non-rigid materials. Several types of liquid membranes can be encountered in industry: emulsion liquid membranes, immobilized (supported) liquid membranes, supported molten-salt membranes, and hollow-fiber contained liquid membranes. Liquid membranes have been extensively studied but thus far have limited commercial applications. Maintaining adequate long-term stability is a key problem, due to the tendency of membrane liquids to evaporate, dissolve in the phases in contact with them, or creep out of the membrane support.

Polymeric membranes

Polymeric membranes lead the membrane separation industry market because they are very competitive in performance and economics. Many polymers are available, but the choice of membrane polymer is not a trivial task. A polymer has to have appropriate characteristics for the intended application. The polymer sometimes has to offer a low binding affinity for separated molecules (as in the case of biotechnology applications), and has to withstand harsh cleaning conditions. It has to be compatible with the chosen membrane fabrication technology.
The polymer has to be a suitable membrane former in terms of its chain rigidity, chain interactions, stereoregularity, and the polarity of its functional groups. The polymers can range from amorphous to semicrystalline structures (and can also have different glass transition temperatures), affecting the membrane performance characteristics. The polymer has to be obtainable and reasonably priced to comply with the low-cost criteria of membrane separation processes. Many membrane polymers are grafted, custom-modified, or produced as copolymers to improve their properties. The most common polymers in membrane synthesis are cellulose acetate, nitrocellulose, and cellulose esters (CA, CN, and CE), polysulfone (PS), polyether sulfone (PES), polyacrylonitrile (PAN), polyamide, polyimide, polyethylene and polypropylene (PE and PP), polytetrafluoroethylene (PTFE), polyvinylidene fluoride (PVDF), and polyvinyl chloride (PVC).

Polymer electrolyte membranes

Polymer membranes may be functionalized into ion-exchange membranes by the addition of highly acidic or basic functional groups, e.g. sulfonic acid and quaternary ammonium, enabling the membrane to form water channels and selectively transport cations or anions, respectively. The most important functional materials in this category include proton-exchange membranes and alkaline anion-exchange membranes, which are at the heart of many technologies in water treatment, energy storage, and energy generation. Applications within water treatment include reverse osmosis, electrodialysis, and reversed electrodialysis. Applications within energy storage include rechargeable metal-air electrochemical cells and various types of flow battery. Applications within energy generation include proton-exchange membrane fuel cells (PEMFCs), alkaline anion-exchange membrane fuel cells (AEMFCs), and both the osmotic- and electrodialysis-based osmotic power or blue energy generation.

Ceramic membranes

Ceramic membranes are made from inorganic materials (such as alumina, titania, zirconia oxides, recrystallised silicon carbide or some glassy materials). By contrast with polymeric membranes, they can be used in separations where aggressive media (acids, strong solvents) are present. They also have excellent thermal stability, which makes them usable in high temperature membrane operations.

Surface chemistry

One of the critical characteristics of a synthetic membrane is its chemistry. Synthetic membrane chemistry usually refers to the chemical nature and composition of the surface in contact with a separation process stream. The chemical nature of a membrane's surface can be quite different from its bulk composition. This difference can result from material partitioning at some stage of the membrane's fabrication, or from an intended surface postformation modification. Membrane surface chemistry creates very important properties such as hydrophilicity or hydrophobicity (related to surface free energy), presence of ionic charge, membrane chemical or thermal resistance, binding affinity for particles in a solution, and biocompatibility (in the case of bioseparations). Hydrophilicity and hydrophobicity of membrane surfaces can be expressed in terms of the water (liquid) contact angle θ. Hydrophilic membrane surfaces have a contact angle in the range 0° < θ < 90° (closer to 0°), whereas hydrophobic materials have θ in the range 90° < θ < 180°. The contact angle is determined by solving Young's equation for the interfacial force balance.
At equilibrium, three interfacial tensions, corresponding to the solid/gas (γSG), solid/liquid (γSL), and liquid/gas (γLG) interfaces, are counterbalanced. The consequence of the contact angle's magnitude is known as the wetting phenomenon, which is important in characterizing the capillary (pore) intrusion behavior. The degree of membrane surface wetting is determined by the contact angle. The surface with the smaller contact angle has better wetting properties (θ = 0° corresponds to perfect wetting). In some cases low surface tension liquids such as alcohols or surfactant solutions are used to enhance the wetting of non-wetting membrane surfaces.

The membrane surface free energy (and the related hydrophilicity/hydrophobicity) influences membrane particle adsorption or fouling phenomena. In most membrane separation processes (especially bioseparations), higher surface hydrophilicity corresponds to lower fouling. Synthetic membrane fouling impairs membrane performance. As a consequence, a wide variety of membrane cleaning techniques have been developed. Sometimes fouling is irreversible, and the membrane needs to be replaced. Another feature of membrane surface chemistry is surface charge. The presence of the charge changes the properties of the membrane-liquid interface. The membrane surface may develop an electrokinetic potential and induce the formation of layers of solution particles which tend to neutralize the charge.

Membrane morphology

Synthetic membranes can also be categorized based on their structure (morphology). Three such types of synthetic membranes are commonly used in the separation industry: dense membranes, porous membranes, and asymmetric membranes. Dense and porous membranes are distinct from each other based on the size of the separated molecules. A dense membrane is usually a thin layer of dense material utilized in the separation processes of small molecules (usually in the gas or liquid phase). Dense membranes are widely used in industry for gas separations and reverse osmosis applications.

Dense membranes can be synthesized as amorphous or heterogeneous structures. Polymeric dense membranes such as polytetrafluoroethylene and cellulose esters are usually fabricated by compression molding, solvent casting, and spraying of a polymer solution. The membrane structure of a dense membrane can be in a rubbery or a glassy state at a given temperature, depending on its glass transition temperature Tg.

Porous membranes are intended for the separation of larger molecules such as solid colloidal particles, large biomolecules (proteins, DNA, RNA) and cells from the filtering media. Porous membranes find use in microfiltration, ultrafiltration, and dialysis applications. There is some controversy in defining a "membrane pore". The most commonly used theory assumes a cylindrical pore for simplicity. This model assumes that pores have the shape of parallel, nonintersecting cylindrical capillaries. But in reality a typical pore is a random network of unevenly shaped structures of different sizes. The formation of a pore can be induced by the dissolution of a "better" solvent into a "poorer" solvent in a polymer solution. Other types of pore structure can be produced by stretching of crystalline structure polymers. The structure of a porous membrane is related to the characteristics of the interacting polymer and solvent, the component concentrations, molecular weight, temperature, and storing time in solution.
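For the surface-chemistry discussion above, the force balance has the closed form cos θ = (γSG − γSL)/γLG, so classifying a surface from measured tensions is a one-line computation. A minimal Python sketch with assumed tension values (in mN/m):

```python
import math

gamma_SG = 40.0   # solid/gas interfacial tension, assumed value in mN/m
gamma_SL = 25.0   # solid/liquid interfacial tension, assumed value in mN/m
gamma_LG = 72.8   # liquid/gas surface tension (water near 20 degrees C)

# Young's equation: cos(theta) = (gamma_SG - gamma_SL) / gamma_LG
theta = math.degrees(math.acos((gamma_SG - gamma_SL) / gamma_LG))
print(theta)  # ~78 degrees: below 90, so this surface would count as hydrophilic
```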
The thicker porous membranes sometimes provide support for thin dense membrane layers, forming asymmetric membrane structures. The latter are usually produced by a lamination of dense and porous membranes.

See also
Membrane technology

Notes

References
Pinnau, I., Freeman, B.D., Membrane Formation and Modification, ACS, 1999.
Osada, Y., Nakagawa, T., Membrane Science and Technology, New York: Marcel Dekker, Inc, 1992.
Perry, R.H., Green, D.H., Perry's Chemical Engineers' Handbook, 7th edition, McGraw-Hill, 1997.
Zeman, Leos J., Zydney, Andrew L., Microfiltration and Ultrafiltration: Principles and Applications, New York: Marcel Dekker, Inc, 1996.
Mulder, M., Basic Principles of Membrane Technology, Kluwer Academic Publishers, Netherlands, 1996.
Jornitz, Maik W., Sterile Filtration, Springer, Germany, 2006.
Jacob, J., Pradanos, P., Calvo, J.I., Hernandez, A., Jonsson, G. Fouling kinetics and associated dynamics of structural modifications. J. Coll and Surf. 138 (1997): 173–183.
Van Reis, R., Zydney, A. Bioprocess membrane technology. J Mem Sci. 297 (2007): 16–50.
Madaeni, S.S. The effect of large particles on microfiltration of small particles. J. Por Mat. 8 (2001): 143–148.
Martinez, F., Martin, A., Pradanos, P., Calvo, J.I., Palacio, L., Hernandez, A. Protein adsorption and deposition onto microfiltration membranes: the role of solute-solid interactions. J. Coll Interf Sci. 221 (2000): 254–261.
Palacio, L., Ho, C., Pradanos, P., Calvo, J.I., Kherif, G., Larbot, A., Hernandez, A. Fouling, structure and charges of composite inorganic microfiltration membrane. J. Coll and Surf. 138 (1998): 291–299.
Templin, T., Johnston, D., Singh, V., Tumbleson, M.E., Belyea, R.L., Rausch, K.D. Membrane separation of solids from corn processing streams. Biores Tech. 97 (2006): 1536–1545.
Zydney, A.L., Ho, C. Effect of membrane morphology on system capacity during normal flow microfiltration. Biotechnol, Bioeng. 83 (2003): 537–543.
Ripperger, S., Schulz, G. Microporous membranes in biotechnical applications. Bioprocess Eng. 1 (1986): 43–49.
Ho, C., Zydney, A. Protein fouling of asymmetric and composite microfiltration membranes. Ind Eng Chem Res. 40 (2001): 1412–1421.

Filtration
Membrane technology
Synthetic membrane
[ "Chemistry", "Engineering" ]
2,684
[ "Separation processes", "Chemical equipment", "Membrane technology", "Filtration", "nan" ]
1,561,321
https://en.wikipedia.org/wiki/Acoustic%20cryptanalysis
Acoustic cryptanalysis is a type of side channel attack that exploits sounds emitted by computers or other devices.

Most modern acoustic cryptanalysis focuses on the sounds produced by computer keyboards and internal computer components, but historically it has also been applied to impact printers and electromechanical deciphering machines.

History

Victor Marchetti and John D. Marks eventually negotiated the declassification of CIA acoustic intercepts of the sounds of cleartext printing from encryption machines. Technically this method of attack dates to the time when FFT hardware became cheap enough to perform the task; in this case the late 1960s to mid-1970s. However, using other more primitive means, such acoustical attacks were made as early as the mid-1950s.

In his book Spycatcher, former MI5 operative Peter Wright discusses the use of an acoustic attack against Egyptian Hagelin cipher machines in 1956. The attack was codenamed "ENGULF".

Known attacks

In 2004, Dmitri Asonov and Rakesh Agrawal of the IBM Almaden Research Center announced that computer keyboards and keypads used on telephones and automated teller machines (ATMs) are vulnerable to attacks based on the sounds produced by different keys. Their attack employed a neural network to recognize the key being pressed. By analyzing recorded sounds, they were able to recover the text of data being entered. These techniques allow an attacker using covert listening devices to obtain passwords, passphrases, personal identification numbers (PINs), and other information entered via keyboards. In 2005, a group of UC Berkeley researchers performed a number of practical experiments demonstrating the validity of this kind of threat.

Also in 2004, Adi Shamir and Eran Tromer demonstrated that it may be possible to conduct timing attacks against a CPU performing cryptographic operations by analyzing variations in acoustic emissions. The analyzed emissions were ultrasonic noise emanating from capacitors and inductors on computer motherboards, not electromagnetic emissions or the human-audible humming of a cooling fan. Shamir and Tromer, along with new collaborator Daniel Genkin and others, then went on to successfully implement the attack on a laptop running a version of GnuPG (an RSA implementation), using either a mobile phone located close to the laptop, or a laboratory-grade microphone located up to 4 m away, and published their experimental results in December 2013.

Acoustic emissions occur in coils and capacitors because of small movements when a current surge passes through them. Capacitors in particular change diameter slightly as their many layers experience electrostatic attraction/repulsion or piezoelectric size change. A coil or capacitor which emits acoustic noise will, conversely, also be microphonic, and the high-end audio industry takes steps with coils and capacitors to reduce these microphonics (immissions) because they can muddy a hi-fi amplifier's sound.

In March 2015, it was made public that some inkjet printers using ultrasonic heads can be read back using high frequency MEMS microphones to record the unique acoustic signals from each nozzle, using timing reconstruction against known printed data, that is, "confidential" printed in 12-point font. Thermal printers can also be read using similar methods, but with less fidelity, as the signals from the bursting bubbles are weaker.
The hack also involved implanting a microphone, a chip storage IC and a burst transmitter with a long-life Li+ battery into doctored cartridges substituted for genuine ones sent by post to the target, typically a bank, then retrieved from the garbage using a challenge-response RFID chip. Similar work on reconstructing printouts made by dot-matrix printers was publicized in 2011.

A new acoustic cryptanalysis technique discovered by a research team at Israel's Ben-Gurion University Cybersecurity Research Center allows data to be extracted using a computer's speakers and headphones. Forbes published a report stating that researchers found a way to see information being displayed by using a microphone, with 96.5% accuracy.

In 2016, Genkin, Shamir, and Tromer published another paper that described a key extraction attack that relied on the acoustic emissions from laptop devices during the decryption process. They demonstrated the success of their attack with both a simple mobile phone and a more sensitive microphone.

Countermeasures

This kind of cryptanalysis can be defeated by generating sounds that are in the same spectrum and of the same form as keypresses. If the sounds of actual keypresses are randomly replayed, it may be possible to totally defeat such attacks. It is advisable to use at least 5 different recorded variations (36 × 5 = 180 variations) for each keypress to get around the issue of FFT fingerprinting. Alternatively, white noise of a sufficient volume (which may be simpler to generate for playback) will also mask the acoustic emanations of individual keypresses.

See also
TEMPEST
ACOUSTINT
Acoustic location

References

Cryptanalysis
Cryptographic attacks
Side-channel attacks
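To make the "FFT fingerprinting" idea referred to in the Countermeasures section concrete, here is a minimal Python/NumPy sketch, an illustration of the general approach rather than any published team's pipeline: each keypress recording is summarised by a normalised magnitude spectrum, and an unknown press is matched to the closest stored template. The sample rate, window length and key set are assumptions.

```python
import numpy as np

SAMPLE_RATE = 44_100   # Hz, assumed recording rate
WINDOW = 2_048         # samples kept around each keypress transient

def spectrum(press: np.ndarray) -> np.ndarray:
    """Normalised FFT magnitude 'fingerprint' of one recorded keypress window."""
    frame = press[:WINDOW] * np.hanning(len(press[:WINDOW]))
    mags = np.abs(np.fft.rfft(frame))
    return mags / (np.linalg.norm(mags) + 1e-12)

def classify(unknown: np.ndarray, templates: dict) -> str:
    """Return the key label whose template spectrum is closest (cosine similarity)."""
    target = spectrum(unknown)
    return max(templates, key=lambda label: float(np.dot(templates[label], target)))

# Usage sketch: templates would be built from labelled recordings of each key, e.g.
# templates = {"a": spectrum(rec_a), "b": spectrum(rec_b), ...}
# print(classify(mystery_recording, templates))
```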
Acoustic cryptanalysis
[ "Physics", "Technology" ]
1,004
[ "Cryptographic attacks", "Computer security exploits", "Classical mechanics", "Acoustics" ]
1,561,335
https://en.wikipedia.org/wiki/54%20Piscium
54 Piscium is an orange dwarf star approximately 36 light-years away in the constellation of Pisces. In 2003, an extrasolar planet was confirmed to be orbiting the star, and in 2006, a brown dwarf was also discovered orbiting it.

Stellar components

The Flamsteed designation 54 Piscium originated in the star catalogue of the British astronomer John Flamsteed, first published in 1712. It has an apparent magnitude of 5.86, allowing it to be seen with the unaided eye under suitable viewing conditions. The star has a classification of K0V, with the luminosity class V indicating that this is a main sequence star that is generating energy at its core through the thermonuclear fusion of hydrogen into helium. The effective temperature of the photosphere is about 5,062 K, giving it the characteristic orange hue of a K-type star. It has been calculated that the star may have 76 percent of the Sun's mass and 46 percent of its luminosity. The radius has been directly determined by interferometry, using the CHARA array, to be 94 percent of the Sun's radius. The rotational period of 54 Piscium is about 40.2 days. The age of the star is about 6.4 billion years, based on chromospheric activity and isochronal analysis.

There is some uncertainty in the scientific press concerning the ratio of elements heavier than hydrogen compared to those found in the Sun; what astronomers term the metallicity. Santos et al. (2004) report the logarithm of the abundance ratio of iron to hydrogen, [Fe/H], to be 0.12 dex, whereas Cenarro et al. (2007) published a value of –0.15 dex.

Long term observation of this star's magnetic activity levels suggests that it is entering a Maunder minimum period, which means it may undergo an extended period of low starspot numbers. It has a Sun-like activity cycle that has been decreasing in magnitude. As of 2010, the most recent period of peak activity was 1992–1996, which showed a lower level of activity than the previous peak in 1976–1980.

In 2006, a direct image of 54 Piscium showed that there is a brown dwarf companion to 54 Piscium A. 54 Piscium B is thought to be a "methane brown dwarf" of the spectral type T7.5V. The luminosity of this substellar object suggests that it has a mass 0.051 times that of the Sun (about 50 times the mass of Jupiter) and 0.082 times the Sun's radius. Similar to Gliese 570 D, this brown dwarf is thought to have a comparably low surface temperature. When 54 Piscium B was directly imaged by NASA's Spitzer Space Telescope, it was shown that the brown dwarf has a projected separation of around 476 astronomical units from the primary star. 54 Piscium B was the first brown dwarf to be detected around a star with an already known extrasolar planet (based on radial velocity surveys).

Planetary system

The star rotates at an inclination of 83 degrees relative to Earth. On January 16, 2003, a team of astronomers (led by Geoff Marcy) announced the discovery of an extrasolar planet (named 54 Piscium b) around 54 Piscium. The planet has been estimated to have a mass of only 20 percent that of Jupiter (making the planet around the same size and mass as Saturn). The planet orbits its sun at a distance of 0.28 astronomical units (which would place it within the orbit of Mercury), taking approximately 62 days to complete one orbit. It has been assumed that the planet shares the star's inclination and so has a real mass close to its minimum mass; however, several "hot Jupiters" are known to be oblique relative to the stellar axis. The planet has a high eccentricity of about 0.65.
The highly elliptical orbit suggested that the gravity of an unseen object farther away from the star was pulling the planet outward. That cause was verified with the discovery of the brown dwarf within the system.

The orbit of an Earth-like planet would need to be centered within 0.68 AU (around the orbital distance of Venus), which in a Keplerian system means a 240-day orbital period. In a later simulation including the brown dwarf, 54 Piscium b's orbit "sweeps clean" most test particles within 0.5 AU, leaving only asteroids "in low-eccentricity orbits near the known planet's apastron distance, near the 1:2 mean-motion resonance". Also, observation has ruled out Neptune-class or heavier planets with a period of one year or less, which still allows for Earth-sized planets at 0.6 AU or more. A two-planet fit to the radial velocities, with two circular planets in a 2:1 orbital resonance, is possible; however, it does not significantly improve the solution, and therefore does not justify the additional complexity.

See also
107 Piscium
109 Piscium
Delta Trianguli
Upsilon Andromedae

References

External links

K-type main-sequence stars
Maunder Minimum
Suspected variables
T-type brown dwarfs
Planetary systems with one confirmed planet
Pisces (constellation)
Durchmusterung objects
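The 240-day figure quoted above is easy to verify with Kepler's third law in solar units, using only numbers given in the article; a quick Python check:

```python
# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun].
stellar_mass = 0.76   # 54 Piscium, in solar masses (from the article)
a = 0.68              # AU, the Earth-like orbit discussed above

period_days = (a ** 3 / stellar_mass) ** 0.5 * 365.25
print(period_days)    # ~235 days, consistent with the ~240-day figure in the text
```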
54 Piscium
[ "Astronomy" ]
1,116
[ "Maunder Minimum", "Magnetism in astronomy", "Pisces (constellation)", "Constellations" ]
1,561,380
https://en.wikipedia.org/wiki/Nasal%20cannula
The nasal cannula (NC) is a device used to deliver supplemental oxygen or increased airflow to a patient or person in need of respiratory help. This device consists of a lightweight tube which on one end splits into two prongs which are placed in the nostrils, curving toward the sinuses behind the nose, and from which a mixture of air and oxygen flows. The other end of the tube is connected to an oxygen supply such as a portable oxygen generator, or a wall connection in a hospital via a flowmeter. The cannula is generally attached to the patient by way of the tube hooking around the patient's ears or by an elastic headband, and the prongs curve toward the paranasal sinuses. The earliest and most widely used form of adult nasal cannula carries 1–3 litres of oxygen per minute.

Cannulae with smaller prongs intended for infant or neonatal use can carry less than one litre per minute. Flow rates of up to 60 litres of air/oxygen per minute can be delivered through wider-bore humidified nasal cannulae.

The nasal cannula was invented by Wilfred Jones and patented in 1949 by his employer, BOC.

Applications

Supplemental oxygen

A nasal cannula is generally used wherever small amounts of supplemental oxygen are required, without rigid control of respiration, such as in oxygen therapy. Most cannulae can only provide oxygen at low flow rates, up to 5 litres per minute (L/min), delivering an oxygen concentration of 28–44%. Rates above 5 L/min can result in discomfort to the patient, drying of the nasal passages, and possibly nose bleeds (epistaxis). Also, with flow rates above 6 L/min the laminar flow becomes turbulent, and the oxygen therapy being delivered is only as effective as delivering 5–6 L/min.

The nasal cannula is often used in elderly patients or patients who can benefit from oxygen therapy but do not require it in order to breathe independently. These patients do not need oxygen to the degree of wearing a non-rebreather mask. It is especially useful in those patients where vasoconstriction could negatively impact their condition, such as those suffering from strokes.

A nasal cannula may also be used by pilots and passengers in small, unpressurized aircraft that do not exceed certain altitudes. The cannula provides extra oxygen to compensate for the lower oxygen content available for breathing at the low ambient air pressures of high altitude, preventing hypoxia. Special aviation cannula systems are manufactured for this purpose.

Since the early 2000s, with the introduction of nasal cannulae which use heated humidification of the respiratory gas, flows above 6 L/min have become possible without the associated discomfort, and with the added benefit of improving mucociliary clearance.

Nasal high-flow therapy

High flows of an air/oxygen blend can be administered via a nasal cannula to accurately deliver high-volume oxygen therapy. Respiratory gas humidification allows the high flows to be delivered comfortably via the cannula. Nasal high-flow therapy can be used as an effective alternative to face mask oxygen and allows the patient to continue to talk, eat and drink while receiving the therapy.

Definition: non-invasive delivery of an oxygen/air mixture via a nasal cannula at flows that exceed the patient's inspiratory flow demands, with gas that has been optimally conditioned by warming and humidifying it to close to 100% relative humidity at body temperature.
Reservoir cannula

A reservoir cannula is an oxygen-conserving supplemental oxygen administration device which accumulates constant-flow oxygen in a small reservoir below the nose during exhalation and delivers it as a bolus at the beginning of the next inhalation. This ensures that most of the oxygen reaches the parts of the lung in which gas exchange occurs, and little is wasted in dead space.

See also

References

Medical equipment
1949 introductions
Breathing apparatus
Nasal cannula
[ "Biology" ]
811
[ "Medical equipment", "Medical technology" ]
1,561,447
https://en.wikipedia.org/wiki/Game%20client
A game client is a network client that connects an individual user to the main game server, used mainly in multiplayer video games. It collects data such as score, player status, position and movement from a single player and sends it to the game server, which allows the server to collect each individual's data and show every player in the game, whether it is an arena game on a smaller scale or a massive game with thousands of players on the same map. Even though the game server displays each player's information to every player in a game, players still have their own unique perspective built from the information collected by the game client, so that every player's perspective of the game is different, even though the world is the same for every player.

The game client also allows information sharing among users. An example would be item exchange in many MMORPG games, where a player exchanges an item they don't want for an item they want: the game clients interconnect with each other and allow the sharing of information, in this case the exchanged items. Since many games require a centralized space for players to gather and a way for users to exchange their information, many game clients are a hybrid of client-server and peer-to-peer application structures.

History

The World Wide Web was born on a NeXTcube with a 25 MHz CPU, 2 GB of disk, and a grayscale monitor running the NeXTSTEP OS. Sir Tim Berners-Lee put the first web page online on August 6, 1991, while working for CERN in Geneva, Switzerland.

Online gaming started in the early seventies. At that time, dial-up bulletin boards provided players with a way of playing games over telephone networks. In the 1990s, new technologies enabled gaming sites to pop up all over the internet. The client-server system provided online gaming with a way to function on a large scale.

Functions

A game client has four primary functions: receiving input, analyzing data, giving feedback, and adjusting the system.

Receives input

A game client receives input from an individual user. In an FPS game, for example, a player performs many different actions such as move, shoot and communicate. Each of them requires the player to operate the input devices. After receiving those inputs, the game client sends them to the server.

Analyzes data

The game client decodes and displays the information that makes up the game world, including objects stored on the computer and the results of actions made by players, and then translates this information onto the user interface and the output devices.

Gives feedback

The server processes the information and sends it back to the client. The client displays the processed information to the player according to the player's point of view, so that each player has a different view of the screen, produced by their own private client.

Adjusts system

The client also detects any changes made by the player during the gaming session, including layouts and settings. Since a game is real-time and players are constantly sending actions, the client is constantly processing information and adjusting the system accordingly.

Example application

Here is an example of how the game client works, using the game League of Legends. In this example, a player named 7Turtle7 is using the character Kha'Zix to attack a neutral character known as the "Red Brambleback". Multiple things are happening from the client's perspective.

1. The client pulls data stored in the computer archives.
That includes the player's statistics, map objects, mobs, artworks, character behaviors and other static data to create the player's surroundings.

2. 7Turtle7 attempts to attack the Red Brambleback. The client sends data on statistics like position, health, mana, damage, defense and many other values for both 7Turtle7 and the Red Brambleback to the server, allowing the server to calculate the new world state after 7Turtle7 strikes the Red Brambleback.

3. The server processes the data and sends it back to all the other players' clients, informing them of what 7Turtle7 just did and how each client should give feedback on it. After 7Turtle7's client receives this information, it creates the output and sends it back to 7Turtle7. In this example, we can see that a red number indicating damage done to the Red Brambleback appeared, and a 3 appeared on the abilities panel indicating the cool-down time of the ability 7Turtle7 just used. Character behaviors, in this case the Red Brambleback's, also change due to the attack: it now becomes a hostile creature that will attack 7Turtle7, according to in-game programming, since 7Turtle7 attacked the Red Brambleback first. Artwork outputs such as the attack animation, health bar and mana bar also change.

4. The other game clients are also aware of the attack made by 7Turtle7, but depending on their perspective, their clients determine whether this information is displayed to them or not. A player's client on the opposing team, for example, is aware of the attack, but it would not display the changes to that player, since the game requires a player to discover 7Turtle7 making such a move before it is displayed on their output.

5. Even though the client sends and receives data from an individual's perspective, there is data that is shared with everyone in a game, or not shared at all. Take the top-right corner of 7Turtle7's perspective, for example: there is a time indicator, and that time is the same for everyone inside the game. There are also the FPS and ping indicators, which are exclusive to 7Turtle7 and not shared through the client.

Usage

Technology adoption

To many game developers, adopting technology is the key to their engineering. Standardized platforms such as HTML5 and JavaScript can allow media integrations and deeper developments. A game client provides the ability to do so.

User experience

Balancing the game is a big issue for the developers. A large number of users connected to the server through their clients could cause high resource usage, but at the same time the users need to stay connected to the game. Game clients provide this type of information to a centralized server.

Employee cooperation

As a game develops, new features will be added. Instead of a small, cohesive team that doesn't require much cooperation at the start of a game, a developed game usually has several departments working together to figure out a solution, and that requires all departments to work in harmony.

Updates

Sometimes the game development team creates new content or fixes previous bugs, which means it needs to synchronize every player's client with the server. One way a game developer can fix bugs or add new content to a game is through patches. The digital distribution platform will alert the user that an update is available, and the client applies those update patches automatically to ensure every user has the same view of the game content when changes have been made.
Some examples of digital distribution platforms include Steam, Origin and Battle.net, all of which provide similar services when it comes to game clients.

See also
Game server
Client (computing)
Netcode

References

Clients (computing)
Video game development
Multiplayer video games
Video game platforms
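The receive-input / send / apply-feedback loop described in the Functions section can be sketched in a few lines. The following Python sketch assumes an invented JSON-over-UDP protocol with a hypothetical local server address; it illustrates the shape of a client tick loop, not any real game's networking API.

```python
import json
import socket

SERVER = ("127.0.0.1", 9999)   # hypothetical game server address

def gather_input() -> dict:
    """Stand-in for reading the player's real input devices (receives input)."""
    return {"move": [1, 0], "shoot": False, "chat": None}

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.05)      # stay real-time: never block long on the network
    world_state: dict = {}
    for tick in range(600):    # e.g. ten seconds at a 60 Hz tick rate
        # Send this player's input to the server.
        msg = {"tick": tick, "input": gather_input()}
        sock.sendto(json.dumps(msg).encode(), SERVER)
        try:
            # Apply the server-authoritative state for this player (gives feedback).
            data, _ = sock.recvfrom(65_536)
            world_state = json.loads(data)
        except socket.timeout:
            pass               # no update this tick; keep the last known state
        # Rendering from world_state, from this player's perspective, would go here.

if __name__ == "__main__":
    main()
```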
Game client
[ "Technology" ]
1,479
[ "Computing platforms", "Video game platforms" ]
1,561,499
https://en.wikipedia.org/wiki/Statewatch
Statewatch is a UK-based charity founded in 1991 that produces and promotes critical research, policy analysis and investigative journalism to inform debates, movements and campaigns on civil liberties, human rights and democratic standards. Its work primarily focuses on Europe, and in particular the institutions and agencies of the European Union, but it also engages with issues at the national level in the UK and member states and with organisations elsewhere in the world.

Mission and objectives

According to their strategic plan, Statewatch's vision is: “An open Europe of democracy, civil liberties, personal and political rights, free movement, freedom of information, equality and diversity.”

As of 2022, its mission is: “To monitor, analyse and expose state activity that threatens civil liberties, human rights and democratic standards in order to inform and enable a culture of diversity, debate and dissent.”

To achieve this, Statewatch produces news, analyses and in-depth publications covering a range of topics related to state activity across Europe and the UK. These include topics such as policing; surveillance and security technologies; counter-terrorism; asylum and immigration; criminal law; racism and discrimination; and secrecy, transparency, and freedom of information. It is well known for publishing official documents from EU institutions, in particular the Council of the EU, and publishes more than a hundred such documents each year. Statewatch has filed several successful complaints with the European Ombudsman on issues concerning secrecy, transparency and openness in EU institutions and agencies. The organisation regularly publishes new material on its website and produces a bi-weekly email newsletter.

History

Statewatch was officially founded in 1991 as the operating arm of the Libertarian Research & Education Trust (charity number 1154784), which was initially set up in 1982. This built on the work of “State Research” (1977–1982), which produced a bi-monthly bulletin and carried out research on issues concerning state power and civil liberties in the UK.

1990s

Statewatch began operating in 1991, following an initiative by the founder and subsequent director, Tony Bunyan, and a group of other individuals from across Europe who perceived a need to produce research, reporting and analysis on civil liberties issues in the context of the new EU laws, policies and institutions that would be introduced by the Treaty of Maastricht.

The original output of this initiative was the Statewatch Bulletin, which was initially published in print six times per year, with articles written by Statewatch staff and members of the organisation's network of contributors, based in countries across Europe. Statewatch also hosted an online database through which users could search the organisation's Library & Archive, including official EU documents. The technical limitations of the early web meant that to view material, users had to visit the organisation's office or request photocopies in the post.

The online database hosted by Statewatch was part of the organisation's work to create more transparency and openness around the powers and activities of EU institutions developing justice and home affairs laws and policies. The organisation filed hundreds of requests for access to documents, in particular to the Council of the EU, and was also able to obtain substantial numbers of documents through more informal means.
By 1998, Statewatch had submitted eight complaints to the European Ombudsman against the Council concerning public access to documents. As a result, the right of the Ombudsman to investigate secrecy complaints was written into the Amsterdam Treaty, together with a commitment to “enshrine” the public's right of access to information in an EC Regulation. The organisation subsequently played a key role in a coalition of groups that fought to ensure that the Regulation provided the greatest degree of openness possible. Many of the documents obtained during that period are now available online in the Justice and Home Affairs Archive.

In 1998, Statewatch received an award from the Campaign for Freedom of Information for its work on fighting for EU openness and access to documents. In 2001, the European Information Association gave Statewatch the Chadwyck-Healey Award for achievement in European Information for its work on openness and the new code of access to EU documents.

Since 1999, Statewatch has published Statewatch News, an online news service that is a source for documents leaked from within EU institutions, for other original reporting, and for the circulation of material from related groups and campaigns. The documents published by the organisation, as well as its research and reporting, are regularly reported on by mainstream media outlets and used by civil society organisations for their own research, campaigning and advocacy.

Early 2000s

Statewatch Journal and Statewatch News covered a range of notable topics through the early 2000s. This included key issues such as the Genoa G8 protests in 2001, security and policing in Northern Ireland, UK stop-and-search statistics, detention centres and abuses against migrants and refugees, and the policing of protests, in particular those organised by the anti-globalisation movement. The organisation's 10th anniversary conference in 2001 brought together hundreds of people from across Europe to discuss and debate topics such as surveillance, the role of civil society organisations in monitoring the state, racism in Europe, and freedom of information.

During this time, Statewatch also reported on the effects of the “War on Terror” on civil liberties, human rights and democratic standards. The organisation published news and reports on the “EU Scoreboard”, George W. Bush's letter to the EU, and new measures on data retention and the surveillance of air travel and profiling of passengers, amongst others. With the American Civil Liberties Union and Privacy International, they launched the Policy Laundering project, analysing how governments were writing counter-terrorism measures into law by passing them through international organisations, rather than national parliaments. They also kept several observatories, including one on the Passenger Name Record Directive, and produced a number of in-depth publications, including Countering Civil Rights, The War on Freedom and Democracy, and Journalism, Civil Liberties and the "War on Terrorism" (with the International Federation of Journalists).

Statewatch also contributed to research on the technological solutionism of governments that gained momentum during the War on Terror. Measures introduced by the EU and European national governments frequently relied on the promise of new technologies to detect or prevent terrorism and crime. Statewatch primarily focused on the EU security research programme, which funds the development of new security and surveillance technologies.
In collaboration with the Transnational Institute, the organisation published the reports Arming Big Brother: the EU's Security Research Programme (in 2006) and NeoConOpticon: The EU Security-Industrial Complex (2009), which documented and analysed the ways in which the EU was using public funding to support the development of controversial and intrusive new security technologies, in many cases by large military and defence corporations. In 2009, Statewatch also published The Shape of Things to Come, which warned that the EU had embarked on several highly controversial paths, including harnessing digitisation to gather personal details on the everyday lives of everyone living in the European Union.

Statewatch was one of few organisations focusing on EU policy with regard to civil liberties and human rights at this time. Through this work, the organisation became recognised as a crucial information source at a time when the internet was not fully embedded in everyday life. Amongst the subscribers to the Bulletin/Journal were governmental institutions, social centres, activist groups, universities, and thousands of individuals; the Statewatch website received (and continues to receive) hundreds of thousands of hits every year.

2010s

Statewatch continued work along similar themes into the 2010s. It continued producing the quarterly editions of the Bulletin/Journal and articles published via Statewatch News, and gave talks and presentations at events and conferences in countries across Europe. A conference held in 2011 for the 20th anniversary of the organisation once again brought together hundreds of people from across Europe for workshops and panel discussions on border control, immigration and asylum; state surveillance; the policing of protest; and racism and Islamophobia, amongst other topics.

Statewatch published two in-depth reports on drones during this period: Back from the battlefield: domestic drones in the UK, and Eurodrones, Inc. The reports, published at a time when states were seeking ways to make it possible to fly drones in civil airspace, argued that the technology would enhance the powers of agencies such as the police, yet was being treated as a technical matter that did not merit democratic or public debate.

At the same time, the growing use of the web to access information led to a decline in the number of subscribers to the Statewatch Bulletin/Journal. The final edition was published in 2014, with articles intended for an edition that was never to make it to print published as an online collection. Statewatch News continued publication, providing access to a wide array of articles, press releases, sources, and hundreds of leaked EU documents every year. Prominent amongst that output were articles exposing the European Commission providing funding to set up surveillance systems prior to legislation being passed; joint EU police operations targeting irregular migrants; the provision of hundreds of millions of euros for the development of drone technology; and EU funding for remote car-stopping technology, amongst other things. These articles received substantial coverage in the mainstream press and were also used by a wide variety of other groups for their work: for example, activists campaigning against racial profiling by the police, or MEPs seeking to stop EU legislation on the mandatory police surveillance of air travel.
As a partner in the project Securing Europe through Counter-terrorism: Impact, Legitimacy and Effectiveness (SECILE), Statewatch led the workstream on researching EU counter-terrorism legislation and conducted a 'stocktake' of EU counter-terrorism measures enacted since 11 September 2001, as well as collecting and analysing data about their implementation and assessment. This provided an empirical basis for other aspects of the project. Statewatch's research found that, across legislative and non-legislative instruments, the EU had adopted at least 239 separate counter-terrorism measures since 9/11. 88 of those (36%) were legally binding, yet just three public consultations had been held, and only 22 impact assessments were conducted by the European Commission. In 2017, the report Market forces: the development of the EU security-industrial complex provided an update on the themes that were first examined in the reports Arming Big Brother: the EU's Security Research Programme and NeoConOpticon: The EU Security-Industrial Complex. The report highlighted the ongoing provision of millions of euros in public funding to major weapons and IT corporations, many of which also played a role in determining the priorities of the research programme. At the same time, Statewatch was engaged in a major effort to draw public and political attention to the EU's “interoperability” agenda, through which a number of large policing and migration databases would be interconnected, and a “Common Identity Repository” to store data on up to 300 million foreign nationals in the EU would be constructed. This led to cooperation with the Platform for International Cooperation on Undocumented Migrants (PICUM), through the publication of Data Protection, Immigration Enforcement and Fundamental Rights: What the EU's Regulations on Interoperability Mean for People with Irregular Status. This report analysed the potential effects of the interoperability architecture for people living in the EU without official documents. The following year, Statewatch published Automated suspicion: The EU's new travel surveillance initiatives, a report that analysed how the EU's “interoperable” databases would introduce the algorithmic profiling of all travellers. In 2019, the organisation was awarded the Hostwriter Story Prize as part of a consortium of journalists working on the project Invisible Borders, which investigated the introduction of biometric identity controls by European and African governments. Awards The organization and its former Director/Director Emeritus, Tony Bunyan, have received several awards for their civil rights activism.
These include: 1998: The Campaign for Freedom of Information gave Statewatch an award for its work on fighting for EU openness (access to documents) 1999: Privacy International gave Statewatch an award for its work in exposing EU-FBI telecommunications surveillance plans 2001: The European Information Association gave Statewatch the "Chadwyck-Healey Award for achievement in European Information" for its work on openness and the new code of access to EU documents 2001 and 2004: the European Voice newspaper selected Tony Bunyan, Statewatch Director, as one of the 50 most influential people in the EU for Statewatch's work on access to documents in the EU (2001) and civil liberties and the “war on terror” (2004) 2011: Liberty awarded Statewatch the human rights Long Walk Award: "For dedication to openness, democracy and informed debate about European institutions, keeping us reliably informed and suitably engaged for the last 20 years” 2019: The project Invisible Borders, undertaken with a team of journalists from across Europe, won first place in the Hostwriter Story Prize competition Archives and Databases The organization has an extensive Library & Archive and three free databases: a database of all its news, articles and links since 1991; the Statewatch European Monitoring and Documentation Centre (SEMDOC), which monitors justice and home affairs measures from 1993 onwards; and the EU Justice and Home Affairs (JHA) Archive. Statewatch Library & Archive Statewatch maintains a Library & Archive in its office in London, which is open for visits by the public. The archive contains material primarily produced in the UK between the 1960s and the 1990s, with some material dating back even further. The collection includes around 800 books; over 2,500 items of ‘grey literature’ (pamphlets, zines, reports and more) on political and social struggles and movements; over 1,000 EU documents that are not currently hosted in the online Justice and Home Affairs Archive; the ABC Case Archive; complete and partial runs of more than 60 magazines and journals; and more than 350 political badges. Topics covered by the material include police powers and public order; anti-racism and anti-fascism; criminal law; surveillance; prisons and detention sites; immigration, asylum and borders; and the powers and activities of security intelligence agencies. In addition to the physical archive, Statewatch hosts multiple online databases, including the database of all its news and research since 1991, the Statewatch European Monitoring and Documentation Centre (SEMDOC), and the EU Justice and Home Affairs (JHA) Archive. Statewatch European Monitoring and Documentation Centre (SEMDOC) The SEMDOC archive covered every measure, proposed and adopted, in the field of EU justice and home affairs policy from 1993 to 2019. It contains a legislative observatory of past, current and future JHA measures, although it is no longer updated. The EU Justice and Home Affairs (JHA) Archive The JHA Archive contains over 9,000 bibliographic records and full-text documents on EU Justice and Home Affairs policy from 1976 to 2000. The earliest records begin at the time the Trevi Group (an ad hoc intergovernmental cooperation on Terrorism, Radicalism and Violence) was created. This archive is used as a resource to demonstrate the historical development of EU JHA policy. Many documents from that period remain classified or have not been published in the Council of the EU's online register, and the European Commission's incomplete public register begins in 2002.
Statewatch Database The Statewatch Database contains over 35,000 items. It includes everything Statewatch has published since 1991, including Statewatch News, the Statewatch Bulletin/Journal and the State Research archive, alongside official reports and documentation, analyses, links and more. References External links Civil liberties advocacy groups Information privacy International organisations based in London Non-profit organisations based in London Organisations based in the City of London Organizations established in 1991 Organisations related to the European Union Watchdog journalism 1991 establishments in Europe
Statewatch
[ "Engineering" ]
3,183
[ "Cybersecurity engineering", "Information privacy" ]
14,551,963
https://en.wikipedia.org/wiki/GPRC5B
G-protein coupled receptor family C group 5 member B is a protein that in humans is encoded by the GPRC5B gene. Function The protein encoded by this gene is a member of the type 3 G protein-coupled receptor family. Members of this superfamily are characterized by a signature 7-transmembrane domain motif. The specific function of this protein is unknown; however, this protein may mediate the cellular effects of retinoic acid on the G protein signal transduction cascade. See also Retinoic acid-inducible orphan G protein-coupled receptor References Further reading External links G protein-coupled receptors
GPRC5B
[ "Chemistry" ]
126
[ "G protein-coupled receptors", "Signal transduction" ]
14,551,965
https://en.wikipedia.org/wiki/Secretin%20receptor%20family
Secretin receptor family (class B GPCR subfamily) consists of secretin receptors regulated by peptide hormones from the glucagon hormone family. The family is different from adhesion G protein-coupled receptors. The secretin-receptor family of GPCRs include vasoactive intestinal peptide receptors and receptors for secretin, calcitonin and parathyroid hormone/parathyroid hormone-related peptides. These receptors activate adenylyl cyclase and the phosphatidyl-inositol-calcium pathway. The receptors in this family have seven transmembrane helices, like rhodopsin-like GPCRs. However, there is no significant sequence identity between these two GPCR families and the secretin-receptor family has its own characteristic 7TM signature. The secretin-receptor family GPCRs exist in many animal species. Data mining with the Pfam signature has identified members in fungi, although due to their presumed non-hormonal function they are more commonly referred to as Adhesion G protein-coupled receptors, making the Adhesion subfamily the more basal group. Three distinct sub-families (B1-B3) are recognized. Subfamily B1 Subfamily B1 contains classical hormone receptors, such as receptors for secretin and glucagon, that are all involved in cAMP-mediated signalling pathways. Pituitary adenylate cyclase-activating polypeptide type 1 receptor PACAPR (ADCYAP1R1) Calcitonin receptor CALCR Calcitonin receptor-like receptor CALCRL Corticotropin-releasing hormone receptor CRHR1; CRHR2 Glucose-dependent insulinotropic polypeptide receptor/Gastric inhibitory polypeptide receptor GIPR Glucagon receptor GCGR Glucagon receptor-related GLP1R; GLP2R; Growth hormone releasing hormone receptor GHRHR Parathyroid hormone receptor PTHR1; PTHR2 Secretin receptor SCTR Vasoactive intestinal peptide receptor VIPR1; VIPR2 Subfamily B2 Subfamily B2 contains receptors with long extracellular N-termini, such as the leukocyte cell-surface antigen CD97; calcium-independent receptors for latrotoxin and brain-specific angiogenesis inhibitor receptors amongst others. They are otherwise known as Adhesion G protein-coupled receptors. Brain-specific angiogenesis inhibitor BAI1; BAI2; BAI3 CD97 antigen CD97 EMR hormone receptor CELSR1; CELSR2; CELSR3; EMR1; EMR2; EMR3; EMR4 GPR56 orphan receptor GPR56; GPR64; GPR97; GPR110; GPR111; GPR112; GPR113; GPR114; GPR115; GPR123; GPR125; GPR126; GPR128; GPR133; GPR144; GPR157 Latrophilin receptor ELTD1; LPHN1; LPHN2; LPHN3 Ig-hepta receptor GPR116 Subfamily B3 Subfamily B3 includes Methuselah and other Drosophila proteins. Other than the typical seven-transmembrane region, characteristic structural features include an amino-terminal extracellular domain involved in ligand binding, and an intracellular loop (IC3) required for specific G-protein coupling. Diuretic hormone receptor Unclassified members HCTR-5; HCTR-6; KPG 006; KPG 008 References Protein domains Protein families G protein-coupled receptors
Secretin receptor family
[ "Chemistry", "Biology" ]
767
[ "Protein classification", "Signal transduction", "G protein-coupled receptors", "Protein domains", "Protein families" ]
14,551,977
https://en.wikipedia.org/wiki/GPRC5C
G-protein coupled receptor family C group 5 member C is a protein that in humans is encoded by the GPRC5C gene. Function The protein encoded by this gene is a member of the type 3 G protein-coupled receptor family. Members of this superfamily are characterized by a signature 7-transmembrane domain motif. The specific function of this protein is unknown; however, this protein may mediate the cellular effects of retinoic acid on the G protein signal transduction cascade. Two transcript variants encoding different isoforms have been found for this gene. See also Retinoic acid-inducible orphan G protein-coupled receptor References Further reading G protein-coupled receptors
GPRC5C
[ "Chemistry" ]
138
[ "G protein-coupled receptors", "Signal transduction" ]
14,551,989
https://en.wikipedia.org/wiki/GPRC5D
G-protein coupled receptor family C group 5 member D is a protein that in humans is encoded by the GPRC5D gene. GPRC5D is a class C orphan G protein-coupled receptor predominantly expressed in multiple myeloma cells and hard keratinized tissues, with low expression in normal human tissues, rendering it an appealing therapeutic target in multiple myeloma. Structure Structural analysis of the complex between GPRC5D and talquetamab, a bispecific antibody for the treatment of multiple myeloma, has revealed that GPRC5D exists as a dimer. GPRC5D forms a symmetric dimer via TM4 and TM4/TM5 interactions. The study further demonstrated that only one talquetamab molecule can bind to the dimeric form of GPRC5D. The talquetamab Fab recognizes the extracellular loops (ECLs) and TM3/5/7 of one GPRC5D protomer via six CDRs. Function The protein encoded by this gene is a member of the G protein-coupled receptor family; however, the specific function of this gene has not yet been determined. See also Retinoic acid-inducible orphan G protein-coupled receptor References Further reading G protein-coupled receptors
GPRC5D
[ "Chemistry" ]
259
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,040
https://en.wikipedia.org/wiki/GPRC5A
Retinoic acid-induced protein 3 is a protein that in humans is encoded by the GPRC5A gene. This gene and its encoded mRNA were first identified as a phorbol ester-induced gene, and named Phorbol Ester Induced Gene 1 (PEIG-1); two years later it was rediscovered as a retinoic acid-inducible gene, and named Retinoic Acid-Inducible Gene 1 (RAIG1). Its encoded protein was later named Retinoic acid-induced protein 3. Function This gene encodes a member of the type 3 G protein-coupled receptor family, characterized by the signature 7-transmembrane domain motif. The encoded protein may be involved in interaction between retinoic acid and G protein signalling pathways. Retinoic acid plays a critical role in development, cellular growth, and differentiation. This gene may play a role in embryonic development and epithelial cell differentiation. Tryptamine and other indole-related chemicals produced by gut microflora bind and activate the receptor. Post-transcriptional regulation GPRC5A is one of only a handful of genes known in the literature that are post-transcriptionally controlled by miRNAs through their 5'UTR. Clinical significance GPRC5A is dysregulated in many human cancers and in other diseases. See also Retinoic acid-inducible orphan G protein-coupled receptor References Further reading External links G protein-coupled receptors
GPRC5A
[ "Chemistry" ]
300
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,225
https://en.wikipedia.org/wiki/Brookfield%20Engineering
Brookfield Engineering is an engineering and manufacturing company with headquarters in Middleboro, Massachusetts. It is a subsidiary of the conglomerate Ametek. Its product line includes laboratory viscometers, rheometers, texture analyzers, and powder flow testers as well as in-line process instrumentation. These instruments are used by research, design, and process control departments. Company history The company was established in 1934 by Don Brookfield Sr., who graduated from MIT with a degree in electrochemical engineering. Brookfield Engineering was a family-run business until 1986, when it became an ESOP company. It has been ISO certified since the 1990s. Brookfield Engineering has dealers in 60 countries and regional offices in the US, UK, Germany, India and China. All manufacturing is located in the US at company headquarters. Principle of operation Classical Brookfield viscometers employ the principle of rotational viscometry—the torque required to turn an object, such as a spindle, in a fluid indicates the viscosity of the fluid. Torque is applied through a calibrated spring to a disk or bob spindle immersed in test fluid and the spring deflection measures the viscous drag of the fluid against the spindle. The amount of viscous drag is proportional to the amount of torque required to rotate the spindle, and thus to the viscosity of a Newtonian fluid. In the case of non-Newtonian fluids, Brookfield viscosities measured under the same conditions (model, spindle, speed, temperature, time of test, container, and any other sample preparation procedures that may affect the behavior of the fluid) can be compared. When developing a new test method, trial and error is often necessary in order to determine the proper spindle and speeds. Successful test methods will deliver a % torque reading between 10 and 100. The rheological behavior of the test fluid can be observed using the same spindle at different speeds, but because the geometry of the fluid around a rotating bob or disk spindle in a large container does not allow a single shear rate to be assigned, proper rheometry is not feasible using this setup. Apart from its rotating bob viscometers, Brookfield now also produces defined-geometry rheometers which allow complete rheological analysis of fluids. See also ASTM International Bulk density Deutsches Institut für Normung Food Rheology Mouthfeel Rheology References Viscosity Rheology Companies based in Plymouth County, Massachusetts Companies based in Massachusetts 1934 establishments in Massachusetts Technology companies established in 1934 Manufacturing companies established in 1934
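The torque-to-viscosity relationship described above can be illustrated with a short calculation. The following is a minimal sketch in Python, assuming a hypothetical full-scale viscosity range for a given spindle/speed pair (Brookfield publishes such factors for each combination; the constant used here is illustrative, not taken from any real instrument):

    # Rotational viscometry, Newtonian case (hypothetical constants).
    # Measured viscosity is proportional to the torque reading:
    #   viscosity = full_scale_viscosity * (percent_torque / 100)
    # where full_scale_viscosity depends on the spindle/speed combination.

    def viscosity_cp(percent_torque: float, full_scale_viscosity_cp: float) -> float:
        """Return viscosity in centipoise from a % torque reading."""
        if not 10.0 <= percent_torque <= 100.0:
            # As noted above, readings outside the 10-100% torque window
            # are unreliable; choose another spindle or speed instead.
            raise ValueError("reading outside the recommended 10-100% window")
        return full_scale_viscosity_cp * percent_torque / 100.0

    # Example: an assumed 1,000 cP full-scale range and a 45.2% torque
    # reading give 452 cP.
    print(viscosity_cp(45.2, 1000.0))

For a non-Newtonian fluid the same arithmetic yields only an apparent viscosity tied to the exact spindle, speed and procedure, which is why the text stresses comparing readings taken under identical conditions.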
Brookfield Engineering
[ "Physics", "Chemistry" ]
528
[ "Physical phenomena", "Physical quantities", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Rheology", "Fluid dynamics" ]
14,552,252
https://en.wikipedia.org/wiki/ATP-binding%20domain%20of%20ABC%20transporters
In molecular biology, the ATP-binding domain of ABC transporters is a water-soluble domain of transmembrane ABC transporters. ABC transporters belong to the ATP-Binding Cassette superfamily, which uses the hydrolysis of ATP to translocate a variety of compounds across biological membranes. ABC transporters are minimally constituted of two conserved regions: a highly conserved ATP binding cassette (ABC) and a less conserved transmembrane domain (TMD). These regions can be found on the same protein or on two different ones. Most ABC transporters function as a dimer and therefore are constituted of four domains, two ABC modules and two TMDs. Biological function ABC transporters are involved in the export or import of a wide variety of substrates ranging from small ions to macromolecules. The major function of ABC import systems is to provide essential nutrients to bacteria. They are found only in prokaryotes and their four constitutive domains are usually encoded by independent polypeptides (two ABC proteins and two TMD proteins). Prokaryotic importers require additional extracytoplasmic binding proteins (one or more per system) for function. In contrast, export systems are involved in the extrusion of noxious substances, the export of extracellular toxins and the targeting of membrane components. They are found in all living organisms and in general the TMD is fused to the ABC module in a variety of combinations. Some eukaryotic exporters encode the four domains on the same polypeptide chain. Amino acid sequence The ABC module (approximately two hundred amino acid residues) is known to bind and hydrolyze ATP, thereby coupling transport to ATP hydrolysis in a large number of biological processes. The cassette is duplicated in several subfamilies. Its primary sequence is highly conserved, displaying a typical phosphate-binding loop, Walker A, and a magnesium binding site, Walker B. Besides these two regions, three other conserved motifs are present in the ABC cassette: the switch region, which contains a histidine loop postulated to polarize the attacking water molecule for hydrolysis; the signature conserved motif (LSGGQ), specific to the ABC transporter; and the Q-motif (between Walker A and the signature), which interacts with the gamma phosphate through a water bond. The Walker A, Walker B, Q-loop and switch region form the nucleotide binding site. 3D structure The 3D structure of a monomeric ABC module adopts a stubby L-shape with two distinct arms. ArmI (mainly beta-strand) contains Walker A and Walker B. The important residues for ATP hydrolysis and/or binding are located in the P-loop. The ATP-binding pocket is located at the extremity of armI. The perpendicular armII contains mostly the alpha helical subdomain with the signature motif. It only seems to be required for structural integrity of the ABC module. ArmII is in direct contact with the TMD. The hinge between armI and armII contains both the histidine loop and the Q-loop, making contact with the gamma phosphate of the ATP molecule. ATP hydrolysis leads to a conformational change that could facilitate ADP release. In the dimer, the two ABC cassettes contact each other through hydrophobic interactions at the antiparallel beta-sheet of armI about a two-fold axis.
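Because the Walker A (P-loop) motif described above has a well-known consensus sequence, conserved-motif searches of this kind are straightforward to sketch in code. A minimal example in Python, assuming the canonical Walker A consensus G-x(4)-G-K-[T/S]; the sample sequence is invented for illustration:

    import re

    # Canonical Walker A (P-loop) consensus: G, any four residues, G, K,
    # then T or S.
    WALKER_A = re.compile(r"G.{4}GK[TS]")

    def find_walker_a(sequence: str) -> list:
        """Return (0-based position, matched motif) pairs in a protein sequence."""
        return [(m.start(), m.group()) for m in WALKER_A.finditer(sequence)]

    # Invented toy sequence containing one Walker A-like site.
    toy = "MKLAVDRTGPSGSGKSTLLRAIAGLE"
    print(find_walker_a(toy))  # [(8, 'GPSGSGKS')]

A real annotation pipeline would of course use curated profile models (e.g. Pfam HMMs) rather than a bare regular expression, since the Walker B and signature motifs are less strictly conserved.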
Human proteins containing this domain ABCA1; ABCA10; ABCA12; ABCA13; ABCA2; ABCA3; ABCA4; ABCA5; ABCA6; ABCA7; ABCA8; ABCA9; ABCB1; ABCB10; ABCB11; ABCB4; ABCB5; ABCB6; ABCB7; ABCB8; ABCB9; ABCC1; ABCC10; ABCC11; ABCC12; ABCC2; ABCC3; ABCC4; ABCC5; ABCC6; ABCC8; ABCC9; ABCD1; ABCD2; ABCD3; ABCD4; ABCE1; ABCF1; ABCF2; ABCF3; ABCG1; ABCG2; ABCG4; ABCG5; ABCG8; CFTR; TAP1; TAP2; TAPL References Protein domains ATP-binding cassette transporters
ATP-binding domain of ABC transporters
[ "Biology" ]
909
[ "Protein domains", "Protein classification" ]
14,552,315
https://en.wikipedia.org/wiki/TAAR5
Trace amine-associated receptor 5 is a protein that in humans is encoded by the TAAR5 gene. In vertebrates, TAAR5 is expressed in the olfactory epithelium. Human TAAR5 (hTAAR5) is a functional trace amine-associated receptor which acts as an olfactory receptor for tertiary amines. Trimethylamine and N,N-dimethylethylamine are full agonists of hTAAR5. The amber-woody fragrance timberol antagonizes this activity of trimethylamine. 3-Iodothyronamine is an inverse agonist of hTAAR5. Recent studies have highlighted the significant role of TAAR5 in the central nervous system and periphery. Beta-galactosidase mapping of TAAR5 expression showed its localization not only in the glomeruli but also in deeper layers of the olfactory bulb projecting to the limbic brain olfactory circuitry. Moreover, TAAR5 knockout mice show increased adult neurogenesis and an elevated number of dopamine neurons. Statistically significant changes in osmotic erythrocyte fragility have also been observed in TAAR5-KO mice. Mutations in the TAAR5 gene were found to affect human olfaction. Icelanders with a mutation in the gene were less likely to describe the trimethylamine-containing smell of fish as unpleasant, and described licorice odor and cinnamon odor more intensely. See also Trace amine-associated receptor References G protein-coupled receptors
TAAR5
[ "Chemistry" ]
306
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,361
https://en.wikipedia.org/wiki/TAAR8
Trace amine-associated receptor 8 is a protein that in humans is encoded by the TAAR8 gene. In humans, TAAR8 is the only trace amine-associated receptor that is known to be Gi/o-coupled. Molecular modelling and docking experiments have shown that putrescine fits into the binding pocket of the human TAAR6 and TAAR8 receptors. G protein-coupled receptors (GPCRs, or GPRs) contain 7 transmembrane domains and transduce extracellular signals through heterotrimeric G proteins. [supplied by OMIM] See also Trace amine-associated receptor References Further reading G protein-coupled receptors
TAAR8
[ "Chemistry" ]
137
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,378
https://en.wikipedia.org/wiki/TAAR1
Trace amine-associated receptor 1 (TAAR1) is a trace amine-associated receptor (TAAR) protein that in humans is encoded by the TAAR1 gene. TAAR1 is a primarily intracellular, amine-activated G protein-coupled receptor (GPCR) that is expressed in several peripheral organs and cells (e.g., the stomach, small intestine, duodenum, and white blood cells), astrocytes, and in the intracellular milieu within the presynaptic plasma membrane (i.e., axon terminal) of monoamine neurons in the central nervous system (CNS). TAAR1 is one of six functional human TAARs, which are so named for their ability to bind endogenous amines that occur in tissues at trace concentrations. TAAR1 plays a significant role in regulating neurotransmission in dopamine, norepinephrine, and serotonin neurons in the CNS; it also affects immune system and neuroimmune system function through different mechanisms. Endogenous ligands of the TAAR1 include trace amines, monoamine neurotransmitters, and certain thyronamines. The trace amines β-phenethylamine, tyramine, tryptamine, and octopamine, the monoamine neurotransmitters dopamine and serotonin, and the thyronamine 3-iodothyronamine (T1AM) are all agonists of the TAAR1 in different species. Other endogenous agonists are also known. A variety of exogenous compounds and drugs are TAAR1 agonists as well, including various phenethylamines, amphetamines, tryptamines, and ergolines, among others. There are marked species differences in the interactions of ligands with the TAAR1, resulting in greatly differing affinities, potencies, and efficacies of TAAR1 ligands between species. Many compounds that are TAAR1 agonists in rodents are much less potent or inactive at the TAAR1 in humans. A number of selective TAAR1 ligands have been developed, for instance the TAAR1 full agonist RO5256390, the TAAR1 partial agonist RO5263397, and the TAAR1 antagonists EPPTB and RTI-7470-44. Selective TAAR1 agonists are used in scientific research, and a few TAAR1 agonists, such as ulotaront and ralmitaront, are being developed as novel pharmaceutical drugs, for instance to treat schizophrenia and substance use disorder. The TAAR1 was discovered in 2001 by two independent groups, Borowski et al. and Bunzow et al. Discovery TAAR1 was discovered independently by Borowski et al. and Bunzow et al. in 2001. To find the genetic variants responsible for TAAR1 synthesis, they used mixtures of oligonucleotides with sequences related to G protein-coupled receptors (GPCRs) of serotonin and dopamine to discover novel DNA sequences in rat genomic DNA and cDNA, which they then amplified and cloned. The resulting sequence was not found in any database and coded for TAAR1. Further characterization of the functional role of TAAR1 and other receptors from this family was performed by other researchers including Raul Gainetdinov and his colleagues. Structure TAAR1 shares structural similarities with the class A rhodopsin GPCR subfamily. It has 7 transmembrane domains with short N and C terminal extensions. TAAR1 is 62–96% identical with TAARs 2–15, which suggests that the TAAR subfamily has evolved recently, while the low degree of similarity between TAAR1 orthologues suggests that they are rapidly evolving. TAAR1 shares a predictive peptide motif with all other TAARs. This motif overlaps with transmembrane domain VII, and its identity is NSXXNPXX[Y,H]XXX[Y,F]XWF.
TAAR1 and its homologues have ligand pocket vectors that utilize sets of 35 amino acids known to be involved directly in receptor-ligand interaction. Gene All TAAR genes are located on a single chromosome, spanning 109 kb of human chromosome 6q23.1, 192 kb of mouse chromosome 10A4, and 216 kb of rat chromosome 1p12. Each TAAR is derived from a single exon, except for TAAR2, which is coded by two exons. The human TAAR1 gene is thought to be an intronless gene. Tissue distribution To date, TAAR1 has been identified and cloned in five different mammal genomes: human, mouse, rat, monkey, and chimpanzee. In rats, mRNA for TAAR1 is found at low to moderate levels in peripheral tissues like the stomach, kidney, intestines and lungs, and at low levels in the brain. Rhesus monkey Taar1 and human TAAR1 share high sequence similarity, and TAAR1 mRNA is highly expressed in the same important monoaminergic regions of both species. These regions include the dorsal and ventral caudate nucleus, putamen, substantia nigra, nucleus accumbens, ventral tegmental area, locus coeruleus, amygdala, and raphe nucleus. hTAAR1 has also been identified in human astrocytes. Outside of the human central nervous system, hTAAR1 also occurs as an intracellular receptor and is primarily expressed in the stomach, intestines, duodenum, pancreatic β-cells, and white blood cells. In the duodenum, TAAR1 activation increases glucagon-like peptide-1 (GLP-1) and peptide YY (PYY) release; in the stomach, hTAAR1 activation has been observed to increase somatostatin (growth hormone-inhibiting hormone) secretion from delta cells. hTAAR1 is the only human trace amine-associated receptor subtype that is not expressed within the human olfactory epithelium. Location within neurons TAAR1 is an intracellular receptor expressed within the presynaptic terminal of monoamine neurons in humans and other animals. In model cell systems, hTAAR1 has extremely poor membrane expression. A method to induce hTAAR1 membrane expression has been used to study its pharmacology via a bioluminescence resonance energy transfer cAMP assay. Because TAAR1 is an intracellular receptor in monoamine neurons, exogenous TAAR1 ligands must enter the presynaptic neuron through a membrane transport protein or be able to diffuse across the presynaptic membrane in order to reach the receptor and produce reuptake inhibition and neurotransmitter efflux. Consequently, the efficacy of a particular TAAR1 ligand in producing these effects in different monoamine neurons is a function of both its binding affinity at TAAR1 and its capacity to move across the presynaptic membrane at each type of neuron. The variability between a TAAR1 ligand's substrate affinity at the various monoamine transporters accounts for much of the difference in its capacity to produce neurotransmitter release and reuptake inhibition in different types of monoamine neurons. For example, a TAAR1 ligand which can easily pass through the norepinephrine transporter, but not the serotonin transporter, will produce – all else equal – markedly greater TAAR1-induced effects in norepinephrine neurons as compared to serotonin neurons. TAAR1 ligands have also been found to enter neurons by transporters other than the monoamine transporters. Receptor oligomers TAAR1 forms GPCR oligomers with monoamine autoreceptors in neurons in vivo.
These and other reported TAAR1 hetero-oligomers include TAAR1–D2sh, TAAR1–α2A, and TAAR1–TAAR2. The TAAR1–D2sh example shows that TAAR1 can be located at cell membranes, and in the case of enterochromaffin cells in the gut epithelium, TAAR1 can be activated by high doses of dietary 'trace' amines, proximal to vesicles packed with catecholamines, impacting the vagal nerve system and CNS. This raises questions about where T1AM might find TAAR1 and cause similar unexpected nerve firing. Ligands Agonists Endogenous The known endogenous agonists of the TAAR1 include trace amines like β-phenethylamine (PEA), monoamine neurotransmitters like dopamine, and thyronamines like 3-iodothyronamine (T1AM). Trace amines are endogenous amines which act as agonists at TAAR1 and are present in extracellular concentrations of 0.1–10 nM in the brain, constituting less than 1% of total biogenic amines in the mammalian nervous system. Some of the human trace amines include tryptamine, phenethylamine (PEA), tyramine, octopamine, and synephrine, among others. These share structural similarities with the three common monoamine neurotransmitters: serotonin, dopamine, and norepinephrine. Each ligand has a different potency, measured as increased cyclic AMP (cAMP) concentration after the binding event. The rank order of potency for the primary endogenous ligands at the human TAAR1 is: tyramine > β-phenethylamine > dopamine = octopamine. Tryptamine and histamine also interact with the human TAAR1 with lower potency, whereas serotonin and norepinephrine have been found to be inactive. Thyronamines are molecular derivatives of thyroid hormone involved in endocrine system function. 3-Iodothyronamine (T1AM) is one of the most potent TAAR1 agonists yet discovered. It also interacts with a number of other targets. Unlike the monoamine neurotransmitters and trace amines, T1AM is not a monoamine transporter (MAT) substrate, although it does still weakly interact with the MATs. Activation of TAAR1 by T1AM results in the production of large amounts of cAMP. This effect is coupled with decreased body temperature and cardiac output. Other endogenous TAAR1 agonists include cyclohexylamine, isoamylamine, and trimethylamine, among others.
Exogenous 2-Aminoindanes 2-Aminoindane (2-AI) – a potent TAAR1 partial or full agonist in rodents, weaker TAAR1 full agonist in humans 5-Iodo-2-aminoindane (5-IAI) – a potent TAAR1 partial or full agonist in rodents, weaker TAAR1 partial agonist in humans MDAI – a potent TAAR1 full agonist in rodents, weaker TAAR1 partial agonist in humans N-Methyl-2-AI (NM-2-AI) – a potent TAAR1 partial or full agonist in rodents, weaker TAAR1 partial agonist in humans A-77636 – an experimental dopamine agonist Amphetamines (α-methyl-β-phenethylamines) 4-Fluoroamphetamine – a potent TAAR1 partial agonist in rodents, but much less potent in humans 4-Hydroxyamphetamine – an amphetamine metabolite Amphetamine – a potent TAAR1 near-full agonist in rodents, but much less potent in humans Cathinone – weak rodent and human TAAR1 partial agonist MDA – a potent TAAR1 partial agonist in rodents, but a much weaker partial agonist in humans MDMA – a moderate-efficacy TAAR1 partial agonist in rodents, but a much weaker partial agonist in humans Methamphetamine – a potent TAAR1 near-full agonist in rodents, but much less potent in humans Phentermine – a weak human TAAR1 near-full agonist Selegiline (L-deprenyl) – a weak mouse TAAR1 partial agonist Solriamfetol – a wakefulness-promoting agent acting as a dual norepinephrine–dopamine reuptake inhibitor (NDRI) and human TAAR1 agonist AP163 – a TAAR1 agonist Apomorphine – a dopamine agonist and antiparkinsonian agent, potent rodent TAAR1 agonist but not in humans Asenapine – an atypical antipsychotic Chlorpromazine – a dopamine antagonist and antipsychotic Clonidine – an adrenergic agonist and antihypertensive agent, rodent and human TAAR1 agonist Cyproheptadine – a serotonin antagonist Ergolines and lysergamides Bromocriptine – a dopamine agonist and antiparkinsonian agent Dihydroergotamine – an antimigraine agent Ergometrine (ergonovine) – an obstetric drug Lisuride – a dopamine agonist and antiparkinsonian agent Lysergic acid diethylamide (LSD) – a serotonergic psychedelic and TAAR1 weak partial agonist in rodents but not in humans Metergoline – a prolactin inhibitor Fenoldopam – a dopamine agonist and antihypertensive agent Fenoterol – an adrenergic agonist Guanabenz – an adrenergic agonist and antihypertensive agent, highly potent rodent and human TAAR1 agonist Guanfacine – an adrenergic agonist and ADHD medication Idazoxan – an adrenergic antagonist, potent mouse TAAR1 agonist but much weaker in humans Isoprenaline – an adrenergic agonist LK00764 – a rodent and human TAAR1 agonist MPTP – a monoaminergic neurotoxin Naphazoline – an adrenergic agonist Nomifensine – a norepinephrine–dopamine reuptake inhibitor (NDRI) and abandoned antidepressant o-Phenyl-3-iodotyramine (o-PIT) – a rodent and human TAAR1 agonist Oxymetazoline – an adrenergic agonist Phentolamine – an adrenergic antagonist and rat TAAR1 agonist Ralmitaront (RG-7906, RO6889450) – a TAAR1 partial agonist and investigational antipsychotic RG-7351 – a TAAR1 partial agonist and abandoned experimental antidepressant RG-7410 – a TAAR1 agonist and abandoned experimental antipsychotic Ring-methoxylated phenethylamines and amphetamines 2C-B – a serotonergic psychedelic, potent rat TAAR1 partial agonist but much weaker mouse and human TAAR1 partial agonist 2C-E – a serotonergic psychedelic, potent rat TAAR1 partial agonist but much weaker mouse TAAR1 partial agonist and not in humans 2C-T-7 – a serotonergic psychedelic, very high-affinity TAAR1 ligand in mice and rats but not in humans DOB – a very weak
human TAAR1 agonist DOET – a weak human TAAR1 agonist DOI – a rat TAAR1 agonist DOM – a rhesus monkey TAAR1 agonist but not in humans Mescaline – a serotonergic psychedelic, potent rodent TAAR1 partial agonist but not in humans RO5073012 – a selective rodent and human TAAR1 low-efficacy partial agonist RO5166017 – a selective rat and human TAAR1 near-full agonist but partial agonist in mice RO5203648 – a selective rodent and human TAAR1 partial agonist RO5256390 – a selective rat and human TAAR1 full agonist but partial agonist in mice RO5263397 – a selective rodent and human TAAR1 partial agonist S18616 – an adrenergic agonist Synephrine – an adrenergic agonist Tolazoline – an adrenergic antagonist and rat TAAR1 agonist Tryptamines Dimethyltryptamine – a serotonergic psychedelic, TAAR1 partial agonist in rodents but not in humans Psilocin – a serotonergic psychedelic, TAAR1 partial agonist in rodents but not in humans Ulotaront (SEP-363856, SEP-856) – a human TAAR1 full agonist and investigational antipsychotic Although amphetamine, methamphetamine, and MDMA are potent TAAR1 agonists in rodents, they are much less potent in terms of TAAR1 agonism in humans. As examples, whereas amphetamine and methamphetamine have nanomolar potencies in activating the TAAR1 in rodents, they have micromolar potencies for TAAR1 agonism in humans. MDMA shows very weak potency and efficacy as a human TAAR1 agonist and has been regarded as inactive. Relatedly, it is not entirely clear whether agents like amphetamine and methamphetamine at typical doses produce significant TAAR1 agonism and associated effects in humans. However, TAAR1 agonism and consequent effects by these drugs may be more relevant in the context of very high recreational doses. TAAR1 activation has been found to have inhibitory effects on monoaminergic neurotransmission, and TAAR1 agonism by amphetamines may serve to auto-inhibit and constrain their effects. While some amphetamines are human TAAR1 agonists, many others are not. As examples, most cathinones (β-ketoamphetamines), such as methcathinone, mephedrone, and flephedrone, as well as other amphetamines, including ephedrine, 4-methylamphetamine (4-MA), para-chloroamphetamine (PCA), para-methoxyamphetamine (PMA), 4-methylthioamphetamine (4-MTA), MDEA, MBDB, 5-APDB, and 5-MAPDB, are inactive as human TAAR1 agonists. Many other drugs acting as monoamine releasing agents (MRAs) are also inactive as human TAAR1 agonists, for instance piperazines like benzylpiperazine (BZP), meta-chlorophenylpiperazine (mCPP), and 3-trifluoromethylphenylpiperazine (TFMPP), as well as the alkylamine methylhexanamine (DMAA). The negligible TAAR1 agonism with most cathinones might serve to enhance their effects and misuse potential as MRAs compared to their amphetamine counterparts. Monoaminergic activity enhancers (MAEs), such as selegiline, benzofuranylpropylaminopentane (BPAP), and phenylpropylaminopentane (PPAP), have been claimed to act as TAAR1 agonists to mediate their MAE effects, but TAAR1 agonism for BPAP and PPAP has yet to be assessed or confirmed. Selegiline is only a weak agonist of the mouse TAAR1, with dramatically lower potency than amphetamine or methamphetamine, and does not seem to have been assessed at the human TAAR1. 
Antagonists and inverse agonists EPPTB (RO5212773) – a selective mouse TAAR1 inverse agonist but far less potent rat and human TAAR1 neutral antagonist RTI-7470-44 – a potent and selective human TAAR1 neutral antagonist but far less potent mouse and rat TAAR1 antagonist A few other less well-known TAAR1 antagonists have also been discovered and characterized. RO5073012 is an antagonist-esque weak partial agonist of the rodent and human TAAR1 (Emax = 24–35%). Similarly, MDA and MDMA are weak to very weak partial agonists or antagonists of the human TAAR1 (Emax = 11% and 26%, respectively), albeit with very low potency. It has been claimed that rasagiline may act as a TAAR1 antagonist, but TAAR1 interactions have yet to be assessed or confirmed for this agent. Function Monoaminergic systems Before the discovery of TAAR1, trace amines were believed to serve very limited functions. They were thought to induce noradrenaline release from sympathetic nerve endings and compete for catecholamine or serotonin binding sites on cognate receptors, transporters, and storage sites. Today, they are believed to play a much more dynamic role by regulating monoaminergic systems in the brain. One of the downstream effects of active TAAR1 is to increase cAMP in the presynaptic cell via Gαs G-protein activation of adenylyl cyclase. This alone can have a multitude of cellular consequences. A main function of the cAMP may be to up-regulate the expression of trace amines in the cell cytoplasm. These amines would then activate intracellular TAAR1. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. The effect of TAAR1 agonists on monoamine transporters in the brain appears to be site-specific. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is dependent upon the presence of TAAR1 in the associated monoamine neurons. As of 2010, co-localization of TAAR1 and the dopamine transporter (DAT) had been visualized in rhesus monkeys, but co-localization of TAAR1 with the norepinephrine transporter (NET) and the serotonin transporter (SERT) had only been evidenced by messenger RNA (mRNA) expression. In neurons with TAAR1, TAAR1 agonists increase the concentrations of the associated monoamines in the synaptic cleft, thereby increasing post-synaptic receptor binding. Through direct activation of G protein-coupled inwardly-rectifying potassium channels (GIRKs), TAAR1 can reduce the firing rate of dopamine neurons, in turn preventing a hyper-dopaminergic state. Amphetamine and trace amines can enter the presynaptic neuron either through DAT or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine and trace amines produce competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, these compounds activate TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces reverse transporter function (dopamine efflux). Immune system Expression of TAAR1 on lymphocytes is associated with activation of lymphocyte immuno-characteristics. In the immune system, TAAR1 transmits signals through active PKA and PKC phosphorylation cascades.
In a 2012 study, Panas et al. observed that methamphetamine had these effects, suggesting that, in addition to brain monoamine regulation, amphetamine-related compounds may have an effect on the immune system. A recent paper showed that, along with TAAR1, TAAR2 is required for full activity of trace amines in PMN cells. Phytohaemagglutinin upregulates TAAR1 mRNA in circulating leukocytes; in these cells, TAAR1 activation mediates leukocyte chemotaxis toward TAAR1 agonists. TAAR1 agonists (specifically, trace amines) have also been shown to induce interleukin 4 secretion in T-cells and immunoglobulin E (IgE) secretion in B cells. Astrocyte-localized TAAR1 regulates EAAT2 levels and function in these cells; this has been implicated in methamphetamine-induced pathologies of the neuroimmune system. Clinical significance Low phenethylamine (PEA) concentration in the brain is associated with major depressive disorder, and high concentrations are associated with schizophrenia. Low PEA levels and under-activation of TAAR1 also appear to be associated with ADHD. It is hypothesized that insufficient PEA levels result in TAAR1 inactivation and excessive monoamine uptake by transporters, possibly resulting in depression. Some antidepressants function by inhibiting monoamine oxidase (MAO), which increases the concentration of trace amines, which is speculated to increase TAAR1 activation in presynaptic cells. Decreased PEA metabolism has been linked to schizophrenia, a logical finding considering excess PEA would result in over-activation of TAAR1 and prevention of monoamine transporter function. Mutations in region q23.1 of human chromosome 6 – the same chromosome that codes for TAAR1 – have been linked to schizophrenia. Medical reviews from February 2015 and 2016 noted that TAAR1-selective ligands have significant therapeutic potential for treating psychostimulant addictions (e.g., cocaine, amphetamine, methamphetamine, etc.). Despite wide distribution outside of the CNS and PNS, TAAR1 does not affect hematological functions or the regulation of thyroid hormones across different stages of ageing. Such data suggest that future TAAR1-based therapies should exert little hematological effect and thus will likely have a good safety profile. Research A large candidate gene association study published in September 2011 found significant differences in TAAR1 allele frequencies between a cohort of fibromyalgia patients and a chronic pain-free control group, suggesting this gene may play an important role in the pathophysiology of the condition; this possibly presents a target for therapeutic intervention. In preclinical research on rats, TAAR1 activation in pancreatic cells promotes insulin, peptide YY, and GLP-1 secretion; therefore, TAAR1 is potentially a biological target for the treatment of obesity and diabetes. Lack of TAAR1 does not significantly affect sexual motivation or routine lipid and metabolic blood biochemical parameters, suggesting that future TAAR1-based therapies should have a favorable safety profile. See also Locomotor activity § TAAR1 modulators Notes References External links G protein-coupled receptors TAAR1 agonists Amphetamine
TAAR1
[ "Chemistry" ]
5,824
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,388
https://en.wikipedia.org/wiki/TAAR9
Trace amine-associated receptor 9 is a protein that in humans is encoded by the TAAR9 gene. TAAR9 is a member of a large family of rhodopsin G protein–coupled receptors (GPCRs, or GPRs). GPCRs contain 7 transmembrane domains and transduce extracellular signals through heterotrimeric G proteins.[supplied by OMIM] N-Methylpiperidine is a ligand of TAAR9 associated with aversive behavior in mice. N,N-Dimethylcyclohexylamine is an additional agonist that also activates TAAR7 variants. TAAR9 gene deletion in rats leads to significantly decreased low-density lipoprotein cholesterol levels in the blood. See also Trace amine-associated receptor References Further reading G protein-coupled receptors
TAAR9
[ "Chemistry" ]
181
[ "G protein-coupled receptors", "Signal transduction" ]
14,552,501
https://en.wikipedia.org/wiki/Restrictive%20dermopathy
Restrictive dermopathy (RD) is a rare, lethal autosomal recessive skin condition characterized by syndromic facies, tight skin, sparse or absent eyelashes, and secondary joint changes. Mechanism Restrictive dermopathy (RD) is caused either by the loss of the gene ZMPSTE24, which encodes a protein responsible for the cleavage of farnesylated prelamin A into mature non-farnesylated lamin A, or by a mutation in the LMNA gene. This results in the accumulation of farnesyl-prelamin A at the nuclear membrane. Mechanistically, restrictive dermopathy is somewhat similar to Hutchinson–Gilford progeria syndrome (HGPS), a disease where the last step in lamin processing is hindered by a mutation that removes the ZMPSTE24 cleavage site from the lamin A gene. Diagnosis Treatment See also Relapsing linear acantholytic dermatosis List of cutaneous conditions Lamellar ichthyosis – Possible differential diagnosis References External links Autosomal recessive disorders Genodermatoses Progeroid syndromes Rare diseases
Restrictive dermopathy
[ "Biology" ]
235
[ "Senescence", "Progeroid syndromes" ]
14,552,648
https://en.wikipedia.org/wiki/TopHat%20%28telescope%29
TopHat was a scientific experiment launched from McMurdo Station in January 2001 to measure the cosmic microwave background radiation produced 300,000 years after the Big Bang. The balloon was launched on January 2, 2001 and proceeded to fly for 644 hours over the continent of Antarctica before landing on January 31, 2001. The balloon flew over the continent 38 kilometers (125,000 ft) above the ground. The working payload was shut down on January 10, 2001 after the liquid cryogens cooling the detectors were exhausted, and the balloon simply circled the continent until it was safe to land. The vortical winds that typically carry balloons around the continent dissipated part of the way through the flight, and the balloon had to be terminated in a suboptimal location. The landing missed the targeted ice shelf by around one half mile, and while the discs containing the information were recovered safely using a Twin Otter, the gondola itself had not been recovered by August 2001. The telescope was part of the "Submillimeter Astrophysics Experiment" of Dr. Edward Cheng of NASA’s Goddard Space Flight Center. It took roughly 6 years to build and deploy. It was built in association with NASA’s Goddard Space Flight Center, the University of Chicago, the University of Wisconsin–Madison and the Danish Space Research Institute. The TopHat experiment was the first of its kind in that it placed the telescope on top of the actual balloon, where it rotated at a constant rate around a vertical axis and covered a 48-degree-diameter window of the sky. The placement allowed the telescope to gain a unique view of the sky, with no obstructions. The balloon itself had a volume of 29.5 million cubic feet (835,000 m3). TopHat was built in part to follow up the observations of the BOOMERanG experiment, which also studied the cosmic background radiation. TopHat was attempting to detect the clumpiness of matter, how much matter was in the universe, how the universe was expanding, and whether it was indeed flat as had been observed by BOOMERanG. Around 300,000 years after the Big Bang, the temperature of the universe cooled enough that hydrogen atoms formed and the photons of energy (the radiation of energy from the explosion) were able to escape and travel indefinitely. This oldest source of radiation has a temperature of 2.73 K and is uniform except for temperature variations of about one part in 100,000. The patchiness of matter indicates the earliest structures being formed in the universe, and TopHat was designed to detect this patchiness on roughly degree scales. References Physics experiments Cosmic microwave background experiments Balloon-borne telescopes 2001 in science Astronomical observatories in the Antarctic Astronomical experiments in the Antarctic
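The anisotropy level quoted above (one part in 100,000 of a 2.73 K background) implies temperature fluctuations of only tens of microkelvin, which is what made detectors like TopHat's technically demanding. A quick back-of-the-envelope check in Python (pure arithmetic, no instrument specifics assumed):

    # CMB mean temperature and fractional anisotropy quoted in the text.
    T_CMB_K = 2.73                 # kelvin
    FRACTIONAL_ANISOTROPY = 1e-5   # "one part in 100,000"

    delta_T = T_CMB_K * FRACTIONAL_ANISOTROPY
    print(f"typical fluctuation: {delta_T * 1e6:.1f} microkelvin")  # ~27.3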
TopHat (telescope)
[ "Physics" ]
682
[ "Experimental physics", "Physics experiments" ]
14,552,970
https://en.wikipedia.org/wiki/Stranski%E2%80%93Krastanov%20growth
Stranski–Krastanov growth (SK growth, also Stransky–Krastanov or 'Stranski–Krastanow') is one of the three primary modes by which thin films grow epitaxially at a crystal surface or interface. Also known as 'layer-plus-island growth', the SK mode follows a two-step process: initially, complete films of adsorbates, up to several monolayers thick, grow in a layer-by-layer fashion on a crystal substrate. Beyond a critical layer thickness, which depends on strain and the chemical potential of the deposited film, growth continues through the nucleation and coalescence of adsorbate 'islands'. This growth mechanism was first noted by Ivan Stranski and Lyubomir Krastanov in 1938. It was not until 1958, however, in a seminal work by Ernst Bauer published in Zeitschrift für Kristallographie, that the SK, Volmer–Weber, and Frank–van der Merwe mechanisms were systematically classified as the primary thin-film growth processes. Since then, SK growth has been the subject of intense investigation, not only to better understand the complex thermodynamics and kinetics at the core of thin-film formation, but also as a route to fabricating novel nanostructures for application in the microelectronics industry. Modes of thin-film growth The growth of epitaxial (homogeneous or heterogeneous) thin films on a single crystal surface depends critically on the interaction strength between adatoms and the surface. While it is possible to grow epilayers from a liquid solution, most epitaxial growth occurs via a vapor phase technique such as molecular beam epitaxy (MBE). In Volmer–Weber (VW) growth, adatom–adatom interactions are stronger than those of the adatom with the surface, leading to the formation of three-dimensional adatom clusters or islands. Growth of these clusters, along with coarsening, will cause rough multi-layer films to grow on the substrate surface. Antithetically, during Frank–van der Merwe (FM) growth, adatoms attach preferentially to surface sites resulting in atomically smooth, fully formed layers. This layer-by-layer growth is two-dimensional, indicating that complete films form prior to growth of subsequent layers. Stranski–Krastanov growth is an intermediary process characterized by both 2D layer and 3D island growth. Transition from the layer-by-layer to island-based growth occurs at a critical layer thickness which is highly dependent on the chemical and physical properties, such as surface energies and lattice parameters, of the substrate and film. Figure 1 is a schematic representation of the three main growth modes for various surface coverages. Determining the mechanism by which a thin film grows requires consideration of the chemical potentials of the first few deposited layers. A model for the layer chemical potential per atom has been proposed by Markov as: μ(n) = μ_∞ + [φ_a − φ_a′(n) + ε_d(n) + ε_e(n)], where μ_∞ is the bulk chemical potential of the adsorbate material, φ_a is the desorption energy of an adsorbate atom from a wetting layer of the same material, φ_a′(n) the desorption energy of an adsorbate atom from the substrate, ε_d(n) is the per atom misfit dislocation energy, and ε_e(n) the per atom homogeneous strain energy. In general, the values of φ_a, φ_a′(n), ε_d(n), and ε_e(n) depend in a complex way on the thickness of the growing layers and lattice misfit between the substrate and adsorbate film. In the limit of small strains, ε_d(n), ε_e(n) ≪ μ_∞, the criterion for a film growth mode is dependent on the sign of dμ(n)/dn.
VW growth: dμ/dn < 0 (adatom cohesive force is stronger than surface adhesive force). FM growth: dμ/dn > 0 (surface adhesive force is stronger than adatom cohesive force). SK growth can be described by both of these inequalities. While initial film growth follows an FM mechanism, i.e. positive differential μ, nontrivial amounts of strain energy accumulate in the deposited layers. At a critical thickness, this strain induces a sign reversal in the chemical potential, i.e. negative differential μ, leading to a switch in the growth mode. At this point it is energetically favorable to nucleate islands and further growth occurs by a VW type mechanism. A thermodynamic criterion for layer growth similar to the one presented above can be obtained using a force balance of surface tensions and contact angle. Since the formation of wetting layers occurs in a commensurate fashion at a crystal surface, there is often an associated misfit between the film and the substrate due to the different lattice parameters of each material. Attachment of the thinner film to the thicker substrate induces a misfit strain at the interface given by ε = (a_f − a_s)/a_s, where a_f and a_s are the film and substrate lattice constants, respectively. As the wetting layer thickens, the associated strain energy increases rapidly. In order to relieve the strain, island formation can occur in either a dislocated or coherent fashion. In dislocated islands, strain relief arises by forming interfacial misfit dislocations. The reduction in strain energy accommodated by introducing a dislocation is generally greater than the concomitant cost of increased surface energy associated with creating the clusters. The thickness of the wetting layer at which island nucleation initiates, called the critical thickness h_c, is strongly dependent on the lattice mismatch between the film and substrate, with a greater mismatch leading to smaller critical thicknesses. Values of h_c can range from submonolayer coverage to up to several monolayers thick. Figure 2 illustrates a dislocated island during SK growth after reaching a critical layer height. A pure edge dislocation is shown at the island interface to illustrate the relieved structure of the cluster. In some cases, most notably the Si/Ge system, nanoscale dislocation-free islands can be formed during SK growth by introducing undulations into the near surface layers of the substrate. These regions of local curvature serve to elastically deform both the substrate and island, relieving accumulated strain and bringing the wetting layer and island lattice constant closer to its bulk value. This elastic instability at h_c is known as the Grinfeld instability (formerly Asaro–Tiller–Grinfeld; ATG). The resulting islands are coherent and defect-free, garnering them significant interest for use in nanoscale electronic and optoelectronic devices. Such applications are discussed briefly later. A schematic of the resulting epitaxial structure is shown in figure 3 which highlights the induced radius of curvature at the substrate surface and in the island. Finally, strain stabilization indicative of coherent SK growth decreases with decreasing inter-island separation. At large areal island densities (smaller spacing), curvature effects from neighboring clusters will cause dislocation loops to form leading to defected island creation.
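To make the dμ/dn criterion concrete, the sign of the change in μ(n) between successive layers can be checked numerically. Below is a minimal Python sketch; the μ(n) values are invented by hand purely to mimic the SK scenario described above (rising while the wetting layer completes, then falling once accumulated strain favours islands), and the lattice constants in the second part are approximate bulk values for Ge and Si:

    # Toy layer chemical potentials μ(n) in arbitrary units (invented values
    # shaped like an SK system: FM-like at first, VW-like beyond layer 4).
    mu = [0.40, 0.62, 0.75, 0.81, 0.78, 0.70, 0.60]

    for n in range(len(mu) - 1):
        dmu = mu[n + 1] - mu[n]  # discrete stand-in for dμ/dn
        mode = "2D layer (FM-like)" if dmu > 0 else "3D island (VW-like)"
        print(f"layer {n + 1} -> {n + 2}: dmu = {dmu:+.2f}  {mode}")

    # Misfit strain ε = (a_f - a_s)/a_s for the Si/Ge system mentioned above,
    # using approximate bulk lattice constants (Ge ~5.658 Å, Si ~5.431 Å).
    a_f, a_s = 5.658, 5.431
    print(f"Ge-on-Si misfit strain: {(a_f - a_s) / a_s:.1%}")  # ~4.2%

In this toy run the sign of dμ flips between layers 4 and 5, which plays the role of the critical thickness h_c at which island nucleation begins.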
Monitoring SK growth Wide beam techniques Analytical techniques such as Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), and reflection high-energy electron diffraction (RHEED) have been extensively used to monitor SK growth. AES data obtained in situ during film growth in a number of model systems, such as Pd/W(100), Pb/Cu(110), Ag/W(110), and Ag/Fe(110), show characteristic segmented curves like those presented in figure 4. The height of the film Auger peaks, plotted as a function of surface coverage Θ, initially exhibits a straight line, which is indicative of AES data for FM growth. There is a clear break point at a critical adsorbate surface coverage, followed by another linear segment with a reduced slope. The paired break point and shallow line slope are characteristic of island nucleation; a similar plot for FM growth would exhibit many such line-and-break pairs, while a plot of the VW mode would be a single line of low slope. In some systems, reorganization of the 2D wetting layer results in decreasing AES peaks with increasing adsorbate coverage. Such situations arise when many adatoms are required to reach a critical nucleus size on the surface, and at nucleation the resulting adsorbed layer constitutes a significant fraction of a monolayer. After nucleation, metastable adatoms on the surface are incorporated into the nuclei, causing the Auger signal to fall. This phenomenon is particularly evident for deposits on a molybdenum substrate. Evolution of island formation during the SK transition has also been successfully measured using LEED and RHEED techniques. Diffraction data obtained via various LEED experiments have been effectively used in conjunction with AES to measure the critical layer thickness at the onset of island formation. In addition, RHEED oscillations have proven very sensitive to the layer-to-island transition during SK growth, with the diffraction data providing detailed crystallographic information about the nucleated islands. By following the time dependence of LEED, RHEED, and AES signals, extensive information on surface kinetics and thermodynamics has been gathered for a number of technologically relevant systems. Microscopies Unlike the techniques presented in the last section, in which the probe size can be relatively large compared to island size, surface microscopies such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), scanning tunneling microscopy (STM), and atomic force microscopy (AFM) offer the opportunity for direct viewing of deposit/substrate combination events. The extreme magnifications afforded by these techniques, often down to the nanometer length scale, make them particularly applicable for visualizing the strongly 3D islands. UHV-SEM and TEM are routinely used to image island formation during SK growth, enabling a wide range of information to be gathered, ranging from island densities to equilibrium shapes. AFM and STM have become increasingly utilized to correlate island geometry to the surface morphology of the surrounding substrate and wetting layer. These visualization tools are often used to complement quantitative information gathered during wide-beam analyses. Application to nanotechnology As mentioned previously, coherent island formation during SK growth has attracted increased interest as a means for fabricating epitaxial nanoscale structures, particularly quantum dots (QDs). Widely used quantum dots grown in the SK growth mode are based on the material combinations Si/Ge or InAs/GaAs.
Significant effort has been spent developing methods to control island organization, density, and size on a substrate. Techniques such as surface dimpling with a pulsed laser and control over growth rate have been successfully applied to alter the onset of the SK transition or even suppress it altogether. The ability to control this transition either spatially or temporally enables manipulation of physical parameters of the nanostructures, like geometry and size, which, in turn, can alter their electronic or optoelectronic properties (i.e. band gap). For example, Schwarz–Selinger, et al. have used surface dimpling to create surface miscuts on Si that provide preferential Ge island nucleation sites surrounded by a denuded zone. In a similar fashion, lithographically patterned substrates have been used as nucleation templates for SiGe clusters. Several studies have also shown that island geometries can be altered during SK growth by controlling substrate relief and growth rate. Bimodal size distributions of Ge islands on Si are a striking example of this phenomenon in which pyramidal and dome-shaped islands coexist after Ge growth on a textured Si substrate. Such ability to control the size, location, and shape of these structures could provide invaluable techniques for 'bottom-up' fabrication schemes of next-generation devices in the microelectronics industry. See also Epitaxy Thin films Molecular-beam epitaxy References Thin films Research in Bulgaria
Stranski–Krastanov growth
[ "Materials_science", "Mathematics", "Engineering" ]
2,448
[ "Nanotechnology", "Planes (geometry)", "Thin films", "Materials science" ]
14,553,052
https://en.wikipedia.org/wiki/Surface%20plasmon
Surface plasmons (SPs) are coherent delocalized electron oscillations that exist at the interface between any two materials where the real part of the dielectric function changes sign across the interface (e.g. a metal-dielectric interface, such as a metal sheet in air). SPs have lower energy than bulk (or volume) plasmons, which quantise the longitudinal electron oscillations about positive ion cores within the bulk of an electron gas (or plasma). The charge motion in a surface plasmon always creates electromagnetic fields outside (as well as inside) the metal. The total excitation, including both the charge motion and the associated electromagnetic field, is called either a surface plasmon polariton at a planar interface, or a localized surface plasmon for the closed surface of a small particle. The existence of surface plasmons was first predicted in 1957 by Rufus Ritchie. In the following two decades, surface plasmons were extensively studied by many scientists, the foremost of whom were T. Turbadar in the 1950s and 1960s, and E. N. Economou, Heinz Raether, E. Kretschmann, and A. Otto in the 1960s and 1970s. Information transfer in nanoscale structures, similar to photonics, by means of surface plasmons is referred to as plasmonics. Surface plasmon polaritons Excitation Surface plasmon polaritons can be excited by electrons or photons. In the case of photons, it cannot be done directly but requires a prism, a grating, or a defect on the metal surface. Dispersion relation At low frequency, an SPP approaches a Sommerfeld–Zenneck wave, where the dispersion relation (the relation between frequency and wavevector) is the same as in free space. At higher frequency, the dispersion relation bends over and reaches an asymptotic limit called the surface plasma frequency. For more details see surface plasmon polariton. Propagation length and skin depth As an SPP propagates along the surface, it loses energy to the metal due to absorption. It can also lose energy due to scattering into free space or into other directions. The electric field falls off evanescently perpendicular to the metal surface. At low frequencies, the SPP penetration depth into the metal is commonly approximated using the skin depth formula. In the dielectric, the field falls off far more slowly. SPPs are very sensitive to slight perturbations within the skin depth, and because of this, SPPs are often used to probe inhomogeneities of a surface. For more details, see surface plasmon polariton. Localized surface plasmons Localized surface plasmons arise in small metallic objects, including nanoparticles. Since the translational invariance of the system is lost, a description in terms of wavevector, as in SPPs, cannot be made. Also, unlike the continuous dispersion relation in SPPs, the electromagnetic modes of the particle are discretized. LSPs can be excited directly through incident waves; efficient coupling to the LSP modes corresponds to resonances and can be attributed to absorption and scattering, with increased local-field enhancements. LSP resonances largely depend on the shape of the particle; spherical particles can be studied analytically by Mie theory. Experimental applications The excitation of surface plasmons is frequently used in an experimental technique known as surface plasmon resonance (SPR). In SPR, the maximum excitation of surface plasmons is detected by monitoring the reflected power from a prism coupler as a function of incident angle or wavelength.
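The dispersion behaviour described above can be made concrete with a minimal Python sketch. It assumes a lossless Drude metal, eps_m(w) = 1 − (w_p/w)², together with the standard SPP dispersion relation for a flat metal–dielectric interface, k_spp = (w/c)·sqrt(eps_m·eps_d/(eps_m + eps_d)); the plasma frequency used is roughly that of silver, and all variable names are illustrative assumptions.

```python
import numpy as np

# SPP dispersion at a flat metal-dielectric interface:
#   k_spp(w) = (w / c) * sqrt(eps_m * eps_d / (eps_m + eps_d))
# with a lossless Drude model for the metal (an illustrative assumption):
#   eps_m(w) = 1 - (w_p / w) ** 2
c = 3.0e8        # speed of light, m/s
w_p = 1.37e16    # plasma frequency, rad/s (roughly silver)
eps_d = 1.0      # dielectric half-space (air)

# Stay below the asymptote w_p / sqrt(1 + eps_d), where eps_m + eps_d -> 0.
w = np.linspace(0.05, 0.95, 500) * w_p / np.sqrt(1.0 + eps_d)
eps_m = 1.0 - (w_p / w) ** 2
k_spp = (w / c) * np.sqrt(eps_m * eps_d / (eps_m + eps_d))

# The SPP branch lies entirely to the right of the free-space light line,
# which is why photons cannot excite SPPs directly and a prism, grating,
# or surface defect is needed to supply the missing momentum.
assert np.all(k_spp > w / c)
print(f"surface plasma frequency (asymptote): {w_p / np.sqrt(1 + eps_d):.3e} rad/s")
```

At low frequencies k_spp hugs the light line (the Sommerfeld–Zenneck regime mentioned above), while near the asymptote the wavevector grows without bound.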
SPR can be used to observe nanometer changes in thickness, density fluctuations, or molecular adsorption. Recent works have also shown that SPR can be used to measure the optical indices of multi-layered systems, where ellipsometry failed to give a result. Surface plasmon-based circuits have been proposed as a means of overcoming the size limitations of photonic circuits for use in high-performance data-processing nanodevices. The ability to dynamically control the plasmonic properties of materials in these nanodevices is key to their development. A new approach that uses plasmon-plasmon interactions has been demonstrated recently. Here the bulk plasmon resonance is induced or suppressed to manipulate the propagation of light. This approach has been shown to have high potential for nanoscale light manipulation and for the development of a fully CMOS-compatible electro-optical plasmonic modulator, said to be a future key component in chip-scale photonic circuits. Other surface effects, such as surface-enhanced Raman scattering and surface-enhanced fluorescence, are induced by the surface plasmons of noble metals; sensors based on surface plasmons have therefore been developed. In surface second harmonic generation, the second harmonic signal is proportional to the square of the electric field. The electric field is stronger at the interface because of the surface plasmon, resulting in a non-linear optical effect. This larger signal is often exploited to produce a stronger second harmonic signal. The wavelength and intensity of the plasmon-related absorption and emission peaks are affected by molecular adsorption, which can be used in molecular sensors. For example, a fully operational prototype device detecting casein in milk has been fabricated. The device is based on monitoring changes in plasmon-related absorption of light by a gold layer. See also Biosensor Dual-polarization interferometry Extraordinary optical transmission Free electron model Gap surface plasmon Heat-assisted magnetic recording Multi-parametric surface plasmon resonance Plasma oscillation Plasmonic lens Plasmonics (journal) Spinplasmonics Surface plasmon resonance microscopy Waves in plasmas Notes References Quasiparticles Plasmonics
Surface plasmon
[ "Physics", "Chemistry", "Materials_science" ]
1,224
[ "Plasmonics", "Matter", "Surface science", "Nanotechnology", "Condensed matter physics", "Quasiparticles", "Solid state engineering", "Subatomic particles" ]
14,553,158
https://en.wikipedia.org/wiki/Cylindrical%20harmonics
In mathematics, the cylindrical harmonics are a set of linearly independent functions that are solutions to Laplace's differential equation, \nabla^2 V = 0, expressed in cylindrical coordinates, ρ (radial coordinate), φ (polar angle), and z (height). Each function V_n(k) is the product of three terms, each depending on one coordinate alone. The ρ-dependent term is given by Bessel functions (which occasionally are also called cylindrical harmonics). Definition Each function V_n(k) of this basis consists of the product of three functions:

V_n(k; \rho, \varphi, z) = P_n(k, \rho) \, \Phi_n(\varphi) \, Z(k, z)

where (\rho, \varphi, z) are the cylindrical coordinates, and n and k are constants that differentiate the members of the set. As a result of the superposition principle applied to Laplace's equation, very general solutions to Laplace's equation can be obtained by linear combinations of these functions. Since all surfaces with constant ρ, φ and z are conicoid, Laplace's equation is separable in cylindrical coordinates. Using the technique of the separation of variables, a separated solution to Laplace's equation can be expressed as:

V = P(\rho) \, \Phi(\varphi) \, Z(z)

and Laplace's equation, divided by V, is written:

\frac{P''(\rho)}{P(\rho)} + \frac{1}{\rho}\frac{P'(\rho)}{P(\rho)} + \frac{1}{\rho^2}\frac{\Phi''(\varphi)}{\Phi(\varphi)} + \frac{Z''(z)}{Z(z)} = 0

The Z part of the equation is a function of z alone, and must therefore be equal to a constant:

\frac{Z''(z)}{Z(z)} = k^2

where k is, in general, a complex number. For a particular k, the Z(z) function has two linearly independent solutions. If k is real they are:

Z(k, z) = \cosh(kz) \quad \text{or} \quad \sinh(kz)

or by their behavior at infinity:

Z(k, z) = e^{kz} \quad \text{or} \quad e^{-kz}

If k is imaginary:

Z(k, z) = \cos(|k|z) \quad \text{or} \quad \sin(|k|z)

or:

Z(k, z) = e^{i|k|z} \quad \text{or} \quad e^{-i|k|z}

It can be seen that the Z(k,z) functions are the kernels of the Fourier transform or Laplace transform of the Z(z) function, and so k may be a discrete variable for periodic boundary conditions, or it may be a continuous variable for non-periodic boundary conditions. Substituting k^2 for Z''/Z, Laplace's equation may now be written:

\frac{P''(\rho)}{P(\rho)} + \frac{1}{\rho}\frac{P'(\rho)}{P(\rho)} + \frac{1}{\rho^2}\frac{\Phi''(\varphi)}{\Phi(\varphi)} + k^2 = 0

Multiplying by \rho^2, we may now separate the P and Φ functions and introduce another constant (n) to obtain:

\frac{\Phi''(\varphi)}{\Phi(\varphi)} = -n^2

\rho^2 \frac{P''(\rho)}{P(\rho)} + \rho \frac{P'(\rho)}{P(\rho)} + k^2 \rho^2 = n^2

Since \varphi is periodic, we may take n to be a non-negative integer and accordingly, the constants are subscripted. Real solutions for \Phi(\varphi) are

\Phi_n = \cos(n\varphi) \quad \text{or} \quad \sin(n\varphi)

or, equivalently:

\Phi_n = e^{in\varphi} \quad \text{or} \quad e^{-in\varphi}

The differential equation for \rho is a form of Bessel's equation. If k is zero, but n is not, the solutions are:

P_n(0, \rho) = \rho^n \quad \text{or} \quad \rho^{-n}

If both k and n are zero, the solutions are:

P_0(0, \rho) = \ln\rho \quad \text{or} \quad 1

If k is a real number we may write a real solution as:

P_n(k, \rho) = J_n(k\rho) \quad \text{or} \quad Y_n(k\rho)

where J_n and Y_n are ordinary Bessel functions. If k is an imaginary number, we may write a real solution as:

P_n(k, \rho) = I_n(|k|\rho) \quad \text{or} \quad K_n(|k|\rho)

where I_n and K_n are modified Bessel functions. The cylindrical harmonics for (k,n) are now the product of these solutions, and the general solution to Laplace's equation is given by a linear combination of these solutions:

V(\rho, \varphi, z) = \sum_n \int dk \, A_n(k) \, P_n(k, \rho) \, \Phi_n(\varphi) \, Z(k, z)

where the A_n(k) are constants with respect to the cylindrical coordinates and the limits of the summation and integration are determined by the boundary conditions of the problem. Note that the integral may be replaced by a sum for appropriate boundary conditions. The orthogonality of the J_n is often very useful when finding a solution to a particular problem. The \Phi_n(\varphi) and Z(k,z) functions are essentially Fourier or Laplace expansions, and form a set of orthogonal functions. When P_n(k,\rho) is simply J_n(k\rho), the orthogonality of J_n, along with the orthogonality relationships of \Phi_n(\varphi) and Z(k,z), allow the constants to be determined. If (x_k) is the sequence of the positive zeros of J_n then:

\int_0^1 J_n(x_k \rho) \, J_n(x_{k'} \rho) \, \rho \, d\rho = \frac{\delta_{kk'}}{2} \, J_{n+1}^2(x_k)

In solving problems, the space may be divided into any number of pieces, as long as the values of the potential and its derivative match across a boundary which contains no sources. Example: Point source inside a conducting cylindrical tube As an example, consider the problem of determining the potential of a unit source located at (\rho_0, \varphi_0, z_0) inside a conducting cylindrical tube (e.g.
an empty tin can) which is bounded above and below by the planes z = -L and z = L and on the sides by the cylinder \rho = a. (In MKS units, we will assume q/4\pi\epsilon_0 = 1.) Since the potential is bounded by the planes on the z axis, the Z(k,z) function can be taken to be periodic. Since the potential must be finite at the origin, we take the radial function to be the ordinary Bessel function J_n(k\rho), and k must be chosen so that one of its zeroes lands on the bounding cylinder. For the measurement point below the source point on the z axis, the potential will be:

\varphi(\rho, \varphi, z) = \sum_{n=-\infty}^{\infty} \sum_{r=1}^{\infty} \frac{4 \, e^{in(\varphi - \varphi_0)}}{a^2 \, k_{nr} \, J_{n+1}^2(k_{nr} a)} \, J_n(k_{nr}\rho) \, J_n(k_{nr}\rho_0) \, \frac{\sinh\left(k_{nr}(L + z)\right) \sinh\left(k_{nr}(L - z_0)\right)}{\sinh(2 k_{nr} L)} \qquad (z \le z_0)

where k_{nr} = x_{nr}/a, x_{nr} is the r-th zero of J_n and, from the orthogonality relationships for each of the functions, the coefficients take the form given. Above the source point:

\varphi(\rho, \varphi, z) = \sum_{n=-\infty}^{\infty} \sum_{r=1}^{\infty} \frac{4 \, e^{in(\varphi - \varphi_0)}}{a^2 \, k_{nr} \, J_{n+1}^2(k_{nr} a)} \, J_n(k_{nr}\rho) \, J_n(k_{nr}\rho_0) \, \frac{\sinh\left(k_{nr}(L - z)\right) \sinh\left(k_{nr}(L + z_0)\right)}{\sinh(2 k_{nr} L)} \qquad (z \ge z_0)

It is clear that when \rho = a or |z| = L, the above function is zero. It can also be easily shown that the two functions match in value and in the value of their first derivatives at z = z_0 (away from the source point itself). Point source inside cylinder Removing the plane ends (i.e. taking the limit as L approaches infinity) gives the field of the point source inside a conducting cylinder:

\varphi(\rho, \varphi, z) = \sum_{n=-\infty}^{\infty} \sum_{r=1}^{\infty} \frac{2 \, e^{in(\varphi - \varphi_0)}}{a^2 \, k_{nr} \, J_{n+1}^2(k_{nr} a)} \, J_n(k_{nr}\rho) \, J_n(k_{nr}\rho_0) \, e^{-k_{nr}|z - z_0|}

Point source in open space As the radius of the cylinder (a) approaches infinity, the sum over the zeroes of J_n becomes an integral, and we have the field of a point source in infinite space:

\varphi(\rho, \varphi, z) = \frac{1}{R} = \sum_{n=-\infty}^{\infty} \int_0^\infty dk \, e^{in(\varphi - \varphi_0)} \, J_n(k\rho) \, J_n(k\rho_0) \, e^{-k|z - z_0|}

and R is the distance from the point source to the measurement point:

R = \sqrt{\rho^2 + \rho_0^2 - 2\rho\rho_0\cos(\varphi - \varphi_0) + (z - z_0)^2}

Point source in open space at origin Finally, when the point source is at the origin, \rho_0 = z_0 = 0,

\varphi(\rho, \varphi, z) = \frac{1}{\sqrt{\rho^2 + z^2}} = \int_0^\infty J_0(k\rho) \, e^{-k|z|} \, dk

See also Spherical harmonics Notes References Differential equations
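As a quick numerical sanity check of the last identity, the short Python sketch below (assuming scipy is available, and truncating the infinite integration range at a finite kmax) compares the integral against the closed form 1/sqrt(ρ² + z²):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def phi_axis_source(rho: float, z: float, kmax: float = 200.0) -> float:
    """Evaluate the truncated integral of J_0(k*rho) * exp(-k*|z|) over [0, kmax].

    The factor exp(-k*|z|) decays quickly, so a modest kmax already
    reproduces the closed form to many digits when |z| is not tiny.
    """
    val, _ = quad(lambda k: j0(k * rho) * np.exp(-k * abs(z)), 0.0, kmax, limit=400)
    return val

rho, z = 1.3, 0.7
print(phi_axis_source(rho, z))  # ~0.6773
print(1.0 / np.hypot(rho, z))   # closed form 1 / sqrt(rho^2 + z^2)
```

The same pattern, Bessel functions in ρ paired with exponential or hyperbolic behaviour in z, is exactly the cylindrical-harmonic structure derived in the Definition section.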
Cylindrical harmonics
[ "Mathematics" ]
1,044
[ "Mathematical objects", "Differential equations", "Equations" ]
14,553,266
https://en.wikipedia.org/wiki/Allotropes%20of%20oxygen
There are several known allotropes of oxygen. The most familiar is molecular oxygen (O₂), present at significant levels in Earth's atmosphere and also known as dioxygen or triplet oxygen. Another is the highly reactive ozone (O₃). Others are: Atomic oxygen (O), a free radical. Singlet oxygen (¹O₂), one of two metastable states of molecular oxygen. Tetraoxygen (O₄), another metastable form. Solid oxygen, existing in six variously colored phases, of which one is octaoxygen (O₈, red oxygen) and another one metallic (ζ-oxygen). Atomic oxygen Atomic oxygen, denoted O or O1, is very reactive, as the individual atoms of oxygen tend to quickly bond with nearby molecules. Its lowest-energy electronic state is a spin triplet, designated by the term symbol ³P. On Earth's surface, it exists naturally for a very short time. In outer space, the presence of ample ultraviolet radiation results in a low Earth orbit atmosphere in which 96% of the oxygen occurs in atomic form. Atomic oxygen has been detected on Mars by Mariner, Viking, and the SOFIA observatory. Dioxygen The common allotrope of elemental oxygen on Earth, O₂, is generally known as oxygen, but may be called dioxygen, diatomic oxygen, molecular oxygen, dioxidene or oxygen gas to distinguish it from the element itself and from the triatomic allotrope ozone, O₃. As a major component (about 21% by volume) of Earth's atmosphere, elemental oxygen is most commonly encountered in the diatomic form. Aerobic organisms use atmospheric dioxygen as the terminal oxidant in cellular respiration in order to obtain chemical energy. The ground state of dioxygen is known as triplet oxygen, ³O₂, because it has two unpaired electrons. The first excited state, singlet oxygen, ¹O₂, has no unpaired electrons and is metastable. The doublet state requires an odd number of electrons, and so cannot occur in dioxygen without gaining or losing electrons, such as in the superoxide ion (O₂⁻) or the dioxygenyl ion (O₂⁺). The ground state of O₂ has a bond length of 121 pm and a bond energy of 498 kJ/mol. It is a colourless gas with a boiling point of −183 °C. It can be condensed from air by cooling with liquid nitrogen, which has a boiling point of −196 °C. Liquid oxygen is pale blue in colour, and is quite markedly paramagnetic due to the unpaired electrons; liquid oxygen contained in a flask suspended by a string is attracted to a magnet. Singlet oxygen Singlet oxygen is the common name used for the two metastable states of molecular oxygen (O₂) with higher energy than the ground state triplet oxygen. Because of the differences in their electron shells, singlet oxygen has different chemical and physical properties than triplet oxygen, including absorbing and emitting light at different wavelengths. It can be generated in a photosensitized process by energy transfer from dye molecules such as rose bengal, methylene blue or porphyrins, or by chemical processes such as spontaneous decomposition of hydrogen trioxide in water or the reaction of hydrogen peroxide with hypochlorite. Ozone Triatomic oxygen (ozone, O₃) is a very reactive allotrope of oxygen that is a pale blue gas at standard temperature and pressure. Liquid and solid O₃ have a deeper blue color than ordinary O₂, and they are unstable and explosive. In its gas phase, ozone is destructive to materials like rubber and fabric and is damaging to lung tissue. Traces of it can be detected as a pungent, chlorine-like smell, coming from electric motors, laser printers, and photocopiers, as it is formed whenever air is subjected to an electrical discharge.
It was named "ozon" in 1840 by Christian Friedrich Schönbein, from ancient Greek ὄζειν (ozein: "to smell") plus the suffix -on, commonly used at the time to designate a derived compound and anglicized as -one. Ozone is thermodynamically unstable and tends to react toward the more common dioxygen form. It is formed by the reaction of intact O₂ with atomic oxygen produced when UV radiation in the upper atmosphere splits O₂. Ozone absorbs strongly in the ultraviolet and in the stratosphere functions as a shield for the biosphere against mutagenic and other damaging effects of solar UV radiation (see ozone layer). Tropospheric ozone is formed near the Earth's surface by the photochemical disintegration of nitrogen dioxide in the exhaust of automobiles. Ground-level ozone is an air pollutant that is especially harmful for senior citizens, children, and people with heart and lung conditions such as emphysema, bronchitis, and asthma. The immune system produces ozone as an antimicrobial. Cyclic ozone Cyclic ozone is a theoretically predicted molecule in which the three oxygen atoms bond in an equilateral triangle instead of an open angle. Tetraoxygen Tetraoxygen (O₄) had been suspected to exist since the early 1900s, when it was known as oxozone. It was identified in 2001 by a team led by Fulvio Cacace at the University of Rome. The molecule was thought to be in one of the phases of solid oxygen later identified as O₈. Cacace's team suggested that O₄ probably consists of two dumbbell-like O₂ molecules loosely held together by induced dipole dispersion forces. Phases of solid oxygen There are six known distinct phases of solid oxygen. One of them is a dark-red O₈ cluster. When oxygen is subjected to a pressure of 96 GPa, it becomes metallic, in a similar manner to hydrogen, and becomes more similar to the heavier chalcogens, such as selenium (exhibiting a pink-red color in its elemental state), tellurium and polonium, both of which show significant metallic character. At very low temperatures, this phase also becomes superconducting. References Further reading Oxygen
Allotropes of oxygen
[ "Chemistry" ]
1,267
[ "Allotropes of oxygen", "Allotropes" ]
14,554,100
https://en.wikipedia.org/wiki/Substructural%20type%20system
Substructural type systems are a family of type systems analogous to substructural logics where one or more of the structural rules are absent or only allowed under controlled circumstances. Such systems can constrain access to system resources such as files, locks, and memory by keeping track of changes of state and prohibiting invalid states. Different substructural type systems Several type systems have emerged by discarding some of the structural rules of exchange, weakening, and contraction: Ordered type systems (discard exchange, weakening and contraction): Every variable is used exactly once in the order it was introduced. Linear type systems (allow exchange, but neither weakening nor contraction): Every variable is used exactly once. Affine type systems (allow exchange and weakening, but not contraction): Every variable is used at most once. Relevant type systems (allow exchange and contraction, but not weakening): Every variable is used at least once. Normal type systems (allow exchange, weakening and contraction): Every variable may be used arbitrarily. The explanation for affine type systems is best understood if rephrased as "every occurrence of a variable is used at most once". Ordered type system Ordered types correspond to noncommutative logic where exchange, contraction and weakening are discarded. This can be used to model stack-based memory allocation (contrast with linear types, which can be used to model heap-based memory allocation). Without the exchange property, an object may only be used when at the top of the modelled stack, after which it is popped off, resulting in every variable being used exactly once in the order it was introduced. Linear type systems Linear types correspond to linear logic and ensure that objects are used exactly once. This allows the system to safely deallocate an object after its use, or to design software interfaces that guarantee a resource cannot be used once it has been closed or transitioned to a different state. The Clean programming language makes use of uniqueness types (a variant of linear types) to help support concurrency, input/output, and in-place update of arrays. Linear type systems allow references but not aliases. To enforce this, a reference goes out of scope after appearing on the right-hand side of an assignment, thus ensuring that only one reference to any object exists at once. Note that passing a reference as an argument to a function is a form of assignment, as the function parameter will be assigned the value inside the function, and therefore such use of a reference also causes it to go out of scope. The single-reference property makes linear type systems suitable as programming languages for quantum computing, as it reflects the no-cloning theorem of quantum states. From the category theory point of view, no-cloning is a statement that there is no diagonal functor which could duplicate states; similarly, from the combinatory logic point of view, there is no K-combinator which can destroy states. From the lambda calculus point of view, a variable x can appear exactly once in a term. Linear type systems are the internal language of closed symmetric monoidal categories, much in the same way that simply typed lambda calculus is the language of Cartesian closed categories. More precisely, one may construct functors between the category of linear type systems and the category of closed symmetric monoidal categories. Affine type systems Affine types are a version of linear types allowing one to discard (i.e.
not use) a resource, corresponding to affine logic. An affine resource can be used at most once, while a linear one must be used exactly once. Relevant type system Relevant types correspond to relevant logic, which allows exchange and contraction, but not weakening, which translates to every variable being used at least once. The resource interpretation The nomenclature offered by substructural type systems is useful to characterize resource management aspects of a language. Resource management is the aspect of language safety concerned with ensuring that each allocated resource is deallocated exactly once. Thus, the resource interpretation is only concerned with uses that transfer ownership – moving, where ownership is the responsibility to free the resource. Uses that don't transfer ownership – borrowing – are not in scope of this interpretation, but lifetime semantics further restrict these uses to be between allocation and deallocation. Resource-affine types Under the resource interpretation, an affine type cannot be spent more than once. As an example, the same variant of Hoare's vending machine can be expressed in English, logic and in Rust: What it means for Coin to be an affine type in this example (which it is unless it implements the Copy trait) is that trying to spend the same coin twice is an invalid program that the compiler is entitled to reject: let coin = Coin {}; let candy = buy_candy(coin); // The lifetime of the coin variable ends here. let drink = buy_drink(coin); // Compilation error: Use of moved variable that does not possess the Copy trait. In other words, an affine type system can express the typestate pattern: Functions can consume and return an object wrapped in different types, acting like state transitions in a state machine that stores its state as a type in the caller's context – a typestate. An API can exploit this to statically enforce that its functions are called in a correct order. What it doesn't mean, however, is that a variable can't be used without using it up: // This function just borrows a coin: The ampersand means borrow. fn validate(_: &Coin) -> Result<(), ()> { Ok(()) } // The same coin variable can be used infinitely many times // as long as it is not moved. let coin = Coin {}; loop { validate(&coin)?; } What Rust is not able to express is a coin type that cannot go out of scope – that would take a linear type. Resource-linear types Under the resource interpretation, a linear type not only can be moved, like an affine type, but must be moved – going out of scope is an invalid program. { // Must be passed on, not dropped. let token = HotPotato {}; // Suppose not every branch does away with it: if !queue.is_full() { queue.push(token); } // Compilation error: Holding an undroppable object as the scope ends. } An attraction with linear types is that destructors become regular functions that can take arguments, can fail and so on. This may for example avoid the need to keep state that is only used for destruction. A general advantage of passing function dependencies explicitly is that the order of function calls – destruction order – becomes statically verifiable in terms of the arguments' lifetimes. Compared to internal references, this does not require lifetime annotations as in Rust. As with manual resource management, a practical problem is that any early return, as is typical of error handling, must achieve the same cleanup. This becomes pedantic in languages that have stack unwinding, where every function call is a potential early return.
However, as a close analogy, the semantics of implicitly inserted destructor calls can be restored with deferred function calls. Resource-normal types Under the resource interpretation, a normal type does not restrict how many times a variable can be moved from. C++ (specifically nondestructive move semantics) falls in this category. auto coin = std::make_unique<Coin>(); auto candy = buy_candy(std::move(coin)); auto drink = buy_drink(std::move(coin)); // This is valid C++: after the first move, coin is left as a null pointer, and moving it again simply passes that null pointer on. Programming languages The following programming languages support linear or affine types: ATS Clean Idris Mercury F* LinearML Alms Haskell with Glasgow Haskell Compiler (GHC) 9.0.1 or above Granule Rust Swift 5.9 and above See also Effect system Linear logic Affine logic References Type theory
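Mainstream dynamic languages have no substructural checking, but the "use exactly once" discipline can be approximated at runtime. The Python sketch below (all class and method names are hypothetical) wraps a value so that consuming it twice raises an error and dropping it unconsumed prints a warning; this is a dynamic illustration of the linear-versus-affine distinction, not the static guarantee the article is about.

```python
class LinearCell:
    """Runtime emulation of a linear value: consume exactly once.

    Real linear type systems reject violating programs at compile time;
    this class can only detect violations while the program runs.
    """

    def __init__(self, value):
        self._value = value
        self._consumed = False

    def consume(self):
        """Move the value out; a second call is a use-after-move error."""
        if self._consumed:
            raise RuntimeError("linear value used more than once")
        self._consumed = True
        value, self._value = self._value, None
        return value

    def __del__(self):
        # A linear value must not simply go out of scope unconsumed.
        # (An affine emulation would silently allow this case.)
        if not self._consumed:
            print("warning: linear value dropped without being consumed")

coin = LinearCell("coin")
candy = coin.consume()  # first use: fine
try:
    coin.consume()      # second use: rejected, like Rust's moved-value error
except RuntimeError as err:
    print(err)
```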
Substructural type system
[ "Mathematics" ]
1,662
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
14,554,303
https://en.wikipedia.org/wiki/Rozenburg%20refinery
The Rozenburg refinery is an oil refinery owned by Kuwait Petroleum Europort BV which is a subsidiary of Kuwait Petroleum International, KPI. It is sometimes referred to as the Europort refinery. The refinery has a capacity of 4 mtpa (80 kbpd) of crude oil and has a Nelson complexity index of approximately 6. See also Oil refinery Petroleum List of oil refineries External links Q8 Refineries Oil refineries in the Netherlands Port of Rotterdam
Rozenburg refinery
[ "Chemistry" ]
97
[ "Petroleum", "Petroleum stubs" ]
14,554,640
https://en.wikipedia.org/wiki/Rajiv%20Gandhi%20Institute%20of%20Petroleum%20Technology
Rajiv Gandhi Institute of Petroleum Technology (RGIPT), in Jais, Amethi (formerly in Raebareli), Uttar Pradesh, India, is a training and education institute focusing on STEM education and the petroleum industry. It is an institute of national importance, equivalent to the IITs. It was formally opened in July 2008. It has been accorded Institute of National Importance status and a governance structure similar to that of the IITs and NITs. It admits undergraduate students from the rank list of students who have qualified in the Joint Entrance Examination – Advanced (JEE Advanced). History 4 Jul 2007: The Union Cabinet approved the setting up of the Rajiv Gandhi Institute of Petroleum Technology at Jais in Amethi District of UP. 29 Nov 2007: The Rajya Sabha passed the Rajiv Gandhi Institute of Petroleum Technology (RGIPT) Bill 2007. 3 Dec 2007: The Lok Sabha passed the Rajiv Gandhi Institute of Petroleum Technology (RGIPT) Bill 2007. 20 Feb 2008: Congress President and UPA Chairperson Sonia Gandhi laid the foundation stone of the Rajiv Gandhi Institute of Petroleum Technology at Jais. 2008–2009: Start of the academic session. 2011: Former Prime Minister of India Manmohan Singh laid the foundations of the Rajiv Gandhi Institute of Petroleum Technology (Assam centre). 2012: First BTech batch graduated. 2013: Bangalore centre approved. 2014: Veerappa Moily, former Minister of Petroleum and Natural Gas, inaugurated the Rajiv Gandhi Institute of Petroleum Technology at Kambali Pura Village, Hoskote Taluk, Bangalore District. 2015: First convocation conducted, with Minister of Petroleum and Natural Gas Dharmendra Pradhan and other high-level dignitaries from the Ministry of Petroleum and Natural Gas as chief guests. 2016: Institute moved to the permanent campus at Jais. 22 Oct 2016: Union ministers Smriti Irani (Textiles), Dharmendra Pradhan (Petroleum and Natural Gas) and Prakash Javadekar (Human Resource Development) inaugurated the permanent campus at Jais. May 2017: The institute increased the student intake for the BTech program from 75 to 120 seats. 13 May 2017: Chief Minister of Assam Sarbananda Sonowal and Minister of Petroleum and Natural Gas Dharmendra Pradhan jointly launched full-scale construction work on the second campus at Sivasagar, Assam. Dec 2017: Sivasagar campus started academic sessions for diplomas in petroleum and chemical streams. Sept 2018: Bangalore campus started academic sessions for three MTech courses in the fields of Renewable Energy, Power and Energy Systems Engineering, and Energy Science and Technology. Campus Main campus The institute functioned for almost nine years on land leased from the Feroze Gandhi Institute of Engineering and Technology at Rae Bareli as its academic centre, with ITI Township as its residential centre. On 15 October 2016, the institute moved to a 300-acre newly constructed permanent campus at Jais, Uttar Pradesh. The infrastructure is not completely finished, but is expected to be completed within five years. The new campus has air-conditioned classrooms equipped with projectors, high-definition televisions, computers for the faculty, and video-conferencing equipment. LAN connectivity has been provided in every room, including hostels and classrooms, along with a permanent Wi-Fi setup. Jais, Amethi Campus Academic sessions started in June 2008 at a temporary campus at Rae Bareli. On 15 October 2016, the institute moved from the temporary campus to the permanent campus at Jais.
This campus offers BTech, MTech, MBA, and PhD programs, along with other research-related activities. The cost for setting up this campus was . From 2020 onwards, the institute began offering BTech courses in Computer Science Engineering and Electronics Engineering. In 2021, a new four-year B.Tech. programme in Mathematics & Computing was started at RGIPT, with the goal of setting the stage for developing skills in mathematics and computer science. Noida Campus This campus used to run the MBA and related courses. On 15 October 2016, the institute moved from the temporary campus to the permanent campus at Jais. Assam Campus Academic sessions started in 2017 at Sivasagar, Assam, on 37 acres as a first phase, while a further 63 acres is under construction. On 13 May 2017, Chief Minister of Assam Sarbananda Sonowal and Minister of Petroleum and Natural Gas Dharmendra Pradhan jointly launched full-scale construction work on the second campus at Sivasagar, Assam. This campus is being built at a cost of . Bangalore Campus The third campus of RGIPT, the "Energy Institute, Bengaluru", is in Bengaluru. In July 2013, the Karnataka government agreed to offer 150 acres of land to set up the Bangalore centre, which is to be Asia's first centre on fire and safety for the oil and gas sector. The Energy Institute, Bengaluru currently offers four M.Tech. programmes, namely M.Tech. in Renewable Energy, Energy Science and Technology, Power and Energy Systems Engineering, and Electric Vehicle Technology, and PhD programmes in all disciplines of engineering, specifically in the field of energy. The institute commenced its academic activities from its transit campus at NMIT Bangalore in 2018. This campus will initially house 10 research labs established over 1.8 lakh square feet in a research-cum-academic complex. An Energy Experience Centre, incubation/E-Cell, library, auditorium, seminar halls, etc. will support the activities of the campus. The institute coming up will cost . Organisation and administration Departments RGIPT has the following departments: Department of Computer Science and Engineering Department of Electronics Engineering Department of Petroleum Engineering Department of Renewable Energy Engineering Department of Chemical Engineering Department of Management Studies Department of Geology Department of Chemistry Department of Physics Department of English Department of Mathematical Sciences Department of Electrical Engineering Department of Computer Science and Design Engineering Department of Information Technology Future Departments Department of Instrumentation Engineering Department of Fire Engineering Department of Mechanical Engineering Department of Physics Budget Jais campus The total estimated capital cost: 861 crores (416 crores (Central Exchequer) + 250 crores (GOI) + 150 crores (Oil Industry Development Board) + 45 crores (Institute itself)). The institute's permanent campus infrastructure was provided with 129 crores during the Congress-led UPA government, and an additional 302 crores was provided by the current NDA government for completion of the campus. During fiscal 2016–17, the Ministry of Petroleum and Natural Gas allocated an additional 47 crores, while the institute remains free to generate more funds for operational purposes through college fees and other funds/awards. Sivasagar campus The Detailed Project Report states the cost of setting up and running the Sivasagar campus, covering land acquisition, construction of facilities, and running the institute.
Bangalore campus The Detailed Project Report states the cost of setting up and running the Bangalore campus, covering land acquisition, construction of facilities, and research-related activities. It is one of the most expensive research institutes in India. Academics Admissions The institute admits BTech students from the rank list of qualified students in the JEE Advanced Examination, and MTech students through GATE. For the MBA, the institute admits candidates on the basis of CAT scores. For admission to PhD programmes, candidates must have a valid scorecard of: GRE/NET (UGC/CSIR)/NBHM/DBT/GPAT/Rajiv Gandhi National Fellowship/Maulana Azad National Fellowship/DST Inspire Award or any similar fellowship. Rankings The National Institutional Ranking Framework (NIRF) ranked it 80 among engineering colleges in 2024. Academic programmes RGIPT offers courses in engineering, pure sciences, management and humanities with a focus on petroleum engineering. The programs and courses offered at RGIPT are changing as the school evolves into a full-fledged petroleum engineering university. Admission to the BTech courses is from candidates appearing for the JEE Advanced Examination. Admission to the MTech courses is through the Graduate Aptitude Test in Engineering (GATE). Admission to the MBA program is through the Common Admission Test (CAT). Interviewing cities are Delhi, Mumbai, Bangalore, Kolkata, Lucknow and Jais. Admissions to the MSc.Tech. and PhD courses are through examinations conducted by the institute at Jais. Entrance examinations are held at the Sivasagar campus for all diploma engineering courses. Integrated Aerospace Development Program RGIPT has collaborated with several pioneering institutes for advancements in aviation/aerospace and rocket fuels/petroleum technology. Indian Institute of Technology Kanpur, through an MOU, has agreed to be the mentor of the program. Fundamentally, the aerospace program is run by IET Raebareli under APJ Abdul Kalam Technical University. Currently, the program runs in complete collaboration with Indira Gandhi Rashtriya Uran Akademi. MNNIT Allahabad and the Indian Institute of Information Technology, Allahabad are providing academic assistance. In future, the Indian Institute of Science, Bangalore is said to be entering full-fledged resource (academic and managerial) collaboration with FGIET/IET Raebareli. The admission process depends on several factors, including the aptitude, intellect, personality and learning of the student. The selected students have qualified in at least one of the required admission tests, such as JEE Main/Advanced, WBJEE, SEE-AKTU, or the SAT. Students are provided on-campus residential facilities at Jais, Amethi. The educational program requires transitions between college departments/campuses on an annual even-semester basis. Student life Energia This is an annual three-day festival including sporting events, talent playoffs and similar activities. It was separated from the annual cultural festival in 2017, and since then sporting events have been held in this festival. Sawai Godara is the current Energy Chief and the main organizer of sports activities at the college. In 2019, this event marked its third anniversary, as sports had been part of the annual cultural fest since the beginning of the institute. This festival is currently financed by the institute, while part of the cost is raised through sponsorship.
Kaltarang Kaltarang is the annual cultural and sports fest of the Rajiv Gandhi Institute of Petroleum Technology, Amethi. It is a themed fest, generally held in February each year, since its first edition in 2011. It is a three-day fest with events such as rock band performances, a fashion show, rock band competitions, stand-up comedy, cyber gaming, themed events, and various sports events. See also List of autonomous higher education institutes in India References Universities and colleges in Amethi district Engineering colleges in Uttar Pradesh Amethi district Petroleum engineering schools Energy in Uttar Pradesh Institutions of Petroleum in India Educational institutions established in 2007 2007 establishments in Uttar Pradesh
Rajiv Gandhi Institute of Petroleum Technology
[ "Engineering" ]
2,185
[ "Petroleum engineering", "Petroleum engineering schools", "Engineering universities and colleges" ]
14,555,039
https://en.wikipedia.org/wiki/International%20Electrical%20Congress
The International Electrical Congress was a series of international meetings, from 1881 to 1904, in the then new field of applied electricity. The first meeting was initiated by the French government and included official national representatives, leading scientists, and others. Subsequent meetings also included official representatives, leading scientists, and others. The primary aims were to develop reliable standards, both in relation to electrical units and electrical apparatus. Historical background In 1881, both within and across countries, different electrical units were being used. There were at least 12 different units of electromotive force, 10 different units of electric current and 15 different units of resistance. A number of international Congresses were held, variously referred to as the International Electrical Congress, Electrical Conference, and similar variations. Secondary sources make different judgments about how to classify the Congresses. In this article, the Congresses with representatives from national governments are identified as the International Electrical Congress. Other Congresses — often addressing the same issues — are identified here as Concurrent Related International Electrical Congresses. Some of these related conferences were devoted to preparing for an International Electrical Congress. In 1906 the International Electrotechnical Commission was created. Congresses organised under its auspices were also sometimes referred to as the International Electrical Congress. In this article, Congresses organized by the Commission are listed under International Electrotechnical Congresses, while other related Congresses are listed under Related International Electrotechnical Conferences. International Electrical Congress 1881 in Paris Held from 15 September to 5 October 1881, in connection with the International Exposition of Electricity. Adolphe Cochery, Minister of Posts and Telegraphs of the French Government, was the chairman. At the Congress, William Thomson (United Kingdom), Hermann von Helmholtz (Germany), and (Italy) were elected as foreign vice-presidents. About 200–250 persons participated, and a proceedings was published in 1882. Notable participants included Helmholtz, Clausius, Kirchhoff, Werner Siemens, Ernst Mach, Rayleigh, and Lenz, among others. Important events The three main topics for the Congress were: electrical units, improvements in international telegraphy, and various applications of electricity. The Congress resolved to endorse the 1873 British Association for the Advancement of Science proposal for defining the ohm and the volt as practical units, and also made resolutions to define the ampere, coulomb and farad as the units for current, quantity, and capacity respectively, to complete the practical system. It also resolved that an international committee should conduct new tests to determine the length of the column of mercury for measuring the ohm. 1893 in Chicago Held from 21 to 25 August, in connection with the World's Columbian Exposition, with almost 500 participants. Elisha Gray was the Congress president. A proceedings was published. Refinements to the units of measurement, including the Clark cell, were discussed. The Congress laid down rules for the physical representation of the ohm, ampere and volt. The ohm and ampere were defined in terms of the CGS electromagnetic system. The units were named "international" to distinguish them from the 1881 proposals, hence the International System of Electrical and Magnetic Units.
1900 in Paris Held 18–25 August, in connection with the Paris Exposition Universelle. Éleuthère Mascart was the congress president. There were more than 900 participants, about half of whom were from France, and about 120 technical papers were presented. A two-volume proceedings was published in 1901. The Congress dealt mainly with magnetic units. During this congress, names were proposed for four magnetic-circuit units in the C.G.S. system. Only two were accepted by vote: the C.G.S. unit of magnetic flux (Φ) was named the maxwell, and the C.G.S. unit of magnetising force (or magnetic field intensity, H) was named the gauss. Some delegates mistakenly believed and reported that the gauss was adopted as the C.G.S. unit of flux density (B). This mistake has been reproduced in contemporary texts, which have cited a mistaken report. It is relevant to note that the Congress's official formulation for the gauss was in French, champ magnétique, which translates into English as "magnetic field", a term which has been used to refer both to (B) and (H), as noted at magnetic field. In 1930 the International Electrotechnical Commission decided that the magnetic field strength (H) was different from the magnetic flux density (B), but assigned the gauss to the magnetic flux density (B), in contrast to the decision of this Congress. 1904 in St. Louis, Missouri Held from 12 to 17 September 1904, in connection with the Louisiana Purchase Exposition. Recommended two permanent international commissions, one on electrical units and standards, the other on the unification of nomenclature and the characteristics of electrical machines and apparatus. These recommendations are considered the seed that initiated the creation of the International Electrotechnical Commission in 1906. Concurrent Related International Electrical Congresses During the period that the Electrical Congresses were held, other conferences and international Congresses were held, sometimes in preparation for the official Electrical Congresses. These events are listed here. 1882 in Paris Conférence internationale pour la détermination des unités électriques (International Conference for the Determination of Electrical Units) Held 16–26 October. It was motivated by a resolution from the 1881 International Electrical Congress. A verbal transcript of the conference was published. 1884 in Paris International Conference for the Determination of Electrical Units 1889 in Paris International Congress of Electricians Held 24–31 August, in connection with the Exposition universelle de 1889. About 530 participants from at least 11 countries. Adopted several units, including practical units of power (watt) and work (joule), where 1 watt = 10⁷ erg/second and 1 joule = 10⁷ erg. Considered practical magnetic units, but did not make any resolutions or recommendations. 1891 in Frankfurt Held 7–12 September, in connection with the International Electrotechnical Exhibition (Die Internationale Elektrotechnische Ausstellung 1891), organized by the Elektrotechnische Gesellschaft. Galileo Ferraris was a vice-president at the conference. There were 715 participants (473 from Germany and 243 from other countries, including Austria, United Kingdom, USA, and France). An official report of the conference was published.
Papers and discussions were organised in five main areas: Theory and Measuring Science; Strong Current Technology; Signalling, Telegraphy, and Telephony; Electrochemistry and Electric Current Applications; and Legislation to Mediate Conflicts between Cities around the different currents used for electric lights, telephones, and telegraphs. 1892 in Edinburgh Held in connection with the British Association for the Advancement of Science annual meeting. 1896 in Geneva Held 4–9 August, in connection with the Swiss National Exhibition. Insufficient and late communication about the organization of the Congress hampered widespread participation, so the conference had about 200 participants, mostly from Switzerland, Austria, Germany and Belgium. Topics for discussion were magnetic units, photometric units, the long-distance transmission of power, the protection of high-tension lines against atmospheric discharge, and the problems and challenges of electric railway operation. International Electrotechnical Congress 1908 in London International Conference on Electric Units and Standards. Held in October. Organized by the Commission on Electric Units and Standards of the International Electrotechnical Commission. The conference saw the formal adoption of the "international units" (e.g., the international ohm and international ampere), which were proposed originally at the 1893 meeting of the International Electrical Congress in Chicago. 1911 in Turin Held 10–17 September, organized by the Italian Electrotechnical Committee of the International Electrotechnical Commission. 1915 in San Francisco Was to be held 13–18 September, organized by the American Institute of Electrical Engineers, but was cancelled because of the outbreak of World War I. Related International Electrotechnical Conferences 1905 in Berlin Internationale Konferenz über Elektrische Masseinheiten (International Conference on Electrical Units) Held 23–25 October at the Physikalisch-Technische Reichsanstalt in Charlottenburg. The 1904 Congress had recommended holding an international conference to address discrepancies in the electrical units and their interpretation. Emil Warburg, president of the Physikalisch-Technische Reichsanstalt in Germany, invited representatives from the corresponding national laboratories in the United States (National Bureau of Standards) and the United Kingdom (National Physical Laboratory), and from the official standards commissions in Austria and Belgium, to an informal conference on electrical standards and units. Additionally, Mascart (France), Rayleigh (United Kingdom) and Carhart (USA) were invited because of their expertise and influence. Thirteen of the fifteen invited persons participated in the conference: six from the Reichsanstalt, two from the Belgian Commission on Electrical Units, two from the Austrian Commission on Standardization, Richard Glazebrook from the National Physical Laboratory, Mascart, and Carhart. The non-attendees were Samuel Wesley Stratton, director of the National Bureau of Standards, who sent three papers outlining the positions and proposals of the Bureau, and Rayleigh. A proceedings was published. The conference concentrated on the redefinition of the ohm, ampere, and volt, as resolved at the 1904 Congress. The aim was to attain true international uniformity in the definitions of these concepts. The main question was whether the ohm, ampere, and volt should be independent of each other, or whether only two should be defined, and which two.
The conference concluded that only two electrical units should be taken as fundamental: the international ohm and the international ampere. It also adopted the Weston cadmium cell as the standard cell, and added rules about the preparation and use of the mercury tube, whose geometry had been specified at the 1893 Congress. The conference resolved that another international conference should be held within a year to establish agreement on the electrical standards in use, because different countries had different laws about electrical units. 1908 in Marseille Held 14–19 September, in connection with L'Exposition internationale des applications de l'électricité. A three-volume proceedings was published. References International standards International conferences 1881 conferences 1893 conferences 1900 conferences 1904 conferences 1908 conferences History of electrical engineering
International Electrical Congress
[ "Engineering" ]
2,041
[ "Electrical engineering", "History of electrical engineering" ]
14,556,061
https://en.wikipedia.org/wiki/Energy%20Research%20Institute%20of%20the%20Russian%20Academy%20of%20Sciences
Founded in July 1985, the Energy Research Institute of the Russian Academy of Sciences (ERIRAS) was originally an outgrowth of the general energy department at the Institute for High Temperatures RAS (IVTAN). The staff of the newly formed organization comprised professionals gathered from other scientific organizations. At the time of its creation, ERIRAS' challenge was to develop the basic content and quantitative substantiation for the Energy Program of the USSR. The first director of ERIRAS was Lev Aleksandrovich Milentyev, an Academician of the Russian Academy of Sciences. At present, Academician Aleksei Aleksandrovich Makarov is the director of the institute. The institute is currently subject to sanctions. Organization The Institute employs approximately 80 staff members. It comprises eight scientific departments: Laboratory for the study of the interconnections between economics and energy Laboratory for the study of the improvement of energy consumption and energy savings Laboratory for the modeling of energy markets Laboratory for research on the methodology of energy policy development Laboratory for the study of the regulation and development of oil and gas systems Laboratory for the study of the regulation and development of electricity and heat systems Laboratory for the study of the regulation and development of the coal industry Centre for the study of international energy markets Activities The institute's mission is to find solutions to a wide range of contemporary challenges, including: efficient energy production and consumption techniques to promote energy savings; appropriate pricing mechanisms for energy markets; and legislation, tax policy and other general issues related to domestic energy policy. ERIRAS' work is divided into the following research areas: the study of the regularities of energy development the modeling of energy development the theory and methods of energy system studies the creation of a scientific basis for energy policy and the mechanisms for its implementation the identification of the rational spheres and magnitudes of energy savings, and the mechanisms for energy saving policy implementation which take into consideration environmental concerns the study of the priorities in energy technical policy the identification of rational mechanisms for the regulation of energy development within a market framework, including primary legislation, pricing techniques, and tax and investment policy the forecasting of energy markets the creation of a scientific basis for the development of the oil & gas industry the creation of a scientific basis for the development of the energy sectors (electricity & heat) the creation of a scientific basis for the development of the coal industry References External links Official site of ERIRAS Systems thinking Climate change organizations Environmental research institutes Institutes of the Russian Academy of Sciences Energy research institutes Research institutes in the Soviet Union Research institutes established in 1985 1985 establishments in the Soviet Union
Energy Research Institute of the Russian Academy of Sciences
[ "Engineering", "Environmental_science" ]
508
[ "Energy research institutes", "Energy organizations", "Environmental research institutes", "Environmental research" ]
14,556,606
https://en.wikipedia.org/wiki/Dehydrocholic%20acid
Dehydrocholic acid is a synthetic bile acid, manufactured by the oxidation of cholic acid. It acts as a hydrocholeretic, increasing bile output to clear an increased bile acid load.

References

Bile acids Cholanes Ketones
Dehydrocholic acid
[ "Chemistry" ]
50
[ "Ketones", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
14,556,621
https://en.wikipedia.org/wiki/Idoneal%20number
In mathematics, Euler's idoneal numbers (also called suitable numbers or convenient numbers) are the positive integers D such that any integer expressible in only one way as x² ± Dy² (where x² is relatively prime to Dy²) is a prime power or twice a prime power. In particular, a number that has two distinct representations as a sum of two squares is composite. Every idoneal number generates a set containing infinitely many primes and missing infinitely many other primes.

Definition

A positive integer n is idoneal if and only if it cannot be written as ab + bc + ac for distinct positive integers a, b, and c. It is sufficient to consider the set of numbers n + k² with k² ≤ 3n and gcd(n, k) = 1; if all these numbers are of the form p, p², 2p or 2^s for some integer s, where p is a prime, then n is idoneal.

Conjecturally complete listing

The 65 idoneal numbers found by Leonhard Euler and Carl Friedrich Gauss and conjectured to be the only such numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 18, 21, 22, 24, 25, 28, 30, 33, 37, 40, 42, 45, 48, 57, 58, 60, 70, 72, 78, 85, 88, 93, 102, 105, 112, 120, 130, 133, 165, 168, 177, 190, 210, 232, 240, 253, 273, 280, 312, 330, 345, 357, 385, 408, 462, 520, 760, 840, 1320, 1365, and 1848. Results of Peter J. Weinberger from 1973 imply that at most two other idoneal numbers exist, and that the list above is complete if the generalized Riemann hypothesis holds (some sources incorrectly claim that Weinberger's results imply that there is at most one other idoneal number).

See also

List of unsolved problems in mathematics

Notes

References

Z. I. Borevich and I. R. Shafarevich, Number Theory. Academic Press, NY, 1966, pp. 425–430.
L. Euler, "An illustration of a paradox about the idoneal, or suitable, numbers", 1806.
G. Frei, "Euler's convenient numbers", Math. Intell. Vol. 7 No. 3 (1985), 55–58 and 64.
O-H. Keller, "Über die 'Numeri idonei' von Euler", Beiträge Algebra Geom., 16 (1983), 79–91. [Math. Rev. 85m:11019]
G. B. Mathews, Theory of Numbers, Chelsea, no date, p. 263.
P. Ribenboim, "Galimatias Arithmeticae", Mathematics Magazine 71(5), 339, 1998, MAA; or My Numbers, My Friends, Chap. 11, Springer-Verlag, 2000, NY.
J. Steinig, "On Euler's idoneal numbers", Elemente Math., 21 (1966), 73–88.
A. Weil, Number theory: an approach through history; from Hammurapi to Legendre, Birkhäuser, Boston, 1984; see p. 188.
P. Weinberger, "Exponents of the class groups of complex quadratic fields", Acta Arith., 22 (1973), 117–124.
Ernst Kani, "Idoneal Numbers And Some Generalizations", Ann. Sci. Math. Québec 35, No. 2 (2011), 197–227.

External links

K. S. Brown, Mathpages, Numeri Idonei
M. Waldschmidt, Open Diophantine problems

Integer sequences Unsolved problems in number theory Leonhard Euler
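To illustrate the ab + bc + ac characterization given in the Definition section above, here is a minimal brute-force sketch in Python (the function name and the search bounds are illustrative, not taken from the sources cited). It enumerates only the pairs (a, b) for which the smallest possible value of ab + bc + ac still fits under n, then solves for c.

def is_idoneal(n):
    # n is idoneal iff n != ab + bc + ac for distinct 0 < a < b < c.
    a = 1
    # Smallest achievable value for a given a uses b = a + 1, c = a + 2.
    while a * (a + 1) + (a + 1) * (a + 2) + (a + 2) * a <= n:
        b = a + 1
        # Smallest achievable value for given a, b uses c = b + 1.
        while a * b + b * (b + 1) + (b + 1) * a <= n:
            # Solve ab + bc + ca = n for c; check it is an integer > b.
            num, den = n - a * b, a + b
            if num % den == 0 and num // den > b:
                return False
            b += 1
        a += 1
    return True

idoneal = [n for n in range(1, 1849) if is_idoneal(n)]
print(len(idoneal), idoneal[:10], idoneal[-3:])

Assuming the characterization holds as stated, running the search up to 1848 should reproduce exactly the 65 numbers of the Euler–Gauss list above.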
Idoneal number
[ "Mathematics" ]
794
[ "Sequences and series", "Unsolved problems in mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Unsolved problems in number theory", "Combinatorics", "Mathematical problems", "Numbers", "Number theory" ]
14,557,176
https://en.wikipedia.org/wiki/Preisach%20model%20of%20hysteresis
In electromagnetism, the Preisach model of hysteresis is a model of magnetic hysteresis. Originally, it generalized hysteresis as the relationship between the magnetic field and magnetization of a magnetic material as the parallel connection of independent relay hysterons. It was first suggested in 1935 by Ferenc (Franz) Preisach in the German academic journal Zeitschrift für Physik.

In the field of ferromagnetism, the Preisach model is sometimes thought to describe a ferromagnetic material as a network of small independently acting domains, each magnetized to a value of either h or −h. A sample of iron, for example, may have evenly distributed magnetic domains, resulting in a net magnetic moment of zero.

Mathematically similar models seem to have been independently developed in other fields of science and engineering. One notable example is the model of capillary hysteresis in porous materials developed by Everett and co-workers. Since then, following the work of people like M. Krasnosel'skii, A. Pokrovskii, A. Visintin, and I. D. Mayergoyz, the model has become widely accepted as a general mathematical tool for the description of hysteresis phenomena of different kinds.

Nonideal relay

The relay hysteron is the fundamental building block of the Preisach model. It is described as a two-valued operator denoted by R(α,β). Its input–output map takes the form of a loop: for a relay of magnitude 1, α defines the "switch-off" threshold and β defines the "switch-on" threshold.

Graphically, if the input x is less than α, the output y is "low" or "off". As we increase x, the output remains low until x reaches β, at which point the output switches "on". Further increasing x produces no change. When x is decreased, y does not go low until x reaches α again. The relay operator thus traces out a loop, and its next state depends on its past state.

Mathematically, the output of R(α,β) is expressed as:

y(x) = +1 if x ≥ β; −1 if x ≤ α; k if α < x < β

where k = +1 if, the last time x was outside the boundaries α < x < β, it was in the region x ≥ β; and k = −1 if, the last time x was outside those boundaries, it was in the region x ≤ α. This definition of the hysteron shows that the current value of the complete hysteresis loop depends upon the history of the input variable x.

Discrete Preisach model

The Preisach model consists of many relay hysterons connected in parallel, given weights, and summed; this can be visualized by a block diagram. Each of these relays has different α and β thresholds and is scaled by a weight μ. With an increasing number of relays N, the true hysteresis curve is approximated better. In the limit as N approaches infinity, we obtain the continuous Preisach model.

Preisach plane

One of the easiest ways to look at the Preisach model is using a geometric interpretation. Consider a plane of coordinates (α, β). On this plane, each point is mapped to a specific relay hysteron R(α,β). Each relay can be plotted on this so-called Preisach plane with its (α, β) values. Depending on their distribution on the Preisach plane, the relay hysterons can represent hysteresis with good accuracy.

We consider only the half-plane β ≥ α, as any other case does not have a physical equivalent in nature. Next, we take a specific point on the half-plane and build a right triangle by drawing two lines parallel to the axes, both from the point to the line β = α.

We now present the Preisach density function, denoted μ(α, β). This function describes the amount of relay hysterons of each distinct pair of values (α, β). As a default we say that μ(α, β) = 0 outside the right triangle.
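As an illustration of the discrete model described above, here is a minimal numerical sketch in Python. The class names, the uniform weighting, and the threshold grid are illustrative assumptions, not part of Preisach's formulation: N relays with thresholds sampled on the half-plane β ≥ α are updated in parallel and their outputs averaged.

import numpy as np

class Relay:
    # Nonideal relay R(alpha, beta): +1 above beta, -1 below alpha,
    # previous state retained in between (the hysteresis memory).
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, x):
        if x >= self.beta:
            self.state = 1
        elif x <= self.alpha:
            self.state = -1
        return self.state  # unchanged while alpha < x < beta

class DiscretePreisach:
    # Parallel connection of relays with equal weights (uniform density).
    def __init__(self, n=40, lo=-1.0, hi=1.0):
        grid = np.linspace(lo, hi, n)
        # Only points on the half-plane beta >= alpha are physical.
        self.relays = [Relay(a, b) for a in grid for b in grid if b >= a]

    def output(self, x):
        return sum(r.update(x) for r in self.relays) / len(self.relays)

# Sweep the input up and then down; the two branches differ,
# tracing out the major hysteresis loop.
model = DiscretePreisach()
up = np.linspace(-1.0, 1.0, 11)
for x in np.concatenate([up, up[::-1]]):
    print(f"x = {x:+.1f} -> y = {model.output(x):+.3f}")

Replacing the equal weights with a nonuniform density μ(α, β) keeps the same loop structure while allowing measured hysteresis curves to be matched more closely.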
A modified formulation of the classical Preisach model has been presented, allowing analytical expression of the Everett function. This makes the model considerably faster and especially adequate for inclusion in electromagnetic field computation or electric circuit analysis codes.

Vector Preisach model

The vector Preisach model is constructed as the linear superposition of scalar models. To account for the uniaxial anisotropy of the material, the Everett functions are expanded in Fourier series. In this case, the measured and simulated curves are in very good agreement. Another approach uses a different relay hysteron: closed surfaces defined on the 3D input space. In general, a spherical hysteron is used for vector hysteresis in 3D, and a circular hysteron is used for vector hysteresis in 2D.

Applications

The Preisach model has been applied to model hysteresis in a wide variety of fields, including the study of irreversible changes in soil hydraulic conductivity as a result of saline and sodic conditions, the modeling of soil water retention, and the effect of stress and strain on soil and rock structures.

See also

Jiles–Atherton model
Stoner–Wohlfarth model

References

External links

University College, Cork Hysteresis Tutorial
Budapest University of Technology and Economics, Hungary
Matlab implementation of the Preisach model developed by Zs. Szabó.
Python implementation of Preisach Model.
Matlab implementation of Preisach Model.

Magnetic hysteresis Hysteresis
Preisach model of hysteresis
[ "Physics", "Materials_science", "Engineering" ]
1,063
[ "Physical phenomena", "Hysteresis", "Magnetic hysteresis", "Materials science" ]
14,557,383
https://en.wikipedia.org/wiki/Lucky%20numbers%20of%20Euler
Euler's "lucky" numbers are positive integers n such that for all integers k with 1 ≤ k < n, the polynomial k² − k + n produces a prime number.

Characteristics

When k is equal to n, the value cannot be prime, since n² − n + n = n² is divisible by n. Since the polynomial can be written as k(k − 1) + n, using the integers k with −(n − 1) < k ≤ 0 produces the same set of numbers as 1 ≤ k < n. These polynomials are all members of the larger set of prime-generating polynomials.

Leonhard Euler published the polynomial k² − k + 41, which produces prime numbers for all integer values of k from 1 to 40. Only 6 lucky numbers of Euler exist, namely 2, 3, 5, 11, 17 and 41. Note that these numbers are all prime numbers.

The primes of the form k² − k + 41 are 41, 43, 47, 53, 61, 71, 83, 97, 113, 131, 151, 173, 197, 223, 251, 281, 313, 347, 383, 421, 461, 503, 547, 593, 641, 691, 743, 797, 853, 911, 971, ...

Other lucky numbers

Euler's lucky numbers are unrelated to the "lucky numbers" defined by a sieve algorithm. In fact, the only number which is both lucky and Euler-lucky is 3, since all other Euler-lucky numbers are congruent to 2 modulo 3, but no lucky numbers are congruent to 2 modulo 3.

See also

Heegner number
List of topics named after Leonhard Euler
Formula for primes
Ulam spiral

References

Literature

Le Lionnais, F. Les Nombres Remarquables. Paris: Hermann, pp. 88 and 144, 1983.
Leonhard Euler, Extrait d'une lettre de M. Euler le père à M. Bernoulli concernant le Mémoire imprimé parmi ceux de 1771, p. 318 (1774). Euler Archive - All Works. 461.

External links

Integer sequences Prime numbers Leonhard Euler
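A direct check of the definition above is straightforward; the sketch below (Python; function names are illustrative) confirms that 2, 3, 5, 11, 17 and 41 are the only Euler lucky numbers below 100. Note that n = 1 satisfies the condition vacuously (there is no k with 1 ≤ k < 1), so the search starts at 2.

def is_prime(m):
    # Trial division, sufficient for the small values involved here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_euler_lucky(n):
    # n is Euler lucky if k^2 - k + n is prime for every 1 <= k < n.
    return all(is_prime(k * k - k + n) for k in range(1, n))

print([n for n in range(2, 100) if is_euler_lucky(n)])
# Expected output: [2, 3, 5, 11, 17, 41]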
Lucky numbers of Euler
[ "Mathematics" ]
421
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
14,557,626
https://en.wikipedia.org/wiki/Phillips%20disaster%20of%201989
On 23 October 1989 at approximately 1:05 PM Central Daylight Time, a series of explosions occurred at Phillips Petroleum Company's Houston Chemical Complex (HCC) in Pasadena, Texas, near the Houston Ship Channel. The initial blast registered 3.5 on the Richter scale, and the resulting fires took 10 hours to bring under control, as efforts to battle the fire were hindered by water pipes for the fire hydrants that had been damaged in the blast. The initial explosion was found to have resulted from a release of extremely flammable process gases used to produce high-density polyethylene, a plastic used for various consumer food container products. The US Occupational Safety and Health Administration fined Phillips Petroleum Company $5,666,200 and fined Fish Engineering and Construction, Inc., the maintenance contractor, $729,600. The event killed 23 employees and injured 314.

Prior to the disaster

The HCC produced large quantities per year of high-density polyethylene (HDPE), a plastic material used to make milk bottles and other containers. Approximately 1500 people worked at the facility, including 905 company employees and approximately 600 daily contract employees, who were engaged primarily in regular maintenance activities and new plant construction.

Cause

The accident resulted from a release of extremely flammable process gases that occurred during regular maintenance operations on one of the plant's polyethylene reactors. More than 85,000 pounds of highly flammable gases were released through an open valve almost instantaneously. During routine maintenance, isolation valves were closed and the compressed air hoses that actuated them were physically disconnected as a safety measure. The air connections for opening and closing this valve were identical, and had been improperly reversed when last re-connected. As a result, the valve would have been open while the switch in the control room was in the "valve closed" position. The valve therefore opened when it was expected to stay closed, and released the reactor contents into the air.

A vapor cloud formed and travelled rapidly through the polyethylene plant. Within 90 to 120 seconds, the vapor cloud came into contact with an ignition source and exploded with the force of 2.4 tons of TNT. Ten to fifteen minutes later, that was followed by the explosion of the isobutane storage tank, then by the catastrophic failure of another polyethylene reactor, and finally by other explosions, probably about six in total.

Explosions

The incident started at approximately 1:05 PM local time on October 23, 1989, at 1400 Jefferson Road, Pasadena, Texas. A powerful and devastating explosion and fire ripped through the HCC, killing 23 people—all working at the facility—and injuring 314 others (185 Phillips Petroleum Company employees and 129 contract employees). In addition to the loss of life and injuries, the explosion affected all facilities within the complex, causing $715.5 million worth of damage plus an additional business disruption loss estimated at $700 million. The two polyethylene production plants nearest the source of the blast were destroyed, and in the HCC administration building, nearly 0.5 mile away, windows were shattered and bricks ripped out. The initial explosion was equivalent to an earthquake registering 3.5 on the Richter scale and threw debris as far as six miles away.

Early response

The initial response was provided by the Phillips Petroleum Company fire brigade, which was soon joined by members of the Channel Industries Mutual Aid association (CIMA).
Cooperating governmental agencies were the Texas Air Control Board, the Harris County Pollution Control Board, the Federal Aviation Administration (FAA), the U.S. Coast Guard, the Occupational Safety and Health Administration (OSHA) and the U.S. Environmental Protection Agency (EPA).

Firefighting

The firefighting water system at the HCC was part of the process water system. When the first explosion occurred, some fire hydrants were sheared off at ground level by the blast. The result was inadequate water pressure for firefighting. The shut-off valves which could have been used to prevent the loss of water from ruptured lines in the plant were out of reach in the burning wreckage. No remotely operated fail-safe isolation valves existed in the combined plant/firefighting water system. In addition, the regular-service fire-water pumps were disabled by the fire, which destroyed their electrical power cables. Of the three backup diesel-operated fire pumps, one had been taken out of service, and one ran out of fuel in about an hour.

Firefighting water was brought in by hoses laid to remote sources: settling ponds, a cooling tower, a water main at a neighboring plant, and even the Houston Ship Channel. The fire was brought under control within about 10 hours as a result of the combined efforts of fire brigades from other nearby companies, local fire departments, and the Phillips Petroleum Company foam trucks and fire brigade.

Search and rescue

Search and rescue efforts were delayed until the fire and heat subsided and all danger of further explosions had passed. These operations were difficult because of the extensive devastation in the HCC and the danger of structural collapse. The Phillips Petroleum Company requested, and the FAA approved and implemented, a one-mile no-fly zone around the plant to prevent engine vibration and/or helicopter rotor downwash from dislodging any of the wreckage. The U.S. Coast Guard and Port of Houston fire boats evacuated over 100 trapped people to safety across the Houston Ship Channel. OSHA preserved evidence for evaluation regarding the cause of the catastrophe.

List of casualties

Phillips Petroleum Company employees

Fatally wounded, listed by name, age, city of residence within Texas, and official date of death (following recovery and identification of remains or eventual death from injuries):

Stephen Donald Huff, 21, 25 October 1989
Ruben Quilantan Alamillo, 35, Houston, 25 October 1989
James Edward Allen, 38, Pasadena, 2 November 1989
Albert Eloy Arce, 34, Deer Park, 7 November 1989 (listed as Eloy Albert Arce)
James Henry Campbell Jr., 30, Baytown, 26 October 1989
Eloy Gonzales, 36, Houston, 1 November 1989
Mark Lloyd Greeson, 30, Pasadena, 28 October 1989
Delbert Lynn Haskell, 43, Deer Park, 29 October 1989
Scotty Dale Hawkins, 32, Houston, 28 October 1989
James Deowens Hubbard, 45, Houston, 25 October 1989 (listed as James Hubbard Jr.)
Richard Leos, 30, La Porte, 29 October 1989
James Arthur Nichols, 40, Baytown, 27 October 1989
Jesse Thomas Northrup, 43, Brookshire, 28 October 1989
Mary Kathryn O'Connor, 34, Houston, 29 October 1989
Gerald Galen Pipher, 39, Deer Park, 30 October 1989
Cipriano Rodriguez Jr., 42, Pasadena, 27 October 1989
Jesse Oscar Trevino, 33, Pearland, 30 October 1989
Lino Ralph Trujillo, 39, Pasadena, 29 October 1989
Nathan Gene Warner, 30, Deer Park, 24 October 1989

Fish Engineering employees

Fatally wounded, with official dates of death:

Juan Manuel Garcia, 30 October 1989
Jose Lara Gonzalez, 23 October 1989
William Scott Martin, 25 October 1989
John Medrano, 30 October 1989 (listed as Juan Trejo-Medrano)

A granite memorial near 924 Jefferson Road, Pasadena, Texas was dedicated on the first anniversary of the disaster, and was declared by company officials to be open to the general public at all times.

OSHA findings

OSHA's major findings included:

Lack of process hazard analysis
Inadequate standard operating procedures (SOPs)
Non-fail-safe block valve
Inadequate maintenance permitting system
Inadequate lockout/tagout procedures
Lack of combustible gas detection and alarm system
Presence of ignition sources
Inadequate ventilation systems for nearby buildings
Fire protection system not maintained in an adequate state of readiness

Additional factors found by OSHA included:

Proximity of high-occupancy structures (control rooms) to hazardous operations
Inadequate separation between buildings
Crowded process equipment
Insufficient separation between the reactors and the control room for emergency shutdown procedures

Quoting from a key OSHA document: "At the conclusion of the investigation (April 19, 1990), OSHA issued 566 willful and 9 serious violations with a combined total proposed penalty of $5,666,200 to Phillips Petroleum Company and 181 willful and 12 serious violations with a combined total proposed penalty of $729,600 to Fish Engineering and Construction, Inc., a maintenance contractor on the site."

OSHA citations

As a result of a settlement between OSHA and Phillips Petroleum Company, OSHA agreed to delete the willful characterization of the citations, and Phillips Petroleum Company agreed to pay a $4 million fine and to institute process safety management procedures at the HCC and the company's sister facilities at Sweeny, Texas; Borger, Texas; and Woods Cross, Utah.

Facility today

Today, the facility continues to manufacture polyethylene. The complex employs 450 workers for the production of specialty chemicals, including 150 operations and maintenance personnel. The facility experienced additional fatalities in 1999 and 2000.

See also

1990 ARCO explosion
Chevron Phillips
List of industrial disasters

References

Explosions in 1989 1989 in Texas Industrial fires and explosions in the United States Chemical plant explosions Explosions in Texas Fires in Texas Pasadena, Texas Phillips 66 Urban fires in the United States October 1989 events in the United States 1989 disasters in the United States
Phillips disaster of 1989
[ "Chemistry" ]
1,882
[ "Chemical plant explosions", "Explosions" ]
14,557,673
https://en.wikipedia.org/wiki/The%20Guild%20%28web%20series%29
The Guild is an American comedy web series created and written by Felicia Day, who also stars as Cyd Sherman (AKA Codex). It premiered on July 27, 2007, and ran until 2013. The show revolves around the lives of a gamers' online guild, The Knights of Good, who play countless hours of a fantasy MMORPG video game referred to as The Game. The story focuses on Codex, the guild's Priestess, who attempts to lead a normal life after one of her guild-mates, warlock Zaboo (Sandeep Parikh), shows up on her doorstep.

During The Guild's first season, episodes were uploaded to YouTube and the series website. After the first season, The Guild entered a partnership with Microsoft, and episodes of the second through fifth seasons premiered on Xbox Live Marketplace, Zune Marketplace, and MSN Video before later being made available on the official Guild website, YouTube, and iTunes. According to Day, Microsoft's business model changed after season five; Day wanted to keep ownership, so the episode premieres moved to Day's YouTube channel Geek & Sundry. The series is also available via DVD and streaming on Apple TV. In 2013, after the end of the sixth season, Day confirmed that the web series is complete.

Since its release, The Guild has received several accolades, including six Streamy Awards and four IAWTV Awards.

History

The Guild was written and created by Felicia Day, an avid gamer, in between acting roles in several American television shows and movies. After two years of gaming, Day decided to make something productive from her experiences and wrote the series as a sitcom television pilot. The series was purposely kept generic to avoid copyright problems and to appeal to a wider audience of massively multiplayer online role-playing game (MMORPG) fans, but Day based it on her experience with a World of Warcraft addiction. Day also hoped to show that the stereotypical man living in his parents' basement is not the only kind of gamer. She decided to produce the series online with Jane Selle Morgan and Kim Evey. Day already knew Sandeep Parikh and Jeff Lewis from Empty Stage Comedy Theatre, a Los Angeles-based comedy theatre, and their roles were written for them. The rest of the cast was filled through auditions. After filming the first three episodes in two and a half days, they ran out of money. After donations were solicited through PayPal, the fourth and fifth episodes were almost solely financed by donations.

Format

Each episode opens with Codex (Felicia Day) recapping the previous events in the story in the form of a video blog. Usually, it gives the audience a recap of the previous episode and shares Codex's feelings on the subject. The video blogs appear to be outside the timeline, as she is usually wearing an outfit (typically her pajamas) different from that in the episode itself, though some blogs take place in the timeline, with other characters or situations interrupting Codex. Each season is divided into 12 episodes (with the exception of season 1, which is divided into 10 episodes).

Plot

Season 1 (2007–2008)

Cyd Sherman struggles to limit her time online, where she games as her alter ego Codex, a member of the Knights of Good. After the guild realizes that Zaboo has been offline for 39 hours, he appears on Codex's doorstep. Zaboo misunderstood Codex's in-game chats as flirting, and became a stalker living in the same apartment. On the in-game side, trouble also arises when Bladezz is banned from the game for using a macro (to spam homophobic slurs "a few thousand times") in the trade house.
Codex uses this as an excuse to have the guild help her with her Zaboo problem. The guild (except Bladezz) reluctantly meets up in person—for the first time—at Cheesybeards, a local restaurant, only to find out that Vork had transferred all of their in-game valuables to Bladezz's account as part of a team-building strategy. If they decided to kick out Bladezz, they would lose everything. Things get worse when Bladezz begins to slander the Knights of Good by showing inappropriate videos of the members' characters, and Codex is no closer to getting Zaboo to go home. Then, Zaboo's home comes to him in the form of his overbearing mother. Zaboo confesses that his mother controls every aspect of his life besides the Internet, which she is beginning to read about; he saw this as his only escape. Codex comes up with a plan to bring Bladezz down, using Zaboo's stalking skills. Zaboo finds out about Bladezz's modelling career and blackmails him into giving the gold and equipment back to the Guild. The Guild then fights off Zaboo's mom, and Bladezz redeems himself by landing the final blow. Codex soon realizes that she got Zaboo's mother's loot: Zaboo.

Season 2 (2008–2009)

Zaboo's mother takes revenge for losing Zaboo by having Codex evicted. Codex and Zaboo move into a new apartment, where Codex meets a new love interest: Wade (Fernando Chien), a stunt man. Codex tries to get Zaboo to move out by telling him that he needs to level up before they can be together. She arranges for him to live with Vork, who will take in-game gold as rent, something Zaboo is really good at earning through farming. Codex focuses on trying to get Wade interested in her. The guild finds a valuable in-game orb which Clara and Tink fight over. Just as Vork lets it go up for bid, Clara's children unplug her computer from the Internet and, upon re-connecting, Clara finds out Tink has won it. Clara vows revenge on Vork for giving it to Tink and spends an entire weekend betraying Vork by corpse-camping him on an alternate account, as well as searching for her own orb. Bladezz believes Tink is romantically interested in him and begins to max out his mother's credit cards to buy her things when, in fact, Tink is using him to get what she wants. Vork is annoyed with Zaboo's lack of logic and his antics in trying to "man up" for Codex. Codex finds out that the stunt man has a "stupid tall hot girlfriend," Riley (Michele Boyd). The Game announces that online play will be shut down for maintenance for four hours, during which Vork plans a strategy lecture for Zaboo and Bladezz, while Codex plans a quiet party with Clara and Tink. Bladezz coerces Vork to abandon the lecture in favor of an offline poker game, hoping to make up some of what he spent on Tink. Clara advertises Codex's party and it becomes a crowded kegger. Among Clara's random invitees, Wade and Riley come to the party. After finding out that Riley is Wade's roommate and Wade is single, Tink and Clara try to hook Codex up with him. Zaboo, learning of this, persuades Vork and Bladezz to go to Codex's party to try to stop it. Vork discovers that Clara has been attacking him, and begins to question his quality of leadership. Bladezz confronts Tink about their relationship; upon learning that he has been used, Bladezz steals Tink's laptop and deletes her character. Meanwhile, Zaboo walks in on Wade and Codex kissing and challenges Wade to a fight. Wade is a much better fighter, but Zaboo's seriousness about Codex leads to Wade giving up his interest in her.
Codex yells at Zaboo that she doesn't like him, and he leaves dejected. Then Codex sees a drunken Clara kissing Wade, and decides to chase after Zaboo to apologize, but is hurt when she sees him making out with Riley.

Season 3 (2009)

Codex recovers from the disastrous party thanks to the announcement of the game's new expansion pack, Spires of Dragonor. The Knights of Good are first in line at GameStop until a rival guild, the Axis of Anarchy, cuts in front of them. After the Axis tricks a GameStop worker into sending the Knights to the back of the line, Vork, still not over the events of the party, resigns as guild leader. Codex is elected as his successor, causing Tink to leave the Knights and join the Axis. While Vork goes on a self-discovery journey, Clara's husband George demands that she spend more time with the family after discovering her gaming has severely distanced her from him. As a result, Clara proposes that he take Tink's place after auditions for a sixth member fail. Riley, who has become increasingly domineering toward Zaboo, offers to join, but Codex chooses Clara's husband instead, adding "Mr. Wiggly" to the guild. Meanwhile, Bladezz begins to be targeted by Tink and the Axis of Anarchy, who expose his modelling alias to his school and plant weapons in his locker; later, Bruiser (J. Teddy Garces), a member of the Anarchists, seduces his mom. Codex issues a message on the game's public forum to stand up against the Axis for this behavior, and in retaliation the Axis puts a bounty on the Guild. Mr. Wiggly unknowingly gives away information about the Guild to other gamers in exchange for loot, which leads to his expulsion from the Guild. With this, he tells Clara to quit the game, and she does, to save her marriage. To end the Axis's harassment of Bladezz, Codex and Zaboo track down the Anarchist Valkyrie at his job, where he is playing the game on company time. After they take away some of his character's possessions and threaten to expose him, Valkyrie tells them where and when the next Axis of Anarchy meeting will take place. Vork returns after regaining his confidence to lead and, with Codex, reassembles the Guild to challenge the Axis at the Internet café where they had planned a group raid. The battle begins, but both sides lose members quickly. Some of the Knights die in-game when their real-life problems manifest: Clara's husband shows up, angry that she is playing the game; Riley destroys Zaboo's computer for not meeting her demands. Clara tells her husband that they are going to have another child; he forgets about their argument and redeems himself in the eyes of the guild by helping Clara kill the Anarchist Kwan in-game. Zaboo breaks up with Riley, who then proceeds to make out with Venom. Finally, only Codex is left to face Tink and Axis leader Fawkes (Wil Wheaton). After Codex makes Bladezz apologize to Tink, Tink decides that the Axis members are even bigger jerks than she can stand and lets Codex kill her in-game. Codex, in a hallucinatory conversation with her game character, musters the courage to defeat Fawkes. The Knights welcome Tink back into the guild, and Bladezz makes tentative peace with the Axis member who seduced his mother. Fawkes invites Codex for drinks; she initially refuses but, in a twist ending, wakes up in bed with him the next morning.
Season 4 (2010)

An unexpected and unintentional one-night stand with Fawkes (Wil Wheaton) causes Codex to stress over what the guild thinks of her, and she persuades him to cover for her in a pretend relationship. But after spending more time together, Codex realizes he is a "total tool-bag" and reevaluates her criteria for relationships with men. Her computer breaks and she is forced to get a job at Cheesybeards to pay for repairs, but she has no idea how to fulfill the expectations of her boss, Ollie (Frank Ashmore).

Zaboo tries to be a good friend to Codex during her fake relationship with Fawkes instead of trying to win her love. He dives into this new pursuit with his usual smothering intensity. When the truth of the relationship is revealed, he realizes that his feelings for Codex have changed and he wants to be her friend.

An earnings competition for a new guild hall sparks a real-life business for Tink and Clara that both strengthens and strains their friendship. Vork enlists Zaboo's mother, Avinashi (Viji Nathan), for her "brilliant economic mind" in pursuit of his vision for the guild's hall, and he sets up a stock market and loan company that is bankrupting players. However, her smothering tendencies enrage him to the point that he "make[s] a giant gesture that's really inappropriate" and proposes marriage in an attempt to repulse her. To his horror, she accepts.

Codex and Bladezz film an online Cheesybeards commercial, but the result is so horrible that it spawns a series of prank calls to the establishment. Ollie is furious and fires Codex. The guild helps Codex get her job back by organizing a celebration at Cheesybeards that attracts a large population of gamers. Bladezz attempts to perform a magic trick involving fire, which ends up torching the restaurant (costing Codex and Bladezz their jobs).

Zaboo begs Codex to intercede in the upcoming nuptials between his mother and Vork. And when Zaboo reveals he has used the money from auctioning off a romantic painting of Codex and Fawkes he had commissioned to buy her a new computer, she is touched by the gesture and resolves to break up the wedding. Avinashi and Vork are about to speak their vows to each other, at a virtual wedding ceremony in the newly purchased prison-like Knights of Good guild hall, when all of the guild members object. Codex manages to convince Zaboo's mother that it is wrong to marry "someone [she] can't stand in order to be close to someone who doesn't want to be near [her]". Zaboo helps by suggesting that she visit every few weeks when she gets lonely, causing Codex to realize that he possesses all the qualities on her new litmus test, and she considers a relationship with him.

The season wraps up with an official gamemaster crashing the ceremony to put an end to Vork's "Trogothian Stock Market" scheme. Codex convinces the GM, Kevinator (Simon Helberg), to change the design of the guild hall to the "bitchin' fairy palace" that Tink and Clara wanted. Kevinator is impressed to meet Bladezz, who has become an internet celebrity, and invites the whole guild to a gaming convention.

Season 5 (2011)

The Knights of Good travel to MegaGameORama-Con, a three-day gaming convention. Bladezz believes that he was invited by Kevinator as a special guest, but his name is not on the invite list. With all nearby hotels booked, Rachel, a member of the convention staff, manages to secure a room for them. However, it is not offered for free, and Bladezz convinces the rest of the guild that he will clarify the situation with Kevinator.
Meanwhile, Codex is more interested in getting close to Zaboo, but he becomes engrossed in attending the events and panels. On the first day of the convention, Bladezz and Vork discover that Kevinator had been fired from The Game before the day of the convention and that Bladezz was one of his joke invites. To cover the hotel fees, the two of them start up a photo booth with the Cheesybeards pirate. Tink attempts to sell the T-shirts she and Clara made but is forced to find a booth to avoid being caught by the convention staff for selling without a permit. When she and Clara come upon a steampunk-themed booth, Clara is more interested in it than in selling the shirts. Zaboo is denied entry to a panel because the seats are full, leading him to form a seat-saving network. Codex tries out the new demo at The Game's booth, but unknowingly insults the creator, Floyd Petrovski (Ted Michaels). She becomes even more preoccupied when Zaboo spurns her advances, and she is continuously stalked by a convention-goer in a furry costume. When she follows Floyd to apologize, she discovers that he plans to sell The Game to a mainstream market. Codex becomes concerned about the future of the game, which is the only thing in her life holding her friendships together. Tink, who continuously changes costumes to hide her identity, reveals to Codex at a party that she is hiding from her adoptive family, who are attending the convention, fearing that they will discover her switching majors from pre-med to fashion design. Codex arranges a dinner with her family to reconcile them, against Tink's will. Meanwhile, Clara tries to join the steampunk group and is trained as their fourth member to help them win the costume contest, but the members of the group ultimately turn her away. Zaboo becomes so preoccupied with his seat-saving network that he briefly goes power-hungry. He is stopped by Clara, who brings back his old personality, ending his involvement with the seat-saving network. Bladezz and Vork's booth becomes successful, but Vork rejects all of the celebrities who want to spend time with Bladezz. His attention, however, is turned toward Madeline (Erin Gray), an actress who played his favorite character, Charity, on the show Time Rings. The two are invited to a party that night, but Bladezz realizes that all the celebrities lead normal lives, finding them boring. Still, he rejects Rachel and her friends for the celebrities and openly humiliates them. Vork, on the other hand, ends up offending Madeline after he criticizes her decision to leave her show. The next day, Bladezz has lost all support from the celebrities and his fans, so he is unable to continue the Cheesybeards pirate photo booth. Zaboo helps Clara build a steampunk-themed blimp to help her win the costume contest. Codex and Tink discover that Codex's stalker is Fawkes, who wants to join their guild after the Axis of Anarchy broke up, but Codex rejects him. The girls later eavesdrop on Floyd's conversation and discover he plans on revealing his decision at the costume contest that night. The two of them convince the rest of the guild to help them save The Game from going "freemium". The guild is able to stop the changes with much success: Clara wins the costume contest, Bladezz wins back his fans, and Vork reconciles with Madeline. As they all leave the convention the next morning, Floyd decides to give Codex a job.
Season 6 (2012–2013)

Codex begins her new job working for Floyd Petrovski at the headquarters of "The Game", only to discover that he's a thoughtless tyrant who immediately turns all the other employees against her. Meanwhile, Tink discovers that the men she manipulated for services and gifts have all slandered her on local websites, costing her all of her connections. Bladezz gets kicked out of his house by Bruiser and spends time at Clara's, convincing her long-suffering husband that she is devoted to her children by uploading videos of her parenting to the Internet, though he is more interested in monetizing the videos. Vork, who is dating Madeline, becomes disillusioned when Zaboo uncovers photos of her protesting nude, while Zaboo suffers separation anxiety from the members of the guild going offline, seeking refuge in a collage of his ideal "sweetheart."

Codex is pressured by her co-workers to convince Floyd to release the underwater expansion pack they have been planning for months, but is forced to do menial chores in order to appease him. When the guild visits her workplace, Tink steals Codex's key to the testing server and initiates a casual relationship with Donovan (Corey Craig), in which they agree that he will do chores for her if she spends time with him. Unbeknownst to her, she begins to fall in love with him for real. Zaboo, who enters the server posing as an IT technician, becomes smitten with Sabina (Justine Ezarik), an NPC of The Game and the spitting image of his ideal girl. Vork, after an argument with Madeline about his personal goals, confronts Floyd about his unanswered complaints about The Game. This gets his character permanently banned, and he retaliates by protesting and gaining support from other gamers. Meanwhile, Bladezz is forced to spend time with Wiggly while Clara continues making videos. When Clara becomes Internet-famous, other parents turn to her for advice, one of them being Bladezz's mother. Clara encourages her to keep dating Bruiser, prompting Bladezz to convince Wiggly to quit his job.

The underwater expansion patch notes are leaked onto the Internet, greatly intensifying the protest. Codex is unsuccessful in finding the culprit but convinces Floyd to release the expansion pack anyway. Donovan reveals to Tink that he was the one who caused the leak in order to push Floyd to release the expansion pack, and Tink tells him that Codex and Vork are in the same guild. He uses this information against Floyd to blame the leak on Codex and gets her fired. Vork's protest culminates in a riot, but his acts have renewed Madeline's faith in him and the two reconcile. As her final act for Floyd, Codex quells the rioters by questioning their actions and informing them that their poor attitudes contributed to the problems at the Game HQ. Floyd unexpectedly steps out and challenges the crowd to insult him to his face instead of typing online insults, but the crowd instead congratulates him on his work, citing their own insecurities as the source of their bad behavior. Inspired, Floyd announces a troll-themed add-on for the Game. By the end of the day, Clara succeeds in convincing Bruiser to break up with Bladezz's mother and secures a position at a vlogging network, Tink and Donovan begin a relationship, and Zaboo discovers his real-life Sabina. Codex, happy to have her job back and realizing how loyal her friends are to her, makes a final vlog and tearfully shuts down her computer, bringing the season (and the series) to a close.
Characters and cast

Knights of Good

Codex (real name Cyd Sherman) is the Priest. Codex is shy and non-confrontational, tending to panic under stress. Outside the game she is a concert violinist (and former child prodigy), unemployed after setting fire to her boyfriend's cello. She is an addicted gamer who tries at first to control the time she spends online, but fails; for this reason, her therapist drops her. At the beginning of the series she is quite reclusive, with no real-life friends; she is often self-conscious and awkward around men. Codex is portrayed by creator Felicia Day.

Zaboo (real name Sujan Balakrishnan Goldberg) is the Warlock. Zaboo describes himself as a "HinJew", having a Hindu mother and a Jewish father. He shows great skill with computers; for example, his stalking of Codex included obtaining (presumably through the Internet) the floor plan of her apartment and all her past residences. His obsessive attitude toward Codex reflects his mother's smothering. When talking, Zaboo often appends "-'d" to some key word or expression, self-commenting on what he just said (e.g. bladder'd, testosterone'd). While Zaboo doesn't appear to have a profession, he admits having attended college for four years (to which his mother drove him every day). Zaboo is portrayed by Sandeep Parikh.

Vork (real name Herman Holden) is the guild leader and Warrior. He enjoys managing the guild and budgeting, and believes only in rules and logic. He lives frugally (and illegally) on his late grandfather's Social Security checks and is a certified notary public. When he became guild leader he "cut the fats of life", including electric power; he steals his senile neighbor's wifi (and shed), and keeps his food cold by buying ice with food stamps. In the penultimate episode of season 3 he reveals that he can speak fluent Korean; in season 4, he speaks Hindi to Zaboo's mother and claims to know all languages. Vork comes to believe that shared hatred of him is what keeps the guild together. In season 4, his desire to own a guild hall leads him to manipulate an in-game exchange market, nearly causing him to be banned from the game. Vork is portrayed by Jeff Lewis.

Bladezz (real name Simon Kemplar) is the Rogue. He is a high school student who spends most of his time outside of school in his mom's garage playing the Game. He is rude to the other male guild members, hits on the female guild members, and often makes lewd sexual jokes and comments. He is shown to be trumped by his tomboyish little sister many times throughout the series. He is worried about being sent to military school, and to save up for college his mom forced him into modeling, using the alias "Finn Smulders" in order to keep it a secret from his fellow high school students. In seasons 4 and 5, he becomes an Internet meme. Bladezz is portrayed by Vincent Caso.

Clara (real name Clara Beane) is the Frost Mage. Clara is a neglectful stay-at-home wife and mother, college partier, and ex-cheerleader. Her three children are all young, with the youngest still breastfeeding. Though proud of her children, she tends to put gaming before her family, and sometimes tries to mix the two, recruiting her husband "Mr. Wiggly" to the guild. She uses her real name as her avatar name because her kids saw her old name, "Mom-inatrix". Clara has a ditzy, scatterbrained, and eccentric personality with occasional bursts of insight. In season 5, Clara proves herself to be a capable mother figure when she stops Zaboo from going mad with power and lack of sleep.
Clara is portrayed by Robin Thorsen.

Tinkerballa (real name April Lou) is the Ranger. Tink distances herself from the guild, trying not to let them know anything about her personal life; she even keeps her real name a secret from her fellow guildies, introducing herself as Tinkerballa. Her real name isn't revealed until the fifth season. In reality, Tink is adopted and has two Caucasian sisters. She has also been lying to her parents about being a pre-med student, when she has actually switched courses for a degree in costume design. She is shown to have a huge video game addiction, always having an alternate game in hand when not playing the guild's game, even when raiding. She is cold and manipulative, and uses men to get what she wants, including Bladezz, who deletes Tink's character to avenge himself after being used by her. Following this, and Vork's refusal to punish Bladezz, she leaves the Knights of Good and joins the Axis of Anarchy, but later finds them too "douchey" even for her (she even says that she went on a date with Fawkes to join). She rejoins the Knights of Good during an in-game showdown with the Axis of Anarchy when they call her "Tainterballa", and she intentionally allows her avatar to be killed off to give Codex the shot at victory for the guild. Tink is revealed to be possibly the most social of the group, although she is incredibly grounded in the online world. In season 5, Codex reunites Tink with her family at the gaming con. Tink is portrayed by Amy Okuda.

Temporary members

Mr. Wiggly (real name George Beane) is a Hunter. He is Clara's husband, nominated by Clara to replace Tink in season 3; Codex hastily accepts him in order to prevent Zaboo from recruiting Riley. He is clearly inexperienced in gaming, mentioning that the last game he played was Pong (which Bladezz was unfamiliar with). Mr. Wiggly wants to spend time with Clara, but is at his wits' end with her distraction and infidelity. George is shown to be a conventional man with a sense of responsibility, helping Clara when she spontaneously starts a gamer company. He is displeased when Clara leaves unannounced for the gaming con. Mr. Wiggly is played by Brett Sheridan.

Axis of Anarchy

Fawkes is the leader of the Axis of Anarchy. He always wears a black kilt and often wears a black "Axis of Anarchy" T-shirt. In person he speaks with a cool, calm, almost polite tone, though he is prone to angry outbursts when online. He seems to be quite educated as well, as he is constantly quoting philosophers, authors, or figures of history. He often demonstrates that he does not believe in following rules, unless doing so benefits him. Because of their top ranking, he expects constant perfection from his guild. He is also quite full of himself and manipulative, convincing people they are into him. After the Knights of Good defeat the Axis of Anarchy at the end of season 3, Fawkes invites Codex for drinks and, although she doesn't intend to go, she does, and ends up having a one-night stand with him. Attempting to justify this to herself and the guild, she tries seeing where a relationship with Fawkes would go, but he claims he is uninterested in dating and eventually she dumps him. However, it emerges that Fawkes has developed feelings for Codex, and he asks her out properly, but she turns him down flatly. He returns in season 5, revealing that the Axis broke up (claiming no fault of his own). Fawkes is played by Wil Wheaton.

Venom is the only woman in her guild, until Tink joins.
She is a paraplegic and uses a wheelchair, but seems to have no problem exploiting her disability for personal gain. She has a violent attitude and seems to dislike her guildmates. On at least two occasions she threatens suicide to get her way, and, when her character is killed in-game, feels ecstasy at this vicarious death. She works as a substitute art teacher at Bladezz's high school. Played by Teal Sherer.

Bruiser is the guild's healer, and a corrupt police officer. Bruiser is probably the largest, loudest, and most vulgar member of the Axis. He had sex with Bladezz's mother to torment Bladezz, and in season 6 is continuing to see her. Played by J. Teddy Garces.

Kwan is a champion StarCraft player who earns millions playing in South Korea. He speaks only Korean, and has a female assistant named Nik (played by Toni Lee) who massages his hands and translates for him. He seems to be able to understand some English, and it's possible that Fawkes can understand some of what he says (as his translator does not translate what Kwan says into English). Played by Alexander Yi.

Valkyrie (alt character Artemis) is the attempted joker of the guild, though his jokes seem to be funny only to himself. Based on dialogue with his off-screen boss, he works for some form of design or decorating firm – he claimed to be dealing with a client whose damask came in the wrong color. He also has web skills, claiming to be the one who created finnsmulders.com. He also plays two girl characters because he claims to like looking at girls, although it is strongly implied he might be a closet homosexual. Played by Mike Rose.

Game HQ

Floyd Petrovski is the creator of the Game and head of the company that produces it. While at Megagame-o-ramacon, Floyd overhears Codex criticize parts of the upgrades available for playtest at the convention. After Codex apologizes for her comments and gathers the guild together to interfere with the Game's sale to a larger company, Floyd recognizes Codex's passion for the Game and hires her as an assistant. Floyd proves to be a difficult boss, quickly alienating Codex from much of the staff. Floyd is played by Ted Michaels.

Roy Aquino (Derek Basco) is the graphic designer of the Game. On Codex's first day, Floyd lets her have Roy's office without his permission, causing Roy to dislike Codex from the very start. He subtly harasses Codex with drawings and bulletins. He is shown to have a stutter when under stress.

Theodora (Alexandra Hoover) is the Game's COO and head producer. Despite being austere and cold-hearted, she is also extremely clumsy and tends to cover this up with awkward recoveries.

Sula Morrison (Sujata Day) is the hyperactive community coordinator. She wears glasses despite not having any vision impairment, claiming they make her appear smarter.

Donovan (Corey Craig) is the lead programmer and nephew of Floyd. Despite being handsome and athletic, he avoids people most of the time; Floyd describes him as a savant, and Tink questions whether or not he has Asperger's syndrome. Tink nonetheless gets him to do her homework, yet develops feelings for him. He graduated top of his class from Caltech.

Other characters

Zaboo's mother (Avinashi Goldberg) epitomizes the over-controlling mother. She had Zaboo microchipped to keep track of his movements, punches Codex for (in her belief) trying to take her son away from her, manipulates him through a series of probably false ailments, and gets Codex evicted for helping him break free of her.
She returns in season 4, attempting to develop a real relationship with her son, and ends up assisting Vork in earning gold to purchase a deluxe guild hall. She does manage to reconcile with Zaboo. She is played by Viji Nathan.

Dena is Bladezz' precocious little sister. She is first seen when she arrives at the table at Cheesybeards, just after Bladezz has informed the guildies that he is in control of all the guild's gold and equipment, ruining his big exit. Dena often practices playing bass guitar in Bladezz' basement, to his annoyance, and reads Sun Tzu's The Art of War (along with building Civil War dioramas). In season 4, she trained Bladezz in acting (calling him terrible). Dena is played by Tara Caso, the sister of Vincent Caso, who plays Bladezz.

Wade Wei is Codex's attractive neighbor and brief love interest. Not a gamer himself, Wade works as a martial arts stunt double; on first meeting Codex, he bumps into her and falls down some stairs, practicing a new stunt. He's fond of showing off his moves and flirting with Codex, making her extremely nervous. Having a queasy stomach, Codex vomits on him upon seeing him in makeup from a zombie movie he is playing in. He harbors a revulsion for gamers, expressing his desire to punch them. Ironically, he describes Codex's gaming-derived knowledge of weaponry as "sexy," though it is worth noting that she does not explain how she knows so much about weapons when he makes the compliment. In episode 10 of season 2, it is revealed that he did motion-capture stunt work for the Game. Wade is played by Fernando Chien.

Riley is Wade's roommate with benefits and temporarily dated Zaboo. Riley is an FPS gamer and is ranked in Halo on Xbox Live. Riley is very dominant and enjoys submissive underdogs like Zaboo, and later Codex. At first she was thrilled to meet Codex, another girl gamer, but after she learned that Codex was an MMORPG player, the two began a grudging rivalry. This parodies the usual relationship between MOG and MMORPG players. In the finale of season 3, Riley straddles Venom and starts making out with her to get back at Zaboo for breaking up with her. Riley and Venom then start a relationship, which continues well into season 4, episode 10, where they're seen discussing wench outfits. The character of Riley is based on Team Unicorn member Rileah Vanderbilt, and played by Team Unicorn member Michele Boyd.

Ollie is the manager of Cheesybeards. He runs his restaurant in a very casual style, allowing Bladezz to use a computer while working at the grill. He speaks entirely in pirate slang and uses a hook on his left hand (which is hinted to be real). Ollie is responsible for Codex getting a job at the restaurant, after she says she can improve advertising and increase business (though Bladezz sabotages these efforts). Ollie is slightly at odds with Jeanette, who strongly disapproves of Bladezz's slacker work ethic. He first appeared in episode 3 of season 4 and is played by Frank Ashmore.

Jeanette is Bladezz's superior at Cheesybeards. She is fed up with his horrible work ethic, but is at odds with Ollie (who is okay with Bladezz). At the end of season 4, it is revealed she was one of Fawkes' one-night stands, and she proceeds to beat him up (as she knows mixed martial arts).
Jeanette was the first new character to appear in season 4 and was played by Tymberlee Hill.

Kevinator is the official gamemaster of the Game who appears at the end of season 4 to undo Vork's ruining of the in-game economy by hoarding craft items and selling them at massively inflated prices. Kevinator displays a high level of narcissism over his role, proclaiming "I am a god!" before shooting lightning bolts around while giggling maniacally. After recognizing Bladezz from the Cheesybeards commercial and becoming friends with him, he provides the Knights of Good with tickets for a gaming convention focused on The Game. However, in season 5, it is revealed that Kevinator was fired from the game just before the convention for unfair play. In addition, the invite for Bladezz turns out to be a joke, as he has a habit of inviting internet memes for his boss. Kevinator is played by Simon Helberg.

Rachel is one of the staff members at Megagame-o-ramacon. She is amazed to meet Bladezz (barely able to contain her excitement), going out of her way to help get the guild a room. The next day, Rachel is forced to get Tink and Clara to stop selling their shirts (with the help of stormtroopers). After being stood up by Bladezz, she and her friends harass him by picketing and wrecking his booth at the convention, though Bladezz does manage to apologize later. She is played by Hayley Holmes.

Madeline Twain is a guest of Megagame-o-ramacon, an actress regarded as legendary among the fans of the convention. The former heroine of the long-since-cancelled science fiction television series Time Rings, Madeline serves as mistress of ceremonies at the con's costume contest. Vork, revealed to be a former head of the star's fan club, is still smitten with her, though somewhat resentful about her abrupt exit from the series. In season 6, Vork and Madeline begin dating. Madeline is played by Erin Gray.

Reception

The Guild had over 69 million upload views on YouTube as of September 2011. The series has won several awards since its launch, including six Streamy Awards and four IAWTV Awards, and in February 2009, Rolling Stone named it one of "The Net's Best Serial Shows". A costume from the series has been accessioned by the National Museum of American History of the Smithsonian Institution. In 2014, The Guild was listed on New Media Rockstars' Top 100 Channels, ranked at #37.

In popular culture

Joss Whedon credits The Guild as one of the inspirations for Dr. Horrible's Sing-Along Blog, which also starred Day.

Awards and nominations

Over the course of its run, The Guild has won and been nominated for several awards.

In other media

Comic books

On March 24, 2010, the first issue of the comic book limited series based on the show was released by Dark Horse Comics. It acts as a prequel to the show, and it was written by Felicia Day and illustrated by Jim Rugg. The second issue was released on April 23, 2010, and the third and final issue was released on May 26, 2010. The collected volume was released on November 24, 2010.

A second five-issue series was released in 2011. Each issue focuses on a single character (Vork, Zaboo, Clara, Tinkerballa, and Bladezz) and is illustrated by a different artist. The collected volume, The Guild Volume 2, was released on June 27, 2012. An additional single-character issue, The Guild: Fawkes, was issued on May 23, 2012, and takes place after season 4 of the web series.

Music videos

The Guild cast have appeared as their characters in three music videos to promote the series.
In "(Do You Wanna Date My) Avatar", the cast appears as their game characters. The song and video were released shortly before the start of Season 3 and were used to promote the show. "Game On" is a Bollywood-themed video about Zaboo trying to convince Codex to play the game. This second video was used in a similar fashion and promoted Season 4. The third song and video titled "I'm the One That's Cool" is a pop rock song "touting the rise of nerd culture and the geek shall inherit the earth credo". It was released to promote the launch of Felicia Day's new YouTube channel Geek & Sundry. "(Do You Wanna Date My) Avatar" and "I'm the One That's Cool" are available on the Rock Band Network. Production notes Season 1The Guild was originally intended to be a pilot episode for a TV series, but Felicia Day was advised that it would be much more suited for a web series. The show changed its format and script to fit a web series. The first episode "Wake-Up Call" premiered on YouTube on July 27, 2007. After the first three episodes, the group ran out of money; but a link to Day's PayPal brought enough donations to fund "Cheesybeards" and "Rather Be Raiding". The first season ended on May 15, 2008, consisting of 10 episodes and two specials (including the Christmas special, "Christmas Raid Carol").The Guild season 1 DVD was released on Amazon.com on May 19, 2009. For Canadian audiences, it was bundled with season 2, released on September 29, 2009, also available on Amazon.com. Season 2 Filming for season 2 began in August 2008. "Link the Loot" premiered on Xbox Live Marketplace, Microsoft Zune Marketplace, and MSN Video on November 25, 2008. Season 2 ended with "Fight!" on February 17, 2009, and featured "Love During Wartime" by The Main Drag. On November 24, 2008, Microsoft announced an exclusive distribution deal with Guild creator Felicia Day. All twelve episodes of season 2 premiered on Xbox 360, Zune, and MSN, with a four-week delay for release on The Guilds official website. The Microsoft releases were free, but supported by Sprint advertisements and product placements. Creator Felicia Day retains the IP rights to the series, with Microsoft paying an "unspecified" license fee upfront. Sometime in late February 2009, when all episodes of season two had been released, Day and her team were free to sign a new nonexclusive distribution deal should they choose to do so.The Guild season 2 DVD was released on Amazon.com on May 19, 2009, containing commentary tracks, gag reels, audition footage, and a "behind the scenes" documentary. It was also released for the Canadian audience along with season 1 on Amazon.com. Season 2 was nominated for eleven Streamy Awards and won three: Best Comedy Web Series, Best Ensemble Cast in a Web Series, and Best Female Actor in a Comedy Web Series (Felicia Day). Season 3 On August 17, 2009, a music video – "(Do You Wanna Date My) Avatar" by Felicia Day – was released on Xbox Live to promote season 3, which would premiere on August 25, 2009, on Xbox Live for members with Gold Accounts first. Later it was announced that it would be released for members with Silver Accounts, as well as Zune and MSN Video users, on September 1, 2009. The season premiered with "Expansion Time" on August 25, 2009, and ended on November 24, 2009, with "Hero". The season featured guest star Wil Wheaton as the leader of a rival guild out to destroy the Knights of Good. Season 4 In April 2010, The Guild's official website announced the show was renewed by Microsoft for a fourth season. 
On June 9, 2010, the official recap of season 3—an Auto-Tuned music video by The Gregory Brothers—was posted on Bing; the video included a message that season 4 would begin on July 13, 2010. On July 27, 2010, a second music video, called "Game On", was posted to promote season 4. On September 14, 2010, another promotional video was posted: the full Cheesybeards commercial. Michael Chaves, director of The Curse of La Llorona, served as a visual effects artist for season 4. Season 5 Season 5 takes place at the gaming convention mentioned at the end of season 4. Shooting began on April 21, 2011. The first episode of Season 5 was released on Xbox Live and Zune on July 26, 2011, and was released on MSN on July 28. Season 6 Season 6 was mentioned at Comic-Con by Felicia Day. It was shot during the summer under new director Chris Preksta, creator of The Mercury Men. It premiered on October 2, 2012, on the YouTube channel Geek & Sundry. See also Dead Pixels References Further reading External links The Guild at Blip.TV 2000s YouTube series 2007 web series debuts 2010s YouTube series 2013 web series debuts American comedy web series Fictional organizations Massively multiplayer online role-playing games in fiction Streamy Award-winning channels, series or shows Works about video games Television shows adapted into comics
The Guild (web series)
[ "Technology" ]
9,697
[ "Works about video games", "Works about computing" ]
14,557,753
https://en.wikipedia.org/wiki/Delaware%20Biotechnology%20Institute
The Delaware Biotechnology Institute (DBI) at the University of Delaware is a partnership among government, academia and industry with an aim to establish Delaware as a notable hub for biotechnology and life sciences. Adjacent to the University of Delaware main campus, DBI's research facility is located in the Delaware Technology Park. The DBI laboratory houses more than 180 faculty and students. Research at the Delaware Biotechnology Institute has application in agriculture, environmental science, and human health, featuring leading-edge work in bioinformatics, genomics and small RNA biology, materials science, molecular medicine and proteomics. Partner Institutions University of Delaware Delaware State University Delaware Technical & Community College Wesley College References External links Delaware Biotechnology Institute official site University of Delaware Biotechnology organizations 2001 establishments in Delaware Organizations established in 2001
Delaware Biotechnology Institute
[ "Engineering", "Biology" ]
157
[ "Biotechnology organizations" ]
14,557,845
https://en.wikipedia.org/wiki/Baker%E2%80%93Nathan%20effect
In organic chemistry, the Baker–Nathan effect is observed in the reaction rates of certain chemical reactions with certain substrates, where the order of reactivity cannot be explained solely by an inductive effect of the substituents. This effect was described in 1935 by John W. Baker and W. S. Nathan. They examined the chemical kinetics of the reaction of pyridine with benzyl bromide to form a pyridinium salt, using a series of benzyl bromides having different alkyl groups as substituents at the para position. The reaction is facilitated by electron-releasing substituents (the inductive effect), and in general the order (with decreasing reactivity) is tert-butyl > isopropyl > ethyl > methyl. The observed order in this particular reaction, however, was methyl > ethyl > isopropyl > tert-butyl. In 1935 Baker and Nathan explained the observed difference in terms of a conjugation effect, which in later years, after the advent of hyperconjugation (1939), came to be regarded as its predecessor. A fundamental problem with the effect is that the differences in the observed order are relatively small and therefore difficult to measure accurately. Other researchers have found similar results or very different ones. An alternative explanation for the effect is differential solvation, as the orders invert on going from the solution phase to the gas phase. Today, the conjugation of neighbouring pi orbitals and polarised sigma bonds is known as hyperconjugation. Numerous anomalous physical measurements, including bond lengths and dipole moments, have been examined through this concept. The original formulation of the Baker–Nathan effect is no longer employed, since more satisfactory explanations for rate accelerations in solution are now available; its historical context is discussed by Saltzman. References Physical organic chemistry
Baker–Nathan effect
[ "Chemistry" ]
368
[ "Physical organic chemistry" ]
14,558,051
https://en.wikipedia.org/wiki/Schopf
SCHOPF Maschinenbau GmbH is a German company that produces specialist vehicles for the mining and aviation industries. The company was founded in 1948 by Jörg Schopf, a mechanical engineer. It started out manufacturing equipment for the mining industry and soon expanded into the growing aviation sector, manufacturing stair lifts, tow trucks and loading machinery; it is now considered the global market leader in this field. Products supplied for both civil and military aviation include a complete range of tugs to handle aircraft in every weight range, the unique PowerPush remotely controlled pushback system, container/pallet loaders and passenger stairs. Supplying NATO, major airlines, ground handling companies and airports around the globe, SCHOPF exports 90% of its aviation-related production, with deliveries to more than 130 countries. SCHOPF's range of mining vehicles comprises underground loaders suited to various materials and volumes, together with dump trucks produced in collaboration with an international partner. Product conception, design, manufacture and sales are carried out by a team of 130 workers, all based at the factory in Ostfildern near Stuttgart in south-western Germany. The company's most powerful tow tractor, the F396P, is used to tow the world's largest cargo aircraft (the Antonov An-225) and the Airbus A380, the largest passenger aircraft. SCHOPF was acquired in 2013 by fellow German company Goldhofer. References Airport infrastructure manufacturers Companies based in Baden-Württemberg Mining equipment companies Truck manufacturers of Germany
Schopf
[ "Engineering" ]
312
[ "Mining equipment", "Mining equipment companies" ]
14,558,397
https://en.wikipedia.org/wiki/Surface%20phonon
In solid state physics, a surface phonon is the quantum of a lattice vibration mode associated with a solid surface. Similar to the ordinary lattice vibrations in a bulk solid (whose quanta are simply called phonons), the nature of surface vibrations depends on details of the periodicity and symmetry of the crystal structure. Surface vibrations are however distinct from the bulk vibrations, as they arise from the abrupt termination of the crystal structure at the surface of a solid. Knowledge of surface phonon dispersion gives important information related to the amount of surface relaxation, the existence of and distance between an adsorbate and the surface, and information regarding the presence, quantity, and type of defects existing on the surface. In modern semiconductor research, surface vibrations are of interest as they can couple with electrons and thereby affect the electrical and optical properties of semiconductor devices. They are most relevant for devices where the electronic active area is near a surface, as is the case in two-dimensional electron systems and in quantum dots. As a specific example, the decreasing size of CdSe quantum dots was found to result in increasing frequency of the surface vibration resonance, which can couple with electrons and affect their properties. Two methods are used for modeling surface phonons. One is the "slab method", which approaches the problem using lattice dynamics for a solid with parallel surfaces, and the other is based on Green's functions. Which of these approaches is employed is based upon what type of information is required from the computation. For broad surface phonon phenomena, the conventional lattice dynamics method can be used; for the study of lattice defects, resonances, or phonon state density, the Green's function method yields more useful results. Quantum description Surface phonons are represented by a wave vector along the surface, q, and an energy corresponding to a particular vibrational mode frequency, ω. The surface Brillouin zone (SBZ) for phonons consists of two dimensions, rather than three for the bulk. For example, the face-centered cubic (100) surface is described by the directions ΓX and ΓM, referring to the [110] direction and [100] direction, respectively. The description of the atomic displacements by the harmonic approximation assumes that the force on an atom is a function of its displacement with respect to neighboring atoms, i.e. Hooke's law holds. Higher-order anharmonicity terms can be accounted for by using perturbative methods. The positions are then given by the relation $m_i \ddot{u}_{i,\alpha} = -\sum_{j,\beta} \Phi_{i\alpha,j\beta}\, u_{j,\beta}$, where i is the place where the atom would sit if it were in equilibrium, $m_i$ is the mass of the atom that should sit at i, α is the direction of its displacement, $u_{i,\alpha}$ is the amount of displacement of the atom from i, and $\Phi_{i\alpha,j\beta}$ are the force constants which come from the crystal potential. The solution to this gives the atomic displacement due to the phonon, a plane wave of the form $u(l,m,\kappa) = \frac{1}{\sqrt{m_\kappa}}\, \epsilon(\kappa;\mathbf{q},\omega)\, e^{i(\mathbf{q}\cdot\mathbf{x}(l,m) - \omega t)}$, where the atomic position i is described by l, m, and κ, which represent the specific atomic layer, l, the particular unit cell it is in, m, and the position of the atom with respect to its own unit cell, κ; $\epsilon$ is the polarization vector of the mode. The term x(l,m) is the position of the unit cell with respect to some chosen origin. Normal modes of vibration and types of surface phonons Phonons can be labeled by the manner in which the vibrations occur. If the vibration occurs lengthwise in the direction of the wave and involves contraction and relaxation of the lattice, the phonon is called a "longitudinal phonon".
Alternatively, the atoms may vibrate side-to-side, perpendicular to the wave propagation direction; this is known as a "transverse phonon". In general, transverse vibrations tend to have smaller frequencies than longitudinal vibrations. The wavelength of the vibration also lends itself to a second label. "Acoustic" branch phonons have a wavelength of vibration that is much bigger than the atomic separation, so that the wave travels in the same manner as a sound wave; "optical" phonons can be excited by optical radiation in the infrared wavelength or longer. Phonons take on both labels, such that transverse acoustic and optical phonons are denoted TA and TO, respectively; likewise, longitudinal acoustic and optical phonons are denoted LA and LO. The type of surface phonon can be characterized by its dispersion in relation to the bulk phonon modes of the crystal. Surface phonon mode branches may occur in specific parts of the SBZ or encompass it entirely. These modes can show up both within the bulk phonon dispersion bands, as what is known as a resonance, or outside these bands, as a pure surface phonon mode. Thus surface phonons can be vibrations existing purely at the surface, or simply the expression of bulk vibrations in the presence of a surface, known as a surface-excess property. A particular mode, the Rayleigh phonon mode, exists across the entire SBZ and is known by special characteristics, including a linear frequency versus wave number relation near the SBZ center. Experiment Two of the more common methods for studying surface phonons are electron energy loss spectroscopy and helium atom scattering. Electron energy loss spectroscopy The technique of electron energy loss spectroscopy (EELS) is based upon the fact that electron energy decreases upon interaction with matter. Since low-energy electrons interact mainly with the surface, the loss is due to scattering from surface phonons, which have energies in the range of 10−3 eV to 1 eV. In EELS, an electron of known energy is incident upon the crystal; a phonon of some wave number, q, and frequency, ω, is then created, and the outgoing electron's energy and wave number are measured. If the incident electron energy, Ei, and wave number, ki, are chosen for the experiment, and the scattered electron energy, Es, and wave number, ks, are known by measurement, as well as the angles with respect to the normal for the incident and scattered electrons, θi and θs, then values of q throughout the BZ can be obtained. Energy and momentum for the electron are related by $E = \frac{\hbar^2 k^2}{2m}$, where m is the mass of the electron. Energy and momentum must be conserved, so the following relations must hold for the energy and momentum exchange throughout the encounter: $E_i - E_s = \hbar\omega$ and $k_i \sin\theta_i - k_s \sin\theta_s = q + G$, where G is a reciprocal lattice vector that ensures that q falls in the first BZ and the angles θi and θs are measured with respect to the normal to the surface. The dispersion is often shown with q given in units of cm−1, in which 100 cm−1 = 12.41 meV. The electron incident angles for most EELS phonon study chambers can range from 135° − θs to 90° − θf, for θf ranging between 55° and 65°. Helium atom scattering Helium is the atom best suited for surface scattering techniques, as it has a low enough mass that multiple phonon scattering events are unlikely, and its closed valence electron shell makes it inert, unlikely to bond with the surface upon which it impinges.
In particular, 4He is used because this isotope allows for very precise velocity control, important for obtaining maximum resolution in the experiment. There are two main techniques used for helium atom scattering studies. One is a so-called time-of-flight measurement, which consists of sending pulses of He atoms at the crystal surface and then measuring the scattered atoms after the pulse. The He beam velocity ranges from 644 to 2037 m/s. The other involves measuring the momentum of the scattered He atoms with a LiF grating monochromator. It is important to note that the He nozzle beam source used in many He scattering experiments poses some risk of error, as it adds components to the velocity distributions that can mimic phonon peaks; particularly in time-of-flight measurements, these peaks can look very much like inelastic phonon peaks. Thus, these false peaks have come to be known by the names "deceptons" or "phonions". Comparison of techniques EELS and helium scattering techniques each have their own particular merits that warrant the use of either, depending on the sample type, the resolution desired, etc. Helium scattering has a higher resolution than EELS, with a resolution of 0.5–1 meV compared to 7 meV. However, He scattering is available only for energy differences, Ei−Es, of less than about 30 meV, while EELS can be used for up to 500 meV. During He scattering, the He atom does not actually penetrate into the material, being scattered only once at the surface; in EELS, the electron can go as deep as a few monolayers, scattering more than once during the course of the interaction. Thus, the resulting data is easier to understand and analyze for He atom scattering than for EELS, since there are no multiple collisions to account for. He beams are capable of delivering a higher flux than the electron beams in EELS, but the detection of electrons is easier than the detection of He atoms. He scattering is also more sensitive to very low frequency vibrations, on the order of 1 meV. This is the reason for its high resolution in comparison to EELS. References Quasiparticles Bosons
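As a worked illustration of the EELS kinematics above, here is a minimal Python sketch (not from the article) that extracts the surface-parallel momentum transfer q from the conservation relations. The 5 eV beam energy, 30 meV loss and the two angles are made-up example values, and the reciprocal lattice vector G is set to zero, i.e. q is assumed to fall in the first BZ.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def wavevector(energy_eV):
    """Electron wavevector k from E = hbar^2 k^2 / (2m), in m^-1."""
    return math.sqrt(2.0 * M_E * energy_eV * EV) / HBAR

def phonon_q(E_i, E_loss, theta_i_deg, theta_s_deg):
    """Surface-parallel momentum transfer q = k_i*sin(theta_i) - k_s*sin(theta_s),
    with E_s = E_i - E_loss from energy conservation and G taken as zero.
    Angles are measured from the surface normal; result returned in cm^-1."""
    k_i = wavevector(E_i)
    k_s = wavevector(E_i - E_loss)
    q = k_i * math.sin(math.radians(theta_i_deg)) - k_s * math.sin(math.radians(theta_s_deg))
    return q * 1e-2  # convert m^-1 to cm^-1

# Hypothetical measurement: 5 eV beam, 30 meV loss, 60 deg in, 55 deg out.
print(f"q = {phonon_q(5.0, 0.030, 60.0, 55.0):.3e} cm^-1")  # ~5.6e6 cm^-1
```

Using the conversion quoted in the text (100 cm−1 = 12.41 meV), the 30 meV loss in this example corresponds to roughly 242 cm−1, so the run above yields one (q, ω) point of the measured surface phonon dispersion.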
Surface phonon
[ "Physics", "Materials_science" ]
1,888
[ "Matter", "Bosons", "Condensed matter physics", "Quasiparticles", "Subatomic particles" ]
14,558,800
https://en.wikipedia.org/wiki/Lateralization%20of%20bird%20song
Passerine birds produce song through the vocal organ, the syrinx, which is composed of bilaterally symmetric halves located where the trachea separates into the two bronchi. Using endoscopic techniques, it has been observed that song is produced by air passing between a set of medial and lateral labia on each side of the syrinx. Song is produced bilaterally, in both halves, through each separate set of labia unless air is prevented from flowing through one side of the syrinx. Birds regulate the airflow through the syrinx with muscles—M. syringealis dorsalis and M. tracheobronchialis dorsalis—that control the medial and lateral labia in the syrinx, whose action may close off airflow. Song may, hence, be produced unilaterally through one side of the syrinx when the labia are closed on the opposite side. Early experiments discover lateralization Lateral dominance of the hypoglossal nerve conveying messages from the brain to the syrinx was first observed in the 1970s. This lateral dominance was determined in a breed of canary, the waterslager canary, bred for its long and complex song, by lesioning the ipsilateral tracheosyringeal branch of the hypoglossal nerve, disabling either the left or right syrinx. The number of song elements in the birds' repertoires was greatly attenuated when the left side was cut, but only modestly attenuated when the right side was disabled, indicating left syringeal dominance of song production in these canaries. Similar lateralized effects have been observed in other species such as the white-crowned sparrow (Zonotrichia leucophrys), the Java sparrow (Lonchura oryzivora) and the zebra finch (Taeniopygia guttata), which is right-side dominant. However, denervation in these birds does not entirely silence the affected syllables but creates qualitative changes in phonology and frequency. Respiratory control and neurophysiology In waterslager canaries, which produce most syllables using the left syrinx, as soon as a unilaterally produced syllable finishes, the right side opens briefly to allow inspiratory airflow through both bronchi before being closed again for left-syrinx song production. During this "mini-breath" the left side may remain partially or fully adducted, allowing less inspiratory airflow than the right side while remaining ready to quickly resume singing. When bilateral airflow and subsyringeal air sac pressure were monitored along with electromyographic activity of expiratory abdominal muscles in brown thrashers (Toxostoma rufum), it was observed that during unilateral production of song, expiratory abdominal muscle activity was the same on both sides. This indicates that while inspiration and syringeal song control may be lateralized, motor control of respiratory muscles possibly remains bilateral. Muscles of the syrinx are controlled by the tracheosyringeal branch of the hypoglossal nerve. Each syringeal half is ipsilaterally innervated by the hypoglossal motor nucleus (XIIts) in the brain, which in turn receives projections—mainly ipsilateral—from nucleus robustus (RA), an important song control nucleus that also regulates respiratory muscles. Laterality of song control has been observed all the way into the higher vocal center (HVC) brain region; unilateral lesions to HVC produce lateralized effects in the temporal patterning of song in the zebra finch.
See also Bird song: Neuroanatomy Species-specific examples Canary (Serinus canaria) The waterslager canary is the most robust example of unilateral syringeal dominance, producing song in which 90% of the syllables come from the left syrinx, as determined by recording respiratory pressure and airflow through each side during singing. Waterslager canaries with left tracheosyringeal nerve cuts are only able to produce up to 26% of their pre-operation syllable repertoire. The waterslager canary strain is conspecific with the domestic canary but has been inbred by humans for its beautiful song. The outbred domestic canary, however, does not exhibit the strong lateralization of the waterslager canary. Possibly explaining their strong left lateralization, canaries of the waterslager strain carry an inherited auditory defect that decreases their sensitivity by up to 40 dB to sounds higher than 2 kHz, which are produced mainly by the right side of the syrinx. Brown-headed cowbird (Molothrus ater) The brown-headed cowbird produces very rapid clusters of notes that alternate in frequency, with the right syrinx producing the high-frequency notes and the left syrinx producing the low-frequency notes. The entire cluster is sung during a single respiratory expiration, called "pulsatile expiration", in which no inflow of air occurs between notes. By alternating note production successively between each side of the syrinx without ceasing expiration, a cowbird is able to rapidly and abruptly switch the frequency of notes back and forth between high and low frequencies. Northern cardinal (Cardinalis cardinalis) The repertoire of the northern cardinal includes FM sweep syllables that begin around 6 or 7 kHz and sweep downward continuously to 2 kHz. Each of these syllables is sung unilaterally. However, the cardinal switches mid-syllable from employing the right syrinx at the high-frequency beginning of the sweep to the left syrinx at the lower-frequency end. This switch usually occurs when the sweep reaches the 3.5 to 4.0 kHz range. The transition is abrupt yet timed so precisely that neither sonograms nor audition can detect the switch. Northern mockingbird (Mimus polyglottos) Because the mockingbird has the ability to mimic the songs of other species, it has been useful in determining whether the vocal motor patterns employed by particular species in producing their unique song types are constrained by acoustic properties, or whether the same song types may also be produced by different motor patterns generated by the same songbird vocal system. When juvenile mockingbirds were tutored with the recorded or synthesized song of a cardinal or cowbird, the mockingbirds employed the same respiratory and lateralized vocal pattern as the original species to produce the mimicked song. When the mockingbird motor pattern differed from the tutor motor pattern, the song output also differed, suggesting that the vocal motor pattern is largely determined by the acoustic constraints of the song type. Even though the mockingbird was able to mimic the FM sweeps of cardinals by employing the same motor pattern—switching mid-syllable from right syrinx to left syrinx—the mockingbird did not perform the transition seamlessly.
This indicates that precise unilateral control of song production in the syrinx of certain birds, such as cardinals, has allowed them to become unique vocal specialists. Possible functions Because the lateralized control of songs of certain species, such as cardinals, demands such precision in motor control, the ability to produce high-quality, seamless syllables may provide an indicator of fitness to potential mates. Supporting this hypothesis, certain syllables called "sexy syllables" sung by male canaries at high frequency are more effective than others in eliciting sexual displays from females. These particular syllables all contain two notes that are sung alternately by each side of the syrinx. Thus, control of the rapid switching from one side of the syrinx to the other is required to produce these attractive syllables. Lateralization also allows for rapid and abrupt frequency changes. Studies of mockingbirds mimicking tone pairs in which the first tone was either higher or lower than a median tone of 2 kHz (either side is capable of producing this median tone) revealed that alternating sides of the syrinx for each note was necessary to reproduce them correctly. Correct mimicking was performed by singing the first syllable with the appropriate side of the syrinx—right for a high frequency tone and left for low frequency—and the second median tone with the opposite side. When the same side was used for both tones, the step-wise frequency change between the tones became slurred, suggesting that lateralization allows for abrupt frequency changes in song. See also Syrinx (bird anatomy) Bird vocalization Lateralization of brain function Animal communication Bioacoustics References External links X-ray video of a cardinal singing Macaulay Library at the Cornell Lab of Ornithology is the world's largest collection of animal sounds and associated video. xeno-canto a community database with c. 183,000 recordings of c. 9,000 bird species (August 2014) Neuroethology Bird sounds Passeri
Lateralization of bird song
[ "Biology" ]
1,861
[ "Ethology", "Behavior", "Neuroethology" ]
14,559,306
https://en.wikipedia.org/wiki/InXitu
inXitu was a company based in Mountain View, California, which developed portable X-ray diffraction (XRD) and X-ray fluorescence (XRF) analysis instruments. The company name was a combination of the terms in situ and X-ray, reflecting the company's dedication to developing X-ray instruments that could be easily transported to the original site of the material being analyzed. Company history The basis for inXitu began in 2003 when Philippe Sarrazin worked with NASA to file a patent on techniques used to develop the CheMin instrument for the Mars Curiosity rover. Sarrazin left NASA to form inXitu Research, which received two Small Business Innovation Research grants from Ames Research Center in 2004 to continue work on CheMin. inXitu Research merged with Microwave Power Technology (MPT) in 2007 and incorporated as inXitu, Inc. MPT's research and development in high vacuum systems was combined with inXitu's experience with XRD equipment, and in early 2008 the company released Terra, a commercial field-portable XRD/XRF instrument. Bradley Boyer joined the company as President and Chief Executive Officer in September 2008. inXitu formed a partnership with Innov-X in December 2008, under which inXitu would manufacture XRD equipment for sale under the Innov-X brand name. Also in 2008, inXitu worked with the Getty Conservation Institute to develop X-Duetto, a portable and non-destructive XRD/XRF device used for the analysis of works of art. It was commercially released as Duetto in mid-2009. The company released the BTX instrument in mid-2009, a desktop XRD/XRF device developed from Terra; the second-generation BTX-II was released in early 2010. inXitu was purchased by Olympus in November 2011. References Diffraction Fluorescence Defunct technology companies of the United States X-ray equipment manufacturers
InXitu
[ "Physics", "Chemistry", "Materials_science" ]
397
[ "Luminescence", "Fluorescence", "Spectrum (physical sciences)", "Diffraction", "Crystallography", "Spectroscopy" ]
14,559,354
https://en.wikipedia.org/wiki/Artstein%27s%20theorem
Artstein's theorem states that a nonlinear dynamical system in the control-affine form $\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\,u_i$ has a differentiable control-Lyapunov function if and only if it admits a regular stabilizing feedback u(x), that is, a feedback that is locally Lipschitz on $\mathbb{R}^n \setminus \{0\}$. The original 1983 proof by Zvi Artstein proceeds by a nonconstructive argument. In 1989 Eduardo D. Sontag provided a constructive version of this theorem, explicitly exhibiting the feedback. See also Analysis and control of nonlinear systems Control-Lyapunov function References Control theory Theorems in dynamical systems
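For the single-input case, $\dot{x} = f(x) + g(x)u$, Sontag's constructive feedback is commonly quoted in the following form; the shorthand a(x) and b(x) is introduced here for illustration and is not notation from the article. With $a(x) = \nabla V(x)\cdot f(x)$ and $b(x) = \nabla V(x)\cdot g(x)$ for the control-Lyapunov function V,

$$u(x) = \begin{cases} -\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0,\\[4pt] 0, & b(x) = 0. \end{cases}$$

This "universal formula" is smooth on $\mathbb{R}^n \setminus \{0\}$; if V additionally satisfies the small control property, the resulting feedback is also continuous at the origin.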
Artstein's theorem
[ "Mathematics" ]
126
[ "Theorems in dynamical systems", "Mathematical theorems", "Applied mathematics", "Control theory", "Applied mathematics stubs", "Mathematical problems", "Dynamical systems" ]
14,559,659
https://en.wikipedia.org/wiki/Aeronautical%20Systems%20Center
The Aeronautical Systems Center (ASC) is an inactivated Air Force product center that designed, developed and delivered weapon systems and capabilities for U.S. Air Force, other U.S. military, allied and coalition-partner warfighters. ASC formed in 1961, and over its lifetime it managed 420 Air Force, joint and international aircraft acquisition programs and related projects; executed an annual budget that reached $19 billion; and employed a workforce of more than 11,000 people located at Wright-Patterson Air Force Base and 38 other locations worldwide. ASC's portfolio included capabilities in fighter/attack, long-range strike, reconnaissance, mobility, agile combat support, special operations forces, training, unmanned aircraft systems, human systems integration and installation support. ASC was deactivated during a 20 July 2012 ceremony held at Wright-Patterson Air Force Base, Ohio. History Early Aviation The Airplane Engineering Department, precursor of ASC, was first established under the U.S. Army's Aviation Section, U.S. Signal Corps in late 1917 at McCook Field. Early on, the department's focus was flight testing and training. The department was renamed the Airplane Engineering Division following World War I; it continued its mission of flight testing and training, but also began development and engineering work. One early in-house model, the VCP-1, was designed by resident engineers Alfred V. Verville and Virginius E. Clark. Another aircraft tested was the MB-1, eventually used as the standard mail plane. The division also expanded operations to Wilbur Wright Field. The division also pioneered aviation safety with the use of free-fall parachutes and the development of protective clothing, closed cockpits, heated and pressurized cabins, and oxygen systems. As the stockpile of aircraft and parts grew, the division was able to spend more time finding ways to enhance tools and procedures for pilots. Advancements included an electric ignition system, anti-knock fuels, navigational aids, improved weather forecasting techniques, stronger propellers, advances in aerial photography, and the design of landing and wing lights for night flying. In 1925 the division's role shifted from designing and building aircraft to acquiring and evaluating aircraft prototypes submitted by the commercial aircraft industry. This left division engineers free to concentrate on developing standards unique to military aircraft, reviewing designs, modifying and testing procured machines, and developing ancillary equipment to enhance military aircraft. The Engineering Division merged with the Supply Division in 1926 to form the Material Division. The new unit required more space than McCook Field offered, so in an effort to keep the Air Service presence at Dayton, Ohio, a local interest group led by John H. Patterson and his son Frederick bought a tract of land, including Wilbur Wright Field, and donated it to the Air Service, creating Wright Field. From Wright Field the division continued to work on aviation advancements including engine design, navigation and communications equipment, cockpit instrumentation, electrically heated flight clothing, and in-flight refueling equipment. The Physiological Research Laboratory led pioneering research in pilot exposure to extremes of speed, pressure, and temperature. Specific advancements of the division in the 1930s include the Norden bombsight, the internal bomb bay, and the power-operated gun turret.
World War II The Material Division was re-designated the Material Command in 1942 as the role of the Army Air Force expanded. By 1943, well over 800 major and thousands of minor research and development projects were in progress at Wright Field. Because many materials were scarce or unavailable during the war, scientists in the Materials Laboratory were involved in developing and testing a number of substitutes, including synthetic rubber for tires, nylon for parachutes, and plastic for canopies. The Armament Laboratory developed armored, self-sealing fuel tanks, increased bomb load capacity, gun turrets, and defensive armament. Despite the immediate needs of World War II, the command continued to work on future projects. In 1944, Major Ezra Kotcher undertook pioneering work that led to the first supersonic airplane, the Bell X-1. Cold War The new independent Air Force created the Air Research and Development Command and placed the principal elements of engineering, the laboratories, and flight testing under the Air Development Force, soon renamed the Wright Air Development Center (WADC). It had divisions including Weapons Systems, Weapons Components, Research, Aeronautics, All-Weather Flying, Flight Test, and Materiel, and 12 laboratories. Engineers at Wright Field evaluated captured foreign aircraft during and after World War II. Aircraft brought to Wright Field included allied aircraft such as the Russian Yakovlev Yak-9 and the British Supermarine Spitfire and de Havilland Mosquito, and enemy aircraft including the German Junkers Ju 88, Messerschmitt Bf 109, Focke-Wulf Fw 190, and Messerschmitt Me 262, and the Japanese A6M Zero. Out of the need for a secret location to test experimental aircraft, the flight testing of airframes moved to Rogers Dry Lake, Muroc, California, later named the Air Force Flight Test Center, Edwards Air Force Base. Some flight testing continued at Wright-Patterson but was confined to component and instrument testing and other specialized kinds of flight test. The most important addition to postwar flight testing at Wright Field was all-weather testing. It represented the first major attempt to solve the many problems encountered in flying under all weather conditions, both day and night. WADC developed two "workhorse" aircraft during the 1950s: the Boeing B-52 Stratofortress and the Lockheed C-130 Hercules. WADC also developed experimental systems, known as the X-series aircraft, in an effort to advance aviation technology and the flight envelope, including the first flight of a vertical takeoff and landing (VTOL) aircraft. WADC programs also contributed to the space program through the X-20 and Zero-G training. The XQ-6 and XQ-9 target drones were conceived by the Wright Air Development Center but never reached the hardware phase. WADC was inactivated and replaced by the Wright Air Development Division in 1959, then by the Aeronautical Systems Division (ASD) in 1961. That year, the Air Force merged the Air Research and Development Command with the procurement functions of Air Material Command to form the Air Force Systems Command. In 1963, the Materials, Avionics, Aero Propulsion, and Flight Dynamics Laboratories were established and placed under one organization, the Research and Technology Division. Research during this time included examining different materials for aircraft structures, phased-array radar, and improved power plants.
During the Vietnam War, ASD set up a special division called Limited War/Special Air Warfare to respond to the special requirements dictated by the conflict. Part of this concept was "Project 1559", which provided a means for rapidly evaluating new hardware ideas to determine their usefulness for conducting limited war. Support systems included a highly mobile tactical air control system, disposable parachutes, intrusion alarms for air base defense, and a grenade launcher for the AR-15 rifle. In response to the unique climate of Southeast Asia, ASD conducted an evaluation of chemical rain repellents for fighter aircraft and discovered that varieties of repellent applied to cockpit windshields on the ground prior to flight had a long life and could last several hours, even days. During the early 1970s the Department of Defense became concerned with the rising costs of military procurement; it consequently abandoned the concept of buying a weapon system as a complete, finished package and reorganized the acquisition cycle into five phases: conceptual, validation, development, production, and deployment. The Air Force viewed this as a more flexible approach, providing oversight, review, and evaluation during each phase. Under this new process ASD continued enhancing airframes and developing armaments. The 1980s brought additional funding restraints, which led to further reorganization for ASD. In addition to equipment engineering, ASD worked on process improvement as well, introducing Total Quality Management (TQM). ASD also helped operationalize stealth technology, which had been introduced in the 1970s. Work also began on a system of very high speed integrated circuits that would allow advanced avionics architectures to integrate many aircraft subsystems, such as weapons delivery, flight controls, and communications, into smaller, more reliable subsystems. The Avionics and Flight Dynamics Laboratories coordinated research on an "all-glass" cockpit of the future that would allow a pilot, through voice activation, to mix or "enhance" data presented in picture-like symbols on one large TV-like screen. Post-Cold War In the post-Cold War environment the Air Force again realigned its commands, merging the Air Force Logistics Command and the Air Force Systems Command to form the Air Force Materiel Command (AFMC). ASD was then relabeled the Aeronautical Systems Center (ASC) in 1992 and a massive reorganization ensued; however, ASC retained its leading role in the acquisition of new systems and the upgrade and modification of existing systems to support the Air Force's Core Competencies into the 21st century. In light of the new security climate, ASC moved to upgrade the B-1 Lancer and B-2 Spirit from exclusively nuclear to conventional weapons. Subsequently, both airframes have seen active combat roles. ASC also placed a premium on Information Superiority and focused heavily on building sensors for the U-2 and unmanned aerial vehicles. The Aeronautical Systems Center was inactivated on 20 July 2012; its units were merged into the Air Force Life Cycle Management Center.
Lineage Constituted as the Aeronautical Systems Division on 21 March 1961 Activated on 1 April 1961 Redesignated Aeronautical Systems Center on 1 July 1992 Inactivated on 1 October 2012 Assignments Air Force Systems Command, 1 April 1961 Air Force Materiel Command, 1 July 1992 – 1 October 2012 (attached to Air Force Life Cycle Management Center after 20 July 2012) Stations Wright-Patterson Air Force Base, 1 April 1961 – 1 October 2012 Subordinate units 77th Aeronautical Systems Wing 88th Air Base Wing 303d Aeronautical Systems Wing 311th Human Systems Wing 312th Aeronautical Systems Wing 326th Aeronautical Systems Wing 478th Aeronautical Systems Wing 516th Aeronautical Systems Wing 4950th Test Wing References Notes Bibliography External links Aeronautical Systems Center Home Page ASC Fact Sheet Jet Engine Inventors A Genesis Workshop Aeronautics organizations Logistics units and formations of the United States Air Force Alfred V. Verville Centers of the United States Air Force Military in Ohio Wright-Patterson Air Force Base 1961 establishments in the United States
Aeronautical Systems Center
[ "Engineering" ]
2,095
[ "Aeronautics organizations" ]
14,560,309
https://en.wikipedia.org/wiki/MRGPRX3
Mas-related G-protein coupled receptor member X3 is a protein that in humans is encoded by the MRGPRX3 gene. See also MAS1 oncogene References Further reading G protein-coupled receptors
MRGPRX3
[ "Chemistry" ]
46
[ "G protein-coupled receptors", "Signal transduction" ]
14,560,331
https://en.wikipedia.org/wiki/MRGPRX4
Mas-related G-protein coupled receptor member X4 is a protein that in humans is encoded by the MRGPRX4 gene. See also MAS1 oncogene References Further reading G protein-coupled receptors
MRGPRX4
[ "Chemistry" ]
46
[ "G protein-coupled receptors", "Signal transduction" ]
14,560,379
https://en.wikipedia.org/wiki/MRGPRF
MAS-related GPR, member F, also known as MRGPRF, is a human gene. See also MAS1 oncogene References Further reading G protein-coupled receptors
MRGPRF
[ "Chemistry" ]
39
[ "G protein-coupled receptors", "Signal transduction" ]
14,560,419
https://en.wikipedia.org/wiki/MRGPRX1
Mas-related G-protein coupled receptor member X1 is a protein that in humans is encoded by the MRGPRX1 gene. See also MAS1 oncogene References Further reading G protein-coupled receptors
MRGPRX1
[ "Chemistry" ]
46
[ "G protein-coupled receptors", "Signal transduction" ]
14,560,627
https://en.wikipedia.org/wiki/Quanta%20Services
Quanta Services is an American corporation that provides infrastructure services for the electric power, pipeline, industrial and communications industries. Its capabilities include the planning, design, installation, program management, maintenance and repair of most types of network infrastructure. In June 2009, Quanta Services was added to the S&P 500 index, replacing Ingersoll-Rand. Quanta Services employs about 40,000 people. Its operating companies achieved combined revenues of about $11 billion in 2018. It is headquartered in Houston, Texas. In 1998, Quanta went public on the New York Stock Exchange under the ticker symbol PWR. History Colson and PAR John R. Colson founded Quanta Services in 1997 and is a former executive chairman. After earning a degree in geology from the University of Missouri at Kansas City, Colson entered the military and served one year in Vietnam. He was discharged from the Army in 1971 and returned to Kansas City, taking temporary employment at PAR Electrical Contractors Inc., which built high-voltage transmission lines, distribution lines, and substations, and provided other electric utility infrastructure services. Colson's initial job was to carry stakes for a survey team. Within three years, he was promoted to manager of engineering services. After six years, he had worked his way up to vice-president of operations. After becoming executive vice-president and general manager in the early 1980s, he bought the company, becoming its president in 1991 and ultimately its owner. Initial formation In the 1990s, the electrical contracting business was highly fragmented, populated by more than 50,000 companies. The vast majority were small, owner-operated enterprises. Deregulation of the electric utility industries in a number of states prompted utilities to become more cost-competitive, leading to the outsourcing of infrastructure work to contractors who could do the job more efficiently. Much of the transmission and distribution infrastructure in the United States was aging and in need of repair or replacement. In 1997, Colson spearheaded the combination of four contractors to form Quanta Services Inc., which then established its headquarters in Houston with Colson as its head. In addition to PAR, Quanta consisted of Union Power Construction Co., Trans Tech Electric Inc., and Potelco Inc. Initial public offering With BT Alex Brown Incorporated, BancAmerica Robertson Stephens, and Sanders Morris Mundy Inc. serving as underwriters, Quanta completed its IPO in February 1998, raising $45 million. Of that amount, $21 million was used to pay the cash portion of the buyouts of the four founding companies. Much of the balance, along with a $175 million line of credit arranged with a consortium of nine banks, was spent on over a dozen acquisitions completed in 1998. Acquired telecom companies included Manuel Brothers, Smith Contracting, Telecom Network Specialists, North Pacific Construction Company, NorAm Telecommunications, Spalj Construction Company and Golden State Utility Company. Acquired electric contractors included Harker & Harker, Sumter Builders and Environmental Professional Associates. Hybrid acquisitions included Wilson Roadbores and Underground Construction Company. A secondary offering was completed in late January 1999. The company had planned to sell 3.5 million shares at $21 per share, but interest was so strong that in the end 4.6 million shares were sold at $23.25 per share. All told, Quanta realized $101.1 million.
The money was used to fund the acquisition of 40 additional companies, which in total cost $323.6 million in cash and notes and 15 million shares of stock. Many of these additions were made to expand Quanta's business in gas transmission and cable television. UtiliCorp takeover bid In 2001, UtiliCorp United Inc. (now Aquila, Inc.), an energy company with whom PAR had been doing business since the 1950s, attempted a takeover of Quanta. UtiliCorp owned about 36% of Quanta, an investment that was originally part of a strategic alliance under which UtiliCorp outsourced all of its maintenance needs to Quanta. Quanta resisted, and in October 2001, the two parties signed a standstill agreement. A month later Quanta adopted a "poison pill" plan to prevent a takeover, prompting UtiliCorp to sue. A proxy fight ensued in the spring of 2002. Quanta maintained that UtiliCorp, which was enduring difficult times, wanted to gain a controlling interest in order to consolidate Quanta's earnings with its own balance sheet. The fight came to an end in May 2002, as Quanta fended off the takeover bid. Sale of telecommunication and fiber-optic licensing divisions On November 20, 2012, Quanta Services sold its telecommunications subsidiaries for $275 million in cash to Dycom. On August 4, 2015, Quanta Services sold its fiber optic licensing operations (Sunesys) to Crown Castle International Corp. (NYSE: CCI) for approximately $1 billion in cash. Acquisitions On August 30, 2007, Quanta Services acquired InfraSource Services through an all-stock deal. Before the merger, Engineering News-Record ranked Quanta Services as the second-largest specialty contractor in the United States and InfraSource Services as No. 8. This acquisition received popular attention after being given positive coverage on Jim Cramer's Mad Money show, in Smart Money and in TheStreet. In September 2009, Quanta Services announced that a deal had been struck to acquire Price Gregory, the largest U.S. gas pipeline construction company, for $350 million. With this acquisition, Quanta Services was expected to have consolidated 2009 revenue of $4.4 billion. On October 22, 2010, Quanta Services announced an agreement to acquire Canada's largest electric power line contractor, Valard Construction, for approximately $219 million. On September 2, 2021, Quanta Services announced that it had entered into a definitive agreement to acquire Blattner Holding Company (Blattner), one of the largest and leading utility-scale renewable energy infrastructure solutions providers in North America, for $2.7 billion in stock and cash. Blattner generated full-year 2020 revenues and adjusted EBITDA (a non-GAAP measure) of approximately $2.4 billion and $291 million, respectively. In August 2022, Quanta purchased William E. Groves Construction Inc. of Madison, KY. Leadership On March 14, 2016, Earl C. "Duke" Austin succeeded Jim O'Neil as chief executive officer. Austin is currently president, chief executive officer and chief operating officer. He is a graduate of Sam Houston State University in Huntsville, Texas, and the former president of Quanta's operating unit North Houston Pole Line. On April 2, 2012, Derrick A. Jensen succeeded chief financial officer James H. Haddox. Jensen is a graduate of Oklahoma State University.
Lower Rio Grande Valley Energized Reconductor Project On June 13, 2016, American Electric Power (AEP) received the 89th annual Edison Electric Institute (EEI) Edison Award, the electric power industry's most prestigious honor, for its Energized Reconductor Project in the Lower Rio Grande Valley (LRGV) of Texas. The 240-mile project was made possible by the live-line planning capabilities of Quanta Energized Services (QES) and North Houston Pole Line's construction expertise. References External links Companies listed on the New York Stock Exchange Companies based in Houston Construction and civil engineering companies of the United States Engineering companies of the United States Energy engineering and contractor companies American companies established in 1997 1998 initial public offerings
Quanta Services
[ "Engineering" ]
1,549
[ "Energy engineering and contractor companies", "Engineering companies" ]
14,562,016
https://en.wikipedia.org/wiki/Cruelty%20to%20Animals%20Act%201876
The Cruelty to Animals Act 1876 (39 & 40 Vict. c. 77) was an Act of the Parliament of the United Kingdom which set limits on the practice of, and instituted a licensing system for, animal experimentation, amending the Cruelty to Animals Act 1849. It was a public general Act. The Act was replaced 110 years later by the Animals (Scientific Procedures) Act 1986. The Act The Act stipulated that researchers would be prosecuted for cruelty unless they conformed to its provisions, which permitted an experiment involving the infliction of pain upon animals to be conducted only when "the proposed experiments are absolutely necessary for the due instruction of the persons [so they may go on to use the instruction] to save or prolong human life". Furthermore, the Act stated that should the experiment occur, the animal must be anaesthetised, used only once (though several procedures regarded as part of the same experiment were permitted), and killed as soon as the study was over. Prosecutions under the Act could be made only with the approval of the Secretary of State. The Act applied to vertebrate animals only. History and controversy Opposition to vivisection had led the government to set up a Royal Commission on Vivisection in July 1875, which recommended that legislation be enacted to control the practice. This Act was created as a result, but was criticized by the National Anti-Vivisection Society – itself founded in December 1875 – as "infamous but well-named," in that it made no provision for public accountability of licensing decisions. The law remained in force for 110 years, until it was replaced by the Animals (Scientific Procedures) Act 1986, which is the subject of similar criticism from the modern animal rights movement. Such was the perceived weakness of the Act that vivisection opponents chose, on at least one occasion – the Brown Dog affair – to incite a libel suit rather than seek a prosecution under the Act. Penalties The Act states, in part: See also Wild Animals in Captivity Protection Act 1900 Animal welfare in the United Kingdom References Halsbury's Statutes of England (The Complete Statutes of England). First Edition. Butterworth and Co (Publishers) Ltd. 1929. Volume 1. Page 367. Third Edition. 1968. Volume 2. Page 222. Cumulative Supplement. Part 1. 1985. Paragraph for page 221. John Mounteney Lely. The Statutes of Practical Utility (Chitty's Statutes). Fifth Edition. Sweet and Maxwell. Stevens and Sons. London. 1894. Volume 1. Title "Animals". Pages 12 to 17. Paterson, William. The Practical Statutes of the Session 1876. Pages 245 to 255. Coleridge, Bernard. Commentary on the Cruelty to Animals Act, 1876. Victoria Street and International Society for the Protection of Animals from Vivisection. Victoria Street, London. 1896. Google Books. Reprinted from the Zoophilist as chapter 23 of "The Anti-Vivisection Question". Coleridge, Stephen, "The Administration of the Cruelty to Animals Act of 1876" (1900) 67 Fortnightly Review 392 (No 399, March). "The First Conviction under the Vivisection Act" (1876) 61 The Law Times 382 (7 October). External links Full text of the Act, accessed 12 May 2010. United Kingdom Acts of Parliament 1876 Repealed United Kingdom Acts of Parliament Cruelty to animals Animal welfare and rights legislation in the United Kingdom Animal testing in the United Kingdom Anti-vivisection movement
Cruelty to Animals Act 1876
[ "Chemistry" ]
712
[ "Animal testing", "Anti-vivisection movement", "Vivisection" ]
14,562,492
https://en.wikipedia.org/wiki/Reptation
A peculiarity of the thermal motion of very long linear macromolecules in entangled polymer melts or concentrated polymer solutions is reptation. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. Similar phenomena also occur in proteins. Two closely related concepts are reptons and entanglement. A repton is a mobile point residing in the cells of a lattice, connected by bonds. Entanglement means the topological restriction of molecular motion by other chains. Theory and mechanism Reptation theory describes the effect of polymer chain entanglements on the relationship between molecular mass and chain relaxation time. The theory predicts that, in entangled systems, the relaxation time $\tau$ is proportional to the cube of molecular mass $M$: $\tau \sim M^3$. The prediction of the theory can be arrived at by a relatively simple argument. First, each polymer chain is envisioned as occupying a tube of length $L$, through which it may move with snake-like motion (creating new sections of tube as it moves). Furthermore, if we consider a time scale comparable to $\tau$, we may focus on the overall, global motion of the chain. Thus, we define the tube mobility as $\mu_{tube} = v/f$, where $v$ is the velocity of the chain when it is pulled by a force $f$. $\mu_{tube}$ will be inversely proportional to the degree of polymerization (and thus also inversely proportional to chain weight). The diffusivity of the chain through the tube may then be written as $D_{tube} = \mu_{tube} k_B T$. By then recalling that in one dimension the mean squared displacement due to Brownian motion is given by $\langle s^2(t) \rangle = 2Dt$, we obtain $\langle s^2(t) \rangle = 2\mu_{tube} k_B T\, t$. The time necessary for a polymer chain to displace the length of its original tube is then $t = \frac{L^2}{2\mu_{tube} k_B T}$. By noting that this time is comparable to the relaxation time, we establish that $\tau \sim \frac{L^2}{\mu_{tube}}$. Since the length of the tube is proportional to the degree of polymerization, and $\mu_{tube}$ is inversely proportional to the degree of polymerization, we observe that $\tau \sim N^3$ (and so $\tau \sim M^3$). From the preceding analysis, we see that molecular mass has a very strong effect on relaxation time in entangled polymer systems. Indeed, this is significantly different from the untangled case, where relaxation time is observed to be proportional to molecular mass. This strong effect can be understood by recognizing that, as chain length increases, the number of tangles present will dramatically increase. These tangles serve to reduce chain mobility. The corresponding increase in relaxation time can result in viscoelastic behavior, which is often observed in polymer melts. Note that the polymer's zero-shear viscosity gives an approximation of the actual observed dependency, $\tau \sim M^{3.4}$; this relaxation time has nothing to do with the reptation relaxation time. Models Entangled polymers are characterized by an effective internal scale, commonly known as the length of macromolecule between adjacent entanglements, $M_e$. Entanglements with other polymer chains restrict polymer chain motion to a thin virtual tube passing through the restrictions. Without breaking polymer chains to allow the restricted chain to pass through it, the chain must be pulled or flow through the restrictions.
The linear macromolecules reptate if the length of the macromolecule M is bigger than the critical entanglement molecular weight M_c. M_c is 1.4 to 3.5 times M_e. There is no reptation motion for polymers with M < M_c, so that the point M = M_c is a point of dynamic phase transition. Due to the reptation motion, the coefficient of self-diffusion and the conformational relaxation times of macromolecules depend on the length of the macromolecule as M^-2 and M^3, correspondingly. The conditions of existence of reptation in the thermal motion of macromolecules of complex architecture (macromolecules in the form of branch, star, comb and others) have not been established yet.

The dynamics of shorter chains, or of long chains at short times, is usually described by the Rouse model.

See also
Important publications in polymer physics
Polymer characterization
Polymer physics
Protein dynamics
Soft matter

References

Polymer physics
Materials science
Polymers
1971 introductions
Reptation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,064
[ "Polymer physics", "Applied and interdisciplinary physics", "Materials science", "Polymer chemistry", "nan", "Polymers" ]
14,564,281
https://en.wikipedia.org/wiki/Austin%20Creek
Austin Creek is a southward-flowing stream in the mountains of western Sonoma County, California, which empties into the Russian River a short distance upstream from the Pacific Ocean.

Course
The creek originates in an isolated area known as The Cedars, west of Healdsburg, California. It flows south past Layton Mine into a wooded canyon, where it joins King Ridge Road just above its confluence with Bearpen Creek. It parallels King Ridge Road to the town of Cazadero and continues south through confluences with East Austin Creek and Kidd Creek. It flows under State Route 116 at milepost 4.93 and enters the Russian River north of the town of Duncans Mills.

History
In 1885–1886, the North Pacific Coast Railroad extended its narrow-gauge line up Austin Creek to transport lumber from Cazadero to points south. The railroad grade was later converted to a road, becoming Cazadero Highway.

Sonoma Magnesite Company
The Cedars is a distinctive woodland of trees able to grow on a formation of Mesozoic intrusive ultramafic rock. The Sonoma Magnesite Company was formed in 1912 to mine the Red Slide Deposit of magnesite in The Cedars. The mineral is important for steel-making and the manufacture of bricks for high-temperature applications, but the cost of transportation made mining in The Cedars infeasible until World War I interrupted the availability of less expensive sources. The Sonoma Magnesite Tramway, an eleven-mile-long, narrow-gauge industrial railway, was built in 1914 along the bank of East Austin Creek to connect the mine with Magnesia station on the NWP railroad south of Cazadero. Thirty tons of ore were calcined daily in an oil-fired rotary kiln and packed into sacks for shipping. Production ended in 1920 when magnesite again became available from less expensive sources.

Sonoma Magnesite Tramway
The railway shipped sacks of magnesite on 4-wheel flatcars. Each of the ten flatcars was four feet wide by seven feet long and could be loaded with 5 tons of magnesite. Oil for the kiln was shipped in six 500-gallon tank cars eight feet long. Trains were pulled by an unusual variety of locomotives, among them a small engine known as Betsy. High water in East Austin Creek washed out significant portions of the tramway in 1921, and Betsy was washed downstream and partially buried in the gravel channel. Most of the rails had been salvaged by 1925, and the kiln was scrapped in 1937. A short length of track was left in place for children's amusement on the Baldwin estate near Austin Creek and the old road to Cazadero. That track was destroyed by the Christmas Week flood of 1955, and Betsy was converted to scrap metal in 1961.

Habitat and pollution
As of 2000, Austin Creek and all its major tributaries supported steelhead trout. Austin Creek and East Austin Creek also harbored California freshwater shrimp. In 2016, scientists found evidence of methane-producing microbes in water coming from underground at The Cedars, the first time methanogens that thrive in harsh environments had been discovered beyond the ocean floor.

Bridges
Many bridges span Austin Creek. The longest of these is the State Route 116 bridge, which was built in 1962.

See also
Armstrong Redwoods State Reserve
Austin Creek State Recreation Area
List of watercourses in the San Francisco Bay Area

References

Rivers of Sonoma County, California
Rivers of Northern California
Tributaries of the Russian River (California)
Extremophiles
2016 in science
Austin Creek
[ "Biology", "Environmental_science" ]
712
[ "Organisms by adaptation", "Extremophiles", "Environmental microbiology", "Bacteria" ]
14,564,960
https://en.wikipedia.org/wiki/Active%20metabolite
An active metabolite, or pharmacologically active metabolite, is a biologically active metabolite of a xenobiotic substance, such as a drug or environmental chemical. Active metabolites may produce therapeutic effects as well as harmful effects.

Metabolites of drugs
An active metabolite results when a drug is metabolized by the body into a modified form that continues to produce effects in the body. Usually these effects are similar to those of the parent drug but weaker, although they can still be significant (see e.g. 11-hydroxy-THC, morphine-6-glucuronide). Certain drugs such as codeine and tramadol have metabolites (morphine and O-desmethyltramadol, respectively) that are stronger than the parent drug, and in these cases the metabolite may be responsible for much of the therapeutic action of the parent drug. Sometimes, however, metabolites may produce toxic effects, and patients must be monitored carefully to ensure the metabolites do not build up in the body. This is an issue with some well-known drugs, such as pethidine (meperidine) and dextropropoxyphene.

Prodrugs
Sometimes drugs are formulated in an inactive form that is designed to break down inside the body to form the active drug; these are called prodrugs. This type of formulation may be chosen because the drug is more stable during manufacture and storage in the prodrug form, or because the prodrug is better absorbed by the body or has superior pharmacokinetics (e.g., lisdexamfetamine).

References

Further reading

Pharmacokinetics
Metabolism
Active metabolite
[ "Chemistry", "Biology" ]
356
[ "Pharmacology", "Pharmacokinetics", "Cellular processes", "Biochemistry", "Metabolism" ]
14,564,979
https://en.wikipedia.org/wiki/Hypersonic%20flight
Hypersonic flight is flight through the atmosphere below altitudes of about 90 km at speeds greater than Mach 5, a speed at which dissociation of air begins to become significant and high heat loads exist. Speeds over Mach 25 had been achieved below the thermosphere as of 2020.

Hypersonic vehicles are able to maneuver through the atmosphere in a non-parabolic trajectory, but their aerodynamic heat loads need to be managed.

History
The first manufactured object to achieve hypersonic flight was the two-stage Bumper rocket, consisting of a WAC Corporal second stage set on top of a V-2 first stage. In February 1949, at White Sands, the rocket reached about Mach 6.7. The vehicle, however, burned on atmospheric re-entry, and only charred remnants were found.

In April 1961, Soviet Major Yuri Gagarin became the first human to travel at hypersonic speed, during the world's first piloted orbital flight. Soon after, in May 1961, Alan Shepard became the first American and second person to fly hypersonic when his capsule re-entered the atmosphere at a speed above Mach 5 at the end of his suborbital flight over the Atlantic Ocean. In November 1961, Air Force Major Robert White flew the X-15 research aircraft at speeds over Mach 6. On 3 October 1967, in California, an X-15 reached Mach 6.7. The re-entry problem of a space vehicle was extensively studied.

The NASA X-43A flew on scramjet power for 10 seconds, and then glided for 10 minutes, on its last flight in 2004. The Boeing X-51 Waverider flew on scramjet power for 210 seconds in 2013, finally reaching Mach 5.1 on its fourth flight test. The hypersonic regime has since become the subject of further study during the 21st century, and of strategic competition between the United States, India, Russia, and China.

Physics

Stagnation point
The stagnation point of air flowing around a body is the point where its local velocity is zero; from this point the oncoming air divides and flows around the body. A shock wave forms ahead of the body, which deflects the air away from the stagnation point and partially insulates the flight body from the atmosphere. This can affect the ability of a flight surface to generate enough lift to counteract its drag and subsequent free fall.

To maneuver in the atmosphere at speeds beyond supersonic, the propulsion can still be air-breathing, but a ramjet does not suffice for a system to attain Mach 5, as a ramjet slows the airflow to subsonic speeds. Some systems (waveriders) use a first-stage rocket to boost a body into the hypersonic regime. Other systems (boost-glide vehicles) use scramjets after their initial boost, in which the speed of the air passing through the scramjet remains supersonic. Other systems (munitions) use a cannon for their initial boost.

High temperature effect
Hypersonic flow is a high-energy flow. The ratio of kinetic energy to the internal energy of the gas increases as the square of the Mach number. When this flow enters a boundary layer, there are high viscous effects due to the friction between the air and the high-speed object. In this case, the high kinetic energy is converted in part to internal energy, and the gas temperature rises in proportion to its internal energy. Therefore, hypersonic boundary layers are high-temperature regions, due to the viscous dissipation of the flow's kinetic energy. Another region of high-temperature flow is the shock layer behind the strong bow shock wave. In the case of the shock layer, the flow's velocity decreases discontinuously as it passes through the shock wave, which results in a loss of kinetic energy and a gain of internal energy behind the shock wave.
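To make the Mach-squared dependence concrete, here is a short worked relation from standard perfect-gas theory; the symbols γ, c_v, R and a (ratio of specific heats, specific heat at constant volume, gas constant, and speed of sound) are standard notation introduced here, not taken from this article:

\frac{E_\mathrm{kin}}{E_\mathrm{int}} = \frac{v^2/2}{c_v T} = \frac{(\gamma - 1)\,v^2}{2 R T} = \frac{\gamma(\gamma - 1)}{2}\,\frac{v^2}{a^2} = \frac{\gamma(\gamma - 1)}{2}\,M^2,

using c_v = R/(γ − 1) and a^2 = γRT. For air (γ ≈ 1.4) this ratio is about 7 at Mach 5 and about 28 at Mach 10, so dissipating even part of the flow's ordered kinetic energy in the boundary layer or across a shock produces very large temperature rises.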
Due to the high temperatures behind the shock wave, dissociation of molecules in the air becomes thermally active. For air at temperatures above roughly 2,000 K, dissociation of diatomic oxygen into oxygen radicals is active:

O2 → 2O

Above roughly 4,000 K, dissociation of diatomic nitrogen into nitrogen radicals is active:

N2 → 2N

Consequently, in this temperature range molecular dissociation, followed by recombination of oxygen and nitrogen radicals, produces nitric oxide:

N2 + O2 → 2NO

which then dissociates and recombines to form ions:

N + O → NO+ + e−

so that a plasma forms.

Low density flow
At standard sea-level conditions, the mean free path of air molecules is about 68 nm. At an altitude of about 100 km, where the air is much thinner, the mean free path is on the order of 0.3 m. Because of this large mean free path, aerodynamic concepts, equations, and results based on the assumption of a continuum begin to break down, and aerodynamics must instead be considered from kinetic theory. This regime of aerodynamics is called low-density flow. For a given aerodynamic condition, low-density effects depend on the value of a nondimensional parameter called the Knudsen number Kn, defined as

Kn = λ / l

where λ is the mean free path and l is the typical length scale of the object considered. The value of the Knudsen number based on the nose radius, Kn = λ / R_N, can be near one.

Hypersonic vehicles frequently fly at very high altitudes and therefore encounter low-density conditions. Hence, the design and analysis of hypersonic vehicles sometimes require consideration of low-density flow. New generations of hypersonic airplanes may spend a considerable portion of their mission at high altitudes, and for these vehicles low-density effects will become more significant.

Thin shock layer
The flow field between the shock wave and the body surface is called the shock layer. As the Mach number M increases, the angle of the resulting shock wave decreases. This Mach angle is described by the equation

μ = arcsin(a / v)

where a is the speed of the sound wave and v is the flow velocity. Since M = v/a, the equation becomes

μ = arcsin(1 / M).

Higher Mach numbers position the shock wave closer to the body surface; thus at hypersonic speeds, the shock wave lies extremely close to the body surface, resulting in a thin shock layer. At low Reynolds numbers, the boundary layer grows quite thick and merges with the shock wave, leading to a fully viscous shock layer.
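The two relations above are simple to evaluate. The following minimal Python sketch computes the Mach angle and the Knudsen number for a few values; the mean free path and nose radius used below are illustrative assumptions, not figures from this article.

import math

# Mach angle mu = arcsin(1/M): the shock angle shrinks as M grows,
# so the shock layer hugs the body at hypersonic speeds.
def mach_angle_deg(M):
    return math.degrees(math.asin(1.0 / M))

# Knudsen number Kn = lambda / l: Kn near 1 marks the breakdown of
# continuum aerodynamics (low-density flow).
def knudsen(mean_free_path, length_scale):
    return mean_free_path / length_scale

for M in (2, 5, 10, 25):
    print(f"M = {M:2d}: Mach angle = {mach_angle_deg(M):4.1f} deg")

# Illustrative values only: a ~0.3 m mean free path at very high altitude
# against a 0.3 m nose radius gives Kn ~ 1 (continuum assumption fails).
print("Kn =", knudsen(0.3, 0.3))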
Viscous interaction
The thickness of the compressible-flow boundary layer increases in proportion to the square of the Mach number and in inverse proportion to the square root of the Reynolds number. At hypersonic speeds this effect becomes much more pronounced, due to the quadratic dependence on the Mach number. Since the boundary layer becomes so thick, it interacts more strongly with the surrounding flow. The overall effect of this interaction is to create a much higher skin friction than normal, causing greater surface heat flow. Additionally, the surface pressure rises sharply, which results in a much larger aerodynamic drag coefficient. This effect is extreme at the leading edge and decreases as a function of distance along the surface.

Entropy layer
The entropy layer is a region of large velocity gradients caused by the strong curvature of the shock wave. It begins at the nose of the aircraft and extends downstream close to the body surface. Downstream of the nose, the entropy layer interacts with the boundary layer, which causes an increase in aerodynamic heating at the body surface. Although the shock wave at the nose is also curved at supersonic speeds, the entropy layer is only observed at hypersonic speeds, because the curvature of the shock is far greater at hypersonic speeds.

Propulsion

Controlled detonation
Researchers in China have used shock waves in a detonation chamber to compress ionized argon plasma waves moving at Mach 14. The waves are directed into magnetohydrodynamic (MHD) generators to create a current pulse that could be scaled up to the gigawatt range, given enough argon gas to feed into the MHD generators.

Rotating detonation
A rotating detonation engine (RDE) might propel airframes in hypersonic flight; on 14 December 2023 engineers at GE Aerospace demonstrated their test rig, which is to combine an RDE with a ramjet/scramjet, in order to evaluate the regimes of rotating detonation combustion. The goal is a sustainable turbine-based combined cycle (TBCC) propulsion system for speeds between Mach 1 and Mach 5.

Applications

Shipping
Transport consumes energy for three purposes: overcoming gravity, overcoming air/water friction, and achieving terminal velocity. The reduced trip times and higher flight altitudes of hypersonic transport reduce the first two while increasing the third. Proponents claim that the net energy costs of hypersonic transport can be lower than those of conventional transport while slashing journey times. The Stratolaunch Roc can be used to launch hypersonic aircraft. Hermeus demonstrated a transition from turbojet engine operation to ramjet operation on 17 November 2022, thus avoiding the need to boost aircraft to ramjet velocities by rocket or scramjet. See: SR-72, § Mayhem.

Weapons
Two main types of hypersonic weapons are hypersonic cruise missiles and hypersonic glide vehicles. Hypersonic weapons, by definition, travel five or more times the speed of sound. Hypersonic cruise missiles, which are powered by scramjets, are limited to lower altitudes; hypersonic glide vehicles can travel higher. Hypersonic vehicles are much slower than ballistic (i.e. sub-orbital or fractional-orbital) missiles, because they travel in the atmosphere while ballistic missiles travel in the vacuum above the atmosphere. However, hypersonic vehicles can use the atmosphere to maneuver, making them capable of large-angle deviations from a ballistic trajectory. A hypersonic glide vehicle is usually launched with a ballistic first stage, then deploys wings and switches to hypersonic flight as it re-enters the atmosphere, allowing the final stage to evade existing missile defense systems, which were designed for ballistic-only missiles.

According to a CNBC July 2019 report (and a CNN 2022 report), Russia and China lead in hypersonic weapon development, trailed by the United States, where the problem is being addressed in a joint program of the entire Department of Defense. To meet this development need, the US Army is participating in a joint program with the US Navy and Air Force to develop a hypersonic glide body. India is also developing such weapons. France and Australia may also be pursuing the technology. Japan is acquiring both scramjet (Hypersonic Cruise Missile) and boost-glide weapons (Hyper Velocity Gliding Projectile).

China
China's XingKong-2 (星空二号, Starry-sky-2), a waverider, had its first flight on 3 August 2018. In August 2021 China launched a boost-glide vehicle to low-Earth orbit, which circled Earth before maneuvering toward its target location, missing its target by about two dozen miles.
However, China responded that the vehicle was a spacecraft, not a missile; there was a July 2021 test of a spaceplane, according to Chinese Foreign Ministry spokesperson Zhao Lijian. Todd Harrison points out that an orbital trajectory would take a spaceplane 90 minutes to circle Earth, which would defeat the mission of a weapon in hypersonic flight. The Pentagon reported in October 2021 that two such hypersonic launches had occurred: one launch did not demonstrate the accuracy needed for a precision weapon, while the second demonstrated China's ability to change trajectories, according to Pentagon reports on the 2021 competition in arms capabilities. In 2022, China unveiled two more hypersonic models.

An AI simulation has indicated that a Mach 11 aircraft can simply outrun a Mach 1.3 fighter attempting to engage it, while firing a missile back at the pursuing fighter. This tactic requires a fire-control system capable of an over-the-shoulder missile launch, which did not yet exist as of 2023. In February 2023, the DF-27 flew for 12 minutes, according to leaked secret documents. The capability directly threatens Guam and US Navy aircraft carriers.

Russia
In 2016, Russia is believed to have conducted two successful tests of Avangard, a hypersonic glide vehicle. The third known test, in 2017, failed. In 2018, an Avangard was launched at the Dombarovskiy missile base, reaching its target at the Kura shooting range. Avangard uses new composite materials which are to withstand temperatures of up to 2,000 °C, which the Avangard's environment reaches at hypersonic speeds. Russia considered its carbon fiber solution unreliable and replaced it with the new composite materials. Two Avangard hypersonic glide vehicles (HGVs) were first to be mounted on SS-19 ICBMs; on 27 December 2019 the weapon was first fielded to the Yasnensky Missile Division, a unit in Orenburg Oblast. In an earlier report, Franz-Stefan Gady named the unit as the 13th Regiment/Dombarovskiy Division (Strategic Missile Force).

In 2021 Russia launched a 3M22 Zircon antiship missile over the White Sea, as part of a series of tests. "Kinzhal and Zircon (Tsirkon) are standoff strike weapons". On 18 February 2022, a coordinated series of missile exercises, some of them hypersonic, was launched in an apparent display of power projection. The launch platforms ranged from submarines in the Barents Sea in the Arctic to ships on the Black Sea to the south of Russia. The exercise included an RS-24 Yars ICBM, launched from the Plesetsk Cosmodrome in northern Russia to its destination on the Kamchatka Peninsula in eastern Russia. Ukraine estimated that a 3M22 Zircon was used against it; the missile apparently did not exceed Mach 3 and was shot down over Kyiv on 7 February 2024.

United States
These tests have prompted US responses in weapons development. By 2018, the AGM-183 and the Long-Range Hypersonic Weapon were in development, per John Hyten's USSTRATCOM statement of 8 August 2018 (UTC). At least one vendor is developing ceramics to handle the temperatures of hypersonic systems. There were over a dozen US hypersonics projects as of 2018, according to the commander of USSTRATCOM, from which a future hypersonic cruise missile was sought, perhaps by Q4 FY2021. The Long-Range Precision Fires (LRPF) CFT is supporting Space and Missile Defense Command's pursuit of hypersonics.
Joint programs in hypersonics are informed by Army work; however, at the strategic level, the bulk of the hypersonics work remains at the joint level. Long-Range Precision Fires (LRPF) is an Army priority and also a DoD joint effort. The Army and Navy's Common Hypersonic Glide Body (C-HGB) had a successful test of a prototype in March 2020. A wind tunnel for testing hypersonic vehicles was completed in Texas in 2021. The Army's land-based hypersonic missile is intended to have a range of more than 1,725 miles. By adding rocket propulsion to a shell or glide body, the joint effort shaved five years off the likely fielding time for hypersonic weapon systems. Countermeasures against hypersonics will require sensor data fusion: both radar and infrared sensor tracking data will be required to capture the signature of a hypersonic vehicle in the atmosphere. There are also privately developed hypersonic systems, as well as critics. DoD tested a Common Hypersonic Glide Body (C-HGB) in 2020. The Air Force dropped out of the tri-service hypersonic project in 2020, leaving only the Army and Navy on the C-HGB.

According to Air Force chief scientist Dr. Greg Zacharias, the US anticipates having hypersonic weapons by the 2020s, hypersonic drones by the 2030s, and recoverable hypersonic drone aircraft by the 2040s. The focus of DoD development will be on air-breathing and boost-glide hypersonic systems. Countering hypersonic weapons during their cruise phase will require radar with longer range, as well as space-based sensors and systems for tracking and fire control. A mid-2021 report from the Congressional Research Service stated that the United States was "unlikely" to field an operational hypersonic glide vehicle (HGV) until 2023.

On 21 October 2021, the Pentagon stated that a test of a hypersonic glide body failed to complete because its booster failed; according to Lt. Cmdr. Timothy Gorman, the booster was not part of the equipment under test, but the booster's failure mode would be reviewed to improve the test setup. The test occurred at Pacific Spaceport Complex – Alaska, on Kodiak Island. Three rocketsondes at Wallops Island completed successful tests earlier that week for the hypersonics effort. On 29 October 2021 the booster rocket for the Long-Range Hypersonic Weapon was successfully tested in a static test; the first-stage thrust vector control system was included. On 26 October 2022 Sandia National Laboratories conducted a successful test of hypersonic technologies at Wallops Island. On 28 June 2024 DoD announced a successful recent end-to-end test of the US Army's Long-Range Hypersonic Weapon all-up round (AUR) and the US Navy's Conventional Prompt Strike. The missile was launched from the Pacific Missile Range Facility, Kauai, Hawaii.

In September 2021 and March 2022, US vendors Raytheon/Northrop Grumman and Lockheed, respectively, first successfully tested their air-launched, scramjet-powered hypersonic cruise missiles, which were funded by DARPA. By September 2022 Raytheon had been selected to field the Hypersonic Attack Cruise Missile (HACM), a scramjet-powered hypersonic missile, by FY2027. In March 2024 the Stratolaunch Roc launched TA-1, a vehicle which neared Mach 5 in powered flight, as a risk-reduction exercise for TA-2. In a similar development, Castelion launched its low-cost hypersonic platform in the Mojave Desert in March 2024.

In 2021, DoD was codifying flight-test guidelines and the knowledge gained from Conventional Prompt Strike (CPS) and the other hypersonics programs, for some 70 hypersonics R&D programs. From 2021 to 2023, Heidi Shyu, the Under Secretary of Defense for Research and Engineering (USD(R&E)), pursued a program of annual rapid joint experiments, including hypersonics capabilities, to bring down their development cost. A hypersonic test bed aims to bring the frequency of tests to one per week.

Iran
In 2022, Iran was believed to have constructed its first hypersonic missile.
Amir Ali Hajizadeh, the commander of the aerospace force of Iran's Islamic Revolutionary Guard Corps, announced the construction of the Islamic Republic's first hypersonic missile. He noted: "This new missile was produced to counter air defense shields and passes through all missile defense systems, which represents a big leap in the generation of missiles", and said it has a speed above Mach 13. However, Col. Rob Lodwick, the Pentagon spokesman for Middle East affairs, said that there are doubts in this regard.

Other programs
France, Australia, India, Germany, Japan, South Korea, North Korea, and Iran also have ongoing hypersonic weapon projects or research programs. Australia and the US have begun joint development of air-launched hypersonic missiles, as announced in a Pentagon statement on 30 November 2020. The development will build on the $54 million Hypersonic International Flight Research Experimentation (HIFiRE) program, under which both nations collaborated over a 15-year period. Small and large companies will all contribute to the development of these hypersonic missiles, named SCIFiRE in 2022.

Defenses
In May 2023 Ukraine shot down a Kinzhal with a Patriot. IBCS, the Integrated Air and Missile Defense Battle Command System, is an integrated air and missile defense (IAMD) capability designed to work with Patriots and other missiles.

Rand 2017 assessment
The Rand Corporation (28 September 2017) estimated that there was less than a decade to prevent hypersonic missile proliferation. In the same way that anti-ballistic missiles were developed as countermeasures to ballistic missiles, counter-countermeasures to hypersonic systems were not yet in development as of 2019 (see the National Defense Space Architecture (2021), below). But by 2019, $157.4 million was allocated in the FY2020 Pentagon budget for hypersonic defense, out of $2.6 billion for all hypersonic-related research. $207 million of the FY2021 budget was allocated to defensive hypersonics, up from the FY2020 allocation of $157 million.

Both the US and Russia withdrew from the Intermediate-Range Nuclear Forces (INF) Treaty in February 2019; this will spur arms development, including hypersonic weapons, in FY2021 and beyond. By 2021 the Missile Defense Agency was funding regional countermeasures against hypersonic weapons in their glide phase. James Acton characterized the proliferation of hypersonic vehicles as never-ending in October 2021; Jeffrey Lewis views the proliferation as an additional argument for ending the arms race. Doug Loverro assesses that both missile defense and competition need rethinking. CSIS assesses that hypersonic defense should be the US priority over hypersonic weapons.

NDSA / PWSA
As part of their hypersonic vehicle tracking mission, the Space Development Agency (SDA) launched four satellites and the Missile Defense Agency (MDA) launched two satellites on 14 February 2024 (launch USSF-124).
The satellites share the same orbit, which allows the SDA's wide-field-of-view (WFOV) satellites and the MDA's medium-field-of-view (MFOV) downward-looking satellites to traverse the same terrain on Earth. The SDA's four satellites are part of its Tranche 0 tracking layer (T0TL). The MDA's two satellites are HBTSS, or Hypersonic and Ballistic Tracking Space Sensors. Additional capabilities of Tranche 0 of the National Defense Space Architecture (NDSA), also known as the Proliferated Warfighting Space Architecture (PWSA), will be tested over the next two years.

Proposed

Aircraft
I-Plane
14-X
Espadon hypersonic combat aircraft concept (program conducted by ONERA)
Avatar (spacecraft)
Advanced Technology Vehicle
DARPA XS-1
Destinus hydrogen-powered hypersonic aircraft; a prototype has been flight-tested
Dream Chaser
HyperSoar
HyperStar hypersonic passenger airliner
Falcon HTV-2
Boeing Commercial Airplanes hypersonic airliner concept
Lockheed Martin SR-72
Kholod
Ayaks waverider spaceplane
Programme for Reusable In-orbit Demonstrator in Europe (PRIDE)
Sänger II
HyShot
Hytex
Horus
SHEFEX
Skylon
Reaction Engines A2
Hypersonic Air Vehicle Experimental (HVX) with Concept V aircraft
Spartan
HEXAFLY
SpaceLiner
STRATOFLY
Zero Emission Hyper Sonic Transport
Hermeus Quarterhorse, an uncrewed hypersonic demonstrator designed to land and take off on conventional runways
Hermeus Halcyon hypersonic transport
Venus Aerospace Stargazer hypersonic airliner with rotating detonation rocket engine
POLARIS Raumflugzeuge GmbH hypersonic spaceplane, being developed and tested for the German Armed Forces in Peenemünde

Bombers
Expendable Hypersonic Air-Breathing Multi-Mission Demonstrator ("Mayhem"): based on § HAWC and HSSW, a "solid rocket-boosted, air-breathing, hypersonic conventional cruise missile" and a follow-on to the AGM-183A. As of 2020 no design work had been done. By 2022 Mayhem was to be tasked with ISR and strike missions, as a possible bomber. Leidos is preparing a system requirements review and a conceptual design for these missions. Draper Labs has begun a partnership with Leidos. Kratos is preparing a conceptual design for Mayhem, using Air Force Research Laboratory (AFRL) digital engineering techniques in a system design agent team, a collaboration with Leidos, Calspan, and Draper. DIU is soliciting additional Hypersonic and High-Cadence Airborne Testing Capabilities (HyCAT) for Mayhem.

Cruise missiles
Advanced Hypersonic Weapon (AHW)
Hypersonic Air-breathing Weapon Concept (HAWC, pronounced "hawk"): DARPA-funded and built by Raytheon and Northrop Grumman, HAWC became, in September 2021, the first US scramjet-powered hypersonic missile to successfully complete a free-flight test in the 2020s. DARPA's goals for the test, which were successfully met, were: "vehicle integration and release sequence, safe separation from the launch aircraft, booster ignition and boost, booster separation and engine ignition, and cruise". HAWC is capable of sustained, powered maneuver in the atmosphere. HAWC appears to depend on a rocket booster to accelerate it to the speeds at which its scramjet, operating in an oxygen-rich environment, can take over. It is easier to put a seeker on a subsonic air-breathing vehicle. In mid-March 2022 a HAWC scramjet was successfully tested in an air-launched flight by a second vendor. On 18 July 2022 Raytheon announced another successful free-flight test of its HAWC scramjet. MoHAWC is a follow-on to DARPA's HAWC project.
MoHAWC will seek "to further develop the vehicle's scramjet propulsion system, upgrade integration algorithms, reduce the size of navigation components, and improve its manufacturing approach".
Hypersonic Conventional Strike Weapon (HCSW, pronounced "hacksaw"): passed its critical design review (CDR), but this IDIQ (indefinite delivery, indefinite quantity) contract was terminated in favor of ARRW because twice as many ARRWs will fit on a bomber.
ASN4G: air-launched, scramjet-powered hypersonic cruise missile under development by MBDA France and ONERA to succeed the ASMP
Kh-45 (cancelled)
Zircon
Hypersonic Technology Demonstrator Vehicle / BrahMos-II
Hycore

Glide vehicles
AGM-183A Air-launched Rapid Response Weapon (ARRW, pronounced "arrow"): Telemetry data has been successfully transmitted from ARRW (AGM-183A IMV-2, Instrumented Measurement Vehicle) to the Point Mugu ground stations, demonstrating the ability to broadcast radio accurately at hypersonic speeds; however, ARRW's launch sequence had not been completed as of 15 December 2021. Hundreds of ARRWs or other hypersonic weapons are being sought by the Air Force. On 9 March 2022 Congress halved procurement funding for ARRW and transferred the balance to ARRW's R&D account to allow for further testing, which puts the procurement contract at risk. A production decision on ARRW was delayed for a year to complete flight testing. On 14 May 2022 an ARRW flight test was successfully completed for the first time. There were three successful tests of ARRW in 2022; however, the Air Force is requiring three additional successful tests of an all-up round (AUR) before making a production decision, and no production decision will be made in 2024. The USAF intends to end the ARRW development program, as of 29 March 2023. A B-52 flying out of Andersen AFB in Guam fired an all-up-round AGM-183A ARRW; the AUR was tested at the Reagan Test Site in the Pacific on 17 March 2024.
DARPA Tactical Boost Glide vehicle
VMaX-2 hypersonic glide vehicle (under development by ArianeGroup; first flight test scheduled for 2025)
HGV-202F

Flown

Aircraft
North American X-15 (crewed)
Lockheed X-17
NASA X-43
Boeing X-51
WZ-8
HSTDV

Glide vehicles
Avangard
DF-ZF
Hwasong-8
Unnamed VMaX (developed by ArianeGroup; first flight test took place on 26 June 2023 and was a success)

Spaceplanes
Space Shuttle orbiter (crewed)
Buran (human-rated, only flew without crew)
RLV-TD
Boeing X-37
Shenlong
IXV
BOR-4
Martin X-23 PRIME
ASSET
HYFLEX
Reusable experimental spacecraft (disputed)
Jiageng-1

Cancelled

Aircraft
Silbervogel (Sänger bomber)
Keldysh bomber
Tupolev Tu-360, follow-on to the Tu-160
Tupolev Tu-2000
Lockheed L-301

Glide vehicles
VERAS (hypersonic glide vehicle program launched in 1965 and cancelled in 1971)

Spaceplanes
Boeing X-20 Dyna-Soar
Rockwell X-30 (National Aerospace Plane)
Orbital Sciences X-34
Mikoyan-Gurevich MiG-105
Tsien Spaceplane 1949
HOPE-X
XCOR Lynx
Lockheed Martin X-33
Hermes
Prometheus
HL-20 Personnel Launch System
HL-42
BAC Mustard
Kliper
HOTOL
Valier Raketenschiff
Rockwell C-1057

See also
Hypersonic effect
Supersonic transport
Lifting body
List of X-planes
Thunderbird 1

Notes

References

Further reading
David Wright and Cameron Tracy, "Over-hyped: Physics dictates that hypersonic weapons cannot live up to the grand promises made on their behalf", Scientific American, vol. 325, no. 2 (August 2021), pp. 64–71. Quote from p. 71: "Failure to fully assess [the potential benefits and costs of hypersonic weapons] is a recipe for wasteful spending and increased global risk."
71: "Failure to fully assess [the potential benefits and costs of hypersonic weapons] is a recipe for wasteful spending and increased global risk." External links A comparative analysis of the performance of long-range hypervelocity vehicles (2022) Joint Air Power Competence Centre (JAPCC) Aerodynamics Aerospace engineering Airspeed
Hypersonic flight
[ "Physics", "Chemistry", "Engineering" ]
6,162
[ "Physical quantities", "Aerodynamics", "Airspeed", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]