Dataset columns: id (int64, 580 to 79M), url (string, 31 to 175 characters), text (string, 9 to 245k characters), source (string, 1 to 109 characters), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
3,054,711
https://en.wikipedia.org/wiki/Software%20license%20server
A software license server is a centralized computer software system which provides access tokens, or keys, to client computers in order to enable licensed software to run on them. In 1989, Sassafras Software Inc developed their trademarked KeyServer software license management tool. Since that time, other computing technology firms have adopted the phrase "key server" to be used interchangeably with "software license server." It is the job of a software license server to determine and control the number of copies of a program permitted to be used based on the license entitlements that an organization owns. Typically, an end-user customer organization will install a software license server on a host computer to provide licensing services to an enterprise computing environment. Publisher-specific license servers are commonly provided by software publishers, or through third party providers, to manage software licensing for a specific software publisher's products. Publisher-specific license servers are more commonly used for industry specialized software products than for common software products due to the high value of the managed software products. The server component of a client–server application may also contain an internal license server. References See also Floating licensing Product activation Digital rights management Information technology management Software licensing
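The entitlement-counting behaviour described above can be sketched as a small concurrency-limited token pool. The following Python sketch is illustrative only: the class and method names (FloatingLicensePool, checkout, checkin) are hypothetical and do not correspond to any vendor's actual license-server API.

```python
# Minimal sketch of a floating-license pool: a server-side counter that
# hands out at most `seats` concurrent tokens for a licensed product.
# Class and method names are illustrative, not any vendor's actual API.
import threading
import uuid
from typing import Optional


class FloatingLicensePool:
    def __init__(self, product: str, seats: int):
        self.product = product          # licensed product name
        self.seats = seats              # entitlements owned by the organization
        self.active = {}                # token -> client hostname
        self.lock = threading.Lock()    # serialize concurrent checkouts

    def checkout(self, client: str) -> Optional[str]:
        """Grant a token if a seat is free, otherwise refuse the launch."""
        with self.lock:
            if len(self.active) >= self.seats:
                return None             # all entitlements in use
            token = uuid.uuid4().hex
            self.active[token] = client
            return token

    def checkin(self, token: str) -> None:
        """Return a seat to the pool when the client exits."""
        with self.lock:
            self.active.pop(token, None)


pool = FloatingLicensePool("cad-suite", seats=2)
t1 = pool.checkout("workstation-01")   # granted
t2 = pool.checkout("workstation-02")   # granted
t3 = pool.checkout("workstation-03")   # None: no seats left
pool.checkin(t1)                       # frees a seat for the next request
```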
Software license server
Technology
242
44,159,504
https://en.wikipedia.org/wiki/Drum%20tower%20%28Europe%29
A drum tower, in European architecture, is a round tower whose diameter is greater than its height, resembling the shape of the musical instrument. The term is sometimes used erroneously to describe typical round Norman defense towers or circular towers in general. References Architectural elements Towers
Drum tower (Europe)
Technology,Engineering
51
67,133,439
https://en.wikipedia.org/wiki/Rose%20Mutiso
Rose M. Mutiso is a Kenyan activist and materials scientist. She is co-founder and CEO of The Mawazo Institute. She is the research director of the Energy for Growth Hub. She was shortlisted for the 2020 Pritzker Emerging Environmental Genius Award. Education Mutiso attended engineering school at Dartmouth College, before completing her PhD in Materials Science at the University of Pennsylvania in the lab of Karen I. Winey. Her dissertation focused on material properties for nanoelectronics, such as electrical percolation. She did her postdoc as a 2013-14 congressional science fellow with the American Institute of Physics. Career Mutiso was a fellow in the U.S. Department of Energy. She was an Energy and Innovation Policy Fellow in the office of Senator Chris Coons (as part of her postdoc). Mutiso's activism work focuses on improving energy access in Africa in a climate-conscious way. Through the Mawazo Institute she hopes to train more women in the research and engineering skills necessary to further develop the Kenyan energy sector. She served on the advisory board for African Utility Week 2018. She has advocated for international carbon budgeting that both recognizes the comparatively low current emissions of African nations and allots space for their future development into higher energy consumption. She has written for Scientific American and also given a TED talk on the topic. References External links Our Team The Mawazo Institute. Research Director - Rose Mutiso 21st-century Kenyan women scientists 21st-century Kenyan scientists Condensed matter physicists Year of birth missing (living people) Living people Dartmouth College alumni University of Pennsylvania alumni Kenyan women chief executives 21st-century Kenyan businesswomen 21st-century Kenyan businesspeople
Rose Mutiso
Physics,Materials_science
348
26,220,783
https://en.wikipedia.org/wiki/Characteristic%20admittance
Characteristic admittance is the mathematical inverse of the characteristic impedance. The general expression for the characteristic admittance of a transmission line is:

Y_0 = \sqrt{\dfrac{G + j\omega C}{R + j\omega L}}

where R is the resistance per unit length, L is the inductance per unit length, G is the conductance of the dielectric per unit length, C is the capacitance per unit length, j is the imaginary unit, and \omega is the angular frequency. The current and voltage phasors on the line are related by the characteristic admittance as:

I^+ = Y_0 V^+, \qquad I^- = -Y_0 V^-

where the superscripts + and - represent forward- and backward-traveling waves, respectively. See also Characteristic impedance References Electricity Physical quantities Distributed element circuits
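As a quick numerical check of the expression above, the following sketch evaluates Y_0 = sqrt((G + jωC)/(R + jωL)) and its reciprocal for illustrative per-unit-length line constants; the numeric values are placeholders, not measured data.

```python
# Evaluate the characteristic admittance Y0 = sqrt((G + j*w*C)/(R + j*w*L))
# for illustrative per-unit-length transmission-line constants.
import cmath
import math

R = 0.1      # ohm/m, series resistance per unit length (placeholder value)
L = 250e-9   # H/m, series inductance per unit length
G = 1e-6     # S/m, shunt conductance per unit length
C = 100e-12  # F/m, shunt capacitance per unit length
f = 1e6      # Hz, operating frequency
w = 2 * math.pi * f

Y0 = cmath.sqrt((G + 1j * w * C) / (R + 1j * w * L))
Z0 = 1 / Y0  # the characteristic impedance is the reciprocal

print(f"Y0 = {Y0:.6f} S, Z0 = {Z0:.2f} ohm")
```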
Characteristic admittance
Physics,Mathematics,Engineering
128
6,279,588
https://en.wikipedia.org/wiki/Extranuclear%20inheritance
Extranuclear inheritance or cytoplasmic inheritance is the transmission of genes that occur outside the nucleus. It is found in most eukaryotes and is commonly known to occur in cytoplasmic organelles such as mitochondria and chloroplasts or from cellular parasites like viruses or bacteria. Organelles Mitochondria are organelles which function to transform energy as a result of cellular respiration. Chloroplasts are organelles which function to produce sugars via photosynthesis in plants and algae. The genes located in mitochondria and chloroplasts are very important for proper cellular function. The mitochondrial DNA and other extranuclear types of DNA replicate independently of the DNA located in the nucleus, which is typically arranged in chromosomes that only replicate one time preceding cellular division. The extranuclear genomes of mitochondria and chloroplasts however replicate independently of cell division. They replicate in response to a cell's increasing energy needs which adjust during that cell's lifespan. Since they replicate independently, genomic recombination of these genomes is rarely found in offspring, contrary to nuclear genomes in which recombination is common. Mitochondrial diseases are inherited from the mother, not from the father. Mitochondria with their mitochondrial DNA are already present in the egg cell before it gets fertilized by a sperm. In many cases of fertilization, the head of the sperm enters the egg cell; leaving its middle part, with its mitochondria, behind. The mitochondrial DNA of the sperm often remains outside the zygote and gets excluded from inheritance. Parasites Extranuclear transmission of viral genomes and symbiotic bacteria is also possible. An example of viral genome transmission is perinatal transmission. This occurs from mother to fetus during the perinatal period, which begins before birth and ends about 1 month after birth. During this time viral material may be passed from mother to child in the bloodstream or breastmilk. This is of particular concern with mothers carrying HIV or hepatitis C viruses. Symbiotic cytoplasmic bacteria are also inherited in organisms such as insects and protists. Types Three general types of extranuclear inheritance exist. Vegetative segregation results from random replication and partitioning of cytoplasmic organelles. It occurs with chloroplasts and mitochondria during mitotic cell divisions and results in daughter cells that contain a random sample of the parent cell's organelles. An example of vegetative segregation is with mitochondria of asexually replicating yeast cells. Uniparental inheritance occurs in extranuclear genes when only one parent contributes organellar DNA to the offspring. A classic example of uniparental gene transmission is the maternal inheritance of human mitochondria. The mother's mitochondria are transmitted to the offspring at fertilization via the egg. The father's mitochondrial genes are not transmitted to the offspring via the sperm. Very rare cases which require further investigation have been reported of paternal mitochondrial inheritance in humans, in which the father's mitochondrial genome is found in offspring. Chloroplast genes can also inherit uniparentally during sexual reproduction. They are historically thought to inherit maternally, but paternal inheritance in many species is increasingly being identified. The mechanisms of uniparental inheritance from species to species differ greatly and are quite complicated. 
For instance, chloroplasts have been found to exhibit maternal, paternal and biparental modes even within the same species. In tobacco (Nicotiana tabacum), the mode of chloroplast inheritance is affected by the temperature and the enzymatic activity of an exonuclease during male gametogenesis. Biparental inheritance occurs in extranuclear genes when both parents contribute organellar DNA to the offspring. It may be less common than uniparental extranuclear inheritance, and usually occurs in a permissible species only a fraction of the time. An example of biparental mitochondrial inheritance is in the yeast Saccharomyces cerevisiae. When two haploid cells of opposite mating type fuse they can both contribute mitochondria to the resulting diploid offspring. Mutant mitochondria Poky is a mutant of the fungus Neurospora crassa that has extranuclear inheritance. Poky is characterized by slow growth, a defect in mitochondrial ribosome assembly and deficiencies in several cytochromes. The studies of poky mutants were among the first to establish an extranuclear mitochondrial basis for inheritance of a particular genotype. It was initially found, using genetic crosses, that poky is maternally inherited. Subsequently, the primary defect in the poky mutants was determined to be a deletion in the mitochondrial DNA sequence encoding the small subunit of mitochondrial ribosomal RNA. See also Maternal effect Plasmid Xenia (plants) References External links https://web.archive.org/web/20080521093531/http://www.tamu.edu/classes/magill/gene603/Lecture%20outlines/cytoplasmic%20inh/CYTOPLASMIC_INHERITANCE.html Genetics
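The vegetative segregation mechanism described above can be illustrated with a toy simulation in which each organelle copy is assigned at random to one of the two daughter cells at division, so a heteroplasmic parent cell drifts toward homoplasmy over successive divisions. The copy numbers and generation count below are arbitrary illustration values, not measured biology.

```python
# Toy illustration of vegetative segregation: organelles (e.g. mitochondria)
# are duplicated and then partitioned at random between two daughter cells,
# so the mutant fraction drifts from generation to generation.
# Copy numbers and generation counts are arbitrary illustration values.
import random

def divide(mutant: int, wildtype: int) -> tuple[int, int]:
    """Duplicate every organelle, then randomly split the pool in half."""
    pool = ["m"] * (2 * mutant) + ["w"] * (2 * wildtype)
    random.shuffle(pool)
    daughter = pool[: len(pool) // 2]       # follow one daughter cell only
    return daughter.count("m"), daughter.count("w")

m, w = 10, 10                               # heteroplasmic starting cell (50% mutant)
for generation in range(1, 11):
    m, w = divide(m, w)
    print(f"generation {generation}: mutant fraction = {m / (m + w):.2f}")
```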
Extranuclear inheritance
Biology
1,102
53,823,301
https://en.wikipedia.org/wiki/Tisagenlecleucel
Tisagenlecleucel, sold under the brand name Kymriah, is a CAR T cells medication for the treatment of B-cell acute lymphoblastic leukemia (ALL) which uses the body's own T cells to fight cancer (adoptive cell transfer). The most common serious side effects are cytokine release syndrome (a potentially life-threatening condition that can cause fever, vomiting, shortness of breath, pain and low blood pressure) and decreases in platelets (components that help the blood to clot), hemoglobin (the protein found in red blood cells that carries oxygen around the body) or white blood cells including neutrophils and lymphocytes. T cells from a person with cancer are removed, genetically engineered to make a specific chimeric cell surface receptor with components from both a T-cell receptor and an antibody specific to a protein on the cancer cell, and transferred back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. A chimeric T cell receptor ("CAR-T") is expressed on the surface of the T cell. The platform invented at the University of Pennsylvania was clinically developed by Novartis, including market authorization, and real world evidence. In August 2017, it became the first FDA-approved treatment that included a gene therapy step in the United States. Medical uses Tisagenlecleucel is indicated for the treatment of those under 25 years of age with B-cell precursor acute lymphoblastic leukemia (ALL) that is refractory or in second or later relapse; or adults with relapsed or refractory (r/r) large B-cell lymphoma after two or more lines of systemic therapy including diffuse large B-cell lymphoma (DLBCL) not otherwise specified, high grade B-cell lymphoma and DLBCL arising from follicular lymphoma. In May 2022, the indication in the US was updated to include the treatment of adults with relapsed or refractory follicular lymphoma (FL) after two or more lines of systemic therapy. Adverse effects A frequent side effect seen is cytokine release syndrome (CRS). Serious side effects occur in most patients. The most common serious side effects are cytokine release syndrome (a potentially life-threatening condition that can cause fever, vomiting, shortness of breath, pain and low blood pressure) and decreases in platelets (components that help the blood to clot), hemoglobin (the protein found in red blood cells that carries oxygen around the body) or white blood cells including neutrophils and lymphocytes. Serious infections occur in around three in ten diffuse large B-cell lymphoma (DLBCL) patients. In April 2024, the FDA label boxed warning was expanded to include T cell malignancies. History The treatment was developed by a group headed by Carl H. June and co-invented by Michael C. Milone at the University of Pennsylvania, and is licensed to Novartis. In April 2017, tisagenlecleucel received breakthrough therapy designation by the U.S. Food and Drug Administration (FDA) for the treatment of relapsed or refractory diffuse large B-cell lymphoma. In July 2017, an FDA advisory committee unanimously recommended that the agency approve it to treat B cell acute lymphoblastic leukemia that did not respond adequately to other treatments or have relapsed. In August 2017, the FDA granted approval for the use of tisagenlecleucel in people with acute lymphoblastic leukemia. According to Novartis, the treatment will be administered at specific medical centers where staff have been trained to manage possible reactions to this new type of treatment. 
In May 2018, the FDA further approved tisagenlecleucel to treat adults with relapsed or refractory diffuse large B-cell lymphoma (DLBCL), based on results from the JULIET phase II trial. In England, the NHS will use the procedure to treat children with acute lymphoblastic leukemia (ALL) if earlier treatments including stem cell transplants have failed; it is expected to apply to between 15 and 20 children. In March 2019, NICE issued guidance approving Kymriah for treatment of relapsed or refractory diffuse large B-cell lymphoma in adults after 2 or more systemic therapies. Manufacture In a 22-day process, the treatment is customized for each person. T cells are purified from blood drawn from the person, and those cells are then modified by a virus that inserts a gene into the cells' genome. The gene encodes a chimaeric antigen receptor (CAR) that targets leukaemia cells. It uses the 4-1BB co-stimulatory domain in its CAR to improve response. Modification of the cells to create the customized therapeutic has been a major bottleneck in expanding availability of the treatment, requiring T cells extracted in Europe to be transported to the United States where they are modified, then back to Europe. Novartis has been expanding a facility in France, and constructed a new facility in Stein, Switzerland, to relieve this bottleneck beginning in 2020. Novartis uses the company Cryoport Inc. for temperature-controlled transportation required for the manufacture and distribution of Kymriah. Society and culture Names Tisagenlecleucel is the international nonproprietary name. References Further reading BLA 125646 Tisagenlecleucel - Novartis Briefing document to FDA ODAC Antineoplastic drugs Approved gene therapies CAR T-cell therapy Drugs developed by Novartis Orphan drugs
Tisagenlecleucel
Biology
1,211
15,090,180
https://en.wikipedia.org/wiki/Amplification%20factor
The amplification factor, also called gain, is the extent to which an analog amplifier boosts the strength of a signal. Amplification factors are usually expressed in terms of power. The decibel (dB), a logarithmic unit, is the most common way of quantifying the gain of an amplifier. In general, an amplification factor is the numerical multiplicative factor by which some quantity is increased. In structural engineering the amplification factor is the ratio of second-order to first-order deflections. In electronics the amplification factor, or gain, is the ratio of the output to the input of an amplifier, sometimes represented by the symbol AF. In numerical analysis the amplification factor is a number derived using von Neumann stability analysis to determine the stability of a numerical scheme for a partial differential equation. References "Developments in Tall Buildings 1983". Page 489. "Numerical Computation of Internal & External Flows". Page 296. Structural engineering
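The electronics sense of the term (gain as an output-to-input power ratio, usually quoted in decibels) can be made concrete with a small conversion helper; the input and output powers below are illustrative figures only.

```python
# Express an amplifier's power gain as a ratio and in decibels:
#   gain = P_out / P_in,   gain_dB = 10 * log10(P_out / P_in)
# (Use 20*log10 of a voltage ratio instead when comparing voltages.)
import math

def power_gain_db(p_in_watts: float, p_out_watts: float) -> float:
    return 10 * math.log10(p_out_watts / p_in_watts)

# Illustrative figures: 10 mW in, 1 W out -> a gain of 100x, i.e. 20 dB.
print(power_gain_db(0.010, 1.0))   # 20.0
```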
Amplification factor
Engineering
203
632,732
https://en.wikipedia.org/wiki/HEAnet
HEAnet is the national education and research network of Ireland. HEAnet's e-infrastructure services support approximately 210,000 students and staff (third-level) in Ireland, and approximately 800,000 students and staff (first- and second-level) who rely on the HEAnet network. In total, the network supports approximately 1 million users. Established in 1983 by a number of Irish universities, and supported by the Higher Education Authority, HEAnet provides e-infrastructure services to schools, colleges and universities within the Irish education system. Its network connects Irish universities, Institutes of technology in Ireland, the Irish Centre for High-End Computing (National Supercomputing Centre) and other higher education institutions (HEIs). It also provides internet services to primary and post-primary schools in Ireland and research organisations. Its clients also include various state-sponsored bodies; for example, it hosts the online live conferencing service of the Oireachtas, the parliament of Ireland. HEAnet previously hosted a mirror service, which acted as a mirror for projects such as SourceForge, Debian, and Ubuntu. This service was discontinued in 2024. In 2014, HEAnet hosted the TERENA Conference, held between 19 and 22 May in Dublin. In 2017, HEAnet announced additional investment in "100 Gbps [services] to boost bandwidth accessed by [...] 216 academic locations around Ireland". References External links HEAnet network map Education in the Republic of Ireland Internet in Ireland Internet mirror services National research and education networks
HEAnet
Technology
324
30,349,389
https://en.wikipedia.org/wiki/Neokuguaglucoside
Neokuguaglucoside is a chemical compound with formula , isolated from the fruit of the bitter melon vine (Momordica charantia, called kǔguā in Chinese), where it occurs at 23 mg/35 kg. It is a triterpene glucoside with the cucurbitane skeleton. It is a white powder, soluble in methanol and butanol. See also Kuguaglycoside References Triterpene glycosides Glucosides
Neokuguaglucoside
Chemistry
107
61,186,826
https://en.wikipedia.org/wiki/Bueno-Orovio%E2%80%93Cherry%E2%80%93Fenton%20model
The Bueno-Orovio–Cherry–Fenton model, also simply called the Bueno-Orovio model, is a minimal ionic model for human ventricular cells. It belongs to the category of phenomenological models, because it describes the electrophysiological behaviour of cardiac muscle cells without modeling in detail the underlying physiology and the specific mechanisms occurring inside the cells. This mathematical model reproduces both single-cell and important tissue-level properties, accounting for physiological action potential development and conduction velocity estimations. It also provides specific parameter choices, derived from parameter-fitting algorithms of the MATLAB Optimization Toolbox, for the modeling of epicardial, endocardial and mid-myocardial tissues. In this way it is possible to match the action potential morphologies, observed from experimental data, in the three different regions of the human ventricles. The Bueno-Orovio–Cherry–Fenton model is also able to describe reentrant and spiral wave dynamics, which occur for instance during tachycardia or other types of arrhythmias. From the mathematical perspective, it consists of a system of four differential equations: one PDE, similar to the monodomain model, for an adimensional version of the transmembrane potential, and three ODEs that define the evolution of the so-called gating variables, i.e. probability density functions whose aim is to model the fraction of open ion channels across a cell membrane. Mathematical modeling The system of four differential equations reads as follows: where is the spatial domain and is the final time. The initial conditions are , , , . refers to the Heaviside function centered in . The adimensional transmembrane potential can be rescaled in mV by means of the affine transformation . , and refer to gating variables, where in particular can also be used as an indication of intracellular calcium concentration (given in the adimensional range [0, 1] instead of molar concentration). and are the fast inward, slow outward and slow inward currents respectively, given by the following expressions: All the above-mentioned ionic density currents are partially adimensional and are expressed in . Different parameter sets, as shown in Table 1, can be used to reproduce the action potential development of epicardial, endocardial and mid-myocardial human ventricular cells. Some constants of the model, which are not listed in Table 1, can be deduced with the following formulas: where the temporal constants, i.e. are expressed in seconds, whereas and are adimensional. The diffusion coefficient results in a value of , which comes from experimental tests on human ventricular tissues. In order to trigger the action potential development in a certain position of the domain , a forcing term , which represents an externally applied density current, is usually added to the right-hand side of the PDE and acts for a short time interval only. Weak formulation Assume that refers to the vector containing all the gating variables, i.e. , and contains the corresponding three right-hand sides of the ionic model. The Bueno-Orovio–Cherry–Fenton model can be rewritten in the compact form: Let and be two generic test functions. To obtain the weak formulation: multiply by the first equation of the model and by the equations for the evolution of the gating variables. 
Integrate both members of all the equations over the domain : perform an integration by parts of the diffusive term of the first monodomain-like equation: Finally, the weak formulation reads: Find and , , such that Numerical discretization There are several methods to discretize this system of equations in space, such as the finite element method (FEM) or isogeometric analysis (IGA). Time discretization can be performed in several ways as well, such as using a backward differentiation formula (BDF) of order or a Runge–Kutta method (RK). Space discretization with FEM Let be a tessellation of the computational domain by means of a certain type of elements (such as tetrahedrons or hexahedra), with representing a chosen measure of the size of a single element . Consider the set of polynomials with degree less than or equal to over an element . Define as the finite-dimensional space, whose dimension is . The set of basis functions of is referred to as . The semidiscretized formulation of the first equation of the model reads: find projection of the solution on , , such that with , semidiscretized version of the three gating variables, and is the total ionic density current. The space-discretized version of the first equation can be rewritten as a system of non-linear ODEs by setting and : where , and . The non-linear term can be treated in different ways, such as using state variable interpolation (SVI) or ionic currents interpolation (ICI). In the framework of SVI, by denoting with and the quadrature nodes and weights of a generic element of the mesh , both and are evaluated at the quadrature nodes: The equations for the three gating variables, which are ODEs, are directly solved in all the degrees of freedom (DOF) of the tessellation separately, leading to the following semidiscrete form: Time discretization with BDF (implicit scheme) With reference to the time interval , let be the chosen time step, with the number of subintervals. A uniform partition in time is finally obtained. At this stage, the full discretization of the Bueno-Orovio ionic model can be performed in either a monolithic or a segregated fashion. With the first methodology (the monolithic one), at time , the full problem is solved entirely in one step in order to get by means of either Newton's method or fixed-point iterations: where and are extrapolations of the transmembrane potential and gating variables at previous timesteps with respect to , considering as many time instants as the order of the BDF scheme. is a coefficient which depends on the BDF order . If a segregated method is employed, the equation for the evolution in time of the transmembrane potential and the ones for the gating variables are solved numerically in separate steps: Firstly, is calculated, using an extrapolation at previous timesteps for the transmembrane potential on the right-hand side: Secondly, is computed, exploiting the value of that has just been calculated: Another possible segregated scheme would be the one in which is calculated first, and then it is used in the equations for . See also Monodomain model Bidomain model References Mathematical modeling Numerical analysis Cardiac electrophysiology
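A single-cell (0D) version of the segregated time stepping described above can be sketched as follows. To keep the sketch short it uses a generic two-variable FitzHugh–Nagumo-style phenomenological model with forward Euler in place of a BDF scheme; it is not the actual four-variable Bueno-Orovio system, and all parameter values are illustrative rather than the published parameter sets.

```python
# Minimal 0D sketch of segregated time stepping for a phenomenological
# ionic model: at each step the recovery/gating variable is advanced first,
# then the transmembrane potential, mirroring the segregated schemes above.
# A two-variable FitzHugh-Nagumo-style model with forward Euler stands in
# for the four-variable Bueno-Orovio system; all parameters are illustrative.
eps, a, b = 0.01, 0.1, 0.5      # illustrative model constants
dt, n_steps = 0.05, 8000        # time step and number of steps
u, w = 0.0, 0.0                 # adimensional potential and recovery variable

peak = 0.0
for n in range(n_steps):
    t = n * dt
    I_app = 1.0 if t < 1.0 else 0.0          # brief applied stimulus current
    # Segregated update: recovery variable first ...
    w += dt * eps * (u - b * w)
    # ... then the potential, using the freshly updated recovery variable.
    u += dt * (u * (1.0 - u) * (u - a) - w + I_app)
    peak = max(peak, u)

print(f"peak adimensional potential: {peak:.3f}")
```

The same ordering (gating variables first, then the potential computed from the freshly updated values) is what the segregated scheme above applies at each timestep of the full space-discretized problem.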
Bueno-Orovio–Cherry–Fenton model
Mathematics
1,403
4,548
https://en.wikipedia.org/wiki/Blizzard
A blizzard is a severe snowstorm characterized by strong sustained winds and low visibility, lasting for a prolonged period of time—typically at least three or four hours. A ground blizzard is a weather condition where snow that has already fallen is being blown by wind. Blizzards can have an immense size and usually stretch to hundreds or thousands of kilometres. Definition and etymology In the United States, the National Weather Service defines a blizzard as a severe snow storm characterized by strong winds causing blowing snow that results in low visibilities. The difference between a blizzard and a snowstorm is the strength of the wind, not the amount of snow. To be a blizzard, a snow storm must have sustained winds or frequent gusts that are greater than or equal to with blowing or drifting snow which reduces visibility to or less and must last for a prolonged period of time—typically three hours or more. Environment Canada defines a blizzard as a storm with wind speeds exceeding accompanied by visibility of or less, resulting from snowfall, blowing snow, or a combination of the two. These conditions must persist for a period of at least four hours for the storm to be classified as a blizzard, except north of the arctic tree line, where that threshold is raised to six hours. The Australia Bureau of Meteorology describes a blizzard as, "Violent and very cold wind which is laden with snow, some part, at least, of which has been raised from snow covered ground." While severe cold and large amounts of drifting snow may accompany blizzards, they are not required. Blizzards can bring whiteout conditions, and can paralyze regions for days at a time, particularly where snowfall is unusual or rare. A severe blizzard has winds over , near zero visibility, and temperatures of or lower. In Antarctica, blizzards are associated with winds spilling over the edge of the ice plateau at an average velocity of . Ground blizzard refers to a weather condition where loose snow or ice on the ground is lifted and blown by strong winds. The primary difference between a ground blizzard as opposed to a regular blizzard is that in a ground blizzard no precipitation is produced at the time, but rather all the precipitation is already present in the form of snow or ice at the surface. The Oxford English Dictionary concludes the term blizzard is likely onomatopoeic, derived from the same sense as blow, blast, blister, and bluster; the first recorded use of it for weather dates to 1829, when it was defined as a "violent blow". It achieved its modern definition by 1859, when it was in use in the western United States. The term became common in the press during the harsh winter of 1880–81. United States storm systems In the United States, storm systems powerful enough to cause blizzards usually form when the jet stream dips far to the south, allowing cold, dry polar air from the north to clash with warm, humid air moving up from the south. When cold, moist air from the Pacific Ocean moves eastward to the Rocky Mountains and the Great Plains, and warmer, moist air moves north from the Gulf of Mexico, all that is needed is a movement of cold polar air moving south to form potential blizzard conditions that may extend from the Texas Panhandle to the Great Lakes and Midwest. A blizzard also may be formed when a cold front and warm front mix together and a blizzard forms at the border line. 
Another storm system occurs when a cold core low over the Hudson Bay area in Canada is displaced southward over southeastern Canada, the Great Lakes, and New England. When the rapidly moving cold front collides with warmer air coming north from the Gulf of Mexico, strong surface winds, significant cold air advection, and extensive wintry precipitation occur. Low pressure systems moving out of the Rocky Mountains onto the Great Plains, a broad expanse of flat land, much of it covered in prairie, steppe and grassland, can cause thunderstorms and rain to the south and heavy snows and strong winds to the north. With few trees or other obstructions to reduce wind and blowing, this part of the country is particularly vulnerable to blizzards with very low temperatures and whiteout conditions. In a true whiteout, there is no visible horizon. People can become lost in their own front yards, when the door is only away, and they would have to feel their way back. Motorists have to stop their cars where they are, as the road is impossible to see. Nor'easter blizzards A nor'easter is a macro-scale storm that occurs off the New England and Atlantic Canada coastlines. It gets its name from the direction the wind is coming from. The usage of the term in North America comes from the wind associated with many different types of storms, some of which can form in the North Atlantic Ocean and some of which form as far south as the Gulf of Mexico. The term is most often used in the coastal areas of New England and Atlantic Canada. This type of storm has characteristics similar to a hurricane. More specifically, it describes a low-pressure area whose center of rotation is just off the coast and whose leading winds in the left-forward quadrant rotate onto land from the northeast. High storm waves may sink ships at sea and cause coastal flooding and beach erosion. Notable nor'easters include The Great Blizzard of 1888, one of the worst blizzards in U.S. history. It dropped of snow and had sustained winds of more than that produced snowdrifts in excess of . Railroads were shut down and people were confined to their houses for up to a week. It killed 400 people, mostly in New York. Historic events 1972 Iran blizzard The 1972 Iran blizzard, which caused 4,000 reported deaths, was the deadliest blizzard in recorded history. Dropping as much as of snow, it completely covered 200 villages. After a snowfall lasting nearly a week, an area the size of Wisconsin was entirely buried in snow. 2008 Afghanistan blizzard The 2008 Afghanistan blizzard, was a fierce blizzard that struck Afghanistan on 10 January 2008. Temperatures fell to a low of , with up to of snow in the more mountainous regions, killing at least 926 people. It was the third deadliest blizzard in history. The weather also claimed more than 100,000 sheep and goats, and nearly 315,000 cattle died. The Snow Winter of 1880–1881 The winter of 1880–1881 is widely considered the most severe winter ever known in many parts of the United States. The initial blizzard in October 1880 brought snowfalls so deep that two-story homes experienced accumulations, as opposed to drifts, up to their second-floor windows. No one was prepared for deep snow so early in the winter. Farmers from North Dakota to Virginia were caught flat with fields unharvested, what grain that had been harvested unmilled, and their suddenly all-important winter stocks of wood fuel only partially collected. By January train service was almost entirely suspended from the region. 
Railroads hired scores of men to dig out the tracks but as soon as they had finished shoveling a stretch of line a new storm arrived, burying it again. There were no winter thaws and on February 2, 1881, a second massive blizzard struck that lasted for nine days. In towns the streets were filled with solid drifts to the tops of the buildings and tunneling was necessary to move about. Homes and barns were completely covered, compelling farmers to construct fragile tunnels in order to feed their stock. When the snow finally melted in late spring of 1881, huge sections of the plains experienced flooding. Massive ice jams clogged the Missouri River, and when they broke the downstream areas were inundated. Most of the town of Yankton, in what is now South Dakota, was washed away when the river overflowed its banks after the thaw. Novelization Many children—and their parents—learned of "The Snow Winter" through the children's book The Long Winter by Laura Ingalls Wilder, in which the author tells of her family's efforts to survive. The snow arrived in October 1880 and blizzard followed blizzard throughout the winter and into March 1881, leaving many areas snowbound throughout the winter. Accurate details in Wilder's novel include the blizzards' frequency and the deep cold, the Chicago and North Western Railway stopping trains until the spring thaw because the snow made the tracks impassable, the near-starvation of the townspeople, and the courage of her future husband Almanzo and another man, Cap Garland, who ventured out on the open prairie in search of a cache of wheat that no one was even sure existed. The Storm of the Century The Storm of the Century, also known as the Great Blizzard of 1993, was a large cyclonic storm that formed over the Gulf of Mexico on March 12, 1993, and dissipated in the North Atlantic Ocean on March 15. It is unique for its intensity, massive size and wide-reaching effect. At its height, the storm stretched from Canada towards Central America, but its main impact was on the United States and Cuba. The cyclone moved through the Gulf of Mexico, and then through the Eastern United States before moving into Canada. Areas as far south as northern Alabama and Georgia received a dusting of snow and areas such as Birmingham, Alabama, received up to with hurricane-force wind gusts and record low barometric pressures. Between Louisiana and Cuba, hurricane-force winds produced high storm surges across northwestern Florida, which along with scattered tornadoes killed dozens of people. In the United States, the storm was responsible for the loss of electric power to over 10 million customers. It is purported to have been directly experienced by nearly 40 percent of the country's population at that time. A total of 310 people, including 10 from Cuba, perished during this storm. The storm cost $6 to $10 billion in damages. List of blizzards North America 1700 to 1799 The Great Snow 1717 series of four snowstorms between February 27 and March 7, 1717. There were reports of about five feet of snow already on the ground when the first of the storms hit. By the end, there were about ten feet of snow and some drifts reaching , burying houses entirely. In the colonial era, this storm made travel impossible until the snow simply melted. Blizzard of 1765. March 24, 1765. Affected area from Philadelphia to Massachusetts. High winds and over of snowfall recorded in some areas. Blizzard of 1772. "The Washington and Jefferson Snowstorm of 1772". January 26–29, 1772. One of largest D.C. 
and Virginia area snowstorms ever recorded. Snow accumulations of recorded. The "Hessian Storm of 1778". December 26, 1778. Severe blizzard with high winds, heavy snows and bitter cold extending from Pennsylvania to New England. Snow drifts reported to be high in Rhode Island. Storm named for stranded Hessian troops in deep snows stationed in Rhode Island during the Revolutionary War. The Great Snow of 1786. December 4–10, 1786. Blizzard conditions and a succession of three harsh snowstorms produced snow depths of to from Pennsylvania to New England. Reportedly of similar magnitude of 1717 snowstorms. The Long Storm of 1798. November 19–21, 1798. Heavy snowstorm produced snow from Maryland to Maine. 1800 to 1850 Blizzard of 1805. January 26–28, 1805. Cyclone brought heavy snowstorm to New York City and New England. Snow fell continuously for two days where over of snow accumulated. New York City Blizzard of 1811. December 23–24, 1811. Severe blizzard conditions reported on Long Island, in New York City, and southern New England. Strong winds and tides caused damage to shipping in harbor. Luminous Blizzard of 1817. January 17, 1817. In Massachusetts and Vermont, a severe snowstorm was accompanied by frequent lightning and heavy thunder. St. Elmo's fire reportedly lit up trees, fence posts, house roofs, and even people. John Farrar professor at Harvard, recorded the event in his memoir in 1821. Great Snowstorm of 1821. January 5–7, 1821. Extensive snowstorm and blizzard spread from Virginia to New England. Winter of Deep Snow in 1830. December 29, 1830. Blizzard storm dumped in Kansas City and in Illinois. Areas experienced repeated storms thru mid-February 1831. "The Great Snowstorm of 1831" January 14–16, 1831. Produced snowfall over widest geographic area that was only rivaled, or exceeded by, the 1993 Blizzard. Blizzard raged from Georgia, to Ohio Valley, all the way to Maine. "The Big Snow of 1836" January 8–10, 1836. Produced to of snowfall in interior New York, northern Pennsylvania, and western New England. Philadelphia got a reported and New York City of snow. 1851 to 1900 Plains Blizzard of 1856. December 3–5, 1856. Severe blizzard-like storm raged for three days in Kansas and Iowa. Early pioneers suffered. "The Cold Storm of 1857" January 18–19, 1857. Produced severe blizzard conditions from North Carolina to Maine. Heavy snowfalls reported in east coast cities. Midwest Blizzard of 1864. January 1, 1864. Gale-force winds, driving snow, and low temperatures all struck simultaneously around Chicago, Wisconsin and Minnesota. Plains Blizzard of 1873. January 7, 1873. Severe blizzard struck the Great Plains. Many pioneers from the east were unprepared for the storm and perished in Minnesota and Iowa. Great Plains Easter Blizzard of 1873. April 13, 1873 Seattle Blizzard of 1880. January 6, 1880. Seattle area's greatest snowstorm to date. An estimated fell around the town. Many barns collapsed and all transportation halted. The Hard Winter of 1880-81. October 15, 1880. A blizzard in eastern South Dakota marked the beginning of this historically difficult season. Laura Ingalls Wilder's book The Long Winter details the effects of this season on early settlers. In the three year winter period from December 1885 to March 1888, the Great Plains and Eastern United States suffered a series of the worst blizzards in this nation's history ending with the Schoolhouse Blizzard and the Great Blizzard of 1888. 
The massive explosion of the volcano Krakatoa in the South Pacific late in August 1883 is a suspected cause of these huge blizzards during these several years. The clouds of ash it emitted continued to circulate around the world for many years. Weather patterns continued to be chaotic for years, and temperatures did not return to normal until 1888. Record rainfall was experienced in Southern California during July 1883 to June 1884. The Krakatoa eruption injected an unusually large amount of sulfur dioxide (SO2) gas high into the stratosphere which reflects sunlight and helped cool the planet over the next few years until the suspended atmospheric sulfur fell to ground. Plains Blizzard of late 1885. In Kansas, heavy snows of late 1885 had piled drifts high. Kansas Blizzard of 1886. First week of January 1886. Reported that 80 percent of the cattle were frozen to death in that state alone from the cold and snow. January 1886 Blizzard. January 9, 1886. Same system as Kansas 1886 Blizzard that traveled eastward. Great Plains Blizzards of late 1886. On November 13, 1886, it reportedly began to snow and did not stop for a month in the Great Plains region. Great Plains Blizzard of 1887. January 9–11, 1887. Reported 72-hour blizzard that covered parts of the Great Plains in more than of snow. Winds whipped and temperatures dropped to around . So many cows that were not killed by the cold soon died from starvation. When spring arrived, millions of the animals were dead, with around 90 percent of the open range's cattle rotting where they fell. Those present reported carcasses as far as the eye could see. Dead cattle clogged up rivers and spoiled drinking water. Many ranchers went bankrupt and others simply called it quits and moved back east. The "Great Die-Up" from the blizzard effectively concluded the romantic period of the great Plains cattle drives. Schoolhouse Blizzard of 1888 North American Great Plains. January 12–13, 1888. What made the storm so deadly was the timing (during work and school hours), the suddenness, and the brief spell of warmer weather that preceded it. In addition, the very strong wind fields behind the cold front and the powdery nature of the snow reduced visibilities on the open plains to zero. People ventured from the safety of their homes to do chores, go to town, attend school, or simply enjoy the relative warmth of the day. As a result, thousands of people—including many schoolchildren—got caught in the blizzard. Great Blizzard of March 1888 March 11–14, 1888. One of the most severe recorded blizzards in the history of the United States. On March 12, an unexpected northeaster hit New England and the mid-Atlantic, dropping up to of snow in the space of three days. New York City experienced its heaviest snowfall recorded to date at that time, all street railcars were stranded, and the storm led to the creation of the NYC subway system. Snowdrifts reached up to the second story of some buildings. Some 400 people died from this blizzard, including many sailors aboard vessels that were beset by gale-force winds and turbulent seas. Great Blizzard of 1899 February 11–14, 1899. An extremely unusual blizzard in that it reached into the far southern states of the US. It hit in February, and the area around Washington, D.C., experienced 51 hours straight of snowfall. The port of New Orleans was totally iced over; revelers participating in the New Orleans Mardi Gras had to wait for the parade routes to be shoveled free of snow. 
Concurrent with this blizzard was the extremely cold arctic air. Many city and state record low temperatures date back to this event, including all-time records for locations in the Midwest and South. State record lows: Nebraska reached , Ohio experienced , Louisiana bottomed out at , and Florida dipped below zero to . 1901 to 1939 Great Lakes Storm of 1913 November 7–10, 1913. "The White Hurricane" of 1913 was the deadliest and most destructive natural disaster ever to hit the Great Lakes Basin in the Midwestern United States and the Canadian province of Ontario. It produced wind gusts, waves over high, and whiteout snowsqualls. It killed more than 250 people, destroyed 19 ships, and stranded 19 others. Blizzard of 1918. January 11, 1918. Vast blizzard-like storm moved through Great Lakes and Ohio Valley. 1920 North Dakota blizzard March 15–18, 1920 Knickerbocker Storm January 27–28, 1922 1940 to 1949 Armistice Day Blizzard of 1940 November 10–12, 1940. Took place in the Midwest region of the United States on Armistice Day. This "Panhandle hook" winter storm cut a through the middle of the country from Kansas to Michigan. The morning of the storm was unseasonably warm but by mid afternoon conditions quickly deteriorated into a raging blizzard that would last into the next day. A total of 145 deaths were blamed on the storm, almost a third of them duck hunters who had taken time off to take advantage of the ideal hunting conditions. Weather forecasters had not predicted the severity of the oncoming storm, and as a result the hunters were not dressed for cold weather. When the storm began many hunters took shelter on small islands in the Mississippi River, and the winds and waves overcame their encampments. Some became stranded on the islands and then froze to death in the single-digit temperatures that moved in over night. Others tried to make it to shore and drowned. North American blizzard of 1947 December 25–26, 1947. Was a record-breaking snowfall that began on Christmas Day and brought the Northeast United States to a standstill. Central Park in New York City got of snowfall in 24 hours with deeper snows in suburbs. It was not accompanied by high winds, but the snow fell steadily with drifts reaching . Seventy-seven deaths were attributed to the blizzard. The Blizzard of 1949 - The first blizzard started on Sunday, January 2, 1949; it lasted for three days. It was followed by two more months of blizzard after blizzard with high winds and bitter cold. Deep drifts isolated southeast Wyoming, northern Colorado, western South Dakota and western Nebraska, for weeks. Railroad tracks and roads were all drifted in with drifts of and more. Hundreds of people that had been traveling on trains were stranded. Motorists that had set out on January 2 found their way to private farm homes in rural areas and hotels and other buildings in towns; some dwellings were so crowded that there wasn't enough room for all to sleep at once. It would be weeks before they were plowed out. The Federal government quickly responded with aid, airlifting food and hay for livestock. The total rescue effort involved numerous volunteers and local agencies plus at least ten major state and federal agencies from the U.S. Army to the National Park Service. Private businesses, including railroad and oil companies, also lent manpower and heavy equipment to the work of plowing out. The official death toll was 76 people and one million livestock. 
Youtube video Storm of the Century - the Blizzard of '49 Storm of the Century - the Blizzard of '49 1950 to 1959 Great Appalachian Storm of November 1950 November 24–30, 1950 March 1958 Nor'easter blizzard March 18–21, 1958. The Mount Shasta California Snowstorm of 1959 – The storm dumped of snow on Mount Shasta. The bulk of the snow fell on unpopulated mountainous areas, barely disrupting the residents of the Mount Shasta area. The amount of snow recorded is the largest snowfall from a single storm in North America. 1960 to 1969 March 1960 Nor'easter blizzard March 2–5, 1960 December 1960 Nor'easter blizzard December 12–14, 1960. Wind gusts up to . March 1962 Nor'easter Great March Storm of 1962 – Ash Wednesday. North Carolina and Virginia blizzards. Struck during Spring high tide season and remained mostly stationary for almost 5 days causing significant damage along eastern coast, Assateague island was under water, and dumped of snow in Virginia. North American blizzard of 1966 January 27–31, 1966 Chicago Blizzard of 1967 January 26–27, 1967 February 1969 nor'easter February 8–10, 1969 March 1969 Nor'easter blizzard March 9, 1969 December 1969 Nor'easter blizzard December 25–28, 1969. 1970 to 1979 The Great Storm of 1975 known as the "Super Bowl Blizzard" or "Minnesota's Storm of the Century". January 9–12, 1975. Wind chills of to recorded, deep snowfalls. Groundhog Day gale of 1976 February 2, 1976 Buffalo Blizzard of 1977 January 28 – February 1, 1977. There were several feet of packed snow already on the ground, and the blizzard brought with it enough snow to reach Buffalo's record for the most snow in one season – . Great Blizzard of 1978 also called the "Cleveland Superbomb". January 25–27, 1978. Was one of the worst snowstorms the Midwest has ever seen. Wind gusts approached , causing snowdrifts to reach heights of in some areas, making roadways impassable. Storm reached maximum intensity over southern Ontario Canada. Northeastern United States Blizzard of 1978 – February 6–7, 1978. Just one week following the Cleveland Superbomb blizzard, New England was hit with its most severe blizzard in 90 years since 1888. Chicago Blizzard of 1979 January 13–14, 1979 1980 to 1989 February 1987 Nor'easter blizzard February 22–24, 1987 1990 to 1999 1991 Halloween blizzard Upper Mid-West US, October 31 – November 3, 1991 December 1992 Nor'easter blizzard December 10–12, 1992 1993 Storm of the Century March 12–15, 1993. While the southern and eastern U.S. and Cuba received the brunt of this massive blizzard, the Storm of the Century impacted a wider area than any in recorded history. February 1995 Nor'easter blizzard February 3–6, 1995 Blizzard of 1996 January 6–10, 1996 April Fool's Day Blizzard March 31 – April 1, 1997. US East Coast 1997 Western Plains winter storms October 24–26, 1997 Mid West Blizzard of 1999 January 2–4, 1999 2000 to 2009 January 25, 2000 Southeastern United States winter storm January 25, 2000. 
North Carolina and Virginia December 2000 Nor'easter blizzard December 27–31, 2000 North American blizzard of 2003 February 14–19, 2003 (Presidents' Day Storm II) December 2003 Nor'easter blizzard December 6–7, 2003 North American blizzard of 2005 January 20–23, 2005 North American blizzard of 2006 February 11–13, 2006 Early winter 2006 North American storm complex Late November 2006 Colorado Holiday Blizzards (2006–07) December 20–29, 2006 Colorado February 2007 North America blizzard February 12–20, 2007 January 2008 North American storm complex January, 2008 West Coast US North American blizzard of 2008 March 6–10, 2008 2009 Midwest Blizzard 6–8 December 2009, a bomb cyclogenesis event that also affected parts of Canada North American blizzard of 2009 December 16–20, 2009 2009 North American Christmas blizzard December 22–28, 2009 2010 to 2019 February 5–6, 2010 North American blizzard February 5–6, 2010 Referred to at the time as Snowmageddon was a Category 3 ("major") nor'easter and severe weather event. February 9–10, 2010 North American blizzard February 9–10, 2010 February 25–27, 2010 North American blizzard February 25–27, 2010 October 2010 North American storm complex October 23–28, 2010 December 2010 North American blizzard December 26–29, 2010 January 31 – February 2, 2011 North American blizzard January 31 – February 2, 2011. Groundhog Day Blizzard of 2011 2011 Halloween nor'easter October 28 – Nov 1, 2011 Hurricane Sandy October 29–31, 2012. West Virginia, western North Carolina, and southwest Pennsylvania received heavy snowfall and blizzard conditions from this hurricane November 2012 nor'easter November 7–10, 2012 December 17–22, 2012 North American blizzard December 17–22, 2012 Late December 2012 North American storm complex December 25–28, 2012 February 2013 nor'easter February 7–20, 2013 February 2013 Great Plains blizzard February 19 – March 6, 2013 March 2013 nor'easter March 6, 2013 October 2013 North American storm complex October 3–5, 2013 Buffalo, NY blizzard of 2014. Buffalo got over of snow during November 18–20, 2014. January 2015 North American blizzard January 26–27, 2015 Late December 2015 North American storm complex December 26–27, 2015 Was one of the most notorious blizzards in the state of New Mexico and West Texas ever reported. It had sustained winds of over and continuous snow precipitation that lasted over 30 hours. Dozens of vehicles were stranded in small county roads in the areas of Hobbs, Roswell, and Carlsbad New Mexico. Strong sustained winds destroyed various mobile homes. 
January 2016 United States blizzard January 20–23, 2016 February 2016 North American storm complex February 1–8, 2016 February 2017 North American blizzard February 6–11, 2017 March 2017 North American blizzard March 9–16, 2017 Early January 2018 nor’easter January 3–6, 2018 March 2019 North American blizzard March 8–16, 2019 April 2019 North American blizzard April 10–14, 2019 2020 to present December 5–6, 2020 nor'easter December 5–6, 2020 January 31 – February 3, 2021 nor'easter January 31 – February 3, 2021 February 13–17, 2021 North American winter storm February 13–17, 2021 March 2021 North American blizzard March 11–14, 2021 January 2022 North American blizzard January 27–30, 2022 December 2022 North American winter storm December 21–26, 2022 March 2023 North American winter storm March 12–15, 2023 January 8–10, 2024 North American storm complex January 8–10, 2024 January 10–13, 2024 North American storm complex January 10–13, 2024 January 5–6, 2025 United States blizzard January 5–6, 2025 January 20–22, 2025 Gulf Coast blizzard January 20–22, 2025 Canada The Eastern Canadian Blizzard of 1971 – Dumped a foot and a half (45.7 cm) of snow on Montreal and more than elsewhere in the region. The blizzard caused the cancellation of a Montreal Canadiens hockey game for the first time since 1918. Saskatchewan blizzard of 2007 – January 10, 2007, Canada United Kingdom Great Frost of 1709 Blizzard of January 1881 Winter of 1894–95 in the United Kingdom Winter of 1946–1947 in the United Kingdom Winter of 1962–1963 in the United Kingdom January 1987 Southeast England snowfall Winter of 1990–91 in Western Europe February 2009 Great Britain and Ireland snowfall Winter of 2009–10 in Great Britain and Ireland Winter of 2010–11 in Great Britain and Ireland Early 2012 European cold wave Other locations 1954 Romanian blizzard 1972 Iran blizzard Winter of 1990–1991 in Western Europe July 2007 Argentine winter storm 2008 Afghanistan blizzard 2008 Chinese winter storms Winter storms of 2009–2010 in East Asia See also Cold wave Lake-effect snow Nor'easter European windstorm Whiteout (weather) Blowing snow advisory Ground blizzard Severe weather terminology (Canada) Snowsquall Blowing snow List of blizzards References External links Digital Snow Museum Photos of historic blizzards and snowstorms. Farmers Almanac List of Worst Blizzards in the United States United States Search and Rescue Task Force: About Blizzards A Historical Review On The Origin and Definition of the Word Blizzard Dr Richard Wild Snow or ice weather phenomena Storm Weather hazards Hazards of outdoor recreation
Blizzard
Physics
5,989
2,409,546
https://en.wikipedia.org/wiki/Music%20venue
A music venue is any location used for a concert or musical performance. Music venues range in size and location, from a small coffeehouse for folk music shows, an outdoor bandshell or bandstand or a concert hall to an indoor sports stadium. Typically, different types of venues host different genres of music. Opera houses, bandshells, and concert halls host classical music performances, whereas public houses ("pubs"), nightclubs, and discothèques offer music in contemporary genres, such as rock, dance, country, and pop. Music venues may be either privately or publicly funded, and may charge for admission. An example of a publicly funded music venue is a bandstand in a municipal park; such outdoor venues typically do not charge for admission. A nightclub is a privately funded venue operated as a profit-making business; venues like these typically charge an entry fee to generate a profit. Music venues do not necessarily host live acts; disc jockeys at a discothèque or nightclub play recorded music through a PA system. Depending on the type of venue, the opening hours, location and length of performance may differ, as well as the technology used to deliver the music in the venue. Other attractions, such as performance art, standup comedy, or social activities, may also be available, either while music is playing or at other times. For example, at a bar or pub, the house band may be playing live songs while drinks are being served, and between songs, recorded music may be played. Some classes of venues may play live music in the background, such as a performance on a grand piano in a restaurant. Characteristics Music venues can be categorised in a number of ways. Typically, the genre of music played at the venue, whether it is temporary and who owns the venue decide many of the other characteristics. Permanent or temporary venues The majority of music venues are permanent; however, there are temporary music venues. An example of a temporary venue would be one constructed for a music festival. Ownership Music venues are typically either private businesses or public facilities set up by a city, state, or national government. Some music venues are also run by non-government organizations, such as music associations. Genre Some venues only promote and hold shows of one particular genre, such as opera houses. Stadiums, on the other hand, may show rock, classical, and world music. Size and capacity Music venues can be categorised by size and capacity. The smallest venues, coffeeshops and tiny nightclubs have room for tens of spectators; the largest venues, such as stadiums, can hold tens of thousands of spectators. Indoor or outdoor Music venues are either outdoor or indoor. Examples of outdoor venues include bandstands and bandshells; such outdoor venues provide minimal shelter for performing musicians and are usually located in parks. A temporary music festival is typically an outdoor venue. Examples of indoor venues include public houses, nightclubs, coffee bars, and stadia. Live or recorded music Venues can play live music, recorded music, or a combination of the two, depending on the event or time of day. Discothèques are mainly designed for prerecorded music played by a disc jockey. Live music venues have one or more stages for the performers. Admissions policy and opening hours Venues may be unticketed, casual entry available on the door with a cover charge, or advance tickets only. A dress code may or may not apply. 
Centrality of performance At some venues, the main focus is watching the show, such as at opera houses or classical recital halls. In some venues that also include food and beverage service, the performers may be playing background music to add to the ambiance. Types Amphitheater Amphitheaters are round- or oval-shaped and usually unroofed. Permanent seating at amphitheaters is generally tiered. Bandshell and bandstand A bandshell is a large, outdoor performing structure typically used by concert bands and orchestras. The roof and the back half of the shell protect musicians from the elements and reflect sound through the open side and out towards the audience. Bandstand is a small outdoor structure. Concert hall A concert hall is a performance venue constructed specifically for instrumental classical music. A concert hall may exist as part of a larger performing arts center. Jazz club Jazz clubs are an example of a venue that is dedicated to a specific genre of music. A jazz club is a venue where the primary entertainment is the performance of live jazz music, although some jazz clubs primarily focus on the study and/or promotion of jazz-music. Jazz clubs are usually a type of nightclub or bar, which is licensed to sell alcoholic beverages. Jazz clubs were in large rooms in the eras of orchestral jazz and big band jazz, when bands were large and often augmented by a string section. Large rooms were also more common in the Swing era, because at that time, jazz was popular as a dance music, so the dancers needed space to move. With the transition to 1940s-era styles like bebop and later styles such as soul jazz, small combos of musicians such as quartets and trios were mostly used, and the music became more of a music to listen to, rather than a form of dance music. As a result, smaller clubs with small stages became practical. In the 2000s, jazz clubs may be found in the basements of larger residential buildings, in storefront locations or in the upper floors of retail businesses. They can be rather small compared to other music venues, such as rock music clubs, reflecting the intimate atmosphere of jazz shows and long-term decline in popular interest in jazz. Despite being called "clubs", these venues are usually not exclusive. Some clubs, however, have a cover charge if a live band is playing. Some jazz clubs host "jam sessions" after hours or on early evenings of the week. At jam sessions, both professional musicians and advanced amateurs will typically share the stage. Live house In Japan, small live music clubs are known as live houses (ライブハウス), especially featuring rock, jazz, blues, and folk music, and have existed since the 1970s, now being found across the country. The term is a Japanese coinage (wasei eigo) and is mainly used in East Asia. The oldest live house is Coffee House Jittoku (拾得, after the Chinese monk Shide "Foundling") in Kyoto, founded in 1973 in an old sake warehouse. Soon afterwards, the idea spread through Japan. In recent years, similar establishments started to appear in big cities in South Korea and China; many of them are also locally called "live houses." Opera house An opera house is a theatre venue constructed specifically for opera. It consists of a stage, an orchestra pit, audience seating, and backstage facilities for costumes and set building. While some venues are constructed specifically for operas, other opera houses are part of larger performing arts centers. 
Indeed the term opera house itself is often used as a term of prestige for any large performing-arts center. The Teatro San Carlo in Naples, opened in 1737, introduced the horseshoe-shaped auditorium, the oldest in the world, a model for the Italian theater. On this model were built subsequent theaters in Italy and Europe, among others, the court theater of the Palace of Caserta, which became the model for other theaters. Given the popularity of opera in 18th and 19th century Europe, opera houses are usually large, often containing more than 1,000 seats. Traditionally, Europe's major opera houses built in the 19th century contained between about 1,500 to 3,000 seats, examples being Brussels' La Monnaie (after renovations, 1,700 seats), Odessa Opera and Ballet Theater (with 1,636), Warsaw's Grand Theatre (the main auditorium with 1,841), Paris' Palais Garnier (with 2,200), the Royal Opera House in London (with 2,268), and the Vienna State Opera (the new auditorium with 2,280). Modern opera houses of the 20th century such as New York's Metropolitan Opera House (with 3,800) and the War Memorial Opera House in San Francisco (with 3,146) are larger. Many operas are better suited to being presented in smaller theaters, such as Venice's La Fenice with about 1,000 seats. In a traditional opera house, the auditorium is U-shaped, with the length of the sides determining the audience capacity. Around this are tiers of balconies, and often, nearer to the stage, are boxes (small partitioned sections of a balcony). Since the latter part of the 19th century, opera houses often have an orchestra pit, where many orchestra players may be seated at a level below the audience, so that they can play without overwhelming the singing voices. This is especially true of Wagner's Bayreuth Festspielhaus where the pit is partially covered. The size of an opera orchestra varies, but for some operas, oratorios and other works, it may be very large; for some romantic period works (or for many of the operas of Richard Strauss), it can be more than 100 players. Similarly, an opera may have a large cast of characters, chorus, dancers and supernumeraries. Therefore, a major opera house will have extensive dressing room facilities. Opera houses often have on-premises set and costume building shops and facilities for storage of costumes, make-up, masks, and stage properties, and may also have rehearsal spaces. Major opera houses throughout the world often have highly mechanized stages, with large stage elevators permitting heavy sets to be changed rapidly. At the Metropolitan Opera, for instance, sets are often changed during the action, as the audience watches, with singers rising or descending as they sing. This occurs in the Met's productions of operas such as Aida and Tales of Hoffmann. London's Royal Opera House, which was remodeled in the late 1990s, retained the original 1858 auditorium at its core, but added completely new backstage and wing spaces as well as an additional performance space and public areas. Much the same happened in the remodeling of Milan's La Scala opera house between 2002 and 2004. Although stage, lighting and other production aspects of opera houses often make use of the latest technology, traditional opera houses have not used sound reinforcement systems with microphones and loudspeakers to amplify the singers, since trained opera singers are normally able to project their unamplified voices in the hall. 
Since the 1990s, however, some opera houses have begun using a subtle form of sound reinforcement called acoustic enhancement. Public houses and nightclubs A pub, or public house, is an establishment licensed to sell alcoholic drinks, which traditionally include beer (such as ale) and cider. It is a social drinking establishment and a prominent part of British, Irish, Breton, New Zealand, Canadian, South African, and Australian cultures. In many places, especially in villages, a pub is the focal point of the community. In his 17th-century diary, Samuel Pepys described the pub as "the heart of England". Most pubs focus on offering beers, ales and similar drinks. As well, pubs often sell wines, spirits, and soft drinks, meals and snacks. Pubs may be venues for pub songs and live music. During the 1970s pubs provided an outlet for a number of bands, such as Kilburn and the High Roads, Dr. Feelgood, and The Kursaal Flyers, who formed a musical genre called pub rock that was a precursor to Punk music. A nightclub is an entertainment venue and bar that usually operates late into the night. A nightclub is generally distinguished from regular bars, pubs or taverns by the inclusion of a stage for live music, one or more dance floor areas and a DJ booth, where a DJ plays recorded music. The upmarket nature of nightclubs can be seen in the inclusion of VIP areas in some nightclubs, for celebrities and their guests. Nightclubs are much more likely than pubs or sports bars to use bouncers to screen prospective clubgoers for entry. Some nightclub bouncers do not admit people with informal clothing or gang apparel as part of a dress code. The busiest nights for a nightclub are Friday and Saturday night. Most clubs or club nights cater to certain music genres, such as house music or hip hop. Many clubs have recurring club nights on different days of the week. Most club nights focus on a particular genre or sound for branding effects. Stadiums and arenas A stadium is a place or venue for (mostly) outdoor sports, concerts, or other events and consists of a field or stage either partly or completely surrounded by a tiered structure designed to allow spectators to stand or sit and view the event. Although concerts, such as classical music, had been presented in them for decades, beginning in the 1960s stadiums began to be used as live venues for popular music, giving rise to the term "stadium rock", particularly for forms of hard rock and progressive rock. The origins of stadium rock are sometimes dated to when The Beatles played Shea Stadium in New York in 1965. Also important was the use of large stadiums for American tours by bands in the later 1960s, such as The Rolling Stones, Grand Funk Railroad, and Led Zeppelin. The tendency developed in the mid-1970s as the increased power of amplification and sound systems allowed the use of larger and larger venues. Smoke, fireworks and sophisticated lighting shows became staples of arena rock performances. Key acts from this era included Journey, REO Speedwagon, Boston, Foreigner, Styx, Kiss, Peter Frampton, and Queen. In the 1980s, arena rock became dominated by glam metal bands, following the lead of Aerosmith and including Mötley Crüe, Quiet Riot, W.A.S.P., and Ratt. Since the 1980s, rock, pop, and folk stars, including the Grateful Dead, Madonna, Britney Spears, Beyoncé, and Taylor Swift, have undertaken large-scale stadium based concert tours. 
Theater A theater or playhouse is a structure where theatrical works or plays are performed, or other performances such as musical concerts may be produced. A theater used for opera performances is called an opera house. The theater serves to define the performance and audience spaces. The facility is traditionally organized to provide support areas for performers, the technical crew and the audience members. There are as many types of theaters as there are types of performance. Theaters may be built specifically for certain types of productions, may serve more general performance needs, or may be adapted or converted for use as a theater. They may range from open-air amphitheaters to ornate, cathedral-like structures to simple, undecorated rooms or black box theaters. Some theaters may have a fixed performing area (in most theaters this is known as the stage), while some theaters, such as "black box theaters", may not. For the audience, theaters may include balconies or galleries, boxes, typically considered the most prestigious area of the house, and "house seats", known as "the best seats in the house", giving the best view of the stage. See also Art space Auditorium Concert hall Cultural centre History of music Music festival Music gym Music venues in the Netherlands List of concert venues References External links Theatres Buildings and structures by type Dance venues
Music venue
Engineering
3,124
51,176,694
https://en.wikipedia.org/wiki/Mercedes%20Richards
Mercedes Tharam Richards (Kingston, 14 May 1955 – Hershey, 3 February 2016), née Davis, was a Jamaican astronomy and astrophysics professor. Her research focused on computational astrophysics, stellar astrophysics, exoplanets and brown dwarfs, and the physical dynamics of interacting binary star systems. She is best known for her pioneering research on the tomography of interacting binary star systems and cataclysmic variable stars, used to predict magnetic activity and simulate gas flow. In 1977, she graduated with the degree of BSc in Physics from the University of the West Indies. She then moved to Toronto, where two years later, in 1979, she received an MS in Space Science at York University, Toronto, and in 1986 she earned her PhD in Astronomy and Astrophysics from the University of Toronto. She served as president of Commission 42 of the International Astronomical Union (IAU), which deals with close binary stars, was a member of the Board of Advisers of the Caribbean Institute of Astronomy, and was a councilor of the American Astronomical Society. She also took part in projects to encourage public engagement with science, one of them being the Summer Experience in the Eberly College of Science in 2006, a six-week summer programme designed to engage low-income high-school students in science research. Biography Early years She was born in Kingston, Jamaica, on 14 May 1955. In one of the city's suburbs, she was raised by her father, Frank Davis, a police detective who stressed to her the importance of observation and deduction, and her mother, Phyllis Davis, an accountant who instilled in her the idea of doing her work with precision. Richards and her father would go to a botanical garden after dawn and observe nature around them. He taught her to identify varieties of colour, training that would later be useful in her career for the examination of stars. "What I do is definitely detective work," she explained. "Astronomers want to know what happened. We look for evidence. We have to piece it all together like forensic scientists of the sky." She attended Providence Primary School and graduated in 1966. She then studied at St. Hugh's High School, an all-girls high school. Having only female teachers inspired her, as she used them as role models. She graduated in 1973. Academic beginnings Richards graduated with the degree of BSc with Special Honors in Physics from the University of the West Indies in 1977, and in 1979 she earned an MS in Space Science at York University in Toronto. Persistence helped her through her studies in Toronto, at a time when teachers were very hard on female students. She received her PhD in Astronomy and Astrophysics in 1986 at the University of Toronto. Personal life In 1980 she married Donald Richards, a professor of Statistics at Pennsylvania State University. They had two daughters, Chandra and Suzanne. Even though she had little free time, she enjoyed reading detective novels and writing poetry. She liked playing the violin and passed some exams of the British Royal School of Music. Richards spoke French fluently and had a working knowledge of Spanish, Slovak, Czech and German. Alongside these hobbies, she maintained a concern for the most vulnerable members of society and worked with food banks. Professional career During the 1986–87 academic year, she worked as a visiting scholar at the University of North Carolina.
In 1987, she joined the University of Virginia in Charlottesville, where she started as an assistant professor in the Department of Astronomy. At this university she moved up to associate professor in 1993 and became a professor of astronomy in 1999. During that year she worked at the Vatican Observatory in Castel Gandolfo. One year later, in 2000, she visited the Institute for Advanced Study in Princeton, New Jersey, as an invited scientist. In 2002, she was hired as professor of astronomy and astrophysics at the Pennsylvania State University, where she worked until her death. During her tenure, she was appointed assistant department chair. She also visited several universities during that period, among them the University of Heidelberg, Germany, in 2013. As a member of the IAU, she took part in one of its most consequential recent decisions when the 2006 General Assembly in Prague voted that Pluto would no longer be classified as a planet. In 2011, Richards organized an IAU symposium in Slovakia, the first joint international meeting of binary star specialists. She also participated in programs of math and science enrichment for high-school students in Pennsylvania, Maryland, Michigan, New York, Vermont, Virginia and Toronto. Richards served as an officer in several astronomical organizations, for example as president of Commission 42 of the IAU, member of the Board of Advisers of the Caribbean Institute of Astronomy, and councilor of the American Astronomical Society. Additionally, she was part of the Eberly College of Science's Climate and Diversity Committee, an organization whose aim was to create a good environment for all members of the college. She also led multiple research studies in different parts of the world: at the Kitt Peak National Observatory in Arizona, the Skalnaté Pleso and Lomnický štít observatories in Slovakia, the South African Astronomical Observatory in Cape Town, the Spanish National Observatory in Madrid, and the Teide Observatory in Tenerife. Committed to public engagement with science, she was involved in several outreach programs. In 2006, she and Jacqueline Bortiatynski, a chemistry lecturer, co-founded the Summer Experience in the Eberly College of Science. This six-week summer program was designed to engage low-income high school students in science research. She also participated in Exploration Day, a local event emphasizing practical science learning for families, and Penn State Astronomy's Astrofest and Astronight, annual events held to promote astronomy and science to families in the Central Pennsylvania area. Richards was a mentor and an advocate for the promotion of young people, including women and other underrepresented groups, in physics and astronomy. Research Her main research interests can be divided into three groups: computational astrophysics, stellar astrophysics, and exoplanets and brown dwarfs. Among the investigations she carried out, Richards' speciality was binary stars. A binary star is a system of two stars, born at the same time, that orbit around a common center. The stars, however, mature at different rates. She was the first to use tomography in astronomy. The technique had previously been used mainly in medicine and archeology, but Richards found it useful for testing the gas-flow model that she created for her doctoral thesis. She used tomography to observe the binary system as it moves in relation to the Earth.
This showed how gas flows between the stars. As well as verifying the accuracy of her model, she was able to demonstrate that gravity acts in binary stars as physical laws predict. Richards also worked with a team of Russian scientists to continue her studies of gas flows between binary stars. She was also a pioneer in making theoretical hydrodynamic simulations of Algol-type binary stars, in which gas flows from one star and strikes the surface of the other. Additionally, she developed distance correlation methods to discover associations in large astrophysical databases. Within the field of stellar astrophysics, she created three-dimensional "movies" of mass-exchange systems by scrutinizing spectroscopic and photometric time series of stars and compact objects in close orbit. This work could give important clues to the way mass transfer happens. Moreover, she researched the magnetic activity in cool stars and in the Sun, and the effects of nearby interacting binary stars on optical interferometry. Awards and honors In 2005, she was elected Honorary Member of the National Honor Society of Phi Eta Sigma at the Pennsylvania State University. In 2008 the Institute of Jamaica bestowed on her the Musgrave Medal in Gold, its most important academic prize. The medal is awarded not only to scientists but also to artists and writers; Richards was the 14th scientist to receive it. In 2009, she received the Award for Outstanding Achievement in Astronomy & Astrophysics on the occasion of St. Hugh's High School's 110th anniversary. Two years later, in 2011, she received a Fulbright Distinguished Chair Research award from the Council for International Exchange of Scholars and the Slovak Fulbright Commission, which allowed her to conduct research on interacting binary stars at the Astronomical Institute of Slovakia that same year. In 2012, she was again elected an Honorary Member, this time of the physics honor society Sigma Pi Sigma, at its Quadrennial Congress in Orlando. In 2013 she was awarded the American Physical Society Woman Physicist of the Month Award. References Jamaican astronomers 1955 births 2016 deaths People from Kingston, Jamaica University of the West Indies alumni York University alumni University of Toronto alumni Recipients of the Musgrave Medal Fulbright Distinguished Chairs Computational physicists People educated at St Hugh's High School
Mercedes Richards
Physics
1,824
16,000,580
https://en.wikipedia.org/wiki/Simulation%20Model%20Portability
The Simulation Model Portability is a standard for simulation models developed by ESA together with various stakeholders in the European Space Industry. The first version, also known as the SMI standard, was implemented in SIMSAT and EuroSim, simulator infrastructures in use at various ESA locations. The second version, also known as Simulation Modelling Platform - SMP2, is currently at version 1.2. It is implemented in several Space Simulators such as Basiles (CNES) or SimTG (AIRBUS D&S). The ECSS has now taken the lead for further iterations of the SMP2 standard. It was first published as the Technical Memorandum ECSS-E-TM-40-07, and in 2020 it became the ECSS standard ECSS-E-ST-40-07. The first implementations have already been released by several stakeholders in the European Space Industry. Simulation models have already been exchanged using the new ECSS SMP between the SIMULUS (ESA) and SimTG (AIRBUS D&S) simulators. An update of ECSS SMP E-40-07, called ECSS SMP Level 2, is currently ongoing to specify the exchanged artefacts used to instantiate, configure and schedule a model execution. It is a collection of XSD files that define the model exchange format. References External links Simulation Model Portability SMP implementation in Eurosim Presentation at the 6th NASA/ESA Workshop on product exchange 25 January 2011: ECSS-E-TM-40-07 "Simulation modelling platform" made available European Space Agency
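Because the exchange format is defined by XSD schemas, an SMP artefact can be checked against them with any standard XML validator. The snippet below is only a generic sketch, not code taken from the standard: the file names Catalogue.xsd and catalogue.xml are placeholders for whichever schema and artefact files a given SMP tool chain actually ships.

```python
# Generic XSD validation of an SMP exchange artefact using lxml.
# File names are placeholders; substitute the schemas and artefacts
# provided with your ECSS-E-ST-40-07 / SMP tooling.
from lxml import etree

schema = etree.XMLSchema(etree.parse("Catalogue.xsd"))   # hypothetical schema file
document = etree.parse("catalogue.xml")                  # hypothetical artefact file

if schema.validate(document):
    print("Artefact conforms to the schema.")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```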
Simulation Model Portability
Astronomy
325
1,491,614
https://en.wikipedia.org/wiki/Dennis%20Brutus
Dennis Vincent Brutus (28 November 1924 – 26 December 2009) was a South African activist, educator, journalist and poet best known for his campaign to have South Africa banned from the Olympic Games due to its racial policy of apartheid. Life and work Born in Salisbury, Southern Rhodesia in 1924 to South African parents, Brutus was of indigenous Khoi, Dutch, French, English, German and Malay ancestry. His parents moved back home to Port Elizabeth when he was aged four, and young Brutus was classified under South Africa's apartheid racial code as "coloured". Brutus was a graduate of the University of Fort Hare (BA, 1946) and of the University of the Witwatersrand, where he studied law. He taught English and Afrikaans at several high schools in South Africa after 1948, but was eventually dismissed for his vocal criticism of apartheid. He served on the faculty of the University of Denver, Northwestern University and University of Pittsburgh, and was a Professor Emeritus at the last institution. In 2008, Brutus was awarded the Lifetime Honorary Award by the South African Department of Arts and Culture for his lifelong dedication to African and world poetry and literary arts. Activist Brutus was an activist against the apartheid government of South Africa in the 1950s and 1960s. He learned politics in the Trotskyist movement of the Eastern Cape. Although not an accomplished athlete in his own right, he was motivated by the unfairness of selections for athletic teams. He joined the Anti-Coloured Affairs Department organisation (Anti-CAD), a Trotskyist group that organised against the Coloured Affairs Department, which was an attempt by the government to institutionalise divisions between blacks and coloureds. In 1958, he formed the South African Sports Association, and as Secretary was strongly opposed to a proposed cricket tour by Frank Worrell's West Indies to South Africa in 1959, leading a successful campaign to have it cancelled. In 1962, Brutus was a co-founder of the South African Non-Racial Olympic Committee (SANROC), an organisation that would be heavily influential in the banning of apartheid-era South Africa from the Olympics in 1964. In 1961, Brutus was banned for his political activities as part of SANROC. When South Africa attempted, in 1968, to get back into the Olympics by arguing that it would field multi-racial teams, SANROC successfully pointed out that those teams were chosen on a segregated basis, leading to South Africa's continued ban from 1968 until 1992. Arrest and jail In 1963, Brutus was arrested for trying to meet with an International Olympic Committee (IOC) official; he was accused of breaking the terms of his "banning," which were that he could not meet with more than two people outside his family, and he was sentenced to 18 months in jail. However, he "jumped bail" by trying to leave South Africa on a Rhodesian passport to attend the IOC meeting in Baden-Baden, West Germany, on behalf of SANROC; while he was in Mozambique, the Portuguese colonial secret police arrested him and returned him to South Africa. There, while trying to escape, he was shot in the back at point-blank range. After only partly recovering from the wound, Brutus was sent to Robben Island for 16 months, five in solitary. He was in the cell next to Nelson Mandela's. Brutus was in prison when news broke of the country's suspension from the 1964 Tokyo Olympics, a suspension for which he had campaigned. Brutus was forbidden to teach, write and publish in South Africa.
His first collection of poetry, Sirens, Knuckles and Boots (1963), was published in Nigeria while he was in prison. The book received the Mbari Poetry Prize, awarded to a black poet of distinction, but Brutus turned it down on the grounds of its racial exclusivity. He was the author of 14 books. Release from jail After he was released, in 1965, Brutus left South Africa on an exit permit, which meant he could never return home while the apartheid regime stayed in power. He went into exile in Britain, where he first met George Houser, the executive director of the American Committee on Africa (ACOA). South Africa made a concerted effort to get reinstated to the Olympic Games in Mexico City in 1968. Its Prime Minister John Vorster outlined a new policy of fielding a multi-racial team. At first the IOC accepted this new policy and was going to allow South Africa to compete, but SANROC pointed out that there would be no mixed sporting events within South Africa and therefore all South African athletes chosen for the Games would be chosen under a segregated framework. In 1967, Brutus came to the United States under the auspices of the ACOA on a speaking tour, during which he acquainted Americans more closely with the situation in South Africa at the time, informed American sports organisations about the segregated conditions that South African athletes had to endure, and raised money for the ACOA's Africa Defense and Aid Fund, which supported the defence of those charged under the apartheid laws. The Supreme Council for Sport in Africa, which represented the independent African nations at the IOC, threatened to boycott if South Africa was included in the 1968 Games. In co-operation with SANROC, the ACOA organised a boycott by American athletes in February 1968. Jackie Robinson, the first African-American athlete to break the colour barrier in major league baseball, published a statement calling for continued suspension of South Africa from the Olympic Games. As a result of the international pressure, the IOC relented and kept South Africa out of the Olympic Games from 1968 until 1992. Life in the United States In 1971, Brutus settled in the United States, where he served as professor of African Literature at Northwestern University. When his British passport was cancelled in the wake of Zimbabwe's independence in 1980, he was threatened with deportation, and he fought a protracted and highly publicized legal battle until 1983, when he was granted asylum in the United States. He continued to participate in protests against the apartheid government while teaching in the United States. He was eventually "unbanned" by the South African government in 1990, and in 1991 he became one of the sponsors of the Committee for Academic Freedom in Africa. Brutus taught at Amherst College, Cornell University, and Swarthmore College, before heading, in 1986, to the University of Pittsburgh, where he served as a professor of African Literature until his retirement. Return to South Africa, poetry and activism He returned to South Africa and was based at the University of KwaZulu-Natal, where he often contributed to the annual Poetry Africa Festival hosted by the university and supported activism against neo-liberal policies in contemporary South Africa through working with NGOs. In December 2007, Brutus was to be inducted into the South African Sports Hall of Fame.
At the induction ceremony, he publicly turned down his nomination, stating: According to fellow writer Olu Oguibe, interim Director of the Institute for African American Studies at the University of Connecticut, "Brutus was arguably Africa's greatest and most influential modern poet after Leopold Sedar Senghor and Christopher Okigbo, certainly the most widely-read, and no doubt among the world's finest poets of all time. More than that, he was a fearless campaigner for justice, a relentless organizer, an incorrigible romantic, and a great humanist and teacher." Brutus died on 26 December 2009, aged 85, at his home in Cape Town, South Africa, from prostate cancer. He is survived by two sisters, eight children including his son Anthony, nine grandchildren, and four great-grandchildren. The Dennis Brutus Tapes: Essays at Autobiography, edited by Bernth Lindfors, was published in 2011, including transcripts of tapes recorded when he was a visiting professor at the University of Texas at Austin in 1974–75, reflecting on his life and career. Bibliography Sirens, Knuckles and Boots (Mbari Productions, 1963). Letters to Martha and Other Poems from a South African Prison (Heinemann, 1968). Poems from Algiers (African and Afro-American Studies and Research Institute, 1970). A Simple Lust (Heinemann, 1973). China Poems (African and Afro-American Studies and Research Centre, 1975). Stubborn Hope (Three Continents Press/Heinemann, 1978). Salutes and Censures (Fourth Dimension, 1982). Airs & Tributes (Whirlwind Press, 1989). Still the Sirens (Pennywhistle Press, 1993). Remembering Soweto, ed. Lamont B. Steptoe (Whirlwind Press, 2004). Leafdrift, ed. Lamont B. Steptoe (Whirlwind Press, 2005). Sustar, Lee, and Karim, Aisha (eds), Poetry and Protest: A Dennis Brutus Reader (Haymarket Books, 2006). It is The Constant Image Of Your Face: A Dennis Brutus Reader (2008). Brown, Geoff, and Hogsbjerg, Christian. Apartheid is not a Game: Remembering the Stop the Seventy Tour campaign. London: Redwords, 2020. . See also List of people subject to banning orders under apartheid References External links Dennis Brutus Papers, 1960–1984, Northwestern University Archives, Evanston, Illinois Dennis Brutus Papers Worcester State University Archives, Worcester, Massachusetts Dennis Brutus Papers on sport, anti-apartheid activities and literature, 1958–1971, Borthwick Institute, University of York "Dennis Brutus reads from his work" for the WGBH series, Ten O'clock News "Dennis Brutus poem 'Gull' Copenhagen conference" Dennis Brutus Defense Committee Western Massachusetts Dennis Brutus Defense Committee Obituaries Dennis Brutus 1924–2009 This "cyber-tombeau" at Silliman's Blog by poet Ron Silliman includes comments, tributes, and links Dennis Brutus (1924–2009): South African Poet and Activist Dies in Cape Town – video by Democracy Now!, 28 December 2009. 
1924 births 2009 deaths 20th-century South African poets South African anti-apartheid activists Coloureds Environmental ethics Inmates of Robben Island Northwestern University faculty People from Harare Rhodesian emigrants to South Africa South African expatriates in Southern Rhodesia South African expatriates in the United States South African people of Dutch descent South African people of English descent South African people of French descent South African people of German descent South African people of Malay descent South African refugees South African Trotskyists Tax resisters University of Denver faculty University of Fort Hare alumni University of Pittsburgh faculty University of the Witwatersrand alumni Writers from Pittsburgh Zimbabwean people of German descent African poets
Dennis Brutus
Environmental_science
2,174
25,998,318
https://en.wikipedia.org/wiki/History%20of%20Mars%20observation
The history of Mars observation is about the recorded history of observation of the planet Mars. Some of the early records of Mars' observation date back to the era of the ancient Egyptian astronomers in the 2nd millennium BCE. Chinese records about the motions of Mars appeared before the founding of the Zhou dynasty (1045 BCE). Detailed observations of the position of Mars were made by Babylonian astronomers who developed arithmetic techniques to predict the future position of the planet. The ancient Greek philosophers and Hellenistic astronomers developed a geocentric model to explain the planet's motions. Measurements of Mars' angular diameter can be found in ancient Greek and Indian texts. In the 16th century, Nicolaus Copernicus proposed a heliocentric model for the Solar System in which the planets follow circular orbits about the Sun. This was revised by Johannes Kepler, yielding an elliptic orbit for Mars that more accurately fitted the observational data. The first telescopic observation of Mars was by Galileo Galilei in 1610. Within a century, astronomers discovered distinct albedo features on the planet, including the dark patch Syrtis Major Planum and polar ice caps. They were able to determine the planet's rotation period and axial tilt. These observations were primarily made during the time intervals when the planet was located in opposition to the Sun, at which points Mars made its closest approaches to the Earth. Better telescopes developed early in the 19th century allowed permanent Martian albedo features to be mapped in detail. The first crude map of Mars was published in 1840, followed by more refined maps from 1877 onward. When astronomers mistakenly thought they had detected the spectroscopic signature of water in the Martian atmosphere, the idea of life on Mars became popularized among the public. Percival Lowell believed he could see a network of artificial canals on Mars. These linear features later proved to be an optical illusion, and the atmosphere was found to be too thin to support an Earth-like environment. Yellow clouds on Mars have been observed since the 1870s, which Eugène M. Antoniadi suggested were windblown sand or dust. During the 1920s, the range of Martian surface temperature was measured; it ranged from . The planetary atmosphere was found to be arid with only trace amounts of oxygen and water. In 1947, Gerard Kuiper showed that the thin Martian atmosphere contained extensive carbon dioxide, roughly double the quantity found in Earth's atmosphere. The first standard nomenclature for Mars albedo features was adopted in 1960 by the International Astronomical Union. Since the 1960s, multiple robotic spacecraft have been sent to explore Mars from orbit and the surface. The planet has remained under observation by ground and space-based instruments across a broad range of the electromagnetic spectrum. The discovery of meteorites on Earth that originated on Mars has allowed laboratory examination of the chemical conditions on the planet. Earliest records The existence of Mars as a wandering object in the night sky was recorded by ancient Egyptian astronomers. By the 2nd millennium BCE they were familiar with the apparent retrograde motion of the planet, in which it appears to move in the opposite direction across the sky from its normal progression. Mars was portrayed on the ceiling of the tomb of Seti I, on the Ramesseum ceiling, and in the Senenmut star map.
The last is the oldest known star map, being dated to 1534 BCE based on the position of the planets. By the period of the Neo-Babylonian Empire, Babylonian astronomers were making systematic observations of the positions and behavior of the planets. For Mars, they knew, for example, that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. The Babylonians invented arithmetic methods for making minor corrections to the predicted positions of the planets. This technique was primarily derived from timing measurements—such as when Mars rose above the horizon, rather than from the less accurately known position of the planet on the celestial sphere. Chinese records of the appearances and motions of Mars appear before the founding of the Zhou dynasty (1045 BCE), and by the Qin dynasty (221 BCE) astronomers maintained close records of planetary conjunctions, including those of Mars. Occultations of Mars by Venus were noted in 368, 375, and 405 CE. The period and motion of the planet's orbit was known in detail during the Tang dynasty (618 CE). The early astronomy of ancient Greece was influenced by knowledge transmitted from the Mesopotamian culture. Thus, the Babylonians associated Mars with Nergal, their god of war and pestilence, and the Greeks connected the planet with their god of war, Ares. During this period, the motions of the planets were of little interest to the Greeks; Hesiod's Works and Days (c. 650 BCE) makes no mention of the planets. Orbital models The Greeks used the word planēton to refer to the seven celestial bodies that moved with respect to the background stars and they held a geocentric view that these bodies moved about the Earth. In his work, The Republic (X.616E–617B), the Greek philosopher Plato provided the oldest known statement defining the order of the planets in Greek astronomical tradition. His list, in order of the nearest to the most distant from the Earth, was as follows: the Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, and the fixed stars. In his dialogue Timaeus, Plato proposed that the progression of these objects across the skies depended on their distance, so that the most distant object moved the slowest. Aristotle, a student of Plato, observed an occultation of Mars by the Moon on 4 May 357 BCE. From this he concluded that Mars must lie further from the Earth than the Moon. He noted that other such occultations of stars and planets had been observed by the Egyptians and Babylonians. Aristotle used this observational evidence to support the Greek sequencing of the planets. His work De Caelo presented a model of the universe in which the Sun, Moon, and planets circle about the Earth at fixed distances. A more sophisticated version of the geocentric model was developed by the Greek astronomer Hipparchus when he proposed that Mars moved along a circular track called the epicycle that, in turn, orbited about the Earth along a larger circle called the deferent. In Roman Egypt during the 2nd century CE, Claudius Ptolemaeus (Ptolemy) attempted to address the problem of the orbital motion of Mars. Observations of Mars had shown that the planet appeared to move 40% faster on one side of its orbit than the other, in conflict with the Aristotelian model of uniform motion. Ptolemy modified the model of planetary motion by adding a point offset from the center of the planet's circular orbit about which the planet moves at a uniform rate of rotation. 
He proposed that the order of the planets, by increasing distance, was: the Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn, and the fixed stars. Ptolemy's model and his collective work on astronomy was presented in the multi-volume collection Almagest, which became the authoritative treatise on Western astronomy for the next fourteen centuries. In 1543, Nicolaus Copernicus published a heliocentric model in his work De revolutionibus orbium coelestium. This approach placed the Earth in an orbit around the Sun between the circular orbits of Venus and Mars. His model successfully explained why the planets Mars, Jupiter and Saturn were on the opposite side of the sky from the Sun whenever they were in the middle of their retrograde motions. Copernicus was able to sort the planets into their correct heliocentric order based solely on the period of their orbits about the Sun. His theory gradually gained acceptance among European astronomers, particularly after the publication of the Prutenic Tables by the German astronomer Erasmus Reinhold in 1551, which were computed using the Copernican model. On October 13, 1590, the German astronomer Michael Maestlin observed an occultation of Mars by Venus. One of his students, Johannes Kepler, quickly became an adherent to the Copernican system. After the completion of his education, Kepler became an assistant to the Danish nobleman and astronomer, Tycho Brahe. With access granted to Tycho's detailed observations of Mars, Kepler was set to work mathematically assembling a replacement to the Prutenic Tables. After repeatedly failing to fit the motion of Mars into a circular orbit as required under Copernicanism, he succeeded in matching Tycho's observations by assuming the orbit was an ellipse and the Sun was located at one of the foci. His model became the basis for Kepler's laws of planetary motion, which were published in his multi-volume work Epitome Astronomiae Copernicanae (Epitome of Copernican Astronomy) between 1615 and 1621. Early telescope observations At its closest approach, the angular size of Mars is 25 arcseconds (a unit of degree); this is much too small for the naked eye to resolve. Hence, prior to the invention of the telescope, nothing was known about the planet besides its red hue and its position on the sky. The Italian scientist Galileo Galilei was the first person known to use a telescope to make astronomical observations. His records indicate that he began observing Mars through a telescope in September 1610. This instrument was too primitive to display any surface detail on the planet, so he set the goal of seeing if Mars exhibited phases of partial darkness similar to Venus or the Moon. Although uncertain of his success, by December he did note that Mars had shrunk in angular size. Polish astronomer Johannes Hevelius succeeded in observing a phase of Mars in 1645. In 1644, the Italian Jesuit Daniello Bartoli reported seeing two darker patches on Mars. During the oppositions of 1651, 1653 and 1655, when the planet made its closest approaches to the Earth, the Italian astronomer Giovanni Battista Riccioli and his student Francesco Maria Grimaldi noted patches of differing reflectivity on Mars. The first person to draw a map of Mars that displayed terrain features was the Dutch astronomer Christiaan Huygens. On November 28, 1659, he made an illustration of Mars that showed the distinct dark region now known as Syrtis Major Planum, and possibly one of the polar ice caps. 
The same year, he succeeded in measuring the rotation period of the planet, giving it as approximately 24 hours. He made a rough estimate of the diameter of Mars, guessing that it is about 60% of the size of the Earth, which compares well with the modern value of 53%. Perhaps the first definitive mention of Mars's southern polar ice cap was by the Italian astronomer Giovanni Domenico Cassini, in 1666. That same year, he used observations of the surface markings on Mars to determine a rotation period of 24h 40m. This differs from the currently-accepted value by less than three minutes. In 1672, Huygens noticed a fuzzy white cap at the north pole. After Cassini became the first director of the Paris Observatory in 1671, he tackled the problem of the physical scale of the Solar System. The relative size of the planetary orbits was known from Kepler's third law, so what was needed was the actual size of one of the planet's orbits. For this purpose, the position of Mars was measured against the background stars from different points on the Earth, thereby measuring the diurnal parallax of the planet. During this year, the planet was moving past the point along its orbit where it was nearest to the Sun (a perihelic opposition), which made this a particularly close approach to the Earth. Cassini and Jean Picard determined the position of Mars from Paris, while the French astronomer Jean Richer made measurements from Cayenne, South America. Although these observations were hampered by the quality of the instruments, the parallax computed by Cassini came within 10% of the correct value. The English astronomer John Flamsteed made comparable measurement attempts and had similar results. In 1704, Italian astronomer Jacques Philippe Maraldi "made a systematic study of the southern cap and observed that it underwent" variations as the planet rotated. This indicated that the cap was not centered on the pole. He observed that the size of the cap varied over time. The German-born British astronomer Sir William Herschel began making observations of the planet Mars in 1777, particularly of the planet's polar caps. In 1781, he noted that the south cap appeared "extremely large", which he ascribed to that pole being in darkness for the past twelve months. By 1784, the southern cap appeared much smaller, thereby suggesting that the caps vary with the planet's seasons and thus were made of ice. In 1781, he estimated the rotation period of Mars as 24h 39m 21.67s and measured the axial tilt of the planet's poles to the orbital plane as 28.5°. He noted that Mars had a "considerable but moderate atmosphere, so that its inhabitants probably enjoy a situation in many respects similar to ours". Between 1796 and 1809, the French astronomer Honoré Flaugergues noticed obscurations of Mars, suggesting "ochre-colored veils" covered the surface. This may be the earliest report of yellow clouds or storms on Mars. Geographical period At the start of the 19th century, improvements in the size and quality of telescope optics proved a significant advance in observation capability. Most notable among these enhancements was the two-component achromatic lens of the German optician Joseph von Fraunhofer that essentially eliminated coma—an optical effect that can distort the outer edge of the image. By 1812, Fraunhofer had succeeded in creating an achromatic objective lens in diameter. The size of this primary lens is the main factor in determining the light gathering ability and resolution of a refracting telescope. 
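As a rough illustration of how aperture relates to resolving power and light grasp, the following sketch applies the standard Rayleigh criterion; it is not part of the original article, and the 19 cm aperture, 550 nm wavelength, and 7 mm pupil diameter are assumed example values.

```python
import math

def rayleigh_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Diffraction-limited angular resolution (Rayleigh criterion) in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

aperture_m = 0.19   # assumed example objective diameter (metres)
pupil_m = 0.007     # rough dark-adapted human pupil diameter (metres)

print(f"Resolution limit: {rayleigh_limit_arcsec(aperture_m):.2f} arcsec")
print(f"Light grasp versus the naked eye: {(aperture_m / pupil_m) ** 2:.0f}x")
# Mars spans roughly 25 arcsec at a close approach, so even a modest refractor
# of this size can, in principle, resolve detail well within the planet's disk.
```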
During the opposition of Mars in 1830, the German astronomers Johann Heinrich Mädler and Wilhelm Beer used a Fraunhofer refracting telescope to launch an extensive study of the planet. They chose a feature located 8° south of the equator as their point of reference. (This was later named the Sinus Meridiani, and it would become the zero meridian of Mars.) During their observations, they established that most of Mars' surface features were permanent, and more precisely determined the planet's rotation period. In 1840, Mädler combined ten years of observations to draw the first map of Mars. Rather than giving names to the various markings, Beer and Mädler simply designated them with letters; thus Meridian Bay (Sinus Meridiani) was feature "a". Working at the Vatican Observatory during the opposition of Mars in 1858, Italian astronomer Angelo Secchi noticed a large blue triangular feature, which he named the "Blue Scorpion". This same seasonal cloud-like formation was seen by English astronomer J. Norman Lockyer in 1862, and it has been viewed by other observers. During the 1862 opposition, Dutch astronomer Frederik Kaiser produced drawings of Mars. By comparing his illustrations to those of Huygens and the English natural philosopher Robert Hooke, he was able to further refine the rotation period of Mars. His value of 24h 37m 22.6s is accurate to within a tenth of a second. Father Secchi produced some of the first color illustrations of Mars in 1863. He used the names of famous explorers for the distinct features. In 1869, he observed two dark linear features on the surface that he referred to as canali, which is Italian for 'channels' or 'grooves'. In 1867, English astronomer Richard A. Proctor created a more detailed map of Mars based on the 1864 drawings of English astronomer William R. Dawes. Proctor named the various lighter or darker features after astronomers, past and present, who had contributed to the observations of Mars. During the same decade, comparable maps and nomenclature were produced by the French astronomer Camille Flammarion and the English astronomer Nathan Green. At the University of Leipzig in 1862–64, German astronomer Johann K. F. Zöllner developed a custom photometer to measure the reflectivity of the Moon, planets and bright stars. For Mars, he derived an albedo of 0.27. Between 1877 and 1893, German astronomers Gustav Müller and Paul Kempf observed Mars using Zöllner's photometer. They found a small phase coefficient—the variation in reflectivity with angle—indicating that the surface of Mars is smooth and without large irregularities. In 1867, French astronomer Pierre Janssen and British astronomer William Huggins used spectroscopes to examine the atmosphere of Mars. Both compared the optical spectrum of Mars to that of the Moon. As the spectrum of the latter did not display absorption lines of water, they believed they had detected the presence of water vapor in the atmosphere of Mars. This result was confirmed by German astronomer Herman C. Vogel in 1872 and English astronomer Edward W. Maunder in 1875, but would later come into question. In 1882, an article appeared in Scientific American discussing snow on the polar regions of Mars and speculation on the probability of ocean currents. A particularly favorable perihelic opposition occurred in 1877. The English astronomer David Gill used this opportunity to measure the diurnal parallax of Mars from Ascension Island, which led to a parallax estimate of . 
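The logic behind these parallax campaigns can be sketched numerically: Kepler's third law fixes the Mars orbit in astronomical units, so a measured Earth-to-Mars distance in kilometres calibrates the astronomical unit itself. The sketch below is illustrative only; the 23-arcsecond parallax and the quoted eccentricity are assumed example values, not the figures actually obtained by Cassini or Gill.

```python
import math

EARTH_RADIUS_KM = 6378.0   # baseline scale for a diurnal (horizontal) parallax
MARS_PERIOD_YR = 1.881     # sidereal orbital period of Mars in years
MARS_ECCENTRICITY = 0.093  # assumed value for Mars's orbital eccentricity

# Kepler's third law with periods in years and distances in AU: a**3 = P**2.
a_mars_au = MARS_PERIOD_YR ** (2.0 / 3.0)

# Earth-Mars separation at a perihelic opposition, treating Earth's orbit
# as a circle of radius 1 AU.
closest_approach_au = a_mars_au * (1.0 - MARS_ECCENTRICITY) - 1.0

# A hypothetical measured horizontal parallax of Mars, in arcseconds.
parallax_arcsec = 23.0
distance_km = EARTH_RADIUS_KM / math.tan(math.radians(parallax_arcsec / 3600.0))

astronomical_unit_km = distance_km / closest_approach_au
print(f"Mars semi-major axis: {a_mars_au:.3f} AU")
print(f"Closest approach:     {closest_approach_au:.3f} AU")
print(f"Implied length of 1 AU: {astronomical_unit_km:,.0f} km")
```

With these example numbers the implied astronomical unit comes out near 150 million kilometres; because the distance scales directly with the measured angle, an error of roughly 10% in the parallax, as with Cassini, propagates into a comparable error in the scale of the Solar System.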
Using this parallax result, Gill was able to more accurately determine the distance of the Earth from the Sun, based upon the relative size of the orbits of Mars and the Earth. He noted that the edge of the disk of Mars appeared fuzzy because of its atmosphere, which limited the precision he could obtain for the planet's position. In August 1877, the American astronomer Asaph Hall discovered the two moons of Mars using a telescope at the U.S. Naval Observatory. The names of the two satellites, Phobos and Deimos, were chosen by Hall based upon a suggestion by Henry Madan, a science instructor at Eton College in England. Martian canals During the 1877 opposition, Italian astronomer Giovanni Schiaparelli used a telescope to help produce the first detailed map of Mars. His maps notably contained features he called canali, which were later shown to be an optical illusion. These canali were supposedly long straight lines on the surface of Mars to which he gave names of famous rivers on Earth. His term canali was popularly mistranslated in English as canals. In 1886, the English astronomer William F. Denning observed that these linear features were irregular in nature and showed concentrations and interruptions. By 1895, English astronomer Edward Maunder became convinced that the linear features were merely the summation of many smaller details. In his 1892 work La planète Mars et ses conditions d'habitabilité, Camille Flammarion wrote about how these channels resembled man-made canals, which an intelligent race could use to redistribute water across a dying Martian world. He advocated for the existence of such inhabitants, and suggested they might be more advanced than humans. Influenced by the observations of Schiaparelli, Percival Lowell founded an observatory with telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following less favorable oppositions. He published books on Mars and life on the planet, which had a great influence on the public. The canali were found by other astronomers, such as Henri Joseph Perrotin and Louis Thollon, using a refractor at the Nice Observatory in France, one of the largest telescopes of that time. Beginning in 1901, American astronomer A. E. Douglass attempted to photograph the canal features of Mars. These efforts appeared to succeed when American astronomer Carl O. Lampland published photographs of the supposed canals in 1905. Although these results were widely accepted, they became contested by Greek astronomer Eugène M. Antoniadi, English naturalist Alfred Russel Wallace and others as merely imagined features. As bigger telescopes were used, fewer long, straight canali were observed. During an observation in 1909 by Flammarion with an telescope, irregular patterns were observed, but no canali were seen. Starting in 1909, Eugène Antoniadi was able to help disprove the theory of Martian canali by viewing through the great refractor of Meudon, the Grande Lunette (83 cm lens). Three observational factors came together: viewing through the third largest refractor in the world, Mars being at opposition, and exceptionally clear weather. The canali dissolved before Antoniadi's eyes into various "spots and blotches" on the surface of Mars. Refining planetary parameters Surface obscuration caused by yellow clouds had been noted in the 1870s when they were observed by Schiaparelli. Evidence for such clouds was observed during the oppositions of 1892 and 1907.
In 1909, Antoniadi noted that the presence of yellow clouds was associated with the obscuration of albedo features. He discovered that Mars appeared more yellow during oppositions when the planet was closest to the Sun and was receiving more energy. He suggested windblown sand or dust as the cause of the clouds. In 1894, American astronomer William W. Campbell found that the spectrum of Mars was identical to the spectrum of the Moon, throwing doubt on the burgeoning theory that the atmosphere of Mars is similar to that of the Earth. Previous detections of water in the atmosphere of Mars were explained by unfavorable conditions, and Campbell determined that the water signature came entirely from the Earth's atmosphere. Although he agreed that the ice caps did indicate there was water in the atmosphere, he did not believe the caps were sufficiently large to allow the water vapor to be detected. At the time, Campbell's results were considered controversial and were criticized by members of the astronomical community, but they were confirmed by American astronomer Walter S. Adams in 1925. Baltic German astronomer Hermann Struve used the observed changes in the orbits of the Martian moons to determine the gravitational influence of the planet's oblate shape. In 1895, he used this data to estimate that the equatorial diameter was 1/190 larger than the polar diameter. In 1911, he refined the value to 1/192. This result was confirmed by American meteorologist Edgar W. Woolard in 1944. Using a vacuum thermocouple attached to the Hooker Telescope at Mount Wilson Observatory, in 1924 the American astronomers Seth Barnes Nicholson and Edison Pettit were able to measure the thermal energy being radiated by the surface of Mars. They determined that the temperature ranged from at the pole up to at the midpoint of the disk (corresponding to the equator). Beginning in the same year, radiated energy measurements of Mars were made by American physicist William Coblentz and American astronomer Carl Otto Lampland. The results showed that the nighttime temperature on Mars dropped to , indicating an "enormous diurnal fluctuation" in temperatures. The temperature of Martian clouds was measured as . In 1926, by measuring spectral lines that were redshifted by the orbital motions of Mars and Earth, American astronomer Walter Sydney Adams was able to directly measure the amount of oxygen and water vapor in the atmosphere of Mars. He determined that "extreme desert conditions" were prevalent on Mars. In 1934, Adams and American astronomer Theodore Dunham Jr. found that the amount of oxygen in the atmosphere of Mars was less than one percent of the amount over a comparable area on Earth. In 1927, Dutch graduate student Cyprianus Annius van den Bosch made a determination of the mass of Mars based upon the motions of the Martian moons, with an accuracy of 0.2%. This result was confirmed by the Dutch astronomer Willem de Sitter and published posthumously in 1938. Using observations of the near Earth asteroid Eros from 1926 to 1945, German-American astronomer Eugene K. Rabe was able to make an independent estimate of the mass of Mars, as well as the other planets in the inner Solar System, from the planet's gravitational perturbations of the asteroid. His estimated margin of error was 0.05%, but subsequent checks suggested his result was poorly determined compared to other methods. During the 1920s, French astronomer Bernard Lyot used a polarimeter to study the surface properties of the Moon and planets.
In 1929, he noted that the polarized light emitted from the Martian surface is very similar to that radiated from the Moon, although he speculated that his observations could be explained by frost and possibly vegetation. Based on the amount of sunlight scattered by the Martian atmosphere, he set an upper limit of 1/15 the thickness of the Earth's atmosphere. This restricted the surface pressure to no greater than . Using infrared spectrometry, in 1947 the Dutch-American astronomer Gerard Kuiper detected carbon dioxide in the Martian atmosphere. He was able to estimate that the amount of carbon dioxide over a given area of the surface is double that on the Earth. However, because he overestimated the surface pressure on Mars, Kuiper concluded erroneously that the ice caps could not be composed of frozen carbon dioxide. In 1948, American meteorologist Seymour L. Hess determined that the formation of the thin Martian clouds would only require of water precipitation and a vapor pressure of . The first standard nomenclature for Martian albedo features was introduced by the International Astronomical Union (IAU) when in 1960 they adopted 128 names from the 1929 map of Antoniadi named La Planète Mars. The Working Group for Planetary System Nomenclature (WGPSN) was established by the IAU in 1973 to standardize the naming scheme for Mars and other bodies. Remote sensing The International Planetary Patrol Program was formed in 1969 as a consortium to continually monitor planetary changes. This worldwide group focused on observing dust storms on Mars. Their images allow Martian seasonal patterns to be studied globally, and they showed that most Martian dust storms occur when the planet is closest to the Sun. Since the 1960s, robotic spacecraft have been sent to explore Mars from orbit and the surface in extensive detail. In addition, remote sensing of Mars from Earth by ground-based and orbiting telescopes has continued across much of the electromagnetic spectrum. These include infrared observations to determine the composition of the surface, ultraviolet and submillimeter observation of the atmospheric composition, and radio measurements of wind velocities. The Hubble Space Telescope (HST) has been used to perform systematic studies of Mars and has taken the highest resolution images of Mars ever captured from Earth. This telescope can produce useful images of the planet when it is at an angular distance of at least 50° from the Sun. The HST can take images of a hemisphere, which yields views of entire weather systems. Earth-based telescopes equipped with charge-coupled devices can produce useful images of Mars, allowing for regular monitoring of the planet's weather during oppositions. X-ray emission from Mars was first observed by astronomers in 2001 using the Chandra X-ray Observatory, and in 2003 it was shown to have two components. The first component is caused by X-rays from the Sun scattering off the upper Martian atmosphere; the second comes from interactions between ions that result in an exchange of charges. The emission from the latter source has been observed out to eight times the radius of Mars by the XMM-Newton orbiting observatory. In 1983, the analysis of the shergottite, nakhlite, and chassignite (SNC) group of meteorites showed that they may have originated on Mars. The Allan Hills 84001 meteorite, discovered in Antarctica in 1984, is believed to have originated on Mars but it has an entirely different composition than the SNC group. 
In 1996, it was announced that this meteorite might contain evidence for microscopic fossils of Martian bacteria. However, this finding remains controversial. Chemical analysis of the Martian meteorites found on Earth suggests that the ambient near-surface temperature of Mars has most likely been below the freezing point of water (0 °C) for much of the last four billion years. Observations See also Exploration of Mars Mars in history Mars lander References External links Mars Mars
History of Mars observation
Astronomy
5,669
66,178,339
https://en.wikipedia.org/wiki/Honoured%20Doctor%20of%20the%20People
Honoured Doctor of the People () was the highest honorary title awarded to physicians in East Germany. It was given in the form of a medal. The title was established on 31 March 1949 and awarded every year on 11 December, the birthday of Robert Koch. Sources Bundesministerium für innerdeutsche Beziehungen (Hrsg.): DDR-Handbuch, Auszeichnungen, Verlag Wissenschaft und Politik, 1985, S. 26 und 29, . Andreas Herbst, Winfried Ranke, Jürgen R. Winkler: So funktionierte die DDR, Lexikon der Organisationen und Institutionen; Gesundheitswesen, Rowohlt Taschenbuch, 1994, . Health care in East Germany Honorary titles Medicine awards Awards established in 1949 1949 establishments in East Germany Orders, decorations, and medals of East Germany
Honoured Doctor of the People
Technology
182
2,614,426
https://en.wikipedia.org/wiki/Hypophosphorous%20acid
Hypophosphorous acid (HPA), or phosphinic acid, is a phosphorus oxyacid and a powerful reducing agent with molecular formula H3PO2. It is a colorless low-melting compound, which is soluble in water, dioxane and alcohols. The formula for this acid is generally written H3PO2, but a more descriptive presentation is HOP(O)H2, which highlights its monoprotic character. Salts derived from this acid are called hypophosphites. HOP(O)H2 exists in equilibrium with the minor tautomer HP(OH)2. Sometimes the minor tautomer is called hypophosphorous acid and the major tautomer is called phosphinic acid. Preparation and availability Hypophosphorous acid was first prepared in 1816 by the French chemist Pierre Louis Dulong (1785–1838). The acid is prepared industrially via a two-step process: Firstly, elemental white phosphorus reacts with alkali and alkaline earth hydroxides to give an aqueous solution of hypophosphites: P4 + 4 OH− + 4 H2O → 4 H2PO2− + 2 H2 Any phosphites produced in this step can be selectively precipitated out by treatment with calcium salts. The purified material is then treated with a strong, non-oxidizing acid (often sulfuric acid) to give the free hypophosphorous acid: H2PO2− + H+ → H3PO2 HPA is usually supplied as a 50% aqueous solution. Anhydrous acid cannot be obtained by simple evaporation of the water, as the acid readily oxidises to phosphorous acid and phosphoric acid and also disproportionates to phosphorous acid and phosphine. Pure anhydrous hypophosphorous acid can be formed by the continuous extraction of aqueous solutions with diethyl ether. Properties The molecule displays P(═O)H to P–OH tautomerism similar to that of phosphorous acid; the P(═O) form is strongly favoured. HPA is usually supplied as a 50% aqueous solution and heating at low temperatures (up to about 90 °C) prompts it to react with water to form phosphorous acid and hydrogen gas. H3PO2 + H2O → H3PO3 + H2 Heating above 110 °C causes hypophosphorous acid to undergo disproportionation to give phosphorous acid and phosphine. 3 H3PO2 → 2 H3PO3 + PH3 Reactions Inorganic Hypophosphorous acid can reduce chromium(III) oxide to chromium(II) oxide: H3PO2 + 2 Cr2O3 → 4 CrO + H3PO4 Inorganic derivatives Most metal-hypophosphite complexes are unstable, owing to the tendency of hypophosphites to reduce metal cations back into the bulk metal. Some examples have been characterised, including the important nickel salt [Ni(H2O)6](H2PO2)2. DEA List I chemical status Because hypophosphorous acid can reduce elemental iodine to form hydroiodic acid, which is a reagent effective for reducing ephedrine or pseudoephedrine to methamphetamine, the United States Drug Enforcement Administration designated hypophosphorous acid (and its salts) as a List I precursor chemical effective November 16, 2001. Accordingly, handlers of hypophosphorous acid or its salts in the United States are subject to stringent regulatory controls including registration, recordkeeping, reporting, and import/export requirements pursuant to the Controlled Substances Act and 21 CFR §§ 1309 and 1310. Organic In organic chemistry, H3PO2 can be used for the reduction of arenediazonium salts, converting to Ar–H. When diazotized in a concentrated solution of hypophosphorous acid, an amine substituent can be removed from arenes. Owing to its ability to function as a mild reducing agent and oxygen scavenger it is sometimes used as an additive in Fischer esterification reactions, where it prevents the formation of colored impurities. 
It is used to prepare phosphinic acid derivatives. Applications Hypophosphorous acid (and its salts) are used to reduce metal salts back into bulk metals. It is effective for various transition metal ions (i.e. those of Co, Cu, Ag, Mn, Pt) but is most commonly used to reduce nickel. This forms the basis of electroless nickel plating (Ni–P), which is the single largest industrial application of hypophosphites. For this application it is principally used as a salt (sodium hypophosphite). Sources ChemicalLand21 Listing References Oxoacids Phosphorus oxoacids Reagents for organic chemistry Reducing agents
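As an illustration of the electroless nickel plating chemistry mentioned above, one commonly quoted simplified overall equation is sketched below; the real bath chemistry is more complex, and this balanced equation is offered only as a plausible summary rather than as the article's own formulation.

```latex
% Simplified overall reaction often quoted for electroless Ni-P deposition:
% hypophosphite is oxidised to phosphite while Ni(II) and some H+ are reduced.
\[
\mathrm{Ni^{2+} + 2\,H_2PO_2^- + 2\,H_2O \;\longrightarrow\; Ni + 2\,H_2PO_3^- + H_2 + 2\,H^+}
\]
% Electron bookkeeping: each P goes from +1 to +3 (4 e- released per equation),
% of which 2 e- reduce Ni(II) to Ni(0) and 2 e- reduce 2 H+ to H2.
```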
Hypophosphorous acid
Chemistry
1,052
56,666,301
https://en.wikipedia.org/wiki/Internal%20Security%20Assessor
Internal Security Assessor (ISA) is a designation given by the PCI Security Standards Council to eligible internal security audit professionals working for a qualifying organization. The intent of this qualification is for these individuals to receive PCI DSS training so that their qualifying organization has a better understanding of PCI DSS and how it impacts their company. Becoming an ISA can improve the relationship with Qualified Security Assessors and support the consistent and proper application of PCI DSS measures and controls within the organization. The PCI SSC's public website can be used to verify ISA employees. An ISA is also able to perform self-assessments for their organization as long as they are not a Level 1 merchant. ISA training is only available for merchants and processors. Organizations are required to have an internal audit department and cannot be affiliated with a Qualified Security Assessor or Automated Scanning Vendor (ASV) company in any way. Certificate Renewal The ISA certification must be renewed annually. The ISA certification is company-specific. If the certified individual leaves the company that sponsored them, the certification is no longer valid. References External links PCI Security Standards Council Computer security organizations Information privacy all Standards
Internal Security Assessor
Engineering
232
1,598,279
https://en.wikipedia.org/wiki/Formox%20process
The Formox process produces formaldehyde. Formox is a registered trademark owned by Johnson Matthey. The process was originally invented jointly by Swedish chemical company Perstorp and Reichhold Chemicals. Industrially, formaldehyde is produced by catalytic oxidation of methanol. The most commonly used catalysts are silver metal or a mixture of an iron oxide with molybdenum and/or vanadium. In the more recently favoured Formox process, which uses iron oxide with molybdenum and/or vanadium, methanol and oxygen react at 300–400 °C to produce formaldehyde according to the chemical equation: CH3OH + ½ O2 → H2CO + H2O. The silver-based catalyst is usually operated at a higher temperature, about 650 °C. Over this catalyst, two chemical reactions simultaneously produce formaldehyde: the one shown above, and the dehydrogenation reaction: CH3OH → H2CO + H2 Further oxidation of the formaldehyde product during its production gives formic acid, which is found in the formaldehyde solution at parts-per-million levels. References Chemical processes
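For readers who want to see the 1:1 stoichiometry of the oxidation equation above put to use, the following back-of-the-envelope sketch converts a methanol feed into an expected formaldehyde mass. The conversion and selectivity figures are illustrative placeholders, not published Formox plant data.

```python
# Back-of-the-envelope stoichiometry for CH3OH + 1/2 O2 -> H2CO + H2O.
# One mole of methanol can yield at most one mole of formaldehyde.

M_CH3OH = 32.04   # g/mol, methanol
M_H2CO  = 30.03   # g/mol, formaldehyde

def formaldehyde_yield(methanol_kg, conversion=0.99, selectivity=0.93):
    """Mass of formaldehyde (kg) expected from a given methanol feed (kg)."""
    mol_ch3oh = methanol_kg * 1000.0 / M_CH3OH          # feed in moles
    mol_h2co = mol_ch3oh * conversion * selectivity      # assumed plant performance
    return mol_h2co * M_H2CO / 1000.0

if __name__ == "__main__":
    # Roughly 863 kg of HCHO per 1000 kg of methanol under these assumptions.
    print(f"{formaldehyde_yield(1000.0):.0f} kg HCHO per 1000 kg methanol")
```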
Formox process
Chemistry
238
29,107,764
https://en.wikipedia.org/wiki/Polyhedrin
For the three-dimensional shape, see Polyhedron. Polyhedrins are a type of viral protein that form occlusion bodies (also called polyhedra), large structures that protect the virus particles from the outside environment for extended periods until they are ingested by susceptible insects. They occur in various viruses including nuclear polyhedrosis viruses (NPVs) and granuloviruses (GVs). GVs contain one virus particle per occlusion, whereas NPVs package about 5–15 viruses in each occlusion. References Protein families
Polyhedrin
Biology
112
12,788,181
https://en.wikipedia.org/wiki/7JP4
The 7JP4 is an early black and white or monochrome cathode-ray tube (also called picture tube and kinescope). It was a popular type used in late 1940s low cost and small table model televisions. The 7JP4 has a 7" diameter round screen which was often partially masked. Unlike later electromagnetically deflected TV tubes, the 7JP4 is electrostatically deflected like an oscilloscope tube. Development The 7JP4 is part of the 7JPx series of circular face electrostatic cathode ray tubes (CRT). It was originally developed around 1944 for radar applications, as a display device for A-scope radar displays. After World War II the CRT was adapted for television applications. There are three versions: the 7JP4 for television (the P4 phosphor glows white and has medium persistence); the 7JP1 for oscilloscope applications (the P1 phosphor has a green trace and short persistence); and the 7JP7 for radar applications (the P7 phosphor has a blue-white trace with a long persistence). This CRT was produced by multiple manufacturers (RCA, General Electric, Sylvania Electric Products and Tung-Sol). Except for the type of phosphor used, all three are identical in operation and connection. The screen diagonal is 7 inches (17.8 cm) for 7JP1 and 7JP4, but only 5.5 inches (14 cm) for the 7JP7. 7JP4 electrical characteristics Some general electrical characteristics are shown below. Second anode + grid 2 (pin 9) and plates (pins 10, 11 and 7, 8) have a maximum value limit of 6000 volts dc. Internal arcing can be expected when this voltage is exceeded. Actual values are typically in the 4000 to 5500 volts range, and some sets were operated as low as 2000 volts dc. Grid 1 (pin 3) can receive either a negative bias or a positive bias. Pin 2 is the brightness voltage; pin 3 is the video signal on top of DC, and pin 2 is a DC level varying with the brightness control. Some sets are backwards and have pin 3 on ground and video on pin 2 along with brightness adjustment. Early television in the United States From 1946 to 1951 the 7JP4 was a common CRT (picture tube, or kinescope) used in lower priced televisions sold in the United States. These televisions were popular as portable carry-around and small table-top sets. These smaller sets were direct-view electrostatic deflection designs. This required an extremely high voltage to produce an image across the full display screen. In 1946, RCA influenced manufacturers (with royalty-free circuit designs) to move toward electromagnetic deflection type televisions. Electromagnetic deflection uses varying magnetic fields to produce a full screen image. Horizontal and vertical electromagnets are placed at the picture tube neck, called the "yoke". This method allowed the image to be viewed on larger screens. The first heavily mass-produced large picture tube to use this newer method of deflection was RCA's 10BP4, introduced in 1946. Soon afterward, electrostatic picture tubes and the television circuit designs built around them were completely replaced. For a detailed look at US electrostatic TV design and the 7JP4 kinescope from an Australian perspective (by cablehack), see the external link below on the Wards Airline TV (Sentinel model 400TV). Restoration of dead technology Interest in the 7JPx series of CRTs comes from the restoration of dead technology. Dead technology represents products that are no longer mass-produced or seriously used; the technology used in these designs has reached its highest level or has been replaced by a better technology. 
The restoration of early US-made television sets has spurred interest in the 7JP4. Since this particular model is no longer made, the only available ones are from old stock or from television sets that cannot be restored. This makes the price of a working CRT very expensive. Because it is electrostatically deflected and was obsolete by the mid-1950s, most CRT testers will not test the 7JP4 and thus it is best tested in a working TV set. The Precision CR30, Sencore CR-70 and Jackson 707 are some of the CRT testers that are capable of testing the 7JP4, 3KP4 and other electrostatic deflection CRTs. Since the availability of these CRT testers is very limited, the prices for such testers are steep, so many restorers test these CRTs on a working TV set that used electrostatic CRTs. Much picture-tube restoring equipment is available for magnetic-deflection tubes, but there is no way to restore electrostatic tubes. The biggest problem with many picture tubes is the loss of emission or electron production due to a contaminated or damaged cathode surrounding the heater. The 7JP4 was used in the following sets (incomplete list): Motorola VT-71 Motorola VT-73 Hallicrafters 504, 505, T-54 Sentinel TV-400 Sentinel TV-405 National TV-7W Philco 50-T701 & 50-T702 Tele-Tone TV-149 These and other early television sets can be found in the "Collectors Guide to Vintage Televisions Identification and Values", by Bryan Durbal and Glenn Bubenheimer () published by Collector Books, Paducah, KY, USA. References External links Datasheet Phil's Old Radios TV gallery Full discussion of electrostatic TV design from an Australian perspective go to Sentinel 400TV Vacuum tube displays Television technology
7JP4
Technology
1,192
67,281,985
https://en.wikipedia.org/wiki/Transaction-Based%20Reporting
Transaction-based reporting, or Invoice reporting, sometimes called Continuous Transaction Controls (CTC), is a method of data collection used by governmental bodies to reduce fraud and increase compliance. Invoice reporting helps in overcoming the inefficiencies of post audit systems, where auditors can only check VAT refunds after the fact and mainly need to use data collected by the companies they are auditing. The core of invoice reporting is that all companies within a jurisdiction report their invoices to their tax authority. History Invoice reporting was first implemented in Latin America. In most Latin American countries, transaction-based reporting is coupled with electronic invoicing and therefore the Inter-American Development Bank named the system ‘electronic invoicing of taxes’. However, the key principle is the same as it is defined as ‘electronic invoices that are not only valid for all tax purposes but that are also received in their entirety by the tax authority’. Chile and Mexico were the first, and others soon followed. Over time, transaction-based reporting has proved to be effective in countering VAT fraud. For instance, Brazil saw an increase of US$58 billion in tax revenues, while Chile and Mexico reduced their VAT gap by 50%. Furthermore, Colombia estimates that it can achieve the same result if it implements such an invoice reporting system. Around the same time, China started its Golden Tax Project. Part of this project is the Golden Tax System, which started as a pilot in the 1990s and is soon to be integrated with every company in the country. India is also implementing e-invoicing to increase compliance. Other countries in the region, such as Israel and Jordan, are investigating this possibility as well. Tunisia has been a pioneer in the African region as it introduced mandatory e-invoicing in 2016. In Tunisia, invoices are reported to the responsible authorities, and the system can therefore be considered transaction-based reporting. Furthermore, Egypt is also conducting research into the best model of invoice reporting. The trend and successes of invoice reporting also reached Europe, though not combined with mandatory electronic invoicing. Countries such as Hungary, Spain, and Italy have implemented a variant of invoice reporting in the second half of the 2010s. The successes of these and other systems made France decide to implement an invoice reporting system combined with e-invoicing from July 2024. The news also reached the European Commission as it stated in its Action Plan for Fair and Simple Taxation Supporting the Recovery Strategy, published in 2020, that by 2022/2023 the Commission ‘will present a legislative proposal for modernising VAT reporting obligations’. The EU's Digital Reporting Requirements Directive amendment proposals are due by the end of 2022. Usage In countries where a VAT system exists without mandatory invoice reporting, companies often self-report the amount of VAT which they supposedly received and paid in a certain reporting period. These reporting periods are typically between 1 and 12 months. Because companies are free to fill in what they want on their VAT return, fraudsters can potentially fill in false information. Suppliers might report that they collected 100 euro in VAT, whilst in reality they collected 1000 euro in VAT, which allows them to keep VAT instead of giving it to the tax authority. Buyers might report that they paid 10,000 euro in VAT, which allows them to fraudulently get back money from the tax authority. 
A widely noted type of VAT fraud is Missing Trader Fraud. By using transaction-based reporting systems, discrepancies between the buyer and seller can be identified and reduced. Moreover, even without analysing data, transaction-based reporting systems have a preventative effect because companies are aware that their data is collected. Implementation Most invoice reporting systems are developed by the country implementing the system itself. This results in a wide variety of ways to implement them. – (Near) real-time transaction-based reporting: Transaction-based reporting can be either real-time or near real-time. A near real-time invoice reporting system requires the taxpayer to report all invoices within a certain number of days after the invoice was actually sent to the buyer. An example of such a system can be found in Spain. The system is called Sistema de Información Inmediata (SII) and companies have 4 days to report all invoices after issuance. Real-time invoice reporting requires invoices to be reported at the time of their issuance. The Italian system Sistema di Interscambio (SdI) is an example of such a system. – Electronic invoicing and data exchange: In some countries, transaction-based reporting requirements are coupled with a particular mandatory electronic invoicing standard. This approach was taken, for example, by many Latin American countries. Alternatively, companies can also be given the freedom to use existing electronic invoicing and Electronic Data Interchange solutions to exchange data. While the freedom to choose a data format may lower the administrative burden for some companies, it also raises questions about which invoice data and format can be considered legally valid. – Centralised clearance: Some countries require companies to send the invoice to the tax authority before it is sent to the buyer. Transaction-based reporting systems in Peru and Italy function like this. The advantage is that fraudulent companies can be interrupted in their business activities at a very early stage. However, it could also cause honest businesses to be interrupted when a false-positive fraud alert has been triggered. Furthermore, VAT experts like Aleksandra Bal underline the high compliance costs that a centralised clearance system may generate for businesses. – Data to report: The type of data required by the transaction-based reporting system can vary. In Italy and Spain, the national transaction-based reporting systems require significant amounts of invoice information from businesses. The PEPPOL (Pan-European Public Procurement Online) e-invoicing network suggested the use of only a few data points per invoice. During a working group session of the European Parliament, a design was praised in which only a fingerprint or hash of the original invoice is shared. References Tax investigation Data collection Corporate taxation
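To make the discrepancy-detection idea above concrete, here is a minimal reconciliation sketch. The field names, record layout and tolerance are illustrative inventions for this example and are not taken from any national scheme (SII, SdI or otherwise); real systems operate on far richer invoice data.

```python
# Sketch: cross-check supplier-reported and buyer-reported VAT per invoice and
# flag mismatches or one-sided reports, mirroring the 100-vs-1000 euro example.
from collections import defaultdict

def find_discrepancies(supplier_reports, buyer_reports, tolerance=0.01):
    """Return (invoice_id, supplier_vat, buyer_vat) for every suspicious invoice."""
    reported = defaultdict(dict)
    for r in supplier_reports:
        reported[r["invoice_id"]]["supplier"] = r["vat_amount"]
    for r in buyer_reports:
        reported[r["invoice_id"]]["buyer"] = r["vat_amount"]

    flagged = []
    for invoice_id, sides in reported.items():
        s, b = sides.get("supplier"), sides.get("buyer")
        # Flag if one side never reported the invoice, or the amounts disagree.
        if s is None or b is None or abs(s - b) > tolerance:
            flagged.append((invoice_id, s, b))
    return flagged

if __name__ == "__main__":
    suppliers = [{"invoice_id": "A-1", "vat_amount": 100.0}]
    buyers    = [{"invoice_id": "A-1", "vat_amount": 1000.0}]
    print(find_discrepancies(suppliers, buyers))   # [('A-1', 100.0, 1000.0)]
```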
Transaction-Based Reporting
Technology
1,285
48,415,905
https://en.wikipedia.org/wiki/NGC%202301
NGC 2301 is an open cluster in the constellation Monoceros. It was discovered by William Herschel in 1786. It is visible through 7x50 binoculars and is considered the best open cluster in the constellation for small telescopes. It is located 5° WNW of Delta Monocerotis and 2° SSE of 18 Monocerotis. The brightest star of the cluster is an orange G8 subgiant star of 8.0 magnitude, but it is possible that it is a foreground star. The cluster also contains blue giants. The brightest main sequence star is a B9 star with magnitude 9.1. References External links 2301 Monoceros Open clusters
NGC 2301
Astronomy
141
30,220,726
https://en.wikipedia.org/wiki/Quasisymmetric%20map
In mathematics, a quasisymmetric homeomorphism between metric spaces is a map that generalizes bi-Lipschitz maps. While bi-Lipschitz maps shrink or expand the diameter of a set by no more than a multiplicative factor, quasisymmetric maps satisfy the weaker geometric property that they preserve the relative sizes of sets: if two sets A and B have diameters t and are no more than distance t apart, then the ratio of their sizes changes by no more than a multiplicative constant. These maps are also related to quasiconformal maps, since in many circumstances they are in fact equivalent. Definition Let (X, dX) and (Y, dY) be two metric spaces. A homeomorphism f:X → Y is said to be η-quasisymmetric if there is an increasing function η : [0, ∞) → [0, ∞) such that for any triple x, y, z of distinct points in X, we have dY(f(x), f(y)) / dY(f(x), f(z)) ≤ η(dX(x, y) / dX(x, z)). Basic properties Inverses are quasisymmetric If f : X → Y is an invertible η-quasisymmetric map as above, then its inverse map is η′-quasisymmetric, where η′(t) = 1/η⁻¹(1/t) for t > 0. Quasisymmetric maps preserve relative sizes of sets If A and B are subsets of X and A is a subset of B, then Examples Weakly quasisymmetric maps A map f:X→Y is said to be H-weakly-quasisymmetric for some H > 0 if, for all triples x, y, z of distinct points in X with dX(x, y) ≤ dX(x, z), we have dY(f(x), f(y)) ≤ H dY(f(x), f(z)). Not all weakly quasisymmetric maps are quasisymmetric. However, if X is connected and X and Y are doubling, then all weakly quasisymmetric maps are quasisymmetric. The appeal of this result is that proving weak-quasisymmetry is much easier than proving quasisymmetry directly, and in many natural settings the two notions are equivalent. δ-monotone maps A monotone map f:H → H on a Hilbert space H is δ-monotone if for all x and y in H, ⟨f(x) − f(y), x − y⟩ ≥ δ ‖f(x) − f(y)‖ ‖x − y‖. To grasp what this condition means geometrically, suppose f(0) = 0 and consider the above estimate when y = 0. Then it implies that the angle between the vector x and its image f(x) stays between 0 and arccos δ < π/2. These maps are quasisymmetric, although they are a much narrower subclass of quasisymmetric maps. For example, while a general quasisymmetric map in the complex plane could map the real line to a set of Hausdorff dimension strictly greater than one, a δ-monotone will always map the real line to a rotated graph of a Lipschitz function L:ℝ → ℝ. Doubling measures The real line Quasisymmetric homeomorphisms of the real line to itself can be characterized in terms of their derivatives. An increasing homeomorphism f:ℝ → ℝ is quasisymmetric if and only if there is a constant C > 0 and a doubling measure μ on the real line such that Euclidean space An analogous result holds in Euclidean space. Suppose C = 0 and we rewrite the above equation for f as Writing it this way, we can attempt to define a map using this same integral, but instead integrate (what is now a vector valued integrand) over ℝn: if μ is a doubling measure on ℝn and then the map is quasisymmetric (in fact, it is δ-monotone for some δ depending on the measure μ). Quasisymmetry and quasiconformality in Euclidean space Let Ω and Ω´ be open subsets of ℝn. If f : Ω → Ω´ is η-quasisymmetric, then it is also K-quasiconformal, where K is a constant depending on η. Conversely, if f : Ω → Ω´ is K-quasiconformal and the ball B(x, 2r) is contained in Ω, then f is η-quasisymmetric on B(x, r), where η depends only on K and n. Quasi-Möbius maps A related but weaker condition is the notion of quasi-Möbius maps where instead of the ratio only the cross-ratio is considered: Definition Let (X, dX) and (Y, dY) be two metric spaces and let η : [0, ∞) → [0, ∞) be an increasing function. 
An η-quasi-Möbius homeomorphism f:X → Y is a homeomorphism for which for every quadruple x, y, z, t of distinct points in X, we have See also Douady–Earle extension References Geometry Homeomorphisms Mathematical analysis Metric geometry
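As a quick check that the definition behaves as expected (this worked example is an addition, not part of the original article), every L-bi-Lipschitz map is quasisymmetric with the explicit control function η(t) = L²t:

```latex
% An L-bi-Lipschitz map f satisfies L^{-1} d_X(a,b) \le d_Y(f(a),f(b)) \le L\, d_X(a,b).
% For any triple of distinct points x, y, z this gives
\[
\frac{d_Y(f(x),f(y))}{d_Y(f(x),f(z))}
\;\le\;
\frac{L\, d_X(x,y)}{L^{-1}\, d_X(x,z)}
\;=\;
L^{2}\,\frac{d_X(x,y)}{d_X(x,z)},
\]
% so f is \eta-quasisymmetric with \eta(t) = L^{2} t, which is increasing as required.
```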
Quasisymmetric map
Mathematics
947
24,274,546
https://en.wikipedia.org/wiki/C5H10O3
The molecular formula C5H10O3 may refer to: Diethyl carbonate Ethyl lactate β-Hydroxy β-methylbutyric acid 3-Hydroxyvaleric acid γ-Hydroxyvaleric acid Roche ester
C5H10O3
Chemistry
64
360,812
https://en.wikipedia.org/wiki/Orichalcum
Orichalcum or aurichalcum is a metal mentioned in several ancient writings, including the story of Atlantis in the Critias of Plato. Within the dialogue, Critias (460–403 BC) says that orichalcum had been considered second only to gold in value and had been found and mined in many parts of Atlantis in ancient times, but that by Critias's own time, orichalcum was known only by name. Orichalcum may have been a noble metal such as platinum, as it was supposed to be mined, but has been identified as pure copper or certain alloys of bronze, and especially brass alloys in the case of antique Roman coins, the latter being of "similar appearance to modern brass" according to scientific research. Overview The name is derived from the Greek , (from , , mountain and , , copper), literally meaning "mountain copper". The Romans transliterated "orichalcum" as "aurichalcum", which was thought to mean literally "gold copper". It is known from the writings of Cicero that the metal which they called orichalcum resembled gold in color but had a much lower value. In Virgil's Aeneid, the breastplate of Turnus is described as "stiff with gold and white orichalc". Orichalcum has been vaguely identified by ancient Greek authors to be either a gold–copper alloy, a form of pure copper or a copper ore or various chemicals based on copper, but also copper–tin and copper–zinc alloys, or a metal or metallic alloy supposedly no longer known. In later years "orichalcum" was used to describe the sulfide mineral chalcopyrite and also to describe brass. These usages are difficult to reconcile with the claims of Plato's Critias, who states that the metal was "only a name" by his time, while brass and chalcopyrite were very important in the time of Plato, as they still are today. Joseph Needham notes that Bishop Richard Watson, an 18th-century professor of chemistry, wrote of an ancient idea that there were "two sorts of brass or orichalcum". Needham also suggests that the Greeks may not have known how orichalcum was made and that they might even have had an imitation of the original. Ingots found In 2015, 39 ingots were discovered in a sunken vessel on the coast of Gela in Sicily which have tentatively been dated at 2,100 years old. They were analyzed with X-ray fluorescence and found to be an alloy consisting of 75–80% copper, 15–20% zinc, and smaller percentages of nickel, lead, and iron. Another cache of 47 ingots was recovered in February 2016 and found to have similar composition as measured with ICP-OES and ICP-MS: around 65–80% copper, 15–25% zinc, 4–7% lead, 0.5–1% nickel, and trace amounts of silver, antimony, arsenic, bismuth, and other elements. In ancient literature Orichalcum is first mentioned in the 7th century BC by Hesiod, and in the Homeric hymn dedicated to Aphrodite, dated to the 630s BC. According to the Critias of Plato, the inner wall surrounding the citadel of Atlantis with the Temple of Poseidon "flashed with the red light of orichalcum". The interior walls, pillars, and floors of the temple were completely covered in orichalcum, and the roof was variegated with gold, silver, and orichalcum. In the center of the temple stood a pillar of orichalcum, on which the laws of Poseidon and records of the first son princes of Poseidon were inscribed. Pliny the Elder points out that orichalcum had lost currency due to the mines being exhausted. 
Pseudo-Aristotle in De mirabilibus auscultationibus (62) describes a type of copper that is "very shiny and white, not because there is tin mixed with it, but because some earth is combined and molten with it." This might be a reference to orichalcum obtained during the smelting of copper with the addition of "cadmia", a kind of earth formerly found on the shores of the Black Sea, which is thought to have been zinc oxide. Numismatics In numismatics, the term "orichalcum" is used to refer exclusively to a type of brass alloy used for minting Roman as, sestertius, dupondius, and semis coins. It is considered more valuable than copper, of which the as coin was previously made. See also Ashtadhatu Auricupride Corinthian bronze Electrum Hepatizon Panchaloha Shakudō Shibuichi Thokcha Tumbaga References External links Atlantis Coins of ancient Rome Precious metal alloys Mythological substances Ancient Greek metalwork Coinage metals and alloys Objects in Greek mythology Fictional metals
Orichalcum
Chemistry
1,048
68,851,513
https://en.wikipedia.org/wiki/Cannabimovone
Cannabimovone (CBM) is a phytocannabinoid first isolated from a non-psychoactive strain of Cannabis sativa in 2010, which is thought to be a rearrangement product of cannabidiol. It lacks affinity for cannabinoid receptors, but acts as an agonist at both TRPV1 and PPARγ. See also Cannabichromene Cannabicitran Cannabicyclol Cannabielsoin Cannabigerol Cannabinodiol Cannabitriol Delta-6-CBD References Cannabinoids Ketones Isopropenyl compounds Cyclopentanols Phenols
Cannabimovone
Chemistry
144
20,522,913
https://en.wikipedia.org/wiki/Atriplex%20spinifera
Atriplex spinifera is a species of saltbush, known by the common names spiny saltbush and spinescale saltbush. It is endemic to California, where it grows in dry habitat with saline soils, such as salt flats. Its distribution includes the Mojave Desert, the southern Transverse Ranges, and the Central Valley, and surrounding mountain ranges. It is a halophyte. Description Atriplex spinifera is a grayish or whitish brambly shrub with erect branching stems growing to a maximum height near 2 meters. The tangled branches are stiff and tipped with spiny points. The leaves are oval in shape and less than 3 centimeters in length. The shrub is dioecious, with male and female individuals. The male flowers are small clusters in the leaf axils. The female flowers are solitary or borne in small clusters, and are enclosed in scaly bracts which become spherical as the fruit grows within. References External links Jepson Manual Treatment — Atriplex spinifera USDA Plants Profile: Atriplex spinifera Flora of North America Atriplex spinifera — U.C. Photo gallery spinifera Endemic flora of California Halophytes Flora of the California desert regions Natural history of the California Coast Ranges Natural history of the Central Valley (California) Natural history of the Mojave Desert Natural history of the Santa Monica Mountains Natural history of the Transverse Ranges ~ Plants described in 1918 Dioecious plants Flora without expected TNC conservation status
Atriplex spinifera
Chemistry
308
519,276
https://en.wikipedia.org/wiki/Intelligence%20assessment
Intelligence assessment, or simply intel, is the development of behavior forecasts or recommended courses of action to the leadership of an organisation, based on wide ranges of available overt and covert information (intelligence). Assessments are developed in response to requirements declared by the leadership in order to inform decision-making. Assessment may be executed on behalf of a state, military or commercial organisation with ranges of information sources available to each. An intelligence assessment reviews available information and previous assessments for relevance and currency. Where additional information is required, the analyst may direct some collection. Intelligence studies is the academic field concerning intelligence assessment, especially relating to international relations and military science. Process Intelligence assessment is based on a customer requirement or need, which may be a standing requirement or tailored to a specific circumstance or a Request for Information (RFI). The "requirement" is passed to the assessing agency and worked through the intelligence cycle, a structured method for responding to the RFI. The RFI may indicate in what format the requester prefers to consume the product. The RFI is reviewed by a Requirements Manager, who will then direct appropriate tasks to respond to the request. This will involve a review of existing material, the tasking of new analytical product or the collection of new information to inform an analysis. New information may be collected through one or more of the various collection disciplines: human source, electronic and communications intercept, imagery or open sources. The nature of the RFI and the urgency placed on it may indicate that some collection types are unsuitable due to the time taken to collect or validate the information gathered. Intelligence gathering disciplines and the sources and methods used are often highly classified and compartmentalised, with analysts requiring an appropriately high level of security clearance. The process of taking known information about situations and entities of importance to the RFI, characterizing what is known and attempting to forecast future events is termed "all source" assessment, analysis or processing. The analyst uses multiple sources to mutually corroborate, or exclude, the information collected, reaching a conclusion along with a measure of confidence around that conclusion. Where sufficient current information already exists, the analysis may be tasked directly without reference to further collection. The analysis is then communicated back to the requester in the format directed, although subject to the constraints on both the RFI and the methods used in the analysis, the format may be made available for other uses as well and disseminated accordingly. The analysis will be written to a defined classification level with alternative versions potentially available at a number of classification levels for further dissemination. Target-centric intelligence cycle This approach, known as Find-Fix-Finish-Exploit-Assess (F3EA), is complementary to the intelligence cycle and focused on the intervention itself. Where the subject of the assessment is clearly identifiable and provisions exist to make some form of intervention against that subject, the target-centric assessment approach may be used. The subject for action, or target, is identified and efforts are initially made to find the target for further development. 
This activity will identify where intervention against the target will have the most beneficial effects. When the decision is made to intervene, action is taken to fix the target, confirming that the intervention will have a high probability of success and restricting the ability of the target to take independent action. During the finish stage, the intervention is executed, potentially an arrest or detention or the placement of other collection methods. Following the intervention, exploitation of the target is carried out, which may lead to further refinement of the process for related targets. The output from the exploit stage will also be passed into other intelligence assessment activities. Intelligence Information Cycle Theory The Intelligence Information Cycle leverages secrecy theory and U.S. regulation of classified intelligence to re-conceptualize the traditional intelligence cycle under the following four assumptions: Intelligence is secret information Intelligence is a public good Intelligence moves cyclically Intelligence is hoarded Information is transformed from privately held to secretly held to public based on who has control over it. For example, the private information of a source becomes secret information (intelligence) when control over its dissemination is shared with an intelligence officer, and then becomes public information when the intelligence officer further disseminates it to the public by any number of means, including formal reporting, threat warning, and others. The fourth assumption, intelligence is hoarded, causes conflict points where information transitions from one type to another. The first conflict point, collection, occurs when private transitions to secret information (intelligence). The second conflict point, dissemination, occurs when secret transitions to public information. Thus, conceiving of intelligence using these assumptions demonstrates the cause of collection techniques (to ease the private-secret transition) and dissemination conflicts, and can inform ethical standards of conduct among all agents in the intelligence process. See also All-source intelligence Intelligence cycle List of intelligence gathering disciplines Military intelligence Surveillance Threat assessment Futures studies References Further reading Surveys Andrew, Christopher. For the President's Eyes Only: Secret Intelligence and the American Presidency from Washington to Bush (1996) Black, Ian and Morris, Benny Israel's Secret Wars: A History of Israel's Intelligence Services (1991) Bungert, Heike et al. eds. Secret Intelligence in the Twentieth Century (2003) essays by scholars Dulles, Allen W. The Craft of Intelligence: America's Legendary Spy Master on the Fundamentals of Intelligence Gathering for a Free World (2006) Kahn, David The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet (1996), 1200 pages Lerner, K. Lee and Brenda Wilmoth Lerner, eds. Encyclopedia of Espionage, Intelligence and Security (2003), 1100 pages. 850 articles, strongest on technology Odom, Gen. William E. Fixing Intelligence: For a More Secure America, Second Edition (Yale Nota Bene) (2004) O'Toole, George. Honorable Treachery: A History of U.S. Intelligence, Espionage, Covert Action from the American Revolution to the CIA (1991) Owen, David. Hidden Secrets: A Complete History of Espionage and the Technology Used to Support It (2002), popular Richelson, Jeffery T. 
A Century of Spies: Intelligence in the Twentieth Century (1997) Richelson, Jeffery T. The U.S. Intelligence Community (4th ed. 1999) Shulsky, Abram N. and Schmitt, Gary J. "Silent Warfare: Understanding the World of Intelligence" (3rd ed. 2002), 285 pages West, Nigel. MI6: British Secret Intelligence Service Operations 1909–1945 (1983) West, Nigel. Secret War: The Story of SOE, Britain's Wartime Sabotage Organization (1992) Wohlstetter, Roberta. Pearl Harbor: Warning and Decision (1962) World War I Beesly, Patrick. Room 40. (1982). Covers the breaking of German codes by RN intelligence, including the Turkish bribe, Zimmermann telegram, and failure at Jutland. May, Ernest (ed.) Knowing One's Enemies: Intelligence Assessment before the Two World Wars (1984) Tuchman, Barbara W. The Zimmermann Telegram (1966) Yardley, Herbert O. American Black Chamber (2004) World War II 1931–1945 Babington Smith, Constance. Air Spy: the Story of Photo Intelligence in World War II (1957) - originally published as Evidence in Camera in the UK Beesly, Patrick. Very Special Intelligence: the Story of the Admiralty's Operational Intelligence Centre, 1939–1945 (1977) Hinsley, F. H. British Intelligence in the Second World War (1996) (abridged version of multivolume official history) Jones, R. V. Most Secret War: British Scientific Intelligence 1939–1945 (2009) Kahn, David. Hitler's Spies: German Military Intelligence in World War II (1978) Kahn, David. Seizing the Enigma: the Race to Break the German U-Boat Codes, 1939–1943 (1991) Kitson, Simon. The Hunt for Nazi Spies: Fighting Espionage in Vichy France, Chicago: University of Chicago Press, (2008). Lewin, Ronald. The American Magic: Codes, Ciphers and the Defeat of Japan (1982) May, Ernest (ed.) Knowing One's Enemies: Intelligence Assessment before the Two World Wars (1984) Smith, Richard Harris. OSS: the Secret History of America's First Central Intelligence Agency (2005) Stanley, Roy M. World War II Photo Intelligence (1981) Wark, Wesley K. The Ultimate Enemy: British Intelligence and Nazi Germany, 1933–1939 (1985) Wark, Wesley K. "Cryptographic Innocence: the Origins of Signals Intelligence in Canada in the Second World War", in: Journal of Contemporary History 22 (1987) Cold War Era 1945–1991 Aldrich, Richard J. The Hidden Hand: Britain, America and Cold War Secret Intelligence (2002). Ambrose, Stephen E. Ike's Spies: Eisenhower and the Intelligence Establishment (1981). Andrew, Christopher and Vasili Mitrokhin. The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB (1999) Andrew, Christopher, and Oleg Gordievsky. KGB: The Inside Story of Its Foreign Operations from Lenin to Gorbachev (1990). Bogle, Lori, ed. Cold War Espionage and Spying (2001), essays by scholars Boiling, Graham. Secret Students on Parade: Cold War Memories of JSSL, CRAIL, PlaneTree, 2005. Dorril, Stephen. MI6: Inside the Covert World of Her Majesty's Secret Intelligence Service (2000). Dziak, John J. Chekisty: A History of the KGB (1988) Elliott, Geoffrey and Shukman, Harold. Secret Classrooms. An Untold Story of the Cold War. London, St Ermin's Press, Revised Edition, 2003. Koehler, John O. Stasi: The Untold Story of the East German Secret Police (1999) Ostrovsky, Viktor By Way of Deception (1990) Persico, Joseph. Casey: The Lives and Secrets of William J. Casey-From the OSS to the CIA (1991) Prados, John. Presidents' Secret Wars: CIA and Pentagon Covert Operations Since World War II (1996) Rositzke, Harry. 
The CIA's Secret Operations: Espionage, Counterespionage, and Covert Action (1988) Trahair, Richard C. S. Encyclopedia of Cold War Espionage, Spies and Secret Operations (2004) , by an Australian scholar; contains excellent historiographical introduction Weinstein, Allen, and Alexander Vassiliev. The Haunted Wood: Soviet Espionage in America—The Stalin Era (1999). External links Intelligence Literature: Suggested Reading List (CIA) The Literature of Intelligence: A Bibliography of Materials, with Essays, Reviews, and Comments by J. Ransom Clark, Emeritus Professor of Political Science, Muskingum College Data collection Intelligence analysis
Intelligence assessment
Technology
2,231
2,005,111
https://en.wikipedia.org/wiki/FINO
In computer science, FINO is a humorous scheduling algorithm. It is an acronym for first in, never out as opposed to traditional first in, first out (FIFO) and last in, first out (LIFO) algorithms. A similar acronym is "FISH", for first in, still here. FINO works by withholding all scheduled tasks permanently. No matter how many tasks are scheduled at any time, no task ever actually takes place. A stateful FINO queue can be used to implement a memory leak. The first mention of FINO appears in the Signetics 25120 write-only memory joke datasheet. See also Bit bucket Black hole (networking) Null route Write-only memory References Scheduling algorithms Computer humour
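In the same joking spirit, a minimal sketch of a FINO queue might look like the following; the class name and the choice of exception are invented for illustration and are not from the original datasheet.

```python
# A tongue-in-cheek FINO ("first in, never out") queue: items can be scheduled,
# but no item is ever dispatched.

class FinoQueue:
    def __init__(self):
        self._tasks = []          # grows forever -> the promised memory leak

    def put(self, task):
        """First in..."""
        self._tasks.append(task)

    def get(self):
        """...never out."""
        raise LookupError("FINO: no task ever leaves the queue")

if __name__ == "__main__":
    q = FinoQueue()
    q.put("print report")
    q.put("defragment disk")
    # q.get()  # would raise LookupError, exactly as the algorithm intends
```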
FINO
Technology
151
49,401,579
https://en.wikipedia.org/wiki/Neil%20Waters
Sir Thomas Neil Morris Waters (10 April 1931 – 7 June 2018) was a New Zealand inorganic chemist and academic administrator who served as vice-chancellor of Massey University from 1983 to 1995. He is noted for establishing the university's Albany campus near Auckland in 1993. Early life, family, and education Born in New Plymouth on 10 April 1931, Waters was the son of Kathleen Emily Waters (née Morris) and Edwin Benjamin Waters. He was educated at New Plymouth Boys' High School, and went on to study chemistry at Auckland University College, graduating Bachelor of Science in 1953, Master of Science with second-class honours the following year, and PhD in 1958. His doctoral thesis, supervised by David Hall, was titled The colour isomerism and structure of some copper co‑ordination compounds. In 1959, Waters married crystallographer Joyce Mary Partridge. Academic career Waters was appointed as a lecturer in chemistry at Auckland in 1961, rising to the rank of full professor in 1970. In 1969, he was awarded the degree of Doctor of Science by the University of Auckland on the basis of published papers submitted. Waters served as assistant vice chancellor of the University of Auckland between 1979 and 1981, including a period in 1980 as acting vice chancellor. He left Auckland at the end of 1982, and was accorded the title of professor emeritus by the university in 1984. In 1983, Waters was appointed as principal and vice chancellor of Massey University, serving in that role until 1995. In 1995, Massey also bestowed the title of professor emeritus on Waters. During his career, Waters served on a range of university, science sector, and government bodies, including: the council of the Australian and New Zealand Association for the Advancement of Science from 1977 to 1979; the board of the New Zealand University Grants Committee in 1982; the New Zealand Vice Chancellors' Committee from 1983 to 1995, including periods as chair in 1984–85 and 1994; the council of Palmerston North College of Education from 1983 to 1988, the council of Manawatu Polytechnic from 1983 to 1990, as chair of the Foundation for Research, Science and Technology between 1995 and 1998; and chair of the New Zealand Qualifications Authority from 1995 to 1999. Later life and death From 1997, Waters was an honorary senior research fellow at Massey University's Albany campus, where his wife Joyce was a professor of chemistry. In 2002, Massey University's governing council considered restoring Waters to the vice-chancellorship as an interim replacement following the retirement of his successor, James McWha; however, the board was prevented from doing so by the State Sector Act 1988, which barred the appointment of someone not already on the university's payroll; Waters had since moved to Auckland and no longer worked in the university sector. Waters died in Auckland on 7 June 2018, aged 87. Honours and awards In 1990, Waters was awarded the New Zealand 1990 Commemoration Medal. In the 1995 Queen's Birthday Honours, he was appointed a Knight Bachelor, for services to tertiary education. Waters was conferred with honorary Doctor of Science degrees by the University of East Asia in 1986, and Massey University in 1996. He was elected a Fellow of the New Zealand Institute of Chemistry in 1977, Fellow of the Australian and New Zealand Association for the Advancement of Science in 1979, and Fellow of the Royal Society of New Zealand in 1992. 
References 1931 births 2018 deaths People from New Plymouth People educated at New Plymouth Boys' High School University of Auckland alumni New Zealand chemists Inorganic chemists Academic staff of the University of Auckland Vice-chancellors of Massey University Fellows of the Royal Society of New Zealand New Zealand Knights Bachelor Fellows of the New Zealand Institute of Chemistry
Neil Waters
Chemistry
732
33,010,292
https://en.wikipedia.org/wiki/WISE%201828%2B2650
WISE 1828+2650 (full designation WISEPA J182831.08+265037.8) is a possibly binary brown dwarf or rogue planet of spectral class >Y2, located in the constellation Lyra at approximately 32.5 light-years from Earth. It is the "archetypal member" of the Y spectral class. History of observations Discovery WISE 1828+2650 was discovered in 2011 from data collected by NASA's 40 cm (16 in) Wide-field Infrared Survey Explorer (WISE) space telescope at infrared wavelengths. WISE 1828+2650 has two discovery papers, Kirkpatrick et al. (2011) and Cushing et al. (2011), written by essentially the same authors and published nearly simultaneously. Kirkpatrick et al. presented the discovery of 98 new brown dwarf systems found by WISE, with components of spectral types M, L, T and Y, among them WISE 1828+2650, the coolest of the group. Cushing et al. presented the discovery of seven brown dwarfs, one of type T9.5 and six of type Y, the first members of the Y spectral class ever discovered and spectroscopically confirmed, including the "archetypal member" of the Y spectral class, WISE 1828+2650. These seven objects are also the faintest seven of the 98 brown dwarfs presented in Kirkpatrick et al. (2011). Distance Currently the most accurate distance estimate of WISE 1828+2650 is a trigonometric parallax, published in 2021 by Kirkpatrick et al.: , corresponding to a distance of , or . Proper motion WISE 1828+2650 has a proper motion of milliarcseconds per year. Physical properties Until the discovery of WISE 0855−0714 in 2014, WISE 1828+2650 was considered the coldest known brown dwarf, or the first example of a free-floating planet (it is not currently known whether it is a brown dwarf or a free-floating planet). It has a temperature in the range and was initially estimated below 300 K, or about . It has been assigned the latest known spectral class (>Y2, initially estimated as >Y0). The mass of WISE 1828+2650 is in the range for ages of 0.1–10 Gyr. The high tangential velocity of WISE 1828+2650, characteristic of an old disk population, indicates a possible age of WISE 1828+2650 in the range 2–4 Gyr, leading to a mass estimate of about . This suggests that WISE 1828+2650 may be a free-floating planet rather than a brown dwarf, since it is below the lower mass limit for deuterium fusion (). WISE 1828+2650 is similar in appearance to the other Y-type object WD 0806-661 B. WD 0806-661 B could have formed as a planet close to its primary, WD 0806-661 A, and later, when the primary became a white dwarf and lost most of its mass, have migrated into a larger orbit of 2500 AU, and the similarity between WD 0806-661 B and WISE 1828+2650 may indicate that WISE 1828+2650 formed in the same way. JWST observations with MIRI detected water vapor (H2O), methane (CH4) and ammonia (NH3) in the atmosphere of WISE 1828+2650. The work detected a low amount of ammonia containing the 15N isotope when compared to ammonia containing the 14N isotope. The 14NH3-to-15NH3 ratio was measured as 670. This amount of 15NH3 is lower than in any Solar System body and it is an indication that WISE 1828+2650 has formed like a star and not like a planet. Thus, this provides evidence that the object is a (sub-)brown dwarf and not a free-floating planet. Another team used the NIRSpec instrument on JWST and detected water vapor, methane, ammonia, carbon monoxide (12CO), carbon dioxide (CO2) and hydrogen sulfide (H2S). These molecules are the major carbon-, nitrogen-, oxygen-, and sulfur-bearing species in the atmosphere of WISE 1828+2650. 
The carbon-to-oxygen ratio (C/O), at 0.45±0.01, is close to the solar ratio. The abundances of carbon, oxygen and sulfur are higher than in the Sun, but the abundance of nitrogen is likely similar to the solar value. Possible binarity Comparison between WISE 1828+2650 and WD 0806-661 B may suggest that WISE 1828+2650 is a system of two equal-mass objects. Observations with the Hubble Space Telescope (HST) and the Keck II LGS-AO system did not reveal binarity, suggesting that if any such companion exists, it would have an orbit smaller than 0.5 AU; no direct evidence for binarity yet exists. However, the spectrum of the system best fits a pair of brown dwarfs, each with an effective temperature of about 325 K and a mass of about . JWST NIRCam imaging observations did not find a companion at a separation larger than 0.5 astronomical units. NIRSpec low-resolution prism observations cannot be explained with existing Sonora Bobcat models of planetary objects, whether single or multiple. The binary model fails to provide an improved fit for the existing photometric data. A newer analysis of the NIRSpec data compared the radius determined with the atmospheric retrieval framework CHIMERA to that predicted by the evolutionary model Sonora Bobcat. The CHIMERA radius for a single object ( ) compared with the predicted radius from Sonora Bobcat (10 Gyr, 33 , 0.87 ) is too large for its age, which might be an indication that WISE 1828+2650 is an equal-mass binary with both objects having a radius of 0.87 . The estimated semi-major axis of this binary is astronomical units or . Sonora Bobcat, however, predicts a lower age (1.4 Gyr) and mass (9.982 ) when using the temperature and gravity from the Sonora Elf Owl atmospheric model grid. Future observations could look for radial velocity variations to confirm the binary. Comparison See also The other six discoveries of brown dwarfs, published by Cushing et al. in 2011: WISE 0148−7202 (T9.5) WISE 0410+1502 (Y0) WISE 1405+5534 (Y0 (pec?)) WISE 1541−2250 (Y0.5) WISE 1738+2732 (Y0) WISE 2056+1459 (Y0) Lists: List of star systems within 30–35 light-years List of Y-dwarfs Notes References External links NASA news release Science news Solstation.com (New Objects within 20 light-years) Rogue planets Y-type brown dwarfs Lyra 20110901 WISE objects
WISE 1828+2650
Astronomy
1,431
266,449
https://en.wikipedia.org/wiki/Coilgun
A coilgun is a type of mass driver consisting of one or more coils used as electromagnets in the configuration of a linear motor that accelerate a ferromagnetic or conducting projectile to high velocity. In almost all coilgun configurations, the coils and the gun barrel are arranged on a common axis. A coilgun is not a rifle as the barrel is smoothbore (not rifled). Coilguns generally consist of one or more coils arranged along a barrel, so the path of the accelerating projectile lies along the central axis of the coils. The coils are switched on and off in a precisely timed sequence, causing the projectile to be accelerated quickly along the barrel via magnetic forces. Coilguns are distinct from railguns, as the direction of acceleration in a railgun is at right angles to the central axis of the current loop formed by the conducting rails. In addition, railguns usually require the use of sliding contacts to pass a large current through the projectile or sabot, but coilguns do not necessarily require sliding contacts. While some simple coilgun concepts can use ferromagnetic projectiles or even permanent magnet projectiles, most designs for high velocities actually incorporate a coupled coil as part of the projectile. Coilguns are also distinct from Gauss guns, although many works of science fiction have erroneously confused the two. A coil gun uses electromagnetic acceleration whereas Gauss guns predate the idea of coil guns and instead consist of ferromagnets using a configuration similar to a Newton's Cradle to impart acceleration. History The oldest electromagnetic gun came in the form of the coilgun, the first of which was invented by Norwegian scientist Kristian Birkeland at the University of Kristiania (today Oslo). The invention was officially patented in 1904, although its development reportedly started as early as 1845. According to his accounts, Birkeland accelerated a 500-gram projectile to approximately . In 1933, Texan inventor Virgil Rigsby developed a stationary coil gun that was designed to be used similarly to a machine gun. It was powered by a large electrical motor and generator. It appeared in many contemporary science publications, but never piqued the interest of any armed forces. Construction There are two main types or setups of a coilgun: single-stage and multistage. A single-stage coilgun uses one electromagnetic coil to propel a projectile. A multistage coilgun uses several electromagnetic coils in succession to progressively increase the speed of the projectile. Ferromagnetic projectiles For ferromagnetic projectiles, a single-stage coilgun can be formed by a coil of wire, an electromagnet, with a ferromagnetic projectile placed at one of its ends. This type of coilgun is formed like the solenoid used in an electromechanical relay, i.e. a current-carrying coil which will draw a ferromagnetic object through its center. A large current is pulsed through the coil of wire and a strong magnetic field forms, pulling the projectile to the center of the coil. When the projectile nears this point the electromagnet must be switched off, to prevent the projectile from becoming arrested at the center of the electromagnet. In a multistage design, further electromagnets are then used to repeat this process, progressively accelerating the projectile. In common coilgun designs, the "barrel" of the gun is made up of a track that the projectile rides on, with the driver magnetic coils arranged around the track. 
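The pull-and-release sequence described above can be illustrated with a minimal simulation sketch (not from the article). The force law, projectile mass, coil position and peak force below are all illustrative assumptions; the point is only to show why the coil must be switched off as the projectile reaches its center.

```python
# Minimal sketch (illustrative assumptions only): a single-stage, ferromagnetic-projectile
# coilgun modeled with a crude attractive force law toward the coil center.

def simulate_single_stage(m=0.01, coil_center=0.05, f_peak=40.0, dt=1e-5):
    """Time-step a projectile pulled toward the coil center.

    m           projectile mass in kg (assumed)
    coil_center coil midpoint position in m (assumed)
    f_peak      peak attractive force in N while the coil is energized (assumed)
    The coil is switched off once the projectile reaches the center; leaving it
    energized would pull the projectile back and arrest it there.
    """
    x, v = 0.0, 0.0
    coil_on = True
    while x < 2 * coil_center:            # run until the projectile clears the coil
        if coil_on and x >= coil_center:  # switch off at the center to avoid suck-back
            coil_on = False
        # Crude force model: attraction falls off with distance from the coil center.
        force = f_peak / (1.0 + ((x - coil_center) / coil_center) ** 2) if coil_on else 0.0
        v += (force / m) * dt
        x += v * dt
    return v

if __name__ == "__main__":
    print(f"exit velocity ~ {simulate_single_stage():.1f} m/s")
```

In a multistage design the same switch-off logic would simply be repeated for each successive coil as the projectile passes it.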
Power is supplied to the electromagnet from some sort of fast discharge storage device, typically a battery, or capacitors (one per electromagnet), designed for fast energy discharge. A diode is used to protect polarity sensitive components (such as semiconductors or electrolytic capacitors) from damage due to inverse polarity of the voltage after turning off the coil. Many hobbyists use low-cost rudimentary designs to experiment with coilguns, for example using photoflash capacitors from a disposable camera, or a capacitor from a standard cathode-ray tube television as the energy source, and a low inductance coil to propel the projectile forward. Non-ferromagnetic projectiles Some designs have non-ferromagnetic projectiles, of materials such as aluminium or copper, with the armature of the projectile acting as an electromagnet with internal current induced by pulses of the acceleration coils. A superconducting coilgun called a quench gun could be created by successively quenching a line of adjacent coaxial superconducting coils forming a gun barrel, generating a wave of magnetic field gradient traveling at any desired speed. A traveling superconducting coil might be made to ride this wave like a surfboard. The device would be a mass driver or linear synchronous motor with the propulsion energy stored directly in the drive coils. Another method would have non-superconducting acceleration coils and propulsion energy stored outside them but a projectile with superconducting magnets. Though the cost of power switching and other factors can limit projectile energy, a notable benefit of some coilgun designs over simpler railguns is avoiding an intrinsic velocity limit from hypervelocity physical contact and erosion. By having the projectile pulled towards or levitated within the center of the coils as it is accelerated, no physical friction with the walls of the bore occurs. If the bore is a total vacuum (such as a tube with a plasma window), there is no friction at all, which helps prolong the period of reusability. Switching One main obstacle in coilgun design is switching the power through the coils. There are several common solutions—the simplest (and probably least effective) is the spark gap, which releases the stored energy through the coil when the voltage reaches a certain threshold. A better option is to use solid-state switches; these include IGBTs or power MOSFETs (which can be switched off mid-pulse) and SCRs (which release all stored energy before turning off). A quick-and-dirty method for switching, especially for those using a flash camera for the main components, is to use the flash tube itself as a switch. By wiring it in series with the coil, it can silently and non-destructively (assuming that the energy in the capacitor is kept below the tube's safe operating limits) allow a large amount of current to pass through to the coil. Like any flash tube, ionizing the gas in the tube with a high voltage triggers it. However, a large amount of the energy will be dissipated as heat and light, and, because of the tube being a spark gap, the tube will stop conducting once the voltage across it drops sufficiently, leaving some charge remaining on the capacitor. Resistance The electrical resistance of the coils and the equivalent series resistance (ESR) of the current source dissipate considerable power. At low speeds the heating of the coils dominates the efficiency of the coilgun, giving exceptionally low efficiency. 
However, as speeds climb, mechanical power grows proportional to the square of the speed, but, correctly switched, the resistive losses are largely unaffected, and thus these resistive losses become much smaller in percentage terms. Magnetic circuit Ideally, 100% of the magnetic flux generated by the coil would be delivered to and act on the projectile; in reality this is impossible due to energy losses always present in a real system, which cannot be eliminated. With a simple air-cored solenoid, the majority of the magnetic flux is not coupled into the projectile because of the magnetic circuit's high reluctance. The uncoupled flux generates a magnetic field that stores energy in the surrounding air. The energy that is stored in this field does not simply disappear from the magnetic circuit once the capacitor finishes discharging, instead returning to the coilgun's electric circuit. Because the coilgun's electric circuit is inherently analogous to an LC oscillator, the unused energy returns in the reverse direction ('ringing'), which can seriously damage polarized capacitors such as electrolytic capacitors. Reverse charging can be prevented by a diode connected in reverse-parallel across the capacitor terminals; as a result, the current keeps flowing until the diode and the coil's resistance dissipate the field energy as heat. While this is a simple and frequently utilized solution, it requires an additional expensive high-power diode and a well-designed coil with enough thermal mass and heat dissipation capability in order to prevent component failure. Some designs attempt to recover the energy stored in the magnetic field by using a pair of diodes. These diodes, instead of being forced to dissipate the remaining energy, recharge the capacitors with the right polarity for the next discharge cycle. This will also avoid the need to fully recharge the capacitors, thus significantly reducing charge times. However, the practicality of this solution is limited by the resulting high recharge current through the equivalent series resistance (ESR) of the capacitors; the ESR will dissipate some of the recharge current, generating heat within the capacitors and potentially shortening their lifetime. To reduce component size, weight, durability requirements, and most importantly, cost, the magnetic circuit must be optimized to deliver more energy to the projectile for a given energy input. This has been addressed to some extent by the use of back iron and end iron, which are pieces of magnetic material that enclose the coil and create paths of lower reluctance in order to improve the amount of magnetic flux coupled into the projectile. Results can vary widely, depending on the materials used; hobbyist designs may use, for example, materials ranging anywhere from magnetic steel (more effective, lower reluctance) to video tape (little improvement in reluctance). Moreover, the additional pieces of magnetic material in the magnetic circuit can potentially exacerbate the possibility of flux saturation and other magnetic losses. Ferromagnetic projectile saturation Another significant limitation of the coilgun is the occurrence of magnetic saturation in the ferromagnetic projectile. When the flux in the projectile lies in the linear portion of its material's B(H) curve, the force applied to the core is proportional to the square of coil current (I)—the field (H) is linearly dependent on I, B is linearly dependent on H and force is linearly dependent on the product BI. 
This relationship continues until the core is saturated; once this happens B will only increase marginally with H (and thus with I), so force gain is linear. Since losses are proportional to I2, increasing current beyond this point eventually decreases efficiency although it may increase the force. This puts an absolute limit on how much a given projectile can be accelerated with a single stage at acceptable efficiency. Projectile magnetization and reaction time Apart from saturation, the B(H) dependency often contains a hysteresis loop and the reaction time of the projectile material may be significant. The hysteresis means that the projectile becomes permanently magnetized and some energy will be lost as a permanent magnetic field of the projectile. The projectile reaction time, on the other hand, makes the projectile reluctant to respond to abrupt B changes; the flux will not rise as fast as desired while current is applied and a B tail will occur after the coil field has disappeared. This delay decreases the force, which would be maximized if the H and B were in phase. Induction coilguns Most of the work to develop coilguns as hyper-velocity launchers has used "air-cored" systems to get around the limitations associated with ferromagnetic projectiles. In these systems, the projectile is accelerated by a moving coil "armature". If the armature is configured as one or more "shorted turns" then induced currents will result as a consequence of the time variation of the current in the static launcher coil (or coils). In principle, coilguns can also be constructed in which the moving coils are fed with current via sliding contacts. However, the practical construction of such arrangements requires the provision of reliable high speed sliding contacts. Although feeding current to a multi-turn coil armature might not require currents as large as those required in a railgun, the elimination of the need for high speed sliding contacts is an obvious potential advantage of the induction coilgun relative to the railgun. Air cored systems also introduce the penalty that much higher currents may be needed than in an "iron cored" system. Ultimately though, subject to the provision of appropriately rated power supplies, air cored systems can operate with much greater magnetic field strengths than "iron cored" systems, so that, ultimately, much higher accelerations and forces should be possible. Formula for exit velocity of coilgun projectile An approximate result for the exit velocity of a projectile having been accelerated by a single-stage coilgun can be obtained by the equation v = nI√(μ0χmV/m), with: m being the mass of the projectile, defined as kg; V being the volume of the projectile, defined as m3; μ0 being the vacuum permeability, defined in SI units as 4π × 10−7 V·s/(A·m); χm being the magnetic susceptibility of the projectile, a dimensionless proportionality constant indicating the degree of magnetization in a material in response to applied magnetic fields (this often must be determined experimentally, and tables containing susceptibility values for certain materials may be found in the CRC Handbook of Chemistry and Physics as well as the Wikipedia article for magnetic susceptibility); n being the number of coil turns per unit length of the coil, defined as m−1; and I being the current passing through the coil, defined as A. While this approximation is useful for quickly defining the upper limit of velocity in a coilgun system, more accurate and non-linear second order differential equations do exist. 
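As a rough illustration of the approximation just given, v = nI√(μ0χmV/m), the following sketch evaluates it for assumed, round-number parameters; none of these values come from the article, and real susceptibilities and pulse currents vary widely.

```python
# Illustrative evaluation of the single-stage exit-velocity approximation
# v = n * I * sqrt(mu0 * chi_m * V / m); all parameter values are assumptions
# chosen only to show the orders of magnitude involved, not measured data.
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, V*s/(A*m)

def exit_velocity(n_turns_per_m, current_a, chi_m, volume_m3, mass_kg):
    """Approximate upper-bound exit velocity for a single-stage coilgun."""
    return n_turns_per_m * current_a * math.sqrt(MU0 * chi_m * volume_m3 / mass_kg)

# Assumed example: 1000 turns/m, a 100 A pulse, and a small steel slug of ~1 cm^3
# and 8 g with an effective susceptibility of a few hundred.
v = exit_velocity(n_turns_per_m=1000, current_a=100, chi_m=300,
                  volume_m3=1e-6, mass_kg=0.008)
print(f"approximate upper-bound exit velocity: {v:.1f} m/s")   # ~20 m/s
```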
The issues with this formula are that it assumes the projectile lies completely within a uniform magnetic field, that the current dies out instantly once the projectile reaches the center of the coil (eliminating the possibility of coil suck-back), that all potential energy is transferred into kinetic energy (whereas most would go into frictional forces), and that the wires of the coil are infinitely thin and do not stack on one another, all cumulatively increasing the expected exit velocity. Uses Small coilguns are recreationally made by hobbyists, typically up to several joules to tens of joules projectile energy (the latter comparable to a typical air gun and an order of magnitude less than a firearm) while ranging from under one percent to several percent efficiency. In 2018, a Los Angeles-based company Arcflash Labs offered the first coilgun for sale to the general public, the EMG-01A. It fired 6-gram steel slugs at 45 m/s with a muzzle energy of approximately 5 joules. In 2021, they developed a larger model, the GR-1 Gauss rifle, which fired 30-gram steel slugs at up to 75 m/s with a muzzle energy of approximately 85 joules, comparable to a PCP air rifle. In 2022 Northshore Sports Club, an American gun club in Lake Forest, Illinois, began distributing the CS/LW21, also referred to as the "E-Shotgun", a compact, 15 joule magazine fed coil gun, manufactured by the China North Industries Group Corp. They project distribution to reach 5000 units per year in the US, and the manufacturer has also unveiled plans to supply the Chinese police and military with units for "non-lethal riot control". Much higher efficiency and energy can be obtained with designs of greater expense and sophistication. In 1978, Bondaletov in the USSR achieved record acceleration with a single stage by sending a 2-gram ring to 5000 m/s in 1 cm of length, but the most efficient modern designs tend to involve many stages. It is estimated that greater than 90% efficiency will be required for vastly larger superconducting systems for space launch. An experimental 45-stage, 2.1 m long DARPA coilgun mortar design is 22% efficient, with 1.6 megajoules kinetic energy delivered to a round. Though they face the challenge of competitiveness versus conventional guns (and sometimes railgun alternatives), coilguns are being researched for weaponry. The DARPA Electromagnetic Mortar program is one example, if practical challenges like sufficiently low weight can be achieved. The coilgun would be relatively silent with no smoke giving away its position, though a supersonic projectile would still create a sonic boom. Adjustable, smooth acceleration of the projectile along the barrel length would allow higher velocity, with a predicted range increase of 30% for a 120mm EM mortar over the conventional version of similar length. With no separate propellant charges to load, the researchers envision the firing rate approximately doubling. In 2006, a 120mm prototype was under construction for evaluation, though a tentative time for deployment was then estimated to be 5 to 10+ years by Sandia National Laboratories. In 2011, development was proposed for an 81mm coilgun mortar to operate with a hybrid-electric version of the future Joint Light Tactical Vehicle. Electromagnetic aircraft catapults are planned, including on board future U.S. Gerald R. Ford class aircraft carriers. An experimental induction coilgun version of an Electromagnetic Missile Launcher (EMML) has been tested for launching Tomahawk missiles. 
A coilgun-based active defense system for tanks is under development at HIT in China. Coilgun potential has been perceived as extending beyond military applications. Few entities could overcome the challenges and corresponding capital investment to fund gigantic coilguns with projectile mass and velocity on the scale of gigajoules of kinetic energy (as opposed to megajoules or less). Such have been proposed as Earth or Moon launchers: An ambitious lunar-base proposal considered in a 1975 NASA study would have involved a 4000-ton coilgun sending 10 million tons of lunar material over several years to L5 in support of massive space colonization (utilizing a large 9900-ton power plant). A 1992 NASA study calculated that a 330-ton lunar quench gun (superconducting coilgun) could launch 4400 projectiles annually, each 1.5 tons and mostly liquid oxygen payload, using a relatively small amount of power, 350 kW average. After a NASA Ames study of aerothermal requirements for heat shields with terrestrial surface launch, Sandia National Laboratories investigated electromagnetic launchers for spacecraft and researched other EML applications using both railguns and coilguns. In 1990 a kilometer-long coilgun was proposed for launch of small satellites. Later investigations at Sandia included a 2005 proposal of the StarTram concept for an extremely long coilgun, one version conceived as launching passengers to orbit with survivable acceleration. A mass driver is essentially a coilgun that magnetically accelerates a package consisting of a magnetizable container with a payload. Once accelerated, the container and payload separate; the container is slowed and recycled to receive another payload. See also Electromagnetic propulsion Carl Friedrich Gauss Helical railgun Light-gas gun Mass driver Edwin Fitch Northrup Plasma railgun Railgun Ram accelerator Solenoid Tubular linear motor References External links Prototype of coil gun. CG42- Full auto coil gun Artillery by type Electric motors Magnetic propulsion devices Non-rocket spacelaunch Norwegian inventions
Coilgun
Technology,Engineering
4,022
44,734,239
https://en.wikipedia.org/wiki/Lanabecestat
Lanabecestat (formerly known as AZD3293 or LY3314814) is an oral beta-secretase 1 cleaving enzyme (BACE) inhibitor. A BACE inhibitor would in theory prevent the buildup of beta-amyloid and might help slow or stop the progression of Alzheimer's disease. In September 2014, AstraZeneca and Eli Lilly and Company announced an agreement to co-develop lanabecestat. A pivotal Phase II/III clinical trial of lanabecestat started in late 2014 and was planned to recruit 2,200 patients and to end in June 2019. In April 2016 the company announced it would advance to phase 3 without modification. AstraZeneca and Eli Lilly announced in August 2016 that they had received FDA fast track designation for lanabecestat. On June 12, 2018, Lilly and AstraZeneca announced that the two ongoing Phase III clinical trials for lanabecestat were not likely to meet their primary endpoint and that all current trials would be stopped due to futility. References Experimental drugs for Alzheimer's disease Enzyme inhibitors Pyridines Drugs developed by AstraZeneca Alkyne derivatives Imidazoles Abandoned drugs
Lanabecestat
Chemistry
246
10,773,039
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20rule
The Eötvös rule, named after the Hungarian physicist Loránd (Roland) Eötvös (1848–1919) enables the prediction of the surface tension of an arbitrary liquid pure substance at all temperatures. The density, molar mass and the critical temperature of the liquid have to be known. At the critical point the surface tension is zero. The first assumption of the Eötvös rule is: 1. The surface tension is a linear function of the temperature. This assumption is approximately fulfilled for most known liquids. When plotting the surface tension versus the temperature a fairly straight line can be seen which has a surface tension of zero at the critical temperature. The Eötvös rule also gives a relation of the surface tension behaviour of different liquids in respect to each other: 2. The temperature dependence of the surface tension can be plotted for all liquids in a way that the data collapses to a single master curve. To do so either the molar mass, the density, or the molar volume of the corresponding liquid has to be known. More accurate versions are found on the main page for surface tension. The Eötvös rule If V is the molar volume and Tc the critical temperature of a liquid the surface tension γ is given by γV^(2/3) = k(Tc − T), where k is a constant valid for all liquids, with a value of 2.1×10−7 J/(K·mol2/3). More precise values can be gained when considering that the line normally passes the temperature axis 6 K before the critical point: γV^(2/3) = k(Tc − 6 K − T). The molar volume V is given by the molar mass M and the density ρ: V = M/ρ. The term γV^(2/3) is also referred to as the "molar surface tension" γmol: γmol = γV^(2/3), so that the rule can be written γmol = k(Tc − T). A useful representation that prevents the use of the unit mol−2/3 is given by the Avogadro constant NA: γ(V/NA)^(2/3) = k′(Tc − T). As John Lennard-Jones and Corner showed in 1940 by means of the statistical mechanics the constant k′ is nearly equal to the Boltzmann constant. Water For water, the following equation is valid between 0 and 100 °C: γ = 0.07275 N/m × (1 − 0.002·(T − 291 K)). History As a student, Eötvös started to research surface tension and developed a new method for its determination. The Eötvös rule was first found phenomenologically and published in 1886. In 1893 William Ramsay and Shields showed an improved version considering that the line normally passes the temperature axis 6 K before the critical point. John Lennard-Jones and Corner published (1940) a derivation of the equation by means of statistical mechanics. In 1945 E. A. Guggenheim gave a further improved variant of the equation. References Physical chemistry Thermodynamic equations
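As an illustrative check of the Ramsay–Shields form of the rule given above, the following sketch (not part of the article) evaluates it for benzene at 20 °C, using standard handbook values for the molar mass, density and critical temperature; the choice of benzene is an assumption made only because it follows the rule closely.

```python
# Illustrative check of gamma * V^(2/3) = k * (Tc - 6 K - T) for benzene at 20 C.
# The benzene property values below are standard handbook figures, not article data.
K_EOTVOS = 2.1e-7       # J/(K*mol^(2/3))

def surface_tension(molar_mass_kg, density_kg_m3, t_crit_k, t_k):
    """Predict surface tension (N/m) from the Eötvös/Ramsay–Shields rule."""
    v_molar = molar_mass_kg / density_kg_m3            # molar volume, m^3/mol
    return K_EOTVOS * (t_crit_k - 6.0 - t_k) / v_molar ** (2.0 / 3.0)

gamma = surface_tension(molar_mass_kg=0.07811,          # benzene, 78.11 g/mol
                        density_kg_m3=876.5,            # density at 20 C
                        t_crit_k=562.0,                 # critical temperature
                        t_k=293.15)                     # 20 C
print(f"predicted surface tension of benzene at 20 C: {gamma * 1000:.1f} mN/m")
# Prints roughly 27.7 mN/m; the measured value is about 28.9 mN/m.
```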
Eötvös rule
Physics,Chemistry
525
37,587,600
https://en.wikipedia.org/wiki/P-Course
The P-Course (P stands for Production) is a practical training program about Industrial Engineering implemented to teach the main techniques for industrial work improvement (改善 "kaizen" in Japanese). Although especially aimed at applications in manufacturing, it can also be useful for transferring the core concepts of a systematic improvement approach to people involved in operational activities in non-manufacturing businesses (i.e. services, indirect corporate functions). History This course was born in Japan and was intended for production technicians, so they could best contribute to the rebirth of the manufacturing industry in the postwar period. After World War II it was necessary to reconstruct the foundations of the Japanese industrial system: in May 1946 (year 22 of the Showa era) the Japan Management Association (JMA) decided to start off a course specifically conceived for the development of those skills and knowledge required of production technicians. That was the P-Course. The training format was basically an off-site program. It was held each time in a company that would host the lessons at its shopfloor. It consisted of sessions of intensive work until late evening and it lasted one full month. This formula was based on the scheme of a former practical course held by the Nihon Kōgyō Kyōkai (日本工業協会), which included in-depth study of operational activities and was 2 to 3 months long. The P-Course could therefore be considered a compact version of the original course offered by this body, saving as much as possible the value of the original course. The first and second editions of the P-Course (Production Course) were held at the Kōsoku Kikan Kōgyō (高速機関工業 High Speed Engine Ind.) of Shinagawa, whereas the third edition was held at Hitachi Kōki (日立工機) of Katsuta, near Mito. The first two editions were held by Mr. Ohno Tsuneo, but from the third edition onward the courses were under the responsibility of Mr. Shigeo Shingo, whose personal touch remained unequalled and is still remembered, focusing in particular on the concept of "motion mind". Up to 1956 a total of nineteen (19) editions were held, which allowed the training of 410 participants. This course had been created by the joint effort of JMA and the Japanese Government intending to supply the most promising enterprises with the right tools and operational know-how in order to get the maximum value with the least investment from the resources of a country willing to go a step beyond the post-war reorganization. This orientation was especially shared by one automotive company (Toyota Motor Company), which would soon become a top reference for its best practice in the management of production. Thanks to the great contribution by Mr. Taiichi Ohno – a young mechanical engineer who soon became the vice-president of Toyota – the Toyota Production System (TPS) is the approach to production engineering and organisation set in the 1950s by the said carmaker with the objective of increasing dramatically the efficiency of the company. Based upon Industrial Engineering principles, the TPS introduced a strongly innovative element, which completely overturned the work philosophy. The underlying statement is that «If one wants to reach success in the long run, it is not possible to separate the development of human resources from the development of the production system». 
The P-Course of the first editions consisted of training sessions addressed to groups of twenty people, initially made up half of people from Toyota Motor and half of people from companies of the Toyota group, amongst which – for example – Denso. Mr. Taiichi Ohno adopted these training modules within Toyota. In 1955 the courses were held for those employed at the Koromo plant – the only one existing at the time. The contents of the course were essentially the first elements of Industrial Engineering. Shigeo Shingo had been in charge of the course from 1947 to 1955, for a total of about 80 courses and 3,000 trainees; from 1956 to 1958 Shingo held one course each month at Toyota. To deliver the course Shingo had one assistant, who was for a period Mr Akira Kōdate, nowadays Principal Consultant and Technical Advisor engaged in business management consulting worldwide within the JMA Group. One of the core features of the course has been, since the beginning, the use of a training methodology based on "learning while improving" (not simply the commonly cited "learning by doing"). That is performed by setting the activities at the shopfloor of a real factory and using that place as a classroom and the problems arising with the production of that shopfloor as training material. At the time the course lasted a month, where 1/3 of theory was accompanied by 2/3 of practical application of the concepts learnt through theoretical explanation. Nowadays similar courses are held with the same assumptions, in a shorter lapse of time and larger classrooms, and focused on continuous improvement. One distinctive feature of the P-Course is the capability to train people quickly, fostering their ability to "see" waste (muda 無駄) and guiding people towards autonomous improvement. This happens by pursuing two main objectives on top of others: The transfer of analysis and productive improvement techniques through brief theoretical sessions, focusing on the Industrial Engineering methods of analysis. The objective is to develop the ability to work out real solutions against the root causes generating critical situations / problems; Teaching how to realize the improvement cycle by means of practical and concrete application at the workplace (genba in Japanese). Alternation of theoretical and practical sessions continues to be one of the distinctive and valuable features of the course. References Bibliography JMAグループの原点・The DNA of JMA Group - JMA Group no genten (The origins of JMA Group), Edited by JMA Group renkei sokushin iinkai, Tokyo 2010 - pp. 34–35. Related word list Genba Industrial Engineering Japan Management Association Kaizen Muda Toyota Production System Shigeo Shingo Maintenance
P-Course
Engineering
1,220
54,624,137
https://en.wikipedia.org/wiki/Chaplygin%20Siberian%20Scientific%20Research%20Institute%20Of%20Aviation
Chaplygin Siberian Scientific Research Institute Of Aviation (SibNIA) () is a research institute based in Novosibirsk, Russia and established in 1941. SIBNIA is one of Russia's leading aviation research institutes, with departments devoted to aerodynamics, avionics testing, full-scale fatigue testing, thermal strength testing, fatigue and loading spectra, and dynamic strength testing. Researchers at SIBNIA have published work on topics such as localization of the sources of acoustic emissions in strength testing of aviation materials, fatigue and fracture in steel and concrete structures, piezotransducer amplification, strengths of materials, and metallurgy. Since 1943, over 180 aircraft and 200 aircraft components have been tested at SIBNIA's facilities. SIBNIA came into existence in 1941 when the Central Aerohydrodynamics Institute (TsAGI) was evacuated to Novosibirsk to escape from the German advance on Moscow. Sergey Chaplygin, the second director of TsAGI, died in Novosibirsk on 8 October 1942 and was buried in front of SIBNIA's main administration building. The bulk of TsAGI's personnel returned to Moscow in 1943, leaving behind a cadre of researchers and equipment at what then became the S.A. Chaplygin Siberian Scientific Research Institute of Aviation. References External links Official website Companies based in Novosibirsk Federal State Unitary Enterprises of Russia Ministry of the Aviation Industry (Soviet Union) Science and technology in Siberia Research institutes in Novosibirsk Dzerzhinsky City District, Novosibirsk Research institutes in the Soviet Union Aerospace engineering organizations Aviation research institutes Aviation in the Soviet Union
Chaplygin Siberian Scientific Research Institute Of Aviation
Engineering
341
67,485,154
https://en.wikipedia.org/wiki/Time%20in%20Palau
Time in Palau is given by Palau Time (PWT; UTC+09:00). Palau does not have an associated daylight saving time. Palau Time is equivalent to Japan Standard Time, Korean Standard Time, Pyongyang Time (North Korea), Eastern Indonesia Standard Time, East-Timorese Standard Time, and Yakutsk Time (Russia). History Until the end of 1844, Palau belonged to the Captaincy General of the Philippines, which followed the date of the western hemisphere on its islands. At the end of Monday, 30 December 1844, Palau was transferred to the eastern hemisphere in response to the Philippines' decision to switch sides of the line. This brought Palau closer to Asia (removing Tuesday, 31 December 1844 from the calendar in the process). Before time zones were introduced, every place used local observation of the sun to set its clocks, which meant that every location used a different local mean time based on its longitude. For example, Koror, the largest city of Palau at the time, at longitude 134°29′E, had a local time equivalent to GMT-15:02:04 under the date of the western hemisphere and GMT+08:57:56 under that of the eastern hemisphere. In 1901, "Palau Time" (PWT) was established as GMT+09:00. IANA time zone database The IANA time zone database gives Palau one time zone, Pacific/Palau. References Time by country Geography of Palau
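The local-mean-time figures quoted above for Koror can be checked with a short calculation (not part of the article): mean solar time leads GMT by 4 minutes per degree of east longitude, and the western-hemisphere figure is the same instant reckoned one calendar day earlier.

```python
# Illustrative check of the local-mean-time offsets quoted above for Koror (134°29′E).
# Mean solar time leads GMT by 240 seconds (4 minutes) per degree of east longitude.

def local_mean_time_offset(longitude_deg_east):
    """Return the offset from GMT, in seconds, for a given east longitude."""
    return longitude_deg_east * 240.0

def fmt(seconds):
    sign = "+" if seconds >= 0 else "-"
    s = abs(int(round(seconds)))
    return f"GMT{sign}{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

koror = 134 + 29 / 60                    # 134°29′E
east = local_mean_time_offset(koror)     # eastern-hemisphere reckoning
west = east - 24 * 3600                  # same instant under the western-hemisphere date
print(fmt(east), fmt(west))              # expected: GMT+08:57:56 GMT-15:02:04
```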
Time in Palau
Physics
310
32,039,956
https://en.wikipedia.org/wiki/Koch%E2%80%93Pasteur%20rivalry
The French Louis Pasteur (1822–1895) and German Robert Koch (1843–1910) are the two greatest figures in medical microbiology and in establishing acceptance of the germ theory of disease (germ theory). In 1882, fueled by national rivalry and a language barrier, the tension between Pasteur and the younger Koch erupted into an acute conflict. Pasteur had already discovered molecular chirality, investigated fermentation, refuted spontaneous generation, inspired Lister's introduction of antisepsis to surgery, introduced pasteurization to France's wine industry, answered the silkworm diseases blighting France's silkworm industry, attenuated a Pasteurella species of bacteria to develop vaccine to chicken cholera (1879), and introduced anthrax vaccine (1881). Koch had transformed bacteriology by introducing the technique of pure culture, whereby he established the microbial cause of the disease anthrax (1876), had introduced both staining and solid culture plates to bacteriology (1881), had identified the microbial cause of tuberculosis (1882), had incidentally popularized Koch's postulates for identifying the microbial cause of a disease, and would later identify the microbial cause of cholera (1883). Although Koch briefly, and thereafter his bacteriological followers, regarded a bacterial species' properties as unalterable, Pasteur's modification of virulence to develop vaccine demonstrated this doctrine's falsity. At an 1882 conference, a mistranslated term from French to German during Pasteur's lecture triggered Koch's indignation, whereupon Koch's two bacteriologist colleagues, Friedrich Loeffler and Georg Gaffky, published denigration of the entirety of Pasteur's research on anthrax since 1877. Tensions between France and Germany Germany had unified by way of its victory in the Franco-Prussian War (1870–71), seizing Alsace-Lorraine from France. Pasteur was a professor at the University of Strasbourg, located in Alsace, where he married the daughter of the rector. Jean Baptiste Pasteur, the only son of Louis and Marie Pasteur, was a soldier in the Franco-Prussian War. The tone set by this war contributed to the rivalry between Koch and Pasteur. The "German Problem", as Germany increasingly gained scientific, technological, and industrial dominance, fed tensions among European nations. Germ theory's applications were embedded in the heightening quest by France, Germany, Britain, and Italy to colonize Africa and Asia with the aid of tropical medicine, a new variant of colonial medicine, while medical scientists in respective nations vied to lead advances. Koch's bacteriology and anthrax In 1863, influenced by Pasteur's research on fermentation, fellow Frenchman Casimir Davaine mostly explained the cause of anthrax, but Davaine's explanation was opposed by those who opposed the idea that infection with a microorganism could explain it. In 1840, Jakob Henle had proposed microorganism infections caused diseases, and in 1875 German botanist Ferdinand Cohn weighed in on a controversy in microbiology by declaring that the elementary unit was the cell and that each form of bacteria was constant and naturally divided from the other forms. Influenced by Henle and by Cohn, Koch developed a pure culture of the bacteria described by Davaine, traced its spore stage, inoculated it into animals, and showed it caused anthrax. Pasteur called this a "remarkable achievement". In pure culture, bacteria tend to keep constant traits, and Koch reported having already observed constancy. 
Pasteur undertook investigation, yet gave much credit to Davaine. Meanwhile, Pasteur's researchers always reported variation in their cultures. In 1879, Henri Toussaint identified a bacterial species involved in chicken cholera and named the genus in honor of Pasteur, Pasteurella. In Pasteur's laboratory, a culture of Pasteurella multocida was left out over a weekend exposed to air, and Pasteur and Emile Roux noticed upon return to the laboratory that its virulence to chickens was diminished. Pasteur applied the discovery to develop chicken cholera vaccine, introduced in a public experiment, an empirical challenge to the stance of Koch's bacteriologists that bacterial traits were unalterable. Pasteur's attenuation and vaccines From 1878 to 1880, when publishing on anthrax, Pasteur referred to the bacteria by the name given it by Frenchman Davaine, but in one footnote called it "Bacillus anthracis of the Germans". In July 1880 Toussaint reported developing a technique of chemical deactivation to produce anthrax vaccine that successfully protected dogs and cattle, and was praised by the Academy of Science, but Pasteur attacked the feat—chemical deactivation and not virulence attenuation to make a vaccine—as impossible. Pasteur soon introduced his own anthrax vaccine in a highly successful public experiment, and entered commerce with it. Pasteur was criticized by Koch and colleagues. (Pasteur had not used attenuation, but secretly used Toussaint's technique.) Microbe hunting, cholera, and public health In 1883, responding to a cholera epidemic in Alexandria, Egypt, both Pasteur and Koch sent missions vying to identify its cause. Koch returned victorious, whereupon Pasteur switched research direction and began development of rabies vaccine. As to public health, Koch's bacteriologists feuded with Max von Pettenkofer—whose miasmatic theory claimed the bacteria was but one causal factor among at least several—but von Pettenkoffer stubbornly opposed water treatment, and the massive cholera epidemic in Hamburg, Germany, in 1892 devastated von Pettenkofer's position, and German public health was grounded on Koch's bacteriology. Meanwhile, Pasteur led introduction of pasteurization in France. Rabies vaccine and Pasteur Institute Rabies, uncommon but excruciating and almost invariably fatal, was dreaded. Amid anthrax vaccine's success, Pasteur introduced rabies vaccine (1885), the first human vaccine since Jenner's smallpox vaccine (1796). On 6 July 1885, the vaccine was tested on 9-year old Joseph Meister who had been bitten by a rabid dog but failed to develop rabies, and Pasteur was called a hero. (Even without vaccination, not everyone bitten by a rabid dog develops rabies.) After other apparently successful cases, donations poured in from across the globe, funding the establishment of the Pasteur Institute, the globe's first biomedical institute, which opened in Paris in 1888. Pasteur Institute trained military physicians in colonial medicine, although French government soon took over this role. The success of Pasteur's modification of bacterial virulence inspired confidence in the universality of Pasteurian science, though Pasteur's researchers preferred the term microbiology over the term bacteriology. Koch discouraged use of rabies vaccine, whose production later became a premise for opening Pasteur Institutes abroad, as in Shanghai, China. 
The first overseas Pasteur Institute was opened by Albert Calmette in Saigon in French Indochina in 1891, although Pasteur's nephew Adrien Loir was already planning to open one in Australia. Tuberculin and Robert Koch Institute In 1882, Koch reported identification of the tubercle bacillus as the cause of tuberculosis, cementing germ theory. Koch took his research into a new direction—applied research—to develop a tuberculosis treatment and use the profits to found his own research institute, autonomous from government. In 1890 Koch introduced the intended drug, tuberculin, but it soon proved ineffective, and accounts of deaths followed in the news press. Amid Koch's reluctance to disclose tuberculin's formula, Koch's reputation sustained damage, but Koch retained lasting acclaim and received the 1905 Nobel Prize in Physiology or Medicine "for his investigations and discoveries in relation to tuberculosis". Koch accepted the government's offer to direct the Institute for Infectious Diseases (1891), in Berlin, a prestigious position but not the kind of institute that Koch had sought. It was later renamed the Robert Koch Institute, which remains a government organization. American medicine embraces Koch The monomorphist doctrine of Koch's bacteriologists suggested public health interventions to eliminate bacteria, whereas Pasteur's acceptance of variation suggested attenuating bacterial virulence in the laboratory to develop vaccines. Although inspired by Pasteur's applications suggesting medicine's potential, American physicians traveled to Germany to learn Koch's bacteriology as basic science, though Pasteur emphasized the fuzzy boundary between basic science and applied science. From 1876 to 1878, the American William Henry Welch trained in pathology in Germany, and in 1879 opened America's first scientific laboratory, a pathology laboratory in Bellevue's medical school in New York. While in Germany, Welch had met John Shaw Billings who had been appointed by Daniel Coit Gilman—the first president of the newly forming Johns Hopkins University—to plan Hopkins' hospital and medical school. Named the medical school's first dean in 1883, Welch promptly traveled for training in Koch's bacteriology, and returned to America eager to transform medicine with the "secrets of nature". Hopkins medical school opened in 1894 with Welch emphasizing Koch's bacteriology, which became the foundation of modern medicine. As "dean of American medicine", William H Welch became the first scientific director of Rockefeller Institute for Medical Research (1901), and appointed his former Hopkins student Simon Flexner the first director of pathology and bacteriology laboratories. Aided by the "Flexner report", published in 1910 while Welch was president of the American Medical Association, Welch's view of science and medicine became the national standard, a transformation of American medical education completed around 1930. As first dean of America's first public health school, founded in 1916 at Hopkins, Welch set the standard for public health, and with Simon Flexner exported the Hopkins model internationally. Pasteur's image ascends Although Pasteur died in 1895, eventually over thirty official Pasteur Institutes opened across the globe. Pasteur's team had planned in 1885 to open a rabies-treatment facility in St. Louis, Missouri, and an American Pasteur Institute in New York City, but the plans were abandoned, and America has never hosted an official Pasteur Institute. 
A number of American copycats appeared, however, starting with "Chicago Pasteur Institute" in 1890, and "New York Pasteur Institute" in 1891. In 1897 a "Pasteur Institute" opened in Baltimore, in 1900 in Pittsburgh and St. Louis, in 1903 in Ann Arbor and Austin, and in 1904 perhaps in Philadelphia. In 1908, Georgia Department of Public Health opened a "Pasteur Department" in Atlanta, California State Hygienic Laboratory opened a "Pasteur Division" in Berkeley, and a "Pasteur Institute" opened in Washington, D.C. In 1900, Paul Gibier, the French medical scientist who opened "New York Pasteur Institute", accidentally died, but his nephew, George Gibier Rambaud, continued it on reduced scale until he closed it when US Medical Corps commissioned him overseas in 1918. While MDs ascended in American public health, it was thought that "the greatest contribution of all, the foundation upon which modern sanitary science is built, was made by Pasteur." Nationalism flares Koch was celebrated by the American medical community, including by Welch, when at last Koch visited America in 1908. Soon, however, America was influenced by the British and French view that although their denizens appreciated Germany's progress in science and arts, Germany was elitist and dismissive while socially and politically antiquated, so authoritarian and aggressive as to resemble medieval tyranny. In 1917, when America entered World War I (1914–18), US government seized German-owned property and assets, including Bayer AG's American trademarks and the 80% of Merck & Co's shares owned by George Merck. Welch exhibited gratuitous anti-German bias despite the debt of his own career, thus American medicine, to Germany, especially to Koch's bacteriology. After World War II (1939–45), and more of the "German Problem", Merck & Co became the global leader in vaccinology. Two legacies Tuberculin's main use rapidly became in determining M tuberculosis infection—a use remaining till today—but this use soon revealed that in London, 9 of 10 individuals were infected, whereas only 1 in 10 of the infected developed the disease. In 1901 at the London Congress on Tuberculosis, Koch stated on theoretical grounds that M bovis, which infects cows, was not transmissible to humans. British attendees disagreed, and later Theobald Smith and the English Royal Commission empirically established that M bovis was transmissible and could result in human disease. Though widely considered ineffective as treatment, tuberculin might have remained in use for this purpose until the 1940s and maybe had some effectiveness. Milk pasteurization became popular in America around 1920. In 1921 Albert Calmette and Camille Guérin of Pasteur Institute introduced tuberculosis vaccine, whose virulence of strains varied in the late 1920s. BCG vaccine was not used in the public health of America, which virtually eliminated tuberculosis without it. BCG vaccine's effectiveness preventing tuberculosis remains uncertain, but appears to confer nonspecific survival gains, as perhaps by preventing leprosy, and is a cancer treatment. Pasteur had highlighted a new threat—microorganisms benign to humans passing among and multiplying in nonhuman animals while gaining new virulence for humans—that is thought to loom and to have approximately materialized with AIDS. 
Although Koch's postulates are often inapplicable, they remain heuristic, and the authority of "fulfilling Koch's postulates" is still invoked in medical science, though often in modified form, as in the identification of HIV-1 as the cause of AIDS or the identification of SARS coronavirus as the cause of SARS. Germ theory's stance that the "germ" was the disease's necessary and sufficient cause—the single factor both required and complete to result in the disease—proved false. Germ theory gradually evolved to include other factors, whereupon germ theory resembled miasmatic theory, which had had to recognize bacteria as a causal factor, and so the two competing explanations merged without true, decisive victor. Twentieth-century philosophy, inspired by revolutions in physics, establishment of molecular biology, and advances in epidemiology, revealed that any claim of a single causal factor both necessary (required) and sufficient (complete), the cause, is untenable. French-born microbiologist René Dubos, a biographer of Pasteur, discussed tuberculosis to illustrate disease's social causes and to illustrate the failure of germ theory, whose apparent successes were aided by improvements in nutrition and living conditions but sparked scientific research that brought a wealth of new understandings. References Scientific rivalry Microbiology 19th century in science France–Germany relations Robert Koch
Koch–Pasteur rivalry
Chemistry,Biology
3,122
502,579
https://en.wikipedia.org/wiki/Octaeteris
In astronomy, an octaeteris (, plural: octaeterides) is the period of eight solar years after which the moon phase occurs on the same day of the year plus one or two days. This period is also in a very good synchronicity with five Venusian visibility cycles (the Venusian synodic period) and thirteen Venusian revolutions around the Sun (Venusian sidereal period). This means, that if Venus is visible beside the Moon, after eight years the two will be again close together near the same date of the calendar. {| class="wikitable" |+ Comparison of differing parts of the octaeteris ! Astronomical period ! Number in anoctaeteris ! Overall duration(Earth days) |- | Tropical year |align=right| 8     |   2 921.93754 |- | Synodic lunar month |align=right| 99     |   2 923.528230 |- | Sidereal lunar month |align=right| 107     |   2 923.417787 |- | Venusian synodic period |align=right| 5     |   2 919.6 |- | Venusian sidereal period |align=right| 13     |   2 921.07595 |} The octaeteris, also known as oktaeteris, was noted by Cleostratus in ancient Greece as a day cycle. The octaeteris is the calendar used for the Olympic games; if one Olympiad was 50 months long, the next would be 49 lunar months long. This octaeteris calendar is used for the Olympic dial of the Antikythera mechanism, to determine the time of the Olympic games and other Greek festivities. The 8 year short lunisolar cycle was probably known to many ancient cultures. The mathematical proportions of the octaeteris cycles were The Three Kings panel also contains more accurate ratios, ratios related to other planets, and apparent astronomical symbolism. See also Metonic cycle Eclipse cycle References Ancient Greek astronomy Calendars Time in astronomy
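The near-commensurability described above can be checked with a short calculation (not part of the article), using standard values for the tropical year, the synodic month and the Venusian synodic period; the results reproduce the figures in the comparison table.

```python
# Illustrative arithmetic check of the octaeteris: how closely eight tropical years
# match 99 synodic months and five Venus synodic periods. The constants are standard
# astronomical values, consistent with the table above.
tropical_year = 365.2421897   # days
synodic_month = 29.530589     # days
venus_synodic = 583.92        # days

eight_years = 8 * tropical_year
print(f"8 tropical years      : {eight_years:.2f} days")
print(f"99 synodic months     : {99 * synodic_month:.2f} days "
      f"({99 * synodic_month - eight_years:+.2f} days)")
print(f"5 Venus synodic cycles: {5 * venus_synodic:.2f} days "
      f"({5 * venus_synodic - eight_years:+.2f} days)")
# The moon-phase mismatch of about +1.6 days is the "plus one or two days" noted above.
```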
Octaeteris
Physics,Astronomy
420
54,021,539
https://en.wikipedia.org/wiki/OnePlus%205
The OnePlus 5 (also abbreviated as OP5) is a smartphone made by OnePlus. It is the successor to the OnePlus 3T, which was released in 2016. The OnePlus 5 was officially unveiled during a keynote on 20 June 2017 and first released on June 20, 2017. It was succeeded by the OnePlus 5T five months later on November 21, 2017. History The Verge announced in May 2017 that the successor to the OnePlus 3 would be known as the OnePlus 5. While OnePlus did not officially state why the number four was skipped, it was speculated that it was due to the number four being considered unlucky (tetraphobia) in China. According to India Today, the "4" was skipped because OnePlus 2 was not very successful and now OnePlus considers even numbers unlucky. OnePlus confirmed that the handset would feature a Snapdragon 835 processor prior to launch. OnePlus also noted its work with the image processing firm DxO to improve the camera on the device. Specifications Hardware The OnePlus 5 has an anodized metal back like its predecessors, the OnePlus 3 and 3T. The handset is currently available in Slate Gray (black/gray) and Soft Gold (white/gold) colours. The limited edition Soft Gold colour was released on 7 August 2017. A Midnight Black (black/matte black) colour was also manufactured, but later removed from their website. Another special edition, called JCC+, made in association with Castelbajac is exclusively available in Europe. It features a Slate Gray back with handwriting along with colourful hardware buttons. The device comes with a Qualcomm Snapdragon 835 chipset clocked at 2.45 GHz, with up to 8 GB RAM and 128 GB storage. It has a 3,300 mAh battery with OnePlus' proprietary Dash Charge technology and a 1080p AMOLED display with DCI-P3 wide colour gamut. Following criticism of prior models' camera quality, OnePlus implemented an electronically stabilized 16 MP Sony Exmor IMX398 camera module along with a 20 MP Sony Exmor IMX350 telephoto lens, capable of producing a bokeh effect. The front camera uses a 16 MP IMX371 lens. OnePlus claims that the dual camera enables a 1.6X optical zoom; however, independent testing has only reproduced a 1.33X optical zoom. DxOMark gave the camera a rating of 87, which is above that of the iPhone 7. Software The company released the device tree and kernel sources a few days prior to the phone's launch. It has TWRP and root access. OnePlus 5 was launched with OxygenOS 4.5 based on Android 7.1.1. One feature in OxygenOS is the reading mode. OxygenOS 5.0 (later updated to 5.0.1) was released in late-2017, bringing Android 8.0 Oreo to the OnePlus 5. Among the changes are an updated launcher, redesigned camera UI, and several system optimizations. The "Face Unlock" facial recognition feature, originally shipped with the OnePlus 5T, was added to the OnePlus 5 in OxygenOS 5.0.2. The device was updated to OxygenOS 10 based on Android 10. There will be no official Android 11 for the Oneplus 5. However, the OnePlus 5 has received unofficial software support with custom ROMs. Network compatibility The OnePlus 5 includes only a single variant for cellular networks worldwide. Reception Critical reception The majority of the initial reviews of the phone were positive. Engadget praised the phone's specifications in its price bracket. The Verge concurred and noted its departure from previous OnePlus phone designs, but pointed out design similarities with the iPhone 7 Plus. 
Wired noted that despite the phone's camera specifications, the photo quality was mediocre; however, it commended the smoothness and battery life of the device. XDA Developers noted that the phone was locking the CPU and GPU at their maximum clock speeds while in certain games and benchmarks in order to inflate their benchmark scores, resulting in the phone reaching external temperatures as high as 50 °C. OnePlus co-founder, Carl Pei, defended the actions by saying that users "want to see the full potential of their device without interference from tampering" and that "Every OEM has proprietary performance profiles for their devices". Known issues Users noted that the handset had a "jelly scrolling effect" due to an inverted display. OnePlus noted that this was not a defect. Some phones were known to reboot when dialing 911, preventing users from calling emergency services. OnePlus quickly issued an update (OxygenOS 4.5.6) to fix this issue, and claimed that it happened as a result of not using the stock dialing app. OnePlus later said that the problem was "related to a modem memory usage issue", and that it was only a "random occurrence" for some users on VoLTE networks. Some users, after updating the phone from Android 9 to Android 10, experience a failure of the fingerprint sensor. OnePlus said this will be addressed in an upcoming patch. Sales The OnePlus 5 became the company's fastest selling smartphone, according to CEO Carl Pei. It is also selling better than the previous generation's OnePlus 3T. During the week leading up to the device being revealed, a television advertisement was shown in the final of the 2017 ICC Champions Trophy between Pakistan and India, starring OnePlus brand ambassador and Indian film star Amitabh Bachchan. OnePlus 5 went on sale in India on 22 June 2017 on Amazon, the OnePlus India online store, and the OnePlus Experience store in Bengaluru. The Soft Gold variant went on sale on August 9, 2017 References External links OnePlus mobile phones Ubuntu Touch devices Mobile phones introduced in 2017 Discontinued flagship smartphones Mobile phones with multiple rear cameras Mobile phones with 4K video recording
OnePlus 5
Technology
1,275
23,798,069
https://en.wikipedia.org/wiki/Steam%20hammer
A steam hammer, also called a drop hammer, is an industrial power hammer driven by steam that is used for tasks such as shaping forgings and driving piles. Typically the hammer is attached to a piston that slides within a fixed cylinder, but in some designs the hammer is attached to a cylinder that slides along a fixed piston. The concept of the steam hammer was described by James Watt in 1784, but it was not until 1840 that the first working steam hammer was built to meet the needs of forging increasingly large iron or steel components. In 1843 there was an acrimonious dispute between François Bourdon of France and James Nasmyth of Britain over who had invented the machine. Bourdon had built the first working machine, but Nasmyth claimed it was built from a copy of his design. Steam hammers proved to be invaluable in many industrial processes. Technical improvements gave greater control over the force delivered, greater longevity, greater efficiency and greater power. A steam hammer built in 1891 by the Bethlehem Iron Company delivered a 125-ton blow. In the 20th century steam hammers were gradually displaced in forging by mechanical and hydraulic presses, but some are still in use. Compressed air power hammers, descendants of the early steam hammers, are still manufactured. Mechanism A single-acting steam hammer is raised by the pressure of steam injected into the lower part of a cylinder and drops under gravity when the pressure is released. With the more common double-acting steam hammer, steam is also used to push the ram down, giving a more powerful blow at the die. The weight of the ram may range from . The piece being worked is placed between a bottom die resting on an anvil block and a top die attached to the ram (hammer). Hammers are subject to repeated concussion, which could cause fracturing of cast iron components. The early hammers were therefore made from a number of parts bolted together. This made it cheaper to replace broken parts, and also gave it a degree of elasticity that made fractures less likely.A steam hammer may have one or two supporting frames. The single frame design lets the operator move around the dies more easily, while the double frame can support a more powerful hammer. The frame(s) and the anvil block are mounted on wooden beams that protect the concrete foundations by absorbing the shock. Deep foundations are needed, but a large steam drop hammer will still shake the building that holds it. This may be solved with a counterblow steam hammer, in which two converging rams drive the top and bottom dies together. The upper ram is driven down and the lower ram is pulled or driven up. These hammers produce a large impact and can make large forgings. They can be installed with smaller foundations than anvil hammers of similar force. Counterblow hammers are not often used in the United States, but are common in Europe. With some early steam hammers an operator moved the valves by hand, controlling each blow. With others the valve action was automatic, allowing for rapid repetitive hammering. Automatic hammers could give an elastic blow, where steam cushioned the piston towards the end of the down stroke, or a dead blow with no cushioning. The elastic blow gave a quicker rate of hammering, but less force than the dead blow. Machines were built that could run in either mode according to the job requirement. The force of the blow could be controlled by varying the amount of steam introduced to cushion the blow. 
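As a rough, illustrative calculation (not from the article), the sketch below compares the energy per blow of a plain gravity drop with that of a double-acting stroke in which steam also pushes the ram down; the ram mass, stroke, steam pressure and piston area are assumed round numbers, not data about any particular hammer.

```python
# Rough comparison of energy per blow: gravity drop versus double-acting stroke in
# which steam pressure on the piston adds to the ram's weight. All figures assumed.
G = 9.81                                   # m/s^2

def blow_energy(ram_mass_kg, stroke_m, steam_pressure_pa=0.0, piston_area_m2=0.0):
    """Approximate energy delivered per blow, in joules (losses ignored)."""
    gravity_force = ram_mass_kg * G
    steam_force = steam_pressure_pa * piston_area_m2   # extra downward force if double-acting
    return (gravity_force + steam_force) * stroke_m

mass, stroke = 2000.0, 1.2                 # assumed 2-tonne ram, 1.2 m stroke
single = blow_energy(mass, stroke)
double = blow_energy(mass, stroke, steam_pressure_pa=5e5, piston_area_m2=0.05)
print(f"gravity drop  : {single / 1000:.1f} kJ")   # ~23.5 kJ
print(f"double-acting : {double / 1000:.1f} kJ")   # ~53.5 kJ
```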
A modern air/steam hammer can deliver up to 300 blows per minute. History Concept The possibility of a steam hammer was noted by James Watt (1736–1819) in his 28 April 1784 patent for an improved steam engine. Watt described "Heavy Hammers or Stampers, for forging or stamping iron, copper, or other metals, or other matters without the intervention of rotative motions or wheels, by fixing the Hammer or Stamper to be so worked, either directly to the piston or piston rod of the engine." Watt's design had the cylinder at one end of a wooden beam and the hammer at the other. The hammer did not move vertically, but in the arc of a circle. On 6 June 1806 W. Deverell, engineer of Surrey, filed a patent for a steam-powered hammer or stamper. The hammer would be welded to a piston rod contained in a cylinder. Steam from a boiler would be let in under the piston, raising it and compressing the air above it. The steam would then be released and the compressed air would force the piston down. In August 1827 John Hague was awarded a patent for a method of working cranes and tilt-hammers driven by a piston in an oscillating cylinder where air power supplied the motive force. A partial vacuum was made in one end of a long cylinder by an air pump worked by a steam engine or some other power source, and atmospheric pressure drove the piston into that end of the cylinder. When a valve was reversed, the vacuum was formed in the other end and the piston forced in the opposite direction. Hague made a hammer to this design for planishing frying pans. Many years later, when discussing the advantages of air over steam for delivering power, it was recalled that Hague's air hammer "worked with such an extraordinary rapidity that it was impossible to see where the hammer was in working, and the effect was seemed more like giving one continuous pressure." However, it was not possible to regulate the force of the blows. Invention It seems probable that the Scottish Engineer James Nasmyth (1808–1890) and his French counterpart François Bourdon (1797–1865) reinvented the steam hammer independently in 1839, both trying to solve the same problem of forging shafts and cranks for the increasingly large steam engines used in locomotives and paddle boats. In Nasmyth's 1883 "autobiography", written by Samuel Smiles, he described how the need arose for a paddle shaft for Isambard Kingdom Brunel's new transatlantic steamer SS Great Britain, with a diameter shaft, larger than any that had been previously forged. He came up with his steam hammer design, making a sketch dated 24 November 1839, but the immediate need disappeared when the practicality of screw propellers was demonstrated and the Great Britain was converted to that design. Nasmyth showed his design to all visitors. Bourdon came up with the idea of what he called a "Pilon" in 1839 and made detailed drawings of his design, which he also showed to all engineers who visited the works at Le Creusot owned by the brothers Adolphe and Eugène Schneider. However, the Schneiders hesitated to build Bourdon's radical new machine. Bourdon and Eugène Schneider visited the Nasmyth works in England in the middle of 1840, where they were shown Nasmyth's sketch. This confirmed the feasibility of the concept to Schneider. In 1840 Bourdon built the first steam hammer in the world at the Schneider & Cie works at Le Creusot. It weighed and lifted to . The Schneiders patented the design in 1841. Nasmyth visited Le Creusot in April 1842. 
By his account, Bourdon took him to the forge department so he might, as he said, "see his own child". Nasmyth said "there it was, in truth–a thumping child of my brain!" After returning from France in 1842 Nasmyth built his first steam hammer in his Patricroft foundry in Manchester, England, adjacent to the (then new) Liverpool and Manchester Railway and the Bridgewater Canal. In 1843 a dispute broke out between Nasmyth and Bourdon over priority of invention of the steam hammer. Nasmyth, an excellent publicist, managed to convince many people that he was the first. Early improvements Nasmyth's first steam hammer, described in his patent of 9 December 1842, was built for the Low Moor Works at Bradford. They rejected the machine, but on 18 August 1843 accepted an improved version with a self-acting gear. Robert Wilson (1803–1882), who had also invented the screw propeller and was manager of Nasmyth's Bridgewater works, invented the self-acting motion that made it possible to adjust the force of the blow delivered by the hammer – a critically important improvement. An early writer said of Wilson's gear, "... I would be prouder to say that I was the inventor of that motion, then to say I had commanded a regiment at Waterloo..." Nasmyth's steam hammers could now vary the force of the blow across a wide range. Nasmyth was fond of breaking an egg placed in a wineglass without breaking the glass, followed by a blow that shook the building. By 1868 engineers had introduced further improvements to the original design. John Condie's steam hammer, built for Fulton in Glasgow, had a stationary piston and a moving cylinder to which the hammer was attached. The piston was hollow, and was used to deliver steam to the cylinder and then remove it. The hammer weighed 6.5 tons with a stroke of . Condie steam hammers were used to forge the shafts of Isambard Kingdom Brunel's SS Great Eastern. A high-speed compressed-air hammer was described in The Mechanics' Magazine in 1865, a variant of the steam hammer for use where steam power was not available or a very dry environment was required. The Bowling Ironworks steam hammers had the steam cylinder bolted to the back of the hammer, thus reducing the height of the machine. These were designed by John Charles Pearce, who took out a patent for his steam hammer design several years before Nasmyth's patent expired. Marie-Joseph Farcot of Paris proposed a number of improvements including an arrangement so the steam acted from above, increasing the striking force, improved valve arrangements and the use of springs and material to absorb the shock and prevent breakage. John Ramsbottom invented a duplex hammer, with two rams moving horizontally towards a forging placed between them. Using the same principles of operation, Nasmyth developed a steam-powered pile-driving machine. At its first use at Devonport, a dramatic contest was carried out. His engine drove a pile in four and half minutes compared with the twelve hours that the conventional method required. It was soon found that a hammer with a relatively short fall height was more effective than a taller machine. The shorter machine could deliver many more blows in a given time, driving the pile faster even though each blow was smaller. It also caused less damage to the pile. Riveting machines designed by Garforth and Cook were based on the steam hammer. 
The catalog for the Great Exhibition held in London in 1851 said of Garforth's design, "With this machine, one man and three boys can rivet with perfect ease, and in the firmest manner, at the rate of six rivets per minute, or three hundred and sixty per hour." Other variants included crushers to help extract iron ore from quartz and a hammer to drive holes in the rock of a quarry to hold gunpowder charges. An 1883 book on modern steam practice said Later development Schneider & Co. built 110 steam hammers between 1843 and 1867 with different sizes and strike rates, but trending towards ever larger machines to handle the demands of large cannon, engine shafts and armor plate, with steel increasingly used in place of wrought iron. In 1861 the "Fritz" steam hammer came into operation at the Krupp works in Essen, Germany. With a 50-ton blow, for many years it was the most powerful in the world. There is a story that the Fritz steam hammer took its name from a machinist named Fritz whom Alfred Krupp presented to the Emperor William when he visited the works in 1877. Krupp told the emperor that Fritz had such perfect control of the machine that he could let the hammer drop without harming an object placed on the center of the block. The Emperor immediately put his watch, which was studded with diamonds, on the block and motioned Fritz to start the hammer. When the machinist hesitated, Krupp told him "Fritz let fly!" He did as he was told, the watch was unharmed, and the emperor gave Fritz the watch as a gift. Krupp had the words "Fritz let fly!" engraved on the hammer. The Schneiders eventually saw a need for a hammer of colossal proportions. The Creusot steam hammer was a giant steam hammer built in 1877 by Schneider and Co. in the French industrial town of Le Creusot. With the ability to deliver a blow of up to 100 tons, the Creusot hammer was the largest and most powerful in the world. A wooden replica was built for the Exposition Universelle (1878) in Paris. In 1891 the Bethlehem Iron Company of the United States purchased patent rights from Schneider and built a steam hammer of almost identical design but capable of delivering a 125-ton blow. Eventually the great steam hammers became obsolete, displaced by hydraulic and mechanical presses. The presses applied force slowly and at a uniform rate, ensuring that the internal structure of the forging was uniform, without hidden internal flaws. They were also cheaper to operate, not requiring steam to be blown off, and much cheaper to build, not requiring huge strong foundations. The 1877 Creusot steam hammer now stands as a monument in the Creusot town square. An original Nasmyth hammer stands facing his foundry buildings (now a "business park"). A larger Nasmyth & Wilson steam hammer stands in the campus of the University of Bolton. Steam hammers continue to be used for driving piles into the ground. Steam supplied by a circulating steam generator is more efficient than air. However, today compressed air is often used rather than steam. As of 2013 manufacturers continued to sell air/steam pile-driving hammers. Forging services suppliers also continue to use steam hammers of varying sizes based on classical designs. See also Trip hammer Power hammer Frohnauer Hammer Mill References Citations Sources External links Video of steam hammer in modern use at Scot Forge Hammers Metalworking tools Industrial machinery Scottish inventions French inventions Steam power
Steam hammer
Physics,Engineering
2,900
25,447,274
https://en.wikipedia.org/wiki/LEED%20Professional%20Exams
The LEED Professional Exams are administered by the Green Business Certification Inc. (GBCI) for professionals seeking to earn credentials and certificates. The exams test knowledge based on the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED) Rating Systems. LEED Professional Credentials The LEED professional credentials were developed to encourage green building professionals to maintain and advance their knowledge and expertise. A LEED professional credentials provides employers, policymakers, and other stakeholders with assurances of an individual's current level of competence and is the mark of the most qualified, educated, and influential green building professionals in the marketplace. All LEED professional credentials require adherence to the LEED Professional Disciplinary and Exam Appeals Policy and require ongoing credential maintenance requirements either through continuing education and practical experience or through biennial retesting. Starting in 2009, newly credentialed individuals must maintain their credential on a two-year cycles; if not, they expire. There are three tiers in the LEED professional credentials program: Tier 1: LEED Green Associate Tier 2: LEED AP with specialty Tier 3: LEED Fellow Additionally, the LEED AP exam was offered from 2001 to June 30, 2009. This credential has been grandfathered in, does not require credential maintenance, and does not expire. LEED Green Associate For professionals who want to demonstrate green building expertise in non-technical fields of practice, GBCI has created the LEED Green Associate credential, which denotes basic knowledge of green design, construction, and operations. The eligibility requirements for the LEED Green Associate exam no longer require candidates to have experience in the form of EITHER documented involvement on a LEED-registered project OR employment (or previous employment) in a sustainable field of work OR engagement in (or completion of) an education program that addresses green building principles. Candidates are still required to agree to the Disciplinary and Exam Appeals Policy and Credential Maintenance Program and submit to an application audit. The LEED Green Associate exam consists of 100 randomly delivered questions which must be completed in 2 hours. The content of the exam focuses on the LEED project process (including integrated design), core sustainability concepts, green building terminology, and various aspects of the LEED rating systems. The fees associated with the LEED Green Associate are a $50 application fee, a $200 exam fee (per exam appointment) for USGBC national members and full-time students or $250 exam fee (per exam appointment) for all others, and an $85 biennial CMP renewal fee. LEED AP with Specialty The LEED AP (Accredited Professional) credential signifies an advanced depth of knowledge in green building practices; it also reflects the ability to specialize in a particular LEED Rating System. The LEED AP exam is divided into two parts. The first part is the LEED Green Associate exam, which demonstrates general knowledge of green building practices. The second part is a specialty exam based on one of the LEED Reference Guides. 
The specialties are: LEED AP Building Design + Construction LEED AP Homes LEED AP Interior Design + Construction LEED AP Neighborhood Development LEED AP Operations + Maintenance Candidates are required to agree to the Disciplinary and Exam Appeals Policy and Credential Maintenance Program and submit to an application audit. The LEED AP exams consist of two parts, the LEED Green Associate exam and the applicable LEED AP specialty exam; each part contains 100 randomly delivered multiple choice questions and each part must be completed in 2 hours. Individuals must score at least 170 out of 200 in order to pass. While the LEED Green Associate focuses on concepts and terminology, the LEED AP with Specialty exam tests a candidate's in-depth understanding of one of the five main rating system categories. Candidates have to memorize performance thresholds (percentages of energy savings for example) and perform calculations during the exam. The fees associated with the LEED AP exams are a $100 application fee and either a $300 exam fee (per exam appointment) for USGBC national members or a $450 exam fee (per exam appointment) for non-members. For existing LEED Green Associates that wish to take the specialty exam only, there is either a $150 exam fee (per exam appointment) for USGBC national members or a $250 exam fee (per exam appointment) for non-members. There is also an $85 biennial CMP renewal fee. LEED Fellow LEED Fellows are a highly accomplished class of individuals nominated by their peers and distinguished by a minimum of 10 or more years of professional green building experience. LEED Fellows must also have achieved a LEED AP with specialty credential. LEED Professional Certificates There are currently two LEED Professional Certificates: LEED for Homes Green Rater Green Classroom Professional See also U.S. Green Building Council Green Building Certification Institute LEED BREEAM References External links LEED Exam Prep USGBC official website GBCI official website Sustainable building
LEED Professional Exams
Engineering
1,013
1,605,292
https://en.wikipedia.org/wiki/Data%20Transformation%20Services
Data Transformation Services (DTS) is a Microsoft database tool with a set of objects and utilities to allow the automation of extract, transform and load operations to or from a database. The objects are DTS packages and their components, and the utilities are called DTS tools. DTS was included with earlier versions of Microsoft SQL Server, and was almost always used with SQL Server databases, although it could be used independently with other databases. DTS allows data to be transformed and loaded from heterogeneous sources using OLE DB, ODBC, or text-only files, into any supported database. DTS can also allow automation of data import or transformation on a scheduled basis, and can perform additional functions such as FTPing files and executing external programs. In addition, DTS provides an alternative method of version control and backup for packages when used in conjunction with a version control system, such as Microsoft Visual SourceSafe. DTS has been superseded by SQL Server Integration Services in later releases of Microsoft SQL Server though there was some backwards compatibility and ability to run DTS packages in the new SSIS for a time. History In SQL Server versions 6.5 and earlier, database administrators (DBAs) used SQL Server Transfer Manager and Bulk Copy Program, included with SQL Server, to transfer data. These tools had significant shortcomings, and many DBAs used third-party tools such as Pervasive Data Integrator to transfer data more flexibly and easily. With the release of SQL Server 7 in 1998, "Data Transformation Services" was packaged with it to replace all these tools. The concept, design, and implementation of the Data Transformation Services was led by Stewart P. MacLeod (SQL Server Development Group Program Manager), Vij Rajarajan (SQL Server Lead Developer), and Ted Hart (SQL Server Lead Developer). The goal was to make it easier to import, export, and transform heterogeneous data and simplify the creation of data warehouses from operational data sources. SQL Server 2000 expanded DTS functionality in several ways. It introduced new types of tasks, including the ability to FTP files, move databases or database components, and add messages into Microsoft Message Queue. DTS packages can be saved as a Visual Basic file in SQL Server 2000, and this can be expanded to save into any COM-compliant language. Microsoft also integrated packages into Windows 2000 security and made DTS tools more user-friendly; tasks can accept input and output parameters. DTS comes with all editions of SQL Server 7 and 2000, but was superseded by SQL Server Integration Services in the Microsoft SQL Server 2005 release in 2005. DTS packages The DTS package is the fundamental logical component of DTS; every DTS object is a child component of the package. Packages are used whenever one modifies data using DTS. All the metadata about the data transformation is contained within the package. Packages can be saved directly in a SQL Server, or can be saved in the Microsoft Repository or in COM files. SQL Server 2000 also allows a programmer to save packages in a Visual Basic or other language file (when stored to a VB file, the package is actually scripted—that is, a VB script is executed to dynamically create the package objects and its component objects). A package can contain any number of connection objects, but does not have to contain any. These allow the package to read data from any OLE DB-compliant data source, and can be expanded to handle other sorts of data. 
The functionality of a package is organized into tasks and steps. A DTS Task is a discrete set of functionalities executed as a single step in a DTS package. Each task defines a work item to be performed as part of the data movement and data transformation process or as a job to be executed. Data Transformation Services supplies a number of tasks that are part of the DTS object model and that can be accessed graphically through the DTS Designer or accessed programmatically. These tasks, which can be configured individually, cover a wide variety of data copying, data transformation and notification situations. For example, the following types of tasks represent some actions that you can perform by using DTS: executing a single SQL statement, sending an email, and transferring a file with FTP. A step within a DTS package describes the order in which tasks are run and the precedence constraints that describe what to do in the case damage or of failure. These steps can be executed sequentially or in parallel. Packages can also contain global variables which can be used throughout the package. SQL Server 2000 allows input and output parameters for tasks, greatly expanding the usefulness of global variables. DTS packages can be edited, password protected, scheduled for execution, and retrieved by version. DTS tools DTS tools packaged with SQL Server include the DTS wizards, DTS Designer, and DTS Programming Interfaces. DTS wizards The DTS wizards can be used to perform simple or common DTS tasks. These include the Import/Export Wizard and the Copy of Database Wizard. They provide the simplest method of copying data between OLE DB data sources. There is a great deal of functionality that is not available by merely using a wizard. However, a package created with a wizard can be saved and later altered with one of the other DTS tools. A Create Publishing Wizard is also available to schedule packages to run at certain times. This only works if SQL Server Agent is running; otherwise the package will be scheduled, but will not be executed. DTS Designer The DTS Designer is a graphical tool used to build complex DTS Packages with workflows and event-driven logic. DTS Designer can also be used to edit and customize DTS Packages created with the DTS wizard. Each connection and task in DTS Designer is shown with a specific icon. These icons are joined with precedence constraints, which specify the order and requirements for tasks to be run. One task may run, for instance, only if another task succeeds (or fails). Other tasks may run concurrently. The DTS Designer has been criticized for having unusual quirks and limitations, such as the inability to visually copy and paste multiple tasks at one time. Many of these shortcomings have been overcome in SQL Server Integration Services, DTS's successor. DTS Query Designer A graphical tool used to build queries in DTS. DTS Run Utility DTS Packages can be run from the command line using the DTSRUN Utility. The utility is invoked using the following syntax: dtsrun /S server_name[\instance_name] { {/[~]U user_name [/[~]P password]} | /E } ] { {/[~]N package_name } | {/[~]G package_guid_string} | {/[~]V package_version_guid_string} } [/[~]M package_password] [/[~]F filename] [/[~]R repository_database_name] [/A global_variable_name:typeid=value] [/L log_file_name] [/W NT_event_log_completion_status] [/Z] [/!X] [/!D] [/!Y] [/!C] ] When passing in parameters which are mapped to Global Variables, you are required to include the typeid. 
This is rather difficult to find on the Microsoft site. Below are the TypeIds used in passing in these values. See also OLAP Data warehouse Data mining SQL Server Integration Services Meta Data Services References External links Microsoft SQL Server: Data Transformation Services (DTS) SQL DTS unique DTS information resource Understanding Microsoft Repository DTS Videos & Training Data Strategy Microsoft database software Data management Extract, transform, load tools Microsoft server technology
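As an illustration of the syntax above, a hypothetical invocation might look like the following; the server name, package name and global variable are placeholders, and the type id 8 is assumed here to be the OLE VARIANT code for a string (VT_BSTR) rather than a value confirmed by DTS documentation.

dtsrun /S MYSERVER /E /N NightlyLoadPackage /A CustomerRegion:8=Northwest /W True

This would run the package NightlyLoadPackage on MYSERVER using a trusted (Windows-authenticated) connection (/E), pass the string value Northwest into the global variable CustomerRegion, and write the completion status to the Windows event log (/W).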
Data Transformation Services
Technology
1,595
51,317,356
https://en.wikipedia.org/wiki/4-Vinylbenzyl%20chloride
4-Vinylbenzyl chloride is an organic compound with the formula ClCH2C6H4CH=CH2. It is a bifunctional molecule, featuring both a vinyl and a benzylic chloride functional group. It is a colorless liquid that is typically stored with a stabilizer to suppress polymerization. In combination with styrene, vinylbenzyl chloride is used as a comonomer in the production of chloromethylated polystyrene. It is produced by the chlorination of vinyltoluene. Vinyltoluene often consists of a mixture of the 3- and 4-vinyl isomers, in which case the vinylbenzyl chloride is also produced as a mixture of isomers. References Monomers Vinylbenzenes Organochlorides
4-Vinylbenzyl chloride
Chemistry,Materials_science
165
39,138,768
https://en.wikipedia.org/wiki/Cable%20grommet
A cable grommet is a tube or ring through which an electrical cable passes. Grommets are usually made of rubber or metal. A grommet is typically inserted into a hole in a panel or other material to protect the cables passing through it from mechanical or chemical damage, to reduce friction, or to seal the opening. Grommets used as wire managers for furniture A grommet can be used in furniture to protect wires, cables or cords for computer equipment or other electronic equipment in homes and offices. They are also used decoratively to embellish the furniture and are available in a wide variety of sizes, colors and finishes. These grommets usually consist of two pieces: a liner that fits into the hole in the furniture, and a cap with an opening (often adjustable in size) for the cables to pass through. When not in use, the opening can be blanked off either by rotating one piece 90° against the other or by inserting an extra plastic piece designed for that purpose. Rubber grommets "anti-microphonism" A rubber grommet is made of a resilient material (typically rubber), with the molding designed to hold it in place, so that it absorbs vibration, for example between a radio and the chassis on which it is mounted, or between a microphone and its tripod, keeping the two components "floating" and mechanically decoupled from one another to prevent the characteristic coupling known as microphonics. See also Cable entry system Cable gland References External links Radio technology Fasteners
Cable grommet
Technology,Engineering
314
49,768,059
https://en.wikipedia.org/wiki/9600%20port
The '9600 port' (also named data-jack or data-port) is an industry-specific name given to a special connector on the back of amateur radio HF, VHF, and UHF transceivers. It is used for connecting a packet radio modem or any other type of data modem that uses audio tones to convey data. This port is capable of transmitting and receiving data at speeds of at least 9600 bits per second, and often faster. This is achieved by bypassing the highpass, lowpass, preemphasis, and deemphasis filters normally contained in the microphone and speaker circuits of an FM transmitter and receiver. Amateur radio data ports which are not "9600 capable" are typically limited to a maximum speed of 1200 to 3000 bits per second. Commonly this 9600-capable data port uses a 6-pin mini-DIN connector, the same physical connector type used for PS/2 mice and keyboards. Modem Manufacturers There are a number of manufacturers making modems intended for this 9600 port / data port: Kantronics Tigertronics Argent Data Byonics Coastal ChipWorks MFJ Enterprises Symek Timewave Technologies Masters Communications Radio Manufacturers There are a number of manufacturers making radios which include a 9600-capable data port as a feature: Alinco Icom Incorporated Yaesu Kenwood Software Modems The 9600 port can also be connected to a computer's sound card for use with a number of different software-based data modems: Direwolf MixW AGW Packet Engine Soundmodem UZ7HO Soundmodem Digital Voice The 9600 port can be used to connect a digital voice adapter, or dongle, which allows analog amateur radios to transmit and receive ICOM's D-Star digital voice protocol (AMBE2020). Digital Voice Dongle Star*DV / Star*Board DVRPTR_V1 D-Star boards PAPA GMSK Boards DUTCH*Star Users of this technology This 9600 port is used to communicate with some amateur radio satellites using the packet radio protocol. A 9600-baud-capable amateur radio and modem are installed aboard the International Space Station as part of the ARISS project. References Digital amateur radio
9600 port
Technology
467
41,158,072
https://en.wikipedia.org/wiki/Distributed%20Overlay%20Virtual%20Ethernet
Distributed Overlay Virtual Ethernet (DOVE) is a tunneling and virtualization technology for computer networks, created and backed by IBM. DOVE allows creation of network virtualization layers for deploying, controlling, and managing multiple independent and isolated network applications over a shared physical network infrastructure. Overview The tunneling format is decoupled from the logical network view offered by DOVE, and defines only the way frames are encapsulated to be transferred by the underlying network infrastructure. As a notable difference from other network virtualization solutions (such as OTV), this allows DOVE not to be limited to providing OSI layer 2 emulation only (for example, passing Ethernet frames). Logical components of the DOVE architecture are DOVE controllers and DOVE switches (abbreviated as dSwitch). DOVE controllers perform management functions, and one part of the control plane functions across DOVE switches. DOVE switches perform the encapsulation of layer 2 frames into UDP packets using the Virtual Extensible LAN (VXLAN) frame format, and provide virtual interfaces for virtual machines to plug into, similarly to how physical Ethernet switches provide ports for network interface controller (NIC) connections. DOVE switches are running as part of virtual machine hypervisors. Advantages Primary advantages of DOVE include the following: No dependency on the underlying physical network and protocols Use of the existing IP network infrastructure No addresses of virtual machines are present in Ethernet switches, resulting in smaller MAC tables and less complex STP layouts No limitations related to the Virtual LAN (VLAN) technology, resulting in more than 16 million possible separate networks, compared to the VLAN's limit of 4,000 No dependency on the IP multicast traffic Implementations , DOVE components are implemented as part of VMware's hypervisors, while implementations for the Linux KVM and Open vSwitch are planned. DOVE extensions for VXLAN were merged into the Linux kernel mainline in kernel version 3.8, which was released on February 18, 2013. Appropriate extensions to related userspace configuration utilities were added into version 3.8.0 of the iproute2 utilities, which was released on February 21, 2013. See also Generic Routing Encapsulation (GRE) Overlay transport virtualization (OTV) Software-defined networking References External links IBM DOVE Takes Flight with New SDN Overlay, Fixes VXLAN Scaling Issues, March 26, 2013, by Roy Chua Distributed Overlay Virtual Ethernet (DOVE) integration with OpenStack, IEEE, May 2013, by Rami Cohen, Katherine Barabash and Liran Schour Building an Open, Adaptive and Responsive Data Center using OpenDaylight, OpenDaylight summit, February 4, 2014, by Vijoy Pandey Software Defined Network, IBM revelation day, November 6, 2013, by Igor Marty Network protocols Tunneling protocols
Distributed Overlay Virtual Ethernet
Engineering
575
73,966,770
https://en.wikipedia.org/wiki/Bobkov%27s%20inequality
In probability theory, Bobkov's inequality is a functional isoperimetric inequality for the canonical Gaussian measure. It generalizes the Gaussian isoperimetric inequality. The inequality was proven in 1997 by the Russian mathematician Sergey Bobkov. Bobkov's inequality Notation: Let \(\gamma_n\) be the canonical Gaussian measure on \(\mathbb{R}^n\) with respect to the Lebesgue measure, \(\varphi(x) = \tfrac{1}{\sqrt{2\pi}} e^{-x^2/2}\) be the one-dimensional canonical Gaussian density, \(\Phi(x) = \int_{-\infty}^{x} \varphi(t)\,dt\) the cumulative distribution function, and \(I = \varphi \circ \Phi^{-1} \colon [0,1] \to [0, \tfrac{1}{\sqrt{2\pi}}]\) be a function that vanishes at the end points. Statement For every locally Lipschitz continuous (or smooth) function \(f \colon \mathbb{R}^n \to [0,1]\) the following inequality holds
\[
I\!\left(\int_{\mathbb{R}^n} f \, d\gamma_n\right) \;\leq\; \int_{\mathbb{R}^n} \sqrt{\,I(f)^2 + |\nabla f|^2\,}\; d\gamma_n.
\]
Applied to smooth approximations of the indicator function of a Borel set \(A \subset \mathbb{R}^n\), the left-hand side tends to \(I(\gamma_n(A))\) and the right-hand side to the Gaussian surface measure of \(A\), which recovers the Gaussian isoperimetric inequality. Generalizations There exists a generalization by Dominique Bakry and Michel Ledoux. References Probabilistic inequalities
Bobkov's inequality
Mathematics
145
76,416
https://en.wikipedia.org/wiki/Maximum%20power%20transfer%20theorem
In electrical engineering, the maximum power transfer theorem states that, to obtain maximum external power from a power source with internal resistance, the resistance of the load must equal the resistance of the source as viewed from its output terminals. Moritz von Jacobi published the maximum power (transfer) theorem around 1840; it is also referred to as "Jacobi's law". The theorem results in maximum power transfer from the power source to the load, but not maximum efficiency of useful power out of total power consumed. If the load resistance is made larger than the source resistance, then efficiency increases (since a higher percentage of the source power is transferred to the load), but the magnitude of the load power decreases (since the total circuit resistance increases). If the load resistance is made smaller than the source resistance, then efficiency decreases (since most of the power ends up being dissipated in the source). Although the total power dissipated increases (due to a lower total resistance), the amount dissipated in the load decreases. The theorem states how to choose (so as to maximize power transfer) the load resistance, once the source resistance is given. It is a common misconception to apply the theorem in the opposite scenario. It does not say how to choose the source resistance for a given load resistance. In fact, the source resistance that maximizes power transfer from a voltage source is always zero (the hypothetical ideal voltage source), regardless of the value of the load resistance. The theorem can be extended to alternating current circuits that include reactance, and states that maximum power transfer occurs when the load impedance is equal to the complex conjugate of the source impedance. The mathematics of the theorem also applies to other physical interactions, such as: mechanical collisions between two objects, the sharing of charge between two capacitors, liquid flow between two cylinders, the transmission and reflection of light at the boundary between two media. Maximizing power transfer versus power efficiency The theorem was originally misunderstood (notably by Joule) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient, since the power dissipated as heat in the battery would always be equal to the power delivered to the motor when the impedances were matched. In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton, who realized that maximum efficiency was not the same as maximum power transfer. To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo) could be (or should be) made as close to zero as possible. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine. The efficiency is the ratio of the power dissipated by the load resistance to the total power dissipated by the circuit (which includes the voltage source's resistance of as well as ): Consider three particular cases (note that voltage sources must have some resistance): If , then Efficiency approaches 0% if the load resistance approaches zero (a short circuit), since all power is consumed in the source and no power is consumed in the short. If , then Efficiency is only 50% if the load resistance equals the source resistance (which is the condition of maximum power transfer). 
If , then Efficiency approaches 100% if the load resistance approaches infinity (though the total power level tends towards zero) or if the source resistance approaches zero. Using a large ratio is called impedance bridging. Impedance matching A related concept is reflectionless impedance matching. In radio frequency transmission lines, and other electronics, there is often a requirement to match the source impedance (at the transmitter) to the load impedance (such as an antenna) to avoid reflections in the transmission line. Calculus-based proof for purely resistive circuits In the simplified model of powering a load with resistance by a source with voltage and source resistance , then by Ohm's law the resulting current is simply the source voltage divided by the total circuit resistance: The power dissipated in the load is the square of the current multiplied by the resistance: The value of for which this expression is a maximum could be calculated by differentiating it, but it is easier to calculate the value of for which the denominator: is a minimum. The result will be the same in either case. Differentiating the denominator with respect to : For a maximum or minimum, the first derivative is zero, so or In practical resistive circuits, and are both positive, so the positive sign in the above is the correct solution. To find out whether this solution is a minimum or a maximum, the denominator expression is differentiated again: This is always positive for positive values of and , showing that the denominator is a minimum, and the power is therefore a maximum, when: The above proof assumes fixed source resistance . When the source resistance can be varied, power transferred to the load can be increased by reducing . For example, a 100 Volt source with an of will deliver 250 watts of power to a load; reducing to increases the power delivered to 1000 watts. Note that this shows that maximum power transfer can also be interpreted as the load voltage being equal to one-half of the Thevenin voltage equivalent of the source. In reactive circuits The power transfer theorem also applies when the source and/or load are not purely resistive. A refinement of the maximum power theorem says that any reactive components of source and load should be of equal magnitude but opposite sign. (See below for a derivation.) This means that the source and load impedances should be complex conjugates of each other. In the case of purely resistive circuits, the two concepts are identical. Physically realizable sources and loads are not usually purely resistive, having some inductive or capacitive components, and so practical applications of this theorem, under the name of complex conjugate impedance matching, do, in fact, exist. If the source is totally inductive (capacitive), then a totally capacitive (inductive) load, in the absence of resistive losses, would receive 100% of the energy from the source but send it back after a quarter cycle. The resultant circuit is nothing other than a resonant LC circuit in which the energy continues to oscillate to and fro. This oscillation is called reactive power. Power factor correction (where an inductive reactance is used to "balance out" a capacitive one), is essentially the same idea as complex conjugate impedance matching although it is done for entirely different reasons. For a fixed reactive source, the maximum power theorem maximizes the real power (P) delivered to the load by complex conjugate matching the load to the source. 
For a fixed reactive load, power factor correction minimizes the apparent power (S) (and unnecessary current) conducted by the transmission lines, while maintaining the same amount of real power transfer. This is done by adding a reactance to the load to balance out the load's own reactance, changing the reactive load impedance into a resistive load impedance. Proof In this diagram, AC power is being transferred from the source, with phasor magnitude of voltage (positive peak voltage) and fixed source impedance (S for source), to a load with impedance (L for load), resulting in a (positive) magnitude of the current phasor . This magnitude results from dividing the magnitude of the source voltage by the magnitude of the total circuit impedance: The average power dissipated in the load is the square of the current multiplied by the resistive portion (the real part) of the load impedance : where and denote the resistances, that is the real parts, and and denote the reactances, that is the imaginary parts, of respectively the source and load impedances and . To determine, for a given source, the voltage and the impedance the value of the load impedance for which this expression for the power yields a maximum, one first finds, for each fixed positive value of , the value of the reactive term for which the denominator: is a minimum. Since reactances can be negative, this is achieved by adapting the load reactance to: This reduces the above equation to: and it remains to find the value of which maximizes this expression. This problem has the same form as in the purely resistive case, and the maximizing condition therefore is The two maximizing conditions: describe the complex conjugate of the source impedance, denoted by and thus can be concisely combined to: See also Maximum power point tracking Notes References H.W. Jackson (1959) Introduction to Electronic Circuits, Prentice-Hall. External links Conjugate matching versus reflectionless matching (PDF) taken from Electromagnetic Waves and Antennas The Spark Transmitter. 2. Maximising Power, part 1. Circuit theorems Electrical engineering
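For reference, the relations discussed above can be restated compactly using the article's symbols (source voltage \(V\), source resistance \(R_S\) and reactance \(X_S\), load resistance \(R_L\) and reactance \(X_L\)):
\[
\eta = \frac{R_L}{R_L + R_S}, \qquad
P_L = \frac{V^2 R_L}{(R_L + R_S)^2}, \qquad
\frac{dP_L}{dR_L} = 0 \;\Longrightarrow\; R_L = R_S,
\]
and in the reactive case the maximizing condition is the complex-conjugate match \(Z_L = Z_S^{*}\), that is \(R_L = R_S\) and \(X_L = -X_S\). A short numeric check, written here as an illustrative Python sketch with arbitrary example values (100 V source, 10 Ω source resistance):

# Sweep the load resistance and confirm that delivered power peaks at R_load = R_source.
V = 100.0         # source voltage in volts (arbitrary example value)
R_source = 10.0   # source resistance in ohms (arbitrary example value)
candidates = [k / 10.0 for k in range(1, 1001)]           # load resistances 0.1 ... 100.0 ohms
power = {R_load: V**2 * R_load / (R_load + R_source)**2   # P = V^2 R_L / (R_L + R_S)^2
         for R_load in candidates}
best_R = max(power, key=power.get)
print(best_R, power[best_R])   # -> 10.0 250.0 (maximum power at R_load = R_source)

At the matched point the delivered power is 250 W and the efficiency is 50%, consistent with the second of the three cases listed above.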
Maximum power transfer theorem
Physics,Engineering
1,825
2,286,941
https://en.wikipedia.org/wiki/List%20of%20types%20of%20XML%20schemas
This is a list of notable XML schemas in use on the Internet sorted by purpose. XML schemas can be used to create XML documents for a wide range of purposes such as syndication, general exchange, and storage of data in a standard format. Bookmarks XBEL - XML Bookmark Exchange Language Brewing BeerXML - a free XML based data description standard for the exchange of brewing data Business ACORD data standards - Insurance Industry XML schemas specifications by Association for Cooperative Operations Research and Development Europass XML - XML vocabulary describing the information contained in a Curriculum Vitae (CV), Language Passport (LP) and European Skills Passport (ESP) OSCRE - Open Standards Consortium for Real Estate format for data exchange within the real estate industry UBL - Defining a common XML library of business documents (purchase orders, invoices, etc.) by Oasis XBRL Extensible Business Reporting Language for International Financial Reporting Standards (IFRS) and United States generally accepted accounting principles (GAAP) business accounting. Elections EML - Election Markup Language, is an OASIS standard to support end-to-end management of election processes. It defines over thirty schemas, for example EML 510 for vote count reporting and EML 310 for voter registration. Engineering gbXML - an open schema developed to facilitate transfer of building data stored in Building Information Models (BIMs) to engineering analysis tools. IFC-XML - Building Information Models for architecture, engineering, construction, and operations. XMI - an Object Management Group (OMG) standard for exchanging metadata information, commonly used for exchange of UML information XTCE - XML Telemetric and Command Exchange is an XML based data exchange format for spacecraft telemetry and command meta-data Financial FIXatdl - FIX algorithmic trading definition language. Schema provides a HCI between a human trader, the order entry screen(s), unlimited different algorithmic trading types (called strategies) from a variety of sources, and formats a new order message on the FIX wire. FIXML - Financial Information eXchange (FIX) protocol is an electronic communications protocol initiated in 1992 for international real-time exchange of information related to the securities transactions and markets. FpML - Financial products Markup Language is the industry-standard protocol for complex financial products. It is based on XML (eXtensible Markup Language), the standard meta-language for describing data shared between applications. Geographic information systems and geotagging KML - Keyhole Markup Language is used for annotation on geographical browsers including Google Earth and NASA's World Wind. These annotations are used to place events such as earthquake warnings, historical events, etc. SensorML - used for describing sensors and measurement processes Graphical user interfaces FIXatdl - algorithmic trading GUIs (language independent) FXML - Extensible Application Markup Language for Java GLADE - GNOME's User Interface Language (GTK+) KParts - KDE's User Interface Language (Qt) UXP - Unified XUL Platform, a 2017 fork of XUL. 
XAML - Microsoft's Extensible Application Markup Language XForms - XForms XUL - XML User Interface Language (Native) Humanities texts EpiDoc - Epigraphic Documents TEI - Text Encoding Initiative Intellectual property DS-XML - Industrial Design Information Exchange Standard IPMM Invention Disclosure Standard TM-XML - Trade Mark Information Exchange Standard Libraries EAD - for encoding archival finding aids, maintained by the Technical Subcommittee for Encoded Archival Description of the Society of American Archivists, in partnership with the Library of Congress MARCXML - a direct mapping of the MARC standard to XML syntax METS - a schema for aggregating in a single XML file descriptive, administrative, and structural metadata about a digital object MODS - a schema for a bibliographic element set and maintained by the Network Development and MARC Standards Office of the Library of Congress Math and science MathML - Mathematical Markup Language ANSI N42.42 or "N42" - NIST data format standard for radiation detectors used for Homeland Security Metadata DDML - reformulations XML DTD ONIX for Books - ONline Information eXchange, developed and maintained by EDItEUR jointly with Book Industry Communication (UK) and the Book Industry Study Group (US), and with user groups in Australia, Canada, France, Germany, Italy, the Netherlands, Norway, Spain and the Republic of Korea. PRISM - Publishing Requirements for Industry Standard Metadata RDF - Resource Description Framework Music playlists XSPF - XML Shareable Playlist Format Musical notation MusicXML - XML western musical notation format News syndication Atom - Atom RSS - Really Simple Syndication Paper and forest products EPPML - an XML conceptual model for the interactions between parties of a postal communication system. papiNet - XML format for exchange of business documents and product information in the paper and forest products industries. Publishing DITA - Darwin Information Typing Architecture, document authoring system DocBook - for technical documentation JATS (formerly known as the NLM DTD) - Journal Article Tag Suite, a journal publishing structure originally developed by the United States National Library of Medicine PRISM - Publishing Requirements for Industry Standard Metadata Statistics DDI - "Data Documentation Initiative" is a format for information describing statistical and social science data (and the lifecycle). SDMX - SDMX-ML is a format for exchange and sharing of Statistical Data and Metadata. Vector images SVG - Scalable Vector Graphics See also List of XML markup languages XML Schema Language Comparison XML transformation language XML pipeline Data Format Description Language XML log Notes External links Schema Documentation Library Data modeling languages XML Schemas Financial industry XML-based standards
List of types of XML schemas
Technology
1,165
5,903,973
https://en.wikipedia.org/wiki/Record%20press
A record press or stamper is a machine for manufacturing vinyl records. It is essentially a hydraulic press fitted with thin nickel stampers which are negative impressions of a master disc. Labels and a pre-heated vinyl patty (or biscuit) are placed in a heated mold cavity. Two stampers are used, one for each side of the disc. The record press closes with a force of about 150 tons. The process of compression molding forces the hot vinyl to fill the grooves in the stampers and take on the form of the finished record. Vacuum molding In the mid-1960s, Emory Cook developed a system of record forming wherein the mold pressure was replaced by a vacuum. In this technique, the mold cavity was evacuated and vinyl was introduced in micro-particle form. The particles were then flash-fused instantaneously at a high temperature, forming a coherent solid. Cook called this disc manufacturing technology microfusion. A small pressing plant in Hollywood also employed a similar system which, they maintained, fused the particles more evenly throughout the thickness of the disc; they called their product polymax. Both claimed the resultant disc grooves exhibited less surface noise and greater resistance to deformation from stylus tip inertia than conventional pressure-molded vinyl discs. References Audio storage Machines Plastics industry External links List of vinyl press manufacturers
Record press
Physics,Technology,Engineering
258
4,606,867
https://en.wikipedia.org/wiki/Bottom%20ash
Bottom ash is part of the non-combustible residue of combustion in a power plant, boiler, furnace, or incinerator. In an industrial context, it has traditionally referred to coal combustion and comprises traces of combustibles embedded in forming clinkers and sticking to hot side walls of a coal-burning furnace during its operation. The portion of the ash that escapes up the chimney or stack is referred to as fly ash. The clinkers fall by themselves into the bottom hopper of a coal-burning furnace and are cooled. The above portion of the ash is also referred to as bottom ash. Most bottom ash generated at U.S. power plants is stored in ash ponds, which can cause serious environmental damage if they experience structural failures. Ash handling processes In a conventional water-impounded hopper (WIH) system, the clinker lumps get crushed to small sizes by clinker grinders mounted under water and fall down into a trough, where a water ejector takes them out to a sump. From there, it is pumped out by suitable rotary pumps. In another arrangement, a continuous link chain scrapes out the clinkers from under water and feeds them to clinker grinders outside the bottom ash hopper. Modern municipal waste incinerators reduce the production of dioxins by incinerating at 850 to 950 degrees Celsius for at least two seconds, forming incinerator bottom ash as byproduct. Waste handling In United States facilities, the ash waste is typically pumped to ash ponds, the most common disposal method. Some power plants operate a dry disposal system with landfills. Health impacts Environmental impacts In the United States, coal ash is a major component of the nation's industrial waste stream. In 2017, of fly ash, and of bottom ash, were generated. Coal contains trace levels of arsenic, barium, beryllium, boron, cadmium, chromium, thallium, selenium, molybdenum, and mercury, many of which are highly toxic to humans and other life. Coal ash, a product of combustion, concentrates these elements and can contaminate groundwater or surface waters if there are leaks from an ash pond. Most U.S. power plants do not use geomembranes, leachate collection systems, or other flow controls often found in municipal solid waste landfills. Following a 2008 failure that caused the Tennessee Valley Authority's Kingston Fossil Plant coal fly ash slurry spill, the U.S. Environmental Protection Agency (EPA) began developing regulations that would apply to all ash ponds in the U.S. The EPA published its "Part A" final rule for Coal Combustion Residuals (CCR) on August 28, 2020, requiring all unlined ash ponds to retrofit with liners or close by April 11, 2021. Some facilities may apply to obtain additional time—up to 2028—to find alternatives for managing ash wastes before closing their surface impoundments. EPA published its "CCR Part B" rule on November 12, 2020, which allows certain facilities to use an alternative liner, based on a demonstration that human health and the environment will not be affected. Further litigation on the CCR regulation is pending as of 2021. Ash recycling Bottom ash can be extracted, cooled, and conveyed using dry ash handling technology. When left dry, the ash can be used to make concrete, bricks, and other useful materials. There are also several environmental benefits. Bottom ash may be used as raw alternative material, replacing earth or sand or aggregates, for example in road construction and in cement kilns (clinker production). 
Another notable use is as a growing medium in horticulture (usually after sieving). In the United Kingdom it is known as furnace bottom ash (FBA) to distinguish it from incinerator bottom ash (IBA), the non-combustible residue remaining after incineration. A pioneering use of bottom ash was in the production of concrete blocks used to construct many high-rise flats in London in the 1960s. See also Coal combustion products Fly ash Health effects of coal ash Metal toxicity Industrial wastewater treatment References External links EcoSmart Concrete – Use of fly ash and other supplementary cementing materials in concrete LondonWaste – How bottom ash is processed to make aggregate Environmental impact of the coal industry Waste Water pollution By-products
Bottom ash
Physics,Chemistry,Environmental_science
903
2,881,110
https://en.wikipedia.org/wiki/Static%20spherically%20symmetric%20perfect%20fluid
In metric theories of gravitation, particularly general relativity, a static spherically symmetric perfect fluid solution (a term which is often abbreviated as ssspf) is a spacetime equipped with suitable tensor fields which models a static round ball of a fluid with isotropic pressure. Such solutions are often used as idealized models of stars, especially compact objects such as white dwarfs and especially neutron stars. In general relativity, a model of an isolated star (or other fluid ball) generally consists of a fluid-filled interior region, which is technically a perfect fluid solution of the Einstein field equation, and an exterior region, which is an asymptotically flat vacuum solution. These two pieces must be carefully matched across the world sheet of a spherical surface, the surface of zero pressure. (There are various mathematical criteria called matching conditions for checking that the required matching has been successfully achieved.) Similar statements hold for other metric theories of gravitation, such as the Brans–Dicke theory. In this article, we will focus on the construction of exact ssspf solutions in our current Gold Standard theory of gravitation, the theory of general relativity. To anticipate, the figure at right depicts (by means of an embedding diagram) the spatial geometry of a simple example of a stellar model in general relativity. The euclidean space in which this two-dimensional Riemannian manifold (standing in for a three-dimensional Riemannian manifold) is embedded has no physical significance, it is merely a visual aid to help convey a quick impression of the kind of geometrical features we will encounter. Short history We list here a few milestones in the history of exact ssspf solutions in general relativity: 1916: Schwarzschild fluid solution, 1939: The relativistic equation of hydrostatic equilibrium, the Oppenheimer-Volkov equation, is introduced, 1939: Tolman gives seven ssspf solutions, two of which are suitable for stellar models, 1949: Wyman ssspf and first generating function method, 1958: Buchdahl ssspf, a relativistic generalization of a Newtonian polytrope, 1967: Kuchowicz ssspf, 1969: Heintzmann ssspf, 1978: Goldman ssspf, 1982: Stewart ssspf, 1998: major reviews by Finch & Skea and by Delgaty & Lake, 2000: Fodor shows how to generate ssspf solutions using one generating function and differentiation and algebraic operations, but no integrations, 2001: Nilsson & Ugla reduce the definition of ssspf solutions with either linear or polytropic equations of state to a system of regular ODEs suitable for stability analysis, 2002: Rahman & Visser give a generating function method using one differentiation, one square root, and one definite integral, in isotropic coordinates, with various physical requirements satisfied automatically, and show that every ssspf can be put in Rahman-Visser form, 2003: Lake extends the long-neglected generating function method of Wyman, for either Schwarzschild coordinates or isotropic coordinates, 2004: Martin & Visser algorithm, another generating function method which uses Schwarzschild coordinates, 2004: Martin gives three simple new solutions, one of which is suitable for stellar models, 2005: BVW algorithm, apparently the simplest variant now known References The original paper presenting the Oppenheimer-Volkov equation. See section 23.2 and box 24.1 for the Oppenheimer-Volkov equation. See chapter 10 for the Buchdahl theorem and other topics. 
See chapter 6 for a more detailed exposition of white dwarf and neutron star models than can be found in other gtr textbooks. eprint version An excellent review stressing problems with the traditional approach which are neatly avoided by the Rahman-Visser algorithm. Fodor; Gyula. Generating spherically symmetric static perfect fluid solutions (2000). Fodor's algorithm. eprint version eprint version The Nilsson-Uggla dynamical systems. eprint version Lake's algorithms. eprint version The Rahman-Visser algorithm. eprint version The BVW solution generating method. Exact solutions in general relativity
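For context, the relativistic equation of hydrostatic equilibrium mentioned in the history above (the Oppenheimer-Volkov, or Tolman-Oppenheimer-Volkoff, equation) can be written, in its standard form in geometrized units with \(G = c = 1\), as
\[
\frac{dP}{dr} = -\,\frac{\bigl(\rho + P\bigr)\bigl(m(r) + 4\pi r^{3} P\bigr)}{r\,\bigl(r - 2m(r)\bigr)},
\qquad m(r) = \int_0^{r} 4\pi \bar r^{2}\,\rho(\bar r)\,d\bar r,
\]
where \(P\) is the pressure and \(\rho\) the energy density of the fluid. Given an equation of state, integrating outward from the center until \(P = 0\) locates the surface of zero pressure, at which the interior solution is matched to the exterior vacuum solution.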
Static spherically symmetric perfect fluid
Mathematics
870
19,284,379
https://en.wikipedia.org/wiki/Bohuslav%20Brauner
Bohuslav Brauner (May 8, 1855 – February 15, 1935) was a Czech chemist from the University of Prague, who investigated the properties of the rare earth elements, especially the determination of their atomic weights. Brauner predicted the existence of the rare earth element promethium ten years before the existence of the gap was confirmed experimentally (although the element was still undiscovered). In the 1880s, when he already had started lecturing in Prague, he still competed internationally in cycling races. Career Brauner was a student of Robert Bunsen at the University of Heidelberg and later of Henry Roscoe at the University of Manchester. Brauner became lecturer for chemistry at the Charles University of Prague in 1883, assistant professor as of 1890, and full professor as of 1897. During the course of his career, he corresponded frequently with Dmitri Mendeleev, and they influenced each other as models for periodicity of the elements were developed. Brauner retired from the Charles University of Prague in 1925 and died of pneumonia in 1935. Scientific contributions During Brauner's time with Roscoe at the University of Manchester, he became interested in the chemistry of the rare earth elements. A theme of his investigations was the determination of their relative positions in the periodic table. One method he used to separate these elements was fluorination, yielding compounds that could be purified. As part of his investigations on the chemistry of the lanthanides, Brauner proposed in 1902 the existence of an element that would be located between neodymium and samarium in the periodic table. Henry Moseley experimentally confirmed Brauner's prediction in 1914, identifying the gap between their nuclear charges. The missing element was eventually synthesized in 1945 and named promethium. Brauner's investigations into the rare earth elements and their atomic weights depended on purity of the compounds under evaluation. This caused ambiguity at times. He proposed that the element tellurium had an atomic weight of 125 atomic mass units, although he acknowledged that his evaluation of tellurium could be based on mixtures of metals. Later, Brauner obtained samples of the substance referred to as didymium. In 1882, he was able to use spectroscopy to observe two groups of absorption bands, one blue (A = 449–443) and one yellow (A = 590–568). He concluded that didymium was actually a mixture of two rare earth elements. However, it was Carl Auer von Welsbach who recognized that didymium was actually a mixture of the two rare earth elements praseodymium and neodymium, the discovery occurring in 1885. Von Welsbach's discovery was initially a source of consternation to Brauner. Brauner authored the chapter on rare earth elements in Mendeleev's textbook "Principles of Chemistry". He also wrote the portion on atomic weights in "Handbuch der Anorganischen Chemie", which was a textbook authored principally by Richard Abegg. Honors Brauner received honorary memberships to the Chemical Society of London, the American Chemical Society and the Societe Chimique de France, and an honorary Doctor of Science degree from the University of Manchester. The Recueil des Travaux Chimiques des Pays-Bas honored him in 1925 as did the Collection of Czechoslovak Chemical Communications in 1930. References 1855 births 1935 deaths Czech chemists Rare earth scientists People involved with the periodic table Charles University alumni Chemists from Austria-Hungary Czechoslovak chemists
Bohuslav Brauner
Chemistry
709
23,884,776
https://en.wikipedia.org/wiki/Glycinergic
A glycinergic agent (or drug) is a chemical which functions to directly modulate the glycine system in the body or brain. Examples include glycine receptor agonists, glycine receptor antagonists, and glycine reuptake inhibitors. See also Adenosinergic Adrenergic Cannabinoidergic Cholinergic Dopaminergic GABAergic Histaminergic Melatonergic Monoaminergic Opioidergic Serotonergic References Neurochemistry Neurotransmitters
Glycinergic
Chemistry,Biology
117
87,299
https://en.wikipedia.org/wiki/Heaviside%20step%20function
The Heaviside step function, or the unit step function, usually denoted by or (but sometimes , or ), is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one. The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as . Formulation Taking the convention that , the Heaviside function may be defined as: a piecewise function: using the Iverson bracket notation: an indicator function: For the alternative convention that , it may be expressed as: a linear transformation of the sign function, the arithmetic mean of two Iverson brackets, a one-sided limit of the two-argument arctangent a hyperfunction or equivalently where is the principal value of the complex logarithm of Other definitions which are undefined at include: the derivative of the ramp function: in terms of the absolute value function as Relationship with Dirac delta The Dirac delta function is the weak derivative of the Heaviside function: Hence the Heaviside function can be considered to be the integral of the Dirac delta function. This is sometimes written as although this expansion may not hold (or even make sense) for , depending on which formalism one uses to give meaning to integrals involving . In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See Constant random variable.) Analytic approximations Approximations to the Heaviside step function are of use in biochemistry and neuroscience, where logistic approximations of step functions (such as the Hill and the Michaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals. For a smooth approximation to the step function, one can use the logistic function where a larger corresponds to a sharper transition at . If we take , equality holds in the limit: There are many other smooth, analytic approximations to the step function. Among the possibilities are: These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, then convergence holds in the sense of distributions too.) In general, any cumulative distribution function of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively. 
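As an illustrative sketch of the smooth (logistic) approximation described above, the following Python snippet compares the Heaviside step, taken here with the half-maximum convention H(0) = 1/2, against the logistic approximation for increasing sharpness parameter k. The function names and sample points are ad hoc choices for illustration, not taken from the article.

import numpy as np

def heaviside(x):
    """Heaviside step with the half-maximum convention H(0) = 1/2."""
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

def logistic_step(x, k):
    """Smooth approximation 1 / (1 + exp(-2*k*x)); approaches H(x) pointwise as k grows."""
    return 1.0 / (1.0 + np.exp(-2.0 * k * x))

x = np.linspace(-1.0, 1.0, 9)
for k in (1, 10, 100):
    # Maximum deviation from the step, excluding x = 0 where both equal 1/2.
    mask = x != 0
    err = np.max(np.abs(logistic_step(x[mask], k) - heaviside(x[mask])))
    print(f"k = {k:4d}: max |logistic - H| = {err:.3e}")

Running this shows the deviation shrinking toward zero as k increases, which is the pointwise convergence claimed in the text.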
Non-Analytic approximations Approximations to the Heaviside step function could be made through Smooth transition function like : Integral representations Often an integral representation of the Heaviside step function is useful: where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate. Zero argument Since is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen of . Indeed when is considered as a distribution or an element of (see space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above) then often whatever happens to be the relevant limit at zero is used. There exist various reasons for choosing a particular value. is often used since the graph then has rotational symmetry; put another way, is then an odd function. In this case the following relation with the sign function holds for all : Also, H(x) + H(-x) = 1 for all x. is used when needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case is the indicator function of a closed semi-infinite interval: The corresponding probability distribution is the degenerate distribution. is used when needs to be left-continuous. In this case is an indicator function of an open semi-infinite interval: In functional-analysis contexts from optimization and game theory, it is often useful to define the Heaviside function as a set-valued function to preserve the continuity of the limiting functions and ensure the existence of certain solutions. In these cases, the Heaviside function returns a whole interval of possible solutions, . Discrete form An alternative form of the unit step, defined instead as a function (that is, taking in a discrete variable ), is: or using the half-maximum convention: where is an integer. If is an integer, then must imply that , while must imply that the function attains unity at . Therefore the "step function" exhibits ramp-like behavior over the domain of , and cannot authentically be a step function, using the half-maximum convention. Unlike the continuous case, the definition of is significant. The discrete-time unit impulse is the first difference of the discrete-time step This function is the cumulative summation of the Kronecker delta: where is the discrete unit impulse function. Antiderivative and derivative The ramp function is an antiderivative of the Heaviside step function: The distributional derivative of the Heaviside step function is the Dirac delta function: Fourier transform The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have Here is the distribution that takes a test function to the Cauchy principal value of . The limit appearing in the integral is also taken in the sense of (tempered) distributions. Unilateral Laplace transform The Laplace transform of the Heaviside step function is a meromorphic function. Using the unilateral Laplace transform we have: When the bilateral transform is used, the integral can be split in two parts and the result will be the same. 
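The unilateral Laplace transform result quoted above follows from a one-line computation. The sketch below assumes the usual convention that the unilateral transform integrates over t >= 0, where H(t) = 1, and that Re(s) > 0 so the integral converges.

% Unilateral Laplace transform of the Heaviside step, Re(s) > 0:
\mathcal{L}\{H\}(s)
  \;=\; \int_{0}^{\infty} H(t)\, e^{-st}\, dt
  \;=\; \int_{0}^{\infty} e^{-st}\, dt
  \;=\; \left[-\frac{e^{-st}}{s}\right]_{0}^{\infty}
  \;=\; \frac{1}{s}.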
See also Gamma function Dirac delta function Indicator function Iverson bracket Laplace transform Laplacian of the indicator List of mathematical functions Macaulay brackets Negative number Rectangular function Sign function Sine integral Step response References External links Digital Library of Mathematical Functions, NIST, . Special functions Generalized functions Schwartz distributions
Heaviside step function
Mathematics
1,382
87,352
https://en.wikipedia.org/wiki/Graph%20of%20a%20function
In mathematics, the graph of a function is the set of ordered pairs , where In the common case where and are real numbers, these pairs are Cartesian coordinates of points in a plane and often form a curve. The graphical representation of the graph of a function is also known as a plot. In the case of functions of two variables – that is, functions whose domain consists of pairs –, the graph usually refers to the set of ordered triples where . This is a subset of three-dimensional space; for a continuous real-valued function of two real variables, its graph forms a surface, which can be visualized as a surface plot. In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details. A graph of a function is a special case of a relation. In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also which set is the domain, and which set is the codomain. For example, to say that a function is onto (surjective) or not the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms function and graph of a function since even if considered the same object, they indicate viewing it from a different perspective. Definition Given a function from a set (the domain) to a set (the codomain), the graph of the function is the set which is a subset of the Cartesian product . In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph. Examples Functions of one variable The graph of the function defined by is the subset of the set From the graph, the domain is recovered as the set of first component of each pair in the graph . Similarly, the range can be recovered as . The codomain , however, cannot be determined from the graph alone. The graph of the cubic polynomial on the real line is If this set is plotted on a Cartesian plane, the result is a curve (see figure). Functions of two variables The graph of the trigonometric function is If this set is plotted on a three dimensional Cartesian coordinate system, the result is a surface (see figure). Oftentimes it is helpful to show with the graph, the gradient of the function and several level curves. The level curves can be mapped on the function surface or can be projected on the bottom plane. The second figure shows such a drawing of the graph of the function: See also Asymptote Chart Plot Concave function Convex function Contour plot Critical point Derivative Epigraph Normal to a graph Slope Stationary point Tetraview Vertical translation y-intercept References Further reading External links Weisstein, Eric W. "Function Graph." From MathWorld—A Wolfram Web Resource. Charts Functions and mappings Numerical function drawing
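As a minimal illustration of the set-of-ordered-pairs definition above, the following Python snippet builds the graph of a function on a finite domain and recovers the domain and range from it. The particular domain, codomain and values are illustrative choices made here (the article's own worked formulas were not reproduced above), not a quotation of its example.

# Graph of a function f : X -> Y as the set of ordered pairs {(x, f(x)) : x in X}.
# The domain, codomain and mapping below are illustrative, made-up choices.
X = {1, 2, 3}                      # domain
Y = {"a", "b", "c", "d"}           # codomain
f = {1: "d", 2: "d", 3: "c"}       # the function, given by its values

graph = {(x, f[x]) for x in X}     # {(1, 'd'), (2, 'd'), (3, 'c')}

domain_recovered = {x for (x, _) in graph}   # {1, 2, 3}
range_recovered = {y for (_, y) in graph}    # {'c', 'd'} -- a proper subset of Y,
                                             # so the codomain cannot be read off the graph
print(graph, domain_recovered, range_recovered)

This mirrors the point made in the text: the domain and range are recoverable from the graph alone, but the codomain is not.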
Graph of a function
Mathematics
676
66,861,619
https://en.wikipedia.org/wiki/Osmundastrum%20pulchellum
Osmundastrum pulchellum is an extinct species of Osmundastrum, leptosporangiate ferns in the family Osmundaceae from the lower Jurassic (Pliensbachian-Toarcian?) Djupadal Formation of Southern Sweden. It remained unstudied for 40 years. It is one of the most exceptional fossil ferns ever found, preserving intact calcified (thus dead) tissue with DNA and cells. Its exceptional preservation has allowed the study of the DNA relationships with extant Osmundaceae ferns, proving a 180-million-year genomic stasis. It has also preserved its biotic interactions and even ongoing mitosis. History and discovery The only known specimen was recovered at the mafic pyroclastic and epiclastic deposits of the Djupadal Formation, dated Pliensbachian-Toarcian(?), that are present near Korsaröd Lake, at the north of Höör, central Skåne, southern Sweden. The location was studied first by Gustav Andersson, a local farmer, who was a passionate follower of scientific discoveries. Through his interest in geology, he identified several coeval volcanic plugs, and motivated by the presence of volcanic soils, he excavated a location at the south of the Korsaröd lake. Initially nothing was found, but a second deeper dig revealed a series of aggregated wood remains on volcanic lahar-derived stones. Samples taken from the location were sent to the geologist Hans Tralau, who carried out palynological research on them, estimating an age of deposition of Late Toarcian-Aalenian(?). A petrified rhizome was sent to Tralau, who understood the significance of the fossil and intended to publish it formally, but his untimely death in March 1977 made it impossible. The rhizome, along with the fossil wood, was archived at the Swedish Museum of Natural History, where the geologist Britta Lundblad tried also to publish it formally, what was also impossible due to her retirement in 1986. The fossil was lying forgotten in the archives of the museum until 2013, when it was discovered again and studied, finding that it preserved spectacular cellular detail, rarely seen on fossils. In 2015, it was finally published as Osmunda pulchella by B. Bomfleur, G. W. Grimm and S. McLoughlin. The specific epithet pulchella (Latin diminutive of pulchra, 'beautiful', 'fair;) was chosen in reference to the exquisite preservation and aesthetic appeal of the holotype specimen. The name Osmunda pulchella was mostly used in the main publications referring to it until in 2017 a revision of the cladistic status of the fossil Osmundales showed that the fossil was in fact a member of the genus Osmundastrum, so it became Osmundatrum pulchellum. Description The Osmundastrum pulchellum holotype is a calcified rhizome fragment about 6 cm long and up to 4 cm in diameter that probably come from a small (approx. 50 cm tall) fern. It is composed of a small central stem surrounded by a compact mantle of helically arranged petiole bases and interspersed rootlets that extend outwards perpendicular to the axis, indicating a low rhizomatous rather than arborescent growth. This, together with the asymmetrical distribution of the roots, points to a creeping habit. The stem is around about 7.5 mm in diameter and the pith about 1.5 mm in diameter and entirely parenchymatous. In the pith, cell walls lack the presence of an internal endodermis or internal phloem, considered to be an original feature, rather than a loss due to inadequate preservation. Traces of leaves and associated rootlets are present traversing the outer cortex. 
This specimen is well known for the quality of its preservation, quality revealing cellular and subcellular detail: from tracheids with preserved wall thickenings, to parenchyma cells containing preserved cellular contents. Some of the parenchyma cells contain oblate particles about 1–5 μm in diameter, interpreted as putative amyloplasts. Classification The exceptional preservation of Osmundastrum pulchellum has allowed the establishment of an evolutionary overview of royal ferns since the lower Jurassic. At its description as Osmunda pulchella, it was compared with Todea, Leptopteris, Plenasium and Claytosmunda, and found as a bridge in the morphological gap between extant Osmundastrum and the subgenus Osmunda inside Osmunda – the closest species to Osmundastrum. It was shown that this species and the extant Osmundaceae share the same chromosome count and DNA content. In 2017, a re-examination of the phylogeny of the fossil Osmundales showed it to be a member of the genus Osmundastrum and a probable precursor of the modern Osmundastrum cinnamomeum. Latter, a new species, Osmundastrum gvozdevae from the Middle Jurassic of the Russian Kursk Region was recovered as a possible sister taxon. Biology Osmundastrum pulchellum is well known thanks to exceptional preservation of detailed anatomical structures (e.g., pith, stele, petiole base, adventitious roots, and even nuclei). As well is the only known case of fossilized ongoing mitosis. This is shown by the fact that the chromosomes and cell nuclei show marked structural heterogeneities compared to the cell walls during different stages of the cell cycle. A rapid calcite permineralization "froze" the organic molecules in time, which suggests the fern rhizome was fossilized probably on a very short time, perhaps even minutes thanks to a fast lahar deposit. The tissues show cells with nuclei, nucleoli, and chromosomes during the interphase, prophase, prometaphase, and possible anaphase of the cell cycle. Some cells also show pyknotic nuclei typical of cells undergoing apoptosis (programmed cell death). The subcellular detail is nearly unique, as other ferns preserved in similar conditions lack them, for example Ashicaulis liaoningensis. Several biotic interactions were recovered on the rhizome. Exotic roots were recovered on the petiole bases, with a level of preservation that matches that of the whole plant, bearing a similar vasculature as seen in modern lycophytes. They are interpreted as belonging to a small herbaceous epiphytic lycopsid, with its megaspores also linked with the specimen. Other sporangial fragments from other ferns (Deltoidospora toralis, Cibotiumspora jurienensis, etc.) were also recovered, known from the nearby deposits. A similar community was recovered on a Todea rhizome from the early Eocene of Patagonia, but with the epiphytic plants being in Osmundastrum pulchellum exclusively lycopsids and ferns, which may indicate that bryophytes had not yet evolved the epiphytic habit during the Jurassic. Possible oogonia of Peronosporomycetes are found in a parasitic or saprotrophic relation with the plant. If the identification of the oogonia of Peronosporomycetes is correct, then this implies regularly moist conditions for the growth of Osmundastrum pulchellum. Thread-like structures were found, identified as derived from a pathogenic or saprotrophic fungus invading necrotic tissues of the host plant. The interaction of the fungus with the plant was probably mycorrhizal. 
Excavations up to 715 μm in diameter are evident, filled with pellets that resemble the coprolites of oribatid mites, found also in Paleozoic and Mesozoic woods. Paleoenvironment The Djupadal Formation was deposited in the Central Skane region, linked to the late Early Jurassic Volcanism. Several coeval Volcanic necks are recovered on the region, such as Eneskogen (A large hill covered by quaternary sediments. Some few boulders and basalt pillars were exposed), Bonnarp (5–6 m height and covers roughly 5,000 square meters, covered by Jurassic sediments) and Säte (Comprise two basalt pipes, each roughly 6–10 m high and some 10,000 square meters in area). The Korsaröd member includes a volcanic-derived lagerstatten where this fern was found, probably derived from a fast lahar deposition. Thanks to the data provided by the fossilized wood rings, it was found that the location of Korsaröd hosted a middle-latitude Mediterranean-type biome in the late Early Jurassic, with low rainfall. Superimposed on this climate were the effects of a local active Strombolian Volcanism and hydrothermal activity. This location has been compared with modern Rotorua, New Zealand, considered an analogue for the type of environment represented in southern Sweden at this time. The locality was populated mostly by Cupressaceae trees (including specimens up to 5 m in circunference), known thanks to the great abundance of the wood genus Protophyllocladoxylon and the high presence of the genus Perinopollenites elatoides (also Cupressaceae) and Eucommiidites troedsonii (Erdtmanithecales). The underlying Höör Sandstone Formation hosts abundant Chasmatosporites spp. pollen produced by plants related to cycadophytes, while the Djupadal volcanogenic deposits are dominated by cypress family pollen with an understorey component rich in putative Erdtmanithecales, both representing vegetation of disturbed habitats. The abundance of Protophyllocladoxylon sp. is also related with a sporadic intraseasonal and multi-year episodes of growth disruption, probably due to the volcanic action. Pollen, spores, wood and charcoal locally indicate a complex forest community subject to episodic fires and other forms of disturbance in an active volcanic landscape under a moderately seasonal climate. Osmundastrum pulchellum was a prominent understorey element in this vegetation and was probably involved in various competitive interactions with neighboring plant species, such as lycophytes, whose roots have been recovered inside the rhizome. The ferns were part of a fern- and conifer-rich vegetation occupying a topographic depression in the landscape (moist gully) that was engulfed by one or more lahar deposits. References Osmundales Plants described in 2015 Jurassic Sweden Paleontology in Sweden Prehistoric plants
Osmundastrum pulchellum
Biology
2,223
2,869,000
https://en.wikipedia.org/wiki/Atovaquone
Atovaquone, sold under the brand name Mepron, is an antimicrobial medication for the prevention and treatment of Pneumocystis jirovecii pneumonia (PCP). Atovaquone is a chemical compound that belongs to the class of naphthoquinones. Atovaquone is a hydroxy-1,4-naphthoquinone, an analog of both ubiquinone and lawsone. Medical uses Atovaquone is a medication used to treat or prevent: For pneumocystis pneumonia (PCP), it is used in mild cases, although it is not approved for treatment of severe cases. For toxoplasmosis, the medication has antiparasitic and therapeutic effects. For malaria, it is one of the two components (along with proguanil) in the drug Malarone. Malarone has fewer side effects and is more expensive than mefloquine. Resistance has been observed. For babesia, it is often used in conjunction with oral azithromycin. Trimethoprim/sulfamethoxazole (TMP-SMX, Bactrim) is generally considered first-line therapy for PCP (not to be confused with sulfadiazine and pyrimethamine, which is first line for toxoplasmosis). However, atovaquone may be used in patients who cannot tolerate, or are allergic to, sulfonamide medications such as TMP-SMX. In addition, atovaquone has the advantage of not causing myelosuppression, which is an important issue in patients who have undergone bone marrow transplantation. Atovaquone is given prophylactically to kidney transplant patients to prevent PCP in cases where Bactrim is contraindicated for the patient. Malaria Atovaquone, as a combination preparation with proguanil, has been commercially available from GlaxoSmithKline since 2000 as Malarone for the treatment and prevention of malaria. Research COVID-19 Preliminary research found that atovaquone could inhibit the replication of SARS-CoV-2 in vitro. Clinical trials of atovaquone for the treatment of COVID-19 are planned, and ongoing in United States in December 2021. Atovaquone has also been found to inhibit human coronavirus OC43 and feline coronavirus in vitro. In newer researches, atovaquone did not demonstrate evidence of enhanced SARS-CoV-2 viral clearance compared with placebo. Veterinary use Atovaquone is used in livestock veterinary cases of babesiosis in cattle, especially if imidocarb resistance is a concern. References Further reading External links Antimalarial agents Antiprotozoal agents 1,4-Naphthoquinones 4-Chlorophenyl compounds Drugs developed by GSK plc Hydroxynaphthoquinones
Atovaquone
Biology
613
32,040,137
https://en.wikipedia.org/wiki/Technological%20unemployment
Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment. Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), and humans' role in these processes are minimized. Just as horses were gradually made obsolete as transport by the automobile and as labourer by the tractor, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. Thousands of man-years of work was performed in a matter of hours by the bombe codebreaking machine during World War II. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills and cashierless stores. That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs, whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". The issue of machines displacing human labour has been discussed since at least Aristotle's time. Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it stayed apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment. The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century. 
In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may increase worldwide. Oxford professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their methodology has been challenged as lacking evidential foundation and criticised for implying that technology (rather than social policy) creates unemployment rather than redundancies. On the PBS NewsHour, the authors defended their findings and clarified that they do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Recent technological innovations have the potential to displace humans in the professional, white-collar, low-skilled, creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. History Classical era According to author Gregory Woirol, the phenomenon of technological unemployment is likely to have existed since at least the invention of the wheel. Ancient societies had various methods for relieving the poverty of those unable to support themselves with their own labour. Ancient China and ancient Egypt may have had various centrally run relief programmes in response to technological unemployment dating back to at least the second millennium BC. Ancient Hebrews and adherents of the ancient Vedic religion had decentralised responses where aiding the poor was encouraged by their faiths. In ancient Greece, large numbers of free labourers could find themselves unemployed due to both the effects of ancient labour-saving technology and to competition from slaves ("machines of flesh and blood"). Sometimes, these unemployed workers would starve to death or were forced into slavery themselves, although in other cases they were supported by handouts. Pericles responded to perceived technological unemployment by launching public works programmes to provide paid work to the jobless. Critics who accused Pericles's programmes of wasting public money were defeated. Perhaps the earliest example of a scholar discussing the phenomenon of technological unemployment occurs with Aristotle, who speculated in Book One of Politics that if machines could become sufficiently advanced, there would be no more need for human labour. Similar to the Greeks, ancient Romans responded to the problem of technological unemployment by relieving poverty with handouts. Several hundred thousand families were sometimes supported like this at once. Less often, jobs were directly created with public works programmes, such as those launched by the Gracchi. Various emperors even went as far as to refuse or ban labour-saving innovations. In one instance, the introduction of a labour-saving invention was blocked when Emperor Vespasian refused to allow a new method of low-cost transportation of heavy goods, saying "You must allow my poor hauliers to earn their bread."
Labour shortages began to develop in the Roman empire towards the end of the second century AD, and from this point mass unemployment in Europe appears to have largely receded for over a millennium. Post-classical era The medieval and early renaissance period saw the widespread adoption of newly invented technologies, as well as older ones which had been conceived yet barely used in the Classical era. Some were invented in Europe while others were invented in more Eastern countries like China, India, Arabia and Persia. The Black Death left fewer workers across Europe. Mass unemployment began to reappear in Europe, especially in Western, Central and Southern Europe in the 15th century, partly as a result of population growth, and partly due to changes in the availability of land for subsistence farming caused by early enclosures. As a result of the threat of unemployment, there was less tolerance for disruptive new technologies. European authorities would often side with groups representing subsections of the working population, such as Guilds, banning new technologies and sometimes even executing those who tried to promote or trade in them. 16th to 18th century In Great Britain, the ruling elite began to take a less restrictive approach to innovation somewhat earlier than in much of continental Europe, which has been cited as a possible reason for Britain's early lead in driving the Industrial Revolution. Yet concern over the impact of innovation on employment remained strong through the 16th and early 17th century. A famous example of new technology being refused occurred when the inventor William Lee invited Queen Elizabeth I to view a labour saving knitting machine. The Queen declined to issue a patent on the grounds that the technology might cause unemployment among textile workers. After moving to France and also failing to achieve success in promoting his invention, Lee returned to England but was again refused by Elizabeth's successor James I for the same reason. After the Glorious Revolution, authorities became less sympathetic to workers concerns about losing their jobs due to innovation. An increasingly influential strand of Mercantilist thought held that introducing labour saving technology would actually reduce unemployment, as it would allow British firms to increase their market share against foreign competition. From the early 18th century workers could no longer rely on support from the authorities against the perceived threat of technological unemployment. They would sometimes take direct action, such as machine breaking, in attempts to protect themselves from disruptive innovation. Joseph Schumpeter notes that as the 18th century progressed, thinkers would raise the alarm about technological unemployment with increasing frequency, with von Justi being a prominent example. Yet Schumpeter also notes that the prevailing view among the elite solidified on the position that technological unemployment would not be a long-term problem. 19th century It was only in the 19th century that debates over technological unemployment became intense, especially in Great Britain where many economic thinkers of the time were concentrated. Building on the work of Dean Tucker and Adam Smith, political economists began to create what would become the modern discipline of economics. While rejecting much of mercantilism, members of the new discipline largely agreed that technological unemployment would not be an enduring problem. 
In the first few decades of the 19th century, several prominent political economists did, however, argue against the optimistic view, claiming that innovation could cause long-term unemployment. These included Sismondi, Malthus, J S Mill, and from 1821, David Ricardo himself. As arguably the most respected political economist of his age, Ricardo's view was challenging to others in the discipline. The first major economist to respond was Jean-Baptiste Say, who argued that no one would introduce machinery if they were going to reduce the amount of product, and that as Say's law states that supply creates its own demand, any displaced workers would automatically find work elsewhere once the market had had time to adjust. Ramsey McCulloch expanded and formalised Say's optimistic views on technological unemployment, and was supported by others such as Charles Babbage, Nassau Senior and many other lesser known political economists. Towards the middle of the 19th century, Karl Marx joined the debates. Building on the work of Ricardo and Mill, Marx went much further, presenting a deeply pessimistic view of technological unemployment; his views attracted many followers and founded an enduring school of thought but mainstream economics was not dramatically changed. By the 1870s, at least in Great Britain, technological unemployment faded both as a popular concern and as an issue for academic debate. It had become increasingly apparent that innovation was increasing prosperity for all sections of British society, including the working class. As the classical school of thought gave way to neoclassical economics, mainstream thinking was tightened to take into account and refute the pessimistic arguments of Mill and Ricardo. 20th century For the first two decades of the 20th century, mass unemployment was not the major problem it had been in the first half of the 19th. While the Marxist school and a few other thinkers continued to challenge the optimistic view, technological unemployment was not a significant concern for mainstream economic thinking until the mid to late 1920s. In the 1920s mass unemployment re-emerged as a pressing issue within Europe. At this time the U.S. was generally more prosperous, but even there urban unemployment had begun to increase from 1927. Rural American workers had been suffering job losses from the start of the 1920s; many had been displaced by improved agricultural technology, such as the tractor. The centre of gravity for economic debates had by this time moved from Great Britain to the United States, and it was here that the 20th century's two great periods of debate over technological unemployment largely occurred. The peak periods for the two debates were in the 1930s and the 1960s. According to economic historian Gregory R Woirol, the two episodes share several similarities. In both cases academic debates were preceded by an outbreak of popular concern, sparked by recent rises in unemployment. In both cases the debates were not conclusively settled, but faded away as unemployment was reduced by an outbreak of war – World War II for the debate of the 1930s, and the Vietnam War for the 1960s episodes. In both cases, the debates were conducted within the prevailing paradigm at the time, with little reference to earlier thought. In the 1930s, optimists based their arguments largely on neo-classical beliefs in the self-correcting power of markets to reduce any short-term unemployment via compensation effects. 
In the 1960s, belief in compensation effects was less strong, but the mainstream Keynesian economists of the time largely believed government intervention would be able to counter any persistent technological unemployment that was not cleared by market forces. Another similarity was the publication of a major Federal study towards the end of each episode, which broadly found that long-term technological unemployment was not occurring (though the studies did agree innovation was a major factor in the short term displacement of workers, and advised government action to provide assistance). As the golden age of capitalism came to a close in the 1970s, unemployment once again rose, and this time generally remained relatively high for the rest of the century, across most advanced economies. Several economists once again argued that this may be due to innovation, with perhaps the most prominent being Paul Samuelson. Overall, the closing decades of the 20th century saw most concern expressed over technological unemployment in Europe, though there were several examples in the U.S. A number of popular works warning of technological unemployment were also published. These included James S. Albus's 1976 book titled Peoples' Capitalism: The Economics of the Robot Revolution; David F. Noble with works published in 1984 and 1993; Jeremy Rifkin and his 1995 book The End of Work; and the 1996 book The Global Trap. Yet for the most part, other than during the periods of intense debate in the 1930s and 60s, the consensus in the 20th century among both professional economists and the general public remained that technology does not cause long-term joblessness. 21st century Opinions The general consensus that innovation does not cause long-term unemployment held strong for the first decade of the 21st century although it continued to be challenged by a number of academic works, and by popular works such as Marshall Brain's Robotic Nation and Martin Ford's The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. Since the publication of their 2011 book Race Against the Machine, MIT professors Andrew McAfee and Erik Brynjolfsson have been prominent among those raising concern about technological unemployment. The two professors remain relatively optimistic, however, stating "the key to winning the race is not to compete against machines but to compete with machines". Concern about technological unemployment grew in 2013 due in part to a number of studies predicting substantially increased technological unemployment in forthcoming decades and empirical evidence that, in certain sectors, employment is falling worldwide despite rising output, thus discounting globalization and offshoring as the only causes of increasing unemployment. In 2013, professor Nick Bloom of Stanford University stated there had recently been a major change of heart concerning technological unemployment among his fellow economists. In 2014 the Financial Times reported that the impact of innovation on jobs has been a dominant theme in recent economic discussion. According to the academic and former politician Michael Ignatieff writing in 2014, questions concerning the effects of technological change have been "haunting democratic politics everywhere". 
Concerns have included evidence showing worldwide falls in employment across sectors such as manufacturing; falls in pay for low and medium skilled workers stretching back several decades even as productivity continues to rise; the increase in often precarious platform mediated employment; and the occurrence of "jobless recoveries" after recent recessions. The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots. Former U.S. Treasury Secretary and Harvard economics professor Lawrence Summers stated in 2014 that he no longer believed automation would always create new jobs and that "This isn't some hypothetical future possibility. This is something that's emerging before us right now." Summers noted that already, more labor sectors were losing jobs than creating new ones. While himself doubtful about technological unemployment, professor Mark MacCarthy stated in the fall of 2014 that it is now the "prevailing opinion" that the era of technological unemployment has arrived. At the 2014 Davos meeting, Thomas Friedman reported that the link between technology and unemployment seemed to have been the dominant theme of that year's discussions. A survey at Davos 2014 found that 80% of 147 respondents agreed that technology was driving jobless growth. At the 2015 Davos, Gillian Tett found that almost all delegates attending a discussion on inequality and technology expected an increase in inequality over the next five years, and gives the reason for this as the technological displacement of jobs. 2015 saw Martin Ford win the Financial Times and McKinsey Business Book of the Year Award for his Rise of the Robots: Technology and the Threat of a Jobless Future, and saw the first world summit on technological unemployment, held in New York. In late 2015, further warnings of potential worsening for technological unemployment came from Andy Haldane, the Bank of England's chief economist, and from Ignazio Visco, the governor of the Bank of Italy. In an October 2016 interview, US President Barack Obama said that due to the growth of artificial intelligence, society would be debating "unconditional free money for everyone" within 10 to 20 years. In 2019, computer scientist and artificial intelligence expert Stuart J. Russell stated that "in the long run nearly all current jobs will go away, so we need fairly radical policy changes to prepare for a very different future economy." In a book he authored, Russell claims that "One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary." However, he predicted that employment in healthcare, home care, and construction would increase. Other economists have argued that long-term technological unemployment is unlikely. In 2014, Pew Research canvassed 1,896 technology professionals and economists and found a split of opinion: 48% of respondents believed that new technologies would displace more jobs than they would create by the year 2025, while 52% maintained that they would not. Economics professor Bruce Chapman from Australian National University has advised that studies such as Frey and Osborne's tend to overstate the probability of future job losses, as they don't account for new employment likely to be created, due to technology, in what are currently unknown areas. 
Building on this point, small and mid-sized businesses have created large numbers of new jobs around the world, and giving entrepreneurs and investors the freedom to create and grow businesses is seen as especially important as new technologies emerge every day. These new businesses would in turn require large numbers of workers, improving the overall employment situation and replacing jobs that were previously lost. General public surveys have often found an expectation that automation would impact jobs widely, but not the jobs held by those particular people surveyed. Studies A number of studies have predicted that automation will take a large proportion of jobs in the future, but estimates of the level of unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk. It estimated that 47% of US jobs were at high risk of automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank, found that the risk of automation in developing countries was much higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried electrical & electronics industry positions in Thailand, 75% of salaried electrical & electronics industry positions in Vietnam, 63% of salaried electrical & electronics industry positions in Indonesia, and 81% of salaried electrical & electronics industry positions in the Philippines were at high risk of automation. A 2016 United Nations report stated that 75% of jobs in the developing world were at risk of automation, and predicted that more jobs might be lost when corporations stop outsourcing to developing countries after automation in industrialized countries makes it less lucrative to outsource to countries with lower labor costs.
The Council of Economic Advisers, a US government agency tasked with providing economic research for the White House, in the 2016 Economic Report of the President, used the data from the Frey and Osborne study to estimate that 83% of jobs with an hourly wage below $20, 31% of jobs with an hourly wage between $20 and $40, and 4% of jobs with an hourly wage above $40 were at risk of automation. A 2016 study by Ryerson University (now Toronto Metropolitan University) found that 42% of jobs in Canada were at risk of automation, dividing them into two categories - "high risk" jobs and "low risk" jobs. High risk jobs were mainly lower-income jobs that required lower education levels than average. Low risk jobs were on average more skilled positions. The report found a 70% chance that high risk jobs and a 30% chance that low risk jobs would be affected by automation in the next 10–20 years. A 2017 study by PricewaterhouseCoopers found that up to 38% of jobs in the US, 35% of jobs in Germany, 30% of jobs in the UK, and 21% of jobs in Japan were at high risk of being automated by the early 2030s. A 2017 study by Ball State University found about half of American jobs were at risk of automation, many of them low-income jobs. A September 2017 report by McKinsey & Company found that as of 2015, 478 billion out of 749 billion working hours per year dedicated to manufacturing, or $2.7 trillion out of $5.1 trillion in labor, were already automatable. In low-skill areas, 82% of labor in apparel goods, 80% of agriculture processing, 76% of food manufacturing, and 60% of beverage manufacturing were subject to automation. In mid-skill areas, 72% of basic materials production and 70% of furniture manufacturing was automatable. In high-skill areas, 52% of aerospace and defense labor and 50% of advanced electronics labor could be automated. In October 2017, a survey of information technology decision makers in the US and UK found that a majority believed that most business processes could be automated by 2022. On average, they said that 59% of business processes were subject to automation. A November 2017 report by the McKinsey Global Institute that analyzed around 800 occupations in 46 countries estimated that between 400 million and 800 million jobs could be lost due to robotic automation by 2030. It estimated that jobs were more at risk in developed countries than developing countries due to a greater availability of capital to invest in automation. Job losses and downward mobility blamed on automation has been cited as one of many factors in the resurgence of nationalist and protectionist politics in the US, UK and France, among other countries. However, not all recent empirical studies have found evidence to support the idea that automation will cause widespread unemployment. A study released in 2015, examining the impact of industrial robots in 17 countries between 1993 and 2007, found no overall reduction in employment was caused by the robots, and that there was a slight increase in overall wages. According to a study published in McKinsey Quarterly in 2015 the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform. A 2016 OECD study found that among the 21 OECD countries surveyed, on average only 9% of jobs were in foreseeable danger of automation, but this varied greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%. 
In contrast to other studies, the OECD study does not base its assessment only on the tasks that a job entails, but also includes demographic variables, including sex, education and age. It is not clear, however, why a job should be more or less automatable simply because it is performed by a woman. In 2017, Forrester estimated that automation would result in a net loss of about 7% of jobs in the US by 2027, replacing 17% of jobs while creating new jobs equivalent to 10% of the workforce. Another study argued that the risk to US jobs from automation had been overestimated due to factors such as the heterogeneity of tasks within occupations and the adaptability of jobs being neglected. The study found that once this was taken into account, the number of occupations at risk of automation in the US drops, ceteris paribus, from 38% to 9%. A 2017 study on the effect of automation on Germany found no evidence that automation caused total job losses, but that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing workers were also not at risk from automation and were in fact more likely to remain employed, though not necessarily doing the same tasks. However, automation did result in a decrease in labour's income share as it raised productivity but not wages. A 2018 Brookings Institution study that analyzed 28 industries in 18 OECD countries from 1970 to 2018 found that automation was responsible for holding down wages. Although it concluded that automation did not reduce the overall number of jobs available and even increased them, it found that from the 1970s to the 2010s, it had reduced the share of human labor in the value added to the work, and thus had helped to slow wage growth. In April 2018, Adair Turner, former Chairman of the Financial Services Authority and head of the Institute for New Economic Thinking, stated that it would already be possible to automate 50% of jobs with current technology, and that it will be possible to automate all jobs by 2060. Premature deindustrialization Premature deindustrialization occurs when developing nations deindustrialize without first becoming rich, as happened with the advanced economies. The concept was popularized by Dani Rodrik in 2013, who went on to publish several papers showing the growing empirical evidence for the phenomenon. Premature deindustrialization adds to concern over technological unemployment for developing countries, as the traditional compensation effects that advanced-economy workers enjoyed – such as being able to get well-paid work in the service sector after losing their factory jobs – may not be available. Some commentators, such as Carl Benedikt Frey, argue that with the right responses, the negative effects of further automation on workers in developing economies can still be avoided. Artificial intelligence Since about 2017, a new wave of concern over technological unemployment has become prominent, this time over the effects of artificial intelligence (AI). Commentators including Calum Chace and Daniel Hulme have warned that if unchecked, AI threatens to cause an "economic singularity", with job churn too rapid for humans to adapt to, leading to widespread technological unemployment. However, they also advise that with the right responses by business leaders, policy makers and society, the impact of AI could be a net positive for workers. Morgan R. Frank et al.
caution that there are several barriers preventing researchers from making accurate predictions of the effects AI will have on future job markets. Marian Krakovsky has argued that the jobs most likely to be completely replaced by AI are in middle-class areas, such as professional services. Often, the practical solution is to find another job, but workers may not have the qualifications for high-level jobs and so must drop to lower-level jobs. However, Krakovsky (2018) predicts that AI will largely take the route of "complementing people," rather than "replicating people," suggesting that the goal of those implementing AI is to improve the lives of workers, not to replace them. Studies have also shown that, rather than solely destroying jobs, AI can also create work, albeit low-skill jobs to train AI in low-income countries. Following Russian president Vladimir Putin's 2017 statement that whichever country first achieves mastery in AI "will become the ruler of the world", various national and supranational governments have announced AI strategies. Concerns about not falling behind in the AI arms race have been more prominent than worries over AI's potential to cause unemployment. Several strategies suggest that achieving a leading role in AI should help their citizens get more rewarding jobs. Finland has aimed to help the citizens of other EU nations acquire the skills they need to compete in the post-AI jobs market, making a free course on "The Elements of AI" available in multiple European languages. Oracle CEO Mark Hurd predicted that AI "will actually create more jobs, not less jobs" as humans will be needed to manage AI systems. Martin Ford argues that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Certain digital technologies are predicted to result in more job losses than others. For example, in recent years, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or employing robots, would result in job losses in the future. This is especially true for companies in Central and Eastern Europe. Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment. Issues within the debates Long-term effects on employment Participants in the technological unemployment debates agree that temporary job losses can result from technological innovation. Similarly, there is no dispute that innovation sometimes has positive effects on workers. Disagreement focuses on whether it is possible for innovation to have a lasting negative impact on overall employment. Levels of persistent unemployment can be quantified empirically, but the causes are subject to debate. Optimists accept that short-term unemployment may be caused by innovation, yet claim that after a while, compensation effects will always create at least as many jobs as were originally destroyed. While this optimistic view has been continually challenged, it was dominant among mainstream economists for most of the 19th and 20th centuries. For example, labor economists Jacob Mincer and Stephan Danninger developed an empirical study using data from the Panel Study of Income Dynamics, and find that although in the short run, technological progress seems to have unclear effects on aggregate unemployment, it reduces unemployment in the long run.
When they include a 5-year lag, however, the evidence supporting a short-run employment effect of technology seems to disappear as well, suggesting that technological unemployment "appears to be a myth". Other studies, on the other hand, suggest that the labour-market effects of technologies such as industrial robots strongly depend on domestic institutional context. The concept of structural unemployment, a lasting level of joblessness that does not disappear even at the high point of the business cycle, became popular in the 1960s. For pessimists, technological unemployment is one of the factors driving the wider phenomenon of structural unemployment. Since the 1980s, even optimistic economists have increasingly accepted that structural unemployment has indeed risen in advanced economies, but they have tended to attribute this to globalisation and offshoring rather than technological change. Others claim that a chief cause of the lasting increase in unemployment has been the reluctance of governments to pursue expansionary policies since the displacement of Keynesianism that occurred in the 1970s and early 80s. In the 21st century, and especially since 2013, pessimists have been arguing with increasing frequency that lasting worldwide technological unemployment is a growing threat. Compensation effects Compensation effects are labour-friendly consequences of innovation which "compensate" workers for job losses initially caused by new technology. In the 1820s, several compensation effects were described by Jean-Baptiste Say in response to Ricardo's statement that long-term technological unemployment could occur. Soon after, a whole system of effects was developed by Ramsay McCulloch. The system was labelled "compensation theory" by Karl Marx, who criticized its ideas, arguing that none of the effects were guaranteed to operate. Disagreement over the effectiveness of compensation effects has remained a central part of academic debates on technological unemployment ever since. Compensation effects include: By new machines. (The labour needed to build the new equipment that applied innovation requires.) By new investments. (Enabled by the cost savings and therefore increased profits from the new technology.) By changes in wages. (In cases where unemployment does occur, this can cause a lowering of wages, thus allowing more workers to be re-employed at the now lower cost. On the other hand, sometimes workers will enjoy wage increases as their profitability rises. This leads to increased income and therefore increased spending, which in turn encourages job creation.) By lower prices. (Which then lead to more demand, and therefore more employment.) Lower prices can also help offset wage cuts, as cheaper goods will increase workers' buying power. By new products. (Where innovation directly creates new jobs.) The "by new machines" effect is now rarely discussed by economists; it is often accepted that Marx successfully refuted it. Even pessimists often concede that product innovation associated with the "by new products" effect can sometimes have a positive effect on employment. An important distinction can be drawn between 'process' and 'product' innovations. Evidence from Latin America seems to suggest that product innovation significantly contributes to employment growth at the firm level, more so than process innovation.
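The "by lower prices" channel can be made concrete with a deliberately crude calculation: a process innovation cuts the labour needed per unit of output, and employment recovers only to the extent that lower prices raise the quantity demanded. The sketch below uses invented numbers and a constant price elasticity; it is a toy illustration of the mechanism, not a model taken from the literature discussed here.

    # Toy illustration of the "by lower prices" compensation effect.
    # All numbers are invented; the point is only that the outcome depends on
    # how strongly demand responds to the price cut.
    def employment_change(labour_saving, price_cut, demand_elasticity):
        """Proportional change in total labour demanded.

        labour_saving     -- fraction of labour no longer needed per unit (e.g. 0.20)
        price_cut         -- fraction by which the output price falls (e.g. 0.10)
        demand_elasticity -- absolute price elasticity of demand (e.g. 1.5)
        """
        labour_per_unit = 1.0 - labour_saving            # each unit now needs less labour
        quantity = 1.0 + demand_elasticity * price_cut   # more units sold at the lower price
        return labour_per_unit * quantity - 1.0

    # A 20% labour saving passed through as a 10% price cut:
    for elasticity in (0.5, 1.5, 3.0):
        change = employment_change(0.20, 0.10, elasticity)
        print(f"elasticity {elasticity}: employment changes by {change:+.1%}")

Only when demand is sufficiently elastic does the extra output absorb the displaced labour, which is essentially what the optimists and pessimists discussed in the following passages disagree about.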
The extent to which the other effects are successful in compensating the workforce for job losses has been extensively debated throughout the history of modern economics; the issue is still not resolved. One such effect that potentially complements the compensation effect is the job multiplier. According to research developed by Enrico Moretti, with each additional skilled job created in high tech industries in a given city, more than two jobs are created in the non-tradable sector. His findings suggest that technological growth and the resulting job-creation in high-tech industries might have a more significant spillover effect than anticipated. Evidence from Europe also supports such a job multiplier effect, showing that local high-tech jobs could create five additional low-tech jobs. Many economists pessimistic about technological unemployment accept that compensation effects did largely operate as the optimists claimed through most of the 19th and 20th century. Yet they hold that the advent of computerisation means that compensation effects have become less effective. An early example of this argument was made by Wassily Leontief in 1983. He conceded that after some disruption, the advance of mechanization during the Industrial Revolution increased the demand for labour as well as increasing pay due to effects that flow from increased productivity. While early machines lowered the demand for muscle power, they were unintelligent and needed large numbers of human operators to remain productive. Yet since the introduction of computers into the workplace, there is now less need not just for muscle power but also for human brain power. Hence even as productivity continues to rise, the lower demand for human labour may mean less pay and employment. Luddite fallacy The term "Luddite fallacy" is sometimes used to express the view that those concerned about long-term technological unemployment are committing a fallacy, as they fail to account for compensation effects. People who use the term typically expect that technological progress will have no long-term impact on employment levels, and eventually will raise wages for all workers, because progress helps to increase the overall wealth of society. The term derives from the Luddites, members of an early 19th century English anti-textile-machinery organisation. During the 20th century and the first decade of the 21st century, the dominant view among economists has been that belief in long-term technological unemployment was indeed a fallacy. More recently, there has been increased support for the view that the benefits of automation are not equally distributed. There are two different theories of why long-term difficulty could develop. The first, traditionally ascribed to the Luddites (accurately or not), is that there is a finite amount of work available and that if machines do it, there can be none left for humans. Economists may call this the lump of labour fallacy, arguing that in reality no such limitation exists. The second is that a long-term difficulty can arise that has nothing to do with any lump of labour. In this view, the amount of work that can exist is infinite, but machines can do most of the "easy" work that requires less skill, talent, knowledge, or insight; the definition of what is "easy" expands as information technology progresses, and the work that lies beyond "easy" may require greater brainpower than most people have. This second view is supported by many modern advocates of the possibility of long-term, systemic technological unemployment.
Skill levels and technological unemployment A frequent view among those discussing the effect of innovation on the labour market has been that it mainly hurts those with low skills, while often benefiting skilled workers. According to scholars such as Lawrence F. Katz, this may have been true for much of the twentieth century, yet in the 19th century, innovations in the workplace largely displaced costly skilled artisans, and generally benefited the low skilled. While 21st century innovation has been replacing some unskilled work, other low skilled occupations remain resistant to automation, while white collar work requiring intermediate skills is increasingly being performed by autonomous computer programs. Some recent studies, however, such as a 2015 paper by Georg Graetz and Guy Michaels, found that at least in the area they studied – the impact of industrial robots – innovation is boosting pay for highly skilled workers while having a more negative impact on those with low to medium skills. A 2015 report by Carl Benedikt Frey, Michael Osborne and Citi Research agreed that innovation had been disruptive mostly to middle-skilled jobs, yet predicted that in the next ten years the impact of automation would fall most heavily on those with low skills. Geoffrey Colvin at Forbes argued that predictions about the kinds of work a computer will never be able to do have proven inaccurate. A better approach to anticipating the skills for which humans will provide value would be to identify activities where we will insist that humans remain accountable for important decisions, such as with judges, CEOs, bus drivers and government leaders, or where human nature can only be satisfied by deep interpersonal connections, even if those tasks could be automated. In contrast, others see even skilled human laborers becoming obsolete. Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted that computerization could make nearly half of jobs redundant; of the 702 professions assessed, they found that occupations requiring less education and paying lower incomes were the most susceptible to automation, with office jobs and service work among the most at risk. In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. The issue of redundant jobs is elaborated in a 2019 paper by Natalya Kozlova, according to which over 50% of workers in Russia perform work that requires low levels of education and can be replaced by applying digital technologies. Only 13% of those people possess education that exceeds the level of intellectual computer systems present today and expected within the following decade. Empirical findings There has been a significant amount of empirical research that attempts to quantify the impact of technological unemployment, mainly at the microeconomic level. Most existing firm-level research has found technological innovations to be labor-friendly. For example, German economists Stefan Lachenmaier and Horst Rottmann find that both product and process innovation have a positive effect on employment. They also find that process innovation has a more significant job creation effect than product innovation. This result is supported by evidence in the United States as well, which shows that manufacturing firm innovations have a positive effect on the total number of jobs, not just limited to firm-specific behavior.
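Firm-level studies of the kind just described typically regress employment growth on indicators of product and process innovation while controlling for firm characteristics. The sketch below runs such a regression on synthetic data with NumPy's least-squares routine; the data-generating coefficients are invented purely to show the shape of the exercise and do not reproduce Lachenmaier and Rottmann's estimates.

    # Minimal sketch of a firm-level innovation-and-employment regression on
    # synthetic data: employment growth ~ product innovation + process innovation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_firms = 500

    product_innov = rng.integers(0, 2, n_firms)   # 1 if the firm introduced a new product
    process_innov = rng.integers(0, 2, n_firms)   # 1 if the firm adopted a new process
    firm_size = rng.normal(0.0, 1.0, n_firms)     # standardised control variable

    # Assumed "true" effects, used only to generate the synthetic outcome.
    emp_growth = (0.01 + 0.04 * product_innov + 0.02 * process_innov
                  - 0.01 * firm_size + rng.normal(0.0, 0.05, n_firms))

    # Ordinary least squares fit.
    X = np.column_stack([np.ones(n_firms), product_innov, process_innov, firm_size])
    coef, *_ = np.linalg.lstsq(X, emp_growth, rcond=None)

    for name, b in zip(["constant", "product innovation", "process innovation", "firm size"], coef):
        print(f"{name:>20}: {b:+.3f}")

Positive coefficients on the innovation indicators would be read, as in the firm-level studies above, as a labour-friendly effect, though the industry- and economy-wide results discussed next need not point the same way.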
At the industry level, however, researchers have found mixed results with regard to the employment effect of technological changes. A 2017 study on manufacturing and service sectors in 11 European countries suggests that positive employment effects of technological innovations only exist in the medium- and high-tech sectors. There also seems to be a negative correlation between employment and capital formation, which suggests that technological progress could potentially be labor-saving given that process innovation is often incorporated in investment. Limited macroeconomic analysis has been done to study the relationship between technological shocks and unemployment. The small amount of existing research, however, suggests mixed results. Italian economist Marco Vivarelli finds that the labor-saving effect of process innovation appears to have affected the Italian economy more negatively than the United States. On the other hand, the job creating effect of product innovation could only be observed in the United States, not Italy. Another study in 2013 finds a more transitory, rather than permanent, unemployment effect of technological change. Measures of technological innovation There have been four main approaches that attempt to capture and document technological innovation quantitatively. The first one, proposed by Jordi Gali in 1999 and further developed by Neville Francis and Valerie A. Ramey in 2005, is to use long-run restrictions in a vector autoregression (VAR) to identify technological shocks, assuming that only technology affects long-run productivity. The second approach is from Susanto Basu, John Fernald and Miles Kimball. They create a measure of aggregate technology change with augmented Solow residuals, controlling for aggregate, non-technological effects such as non-constant returns and imperfect competition. The third method, initially developed by John Shea in 1999, takes a more direct approach and employs observable indicators such as research and development (R&D) spending, and number of patent applications. This measure of technological innovation is widely used in empirical research, since it does not rely on the assumption that only technology affects long-run productivity, and fairly accurately captures output variation based on input variation. However, there are limitations with direct measures such as R&D. For example, since R&D only measures the input in innovation, the output is unlikely to be perfectly correlated with the input. In addition, R&D fails to capture the indeterminate lag between developing a new product or service, and bringing it to market. The fourth approach, constructed by Michelle Alexopoulos, looks at the number of new titles published in the fields of technology and computer science to reflect technological progress, which he found to be consistent with R&D expenditure data. Compared with R&D, this indicator captures the lag between changes in technology. Solutions Preventing net job losses Banning/refusing innovation Historically, innovations were sometimes banned due to concerns about their impact on employment. Since the development of modern economics, however, this option has generally not even been considered as a solution, at least not for the advanced economies. Even commentators who are pessimistic about long-term technological unemployment invariably consider innovation to be an overall benefit to society, with J. S. 
Mill being perhaps the only prominent western political economist to have suggested prohibiting the use of technology as a possible solution to unemployment. Gandhian economics called for a delay in the uptake of labour saving machines until unemployment was alleviated; however, this advice was largely rejected by Nehru, who was to become prime minister once India achieved its independence. The policy of slowing the introduction of innovation so as to avoid technological unemployment was, however, implemented in the 20th century within China under Mao's administration. Shorter working hours In 1870, the average American worker clocked up about 75 hours per week. Just prior to World War II working hours had fallen to about 42 per week, and the fall was similar in other advanced economies. According to Wassily Leontief, this was a voluntary increase in technological unemployment. The reduction in working hours helped share out available work, and was favoured by workers who were happy to reduce hours to gain extra leisure, as innovation was at the time generally helping to increase their rates of pay. Further reductions in working hours have been proposed as a possible solution to unemployment by economists including John R. Commons, Lord Keynes and Luigi Pasinetti. Yet once working hours have reached about 40 hours per week, workers have been less enthusiastic about further reductions, both to prevent loss of income and as many value engaging in work for its own sake. Generally, 20th-century economists had argued against further reductions as a solution to unemployment, saying it reflects a lump of labour fallacy. In 2014, Google's co-founder, Larry Page, suggested a four-day workweek, so as technology continues to displace jobs, more people can find employment. Public works Programmes of public works have traditionally been used as a way for governments to directly boost employment, though this has often been opposed by some, but not all, conservatives. Jean-Baptiste Say, although generally associated with free market economics, advised that public works could be a solution to technological unemployment. Some commentators, such as professor Mathew Forstater, have advised that public works and guaranteed jobs in the public sector may be the ideal solution to technological unemployment, as unlike welfare or guaranteed income schemes they provide people with the social recognition and meaningful engagement that comes with work. For less developed economies, public works may be an easier solution to administer compared to universal welfare programmes. A partial exception is for spending on infrastructure, which has been recommended as a solution to technological unemployment even by economists previously associated with a neoliberal agenda, such as Larry Summers. Education Improved availability of quality education, including skills training for adults, is a solution that in principle at least is not opposed by any side of the political spectrum, and is welcomed even by those who are optimistic about the long-term employment effects of technology. Improved education paid for by government tends to be especially popular with industry. However, several academics have argued that improved education alone will not be sufficient to solve technological unemployment, pointing to recent declines in the demand for many intermediate skills, and suggesting that not everyone is capable of becoming proficient in the most advanced skills.
Kim Taipale has said that "The era of bell curve distributions that supported a bulging social middle class is over... Education per se is not going to make up the difference," while back in 2011 Paul Krugman argued that better education would be an insufficient solution to technological unemployment. Living with technological unemployment Welfare payments The use of various forms of subsidies has often been accepted as a solution to technological unemployment even by conservatives and by those who are optimistic about the long-term effect on jobs. Welfare programmes have historically tended to be more durable once established, compared with other solutions to unemployment such as directly creating jobs with public works. Despite being the first person to create a formal system describing compensation effects, Ramsay McCulloch and most other classical economists advocated government aid for those suffering from technological unemployment, as they understood that market adjustment to new technology was not instantaneous and that those displaced by labour-saving technology would not always be able to immediately obtain alternative employment through their own efforts. Basic income Several commentators have argued that traditional forms of welfare payment may be inadequate as a response to the future challenges posed by technological unemployment, and have suggested a basic income as an alternative. People advocating some form of basic income as a solution to technological unemployment include Martin Ford, Erik Brynjolfsson, Robert Reich, Andrew Yang, Elon Musk, Zoltan Istvan, and Guy Standing. Reich has gone as far as to say the introduction of a basic income, perhaps implemented as a negative income tax, is "almost inevitable", while Standing has said he considers that a basic income is becoming "politically essential". Since late 2015, new basic income pilots have been announced in Finland, the Netherlands, and Canada. Further recent advocacy for basic income has arisen from a number of technology entrepreneurs, the most prominent being Sam Altman, president of Y Combinator. Skepticism about basic income includes both right and left elements, and proposals for different forms of it have come from all segments of the spectrum. For example, while the best-known proposed forms (with taxation and distribution) are usually thought of as left-leaning ideas that right-leaning people try to defend against, other forms have been proposed even by libertarians, such as von Hayek and Friedman. In the United States, President Richard Nixon's Family Assistance Plan (FAP) of 1969, which had much in common with basic income, passed in the House but was defeated in the Senate. One objection to basic income is that it could be a disincentive to work, but evidence from older pilots in India, Africa, and Canada indicates that this does not happen and that a basic income encourages low-level entrepreneurship and more productive, collaborative work. Another objection is that funding it sustainably is a huge challenge. While new revenue-raising ideas have been proposed, such as Martin Ford's wage recapture tax, how to fund a generous basic income remains a debated question, and skeptics have dismissed it as utopian. Even from a progressive viewpoint, there are concerns that a basic income set too low may not help the economically vulnerable, especially if financed largely from cuts to other forms of welfare.
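The funding objection can be illustrated with simple arithmetic. The sketch below multiplies an assumed benefit level by an assumed number of eligible adults; both inputs are stand-ins chosen for illustration rather than a costing of any actual proposal, and the calculation ignores clawbacks, offsetting welfare savings, and behavioural effects.

    # Back-of-the-envelope gross cost of a basic income.
    # Both inputs are illustrative assumptions, not figures from any proposal.
    eligible_adults = 250_000_000   # assumed number of adult recipients
    annual_payment = 12_000         # assumed payment per person per year, in dollars

    gross_cost = eligible_adults * annual_payment
    print(f"gross annual cost: ${gross_cost / 1e12:.1f} trillion")   # about $3 trillion

Gross figures of this magnitude are comparable to the largest items in existing national budgets, which is why proposals differ so sharply over the payment level, the tax base, and which existing programmes a basic income would replace.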
To better address both the funding concerns and concerns about government control, one alternative model is that the cost and control would be distributed across the private sector instead of the public sector. Companies across the economy would be required to employ humans, but the job descriptions would be left to private innovation, and individuals would have to compete to be hired and retained. This would be a for-profit sector analog of basic income, that is, a market-based form of basic income. It differs from a job guarantee in that the government is not the employer (rather, companies are) and there is no aspect of having employees who "cannot be fired", a problem that interferes with economic dynamism. The economic salvation in this model is not that every individual is guaranteed a job, but rather just that enough jobs exist that massive unemployment is avoided and employment is no longer solely the privilege of only the very smartest or highly trained 20% of the population. Another option for a market-based form of basic income has been proposed by the Center for Economic and Social Justice (CESJ) as part of "a Just Third Way" (a Third Way with greater justice) through widely distributed power and liberty. Called the Capital Homestead Act, it is reminiscent of James S. Albus's Peoples' Capitalism in that money creation and securities ownership are widely and directly distributed to individuals rather than flowing through, or being concentrated in, centralized or elite mechanisms. Broadening the ownership of technological assets Several solutions have been proposed which do not fall easily into the traditional left-right political spectrum. This includes broadening the ownership of robots and other productive capital assets. Enlarging the ownership of technologies has been advocated by people including James S. Albus John Lanchester, Richard B. Freeman, and Noah Smith. Jaron Lanier has proposed a somewhat similar solution: a mechanism where ordinary people receive "nano payments" for the big data they generate by their regular surfing and other aspects of their online presence. Structural changes towards a post-scarcity economy The Zeitgeist Movement (TZM), The Venus Project (TVP) as well as various individuals and organizations propose structural changes towards a form of a post-scarcity economy in which people are 'freed' from their automatable, monotonous jobs, instead of 'losing' their jobs. In the system proposed by TZM all jobs are either automated, abolished for bringing no true value for society (such as ordinary advertising), rationalized by more efficient, sustainable and open processes and collaboration or carried out based on altruism and social relevance, opposed to compulsion or monetary gain. The movement also speculates that the free time made available to people will permit a renaissance of creativity, invention, community and social capital as well as reducing stress. Other approaches The threat of technological unemployment has occasionally been used by free market economists as a justification for supply side reforms, to make it easier for employers to hire and fire workers. Conversely, it has also been used as a reason to justify an increase in employee protection. Economists including Larry Summers have advised a package of measures may be needed. 
He advised vigorous cooperative efforts to address the "myriad devices" – such as tax havens, bank secrecy, money laundering, and regulatory arbitrage – which enable the holders of great wealth to avoid paying taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return. Summers suggested more vigorous enforcement of anti-monopoly laws; reductions in "excessive" protection for intellectual property; greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation; strengthening of collective bargaining arrangements; improvements in corporate governance; strengthening of financial regulation to eliminate subsidies to financial activity; easing of land-use restrictions that may cause estates to keep rising in value; better training for young people and retraining for displaced workers; and increased public and private investment in infrastructure development, such as energy production and transportation. Michael Spence has advised that responding to the future impact of technology will require a detailed understanding of the global forces and flows technology has set in motion. Adapting to them "will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution". See also Notes References Citations Sources Further reading John Maynard Keynes, Economic Possibilities for our Grandchildren (1930) E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) ssrn.com, part 2(2) Ross, Alec (2016), The Industries of the Future, USA: Simon & Schuster. Impact of automation Technological change Unemployment
Technological unemployment
Engineering
11,222
9,840,408
https://en.wikipedia.org/wiki/Klasies%20River%20Caves
The Klasies River Caves are a series of caves located east of the Klasies River Mouth on the Tsitsikamma coast in the Humansdorp district of Eastern Cape Province, South Africa. The Klasies River Main (KRM) site consists of 3 main caves and 2 shelters located within a cliff on the southern coast of the Eastern Cape. The site provides evidence for developments in stone tool technology, evolution of modern human anatomy and behavior, and changes in paleoecology and climate in Southern Africa based on evidence from plant remains. Site exposition Klasies River Cave is located on the border of the Tsitsikamma mountain range on the southeastern coast of Africa. The site sits within the Greater Cape Floristic region, characterized by the fynbos biome; however, the Klasies River Cave environment is mixed woods and shrubby brushland and maintains a temperate climate. Klasies River main site is located on a sandstone cliff less than 1 kilometer from the Klasies River mouth and on the coast of the Indian Ocean. The district receives approximately 500–700 mm (20–28 in) of rainfall annually. The site consists of Caves 1 and 2, and the protected overhangs of Cave 1A and 1B, together known as Klasies River main site. However, Cave 2 was not accessible until later stages after there had been significant deposition and build-up of sediments, and Cave 1B has been under-documented; most finds therefore come from Caves 1 and 1A. These caves contain 21 meters of deposits that researchers have struggled to delineate stratigraphically. While sea levels fluctuated over time, during certain occupations the proximity to the coast and the surrounding grasslands provided marine life and terrestrial animals that were exploited by the caves' inhabitants. Excavations From 1967 to 1968, John Wymer and Ronald Singer conducted excavations that revealed evidence of Middle Stone Age (MSA)-associated human habitation beginning approximately 125,000 years ago. Singer and Wymer excavated Caves 1, 1A and 1B, and part of Cave 2; using culture stratigraphy, they determined stages as MSA I, MSA II, Howiesons Poort, MSA III and MSA IV, which allowed comparison across the caves. Critiques of the original excavation include sampling bias due to excavation and screening methods, and combination of stratigraphic layers that obscures the site's complexity; certain strata were lumped together, making it difficult to differentiate between activities at the site and combining artifacts and bones from multiple different strata. These initial findings prompted successive excavations to examine the varied and complex stratigraphy. Hilary Deacon began work in Caves 1, 1A and 1B from 1984 to 1995, focusing on the arbitrary delineation (the "Witness Baulk") that Singer and Wymer used to differentiate Cave 1A from Cave 1, as these caves are actually continuous. While the Singer and Wymer excavation used units that were excavated uniformly across the cave layer by layer, successive excavations focused on different stratigraphic approaches. Deacon's excavations maintained the microstratigraphy of the site. Deacon excavated units one by one, independently of other units in the cave, recording the stratigraphy observed in each individual unit before grouping units together based on their shared stratigraphic patterns. Rather than use culture stratigraphy, Deacon used a hybrid strategy combining Singer and Wymer's culture stratigraphy and lithostratigraphy.
He developed a descriptive naming pattern for the observed soils instead of Singer and Wymer's classification system; however, both systems are still used today. Sarah Wurz began directing excavations at the site in 2013 and continues today with a focus on microstratigraphy; her work is mainly within the Cave 1 Witness Baulk, a section that had not been excavated previously. The current work is focused on gathering data from the microstratigraphy and refining the process of micro-scale excavations. This data allows comparison of KRM to other sites across the continent. Stratigraphy and dating The dates of the KRM stratigraphy have been obtained through isotopic analysis and dating of biological materials. Because the site is more than 50,000 years old, radiocarbon dates are less useful due to carbon contamination. Researchers have resorted to analysis of unstable isotopes, such as Uranium-Thorium (U-Th), to provide more accurate date ranges. Marine isotope stages (MIS) are used to make large-scale global temporal comparisons; each stage in the Klasies River Caves site correlates with a MIS stage. The stratigraphy of the site consists of very fine, thin layers of sediment that have compressed under the successive layers on top of them. Researchers have used microstratigraphic techniques to analyze and interpret the complex timeline of sediment deposition and post-depositional activities within the caves. Because the cave system has so many varied layers and the different caves have different depositional characteristics and sediment properties, it has been hard to create a uniform system to group the layers chronologically for a site-wide comparison; natural processes such as erosion, and the influence of people at certain areas of the site (anthropogenic deposition of shell middens, hearths, etc.), have further complicated the interpretation of the stratigraphy. A lithostratigraphic and a culture stratigraphic approach are both used at KRM today. Singer and Wymer's stages at KRM began at the base, MSA I, followed by MSA II, Howiesons Poort, MSA III, and MSA IV. These groupings are based on changes in stratigraphy and/or changes in material culture through time. Deacon chose to organize the site based on soil descriptions; he categorizes the site on this basis with the lowest level being Basal Gravels, followed by the Light Brown Sand (LBS) Member, the Rubble Brown Sand (RBS) member, the Shell and Sand (SAS) member, the Rockfall (RF) and Upper member, and the White Sand (WS) member. Some of these layers are then further subdivided. This system allows researchers to compare deposition patterns and contexts across the site in each cave, which may have different dates because of differing deposition processes in each cave environment; some caves may not contain each member listed. Paleoenvironment Paleoenvironment reconstruction uses multiple analytical foci to help determine an approximate estimate of the climate of a site during a given time frame. These reconstructions are aided through archaeological excavations and finds. Analysis of faunal remains can indicate what species existed across space and time. Likewise, archaeobotanical analysis provides insight into the plants that were in proximity to the site. These determinations are also aided by global climate estimates provided by deep sea cores. Plant and animal remains can also be indicative of certain climates as species have certain climatic ranges and preferred biomes.
Changes in the location of the sea shore and in the grassy or wetland areas surrounding the cave have been determined using these methods. MSA I fauna indicate a mosaic environment that included closed, drier areas with micromammals including moles, and open grasslands that favored grazing ungulates. This period is associated with an interglacial period (MIS 5e) and higher sea levels and warmer temperatures, which is supported by shell middens. MSA II shows a shift from MSA I. This period is associated with MIS 5d-a, with the early part of this phase associated with warmer temperatures shifting towards cooler temperatures later. During this time the coastline was likely never more than 10 km away from the site. The number of grazers decreased while mixed-feeders and browsers increased; this correlates with a decrease in shrubland and expanding grasslands. Terrestrial fauna include rock hyrax (rock rabbits), brown hyena, Cape dune mole rat, buffalo, equids, and members of the subfamily Alcelaphinae. There is also evidence of forest-dwelling bovids and other animals that prefer wetland grasses and reeds (African marsh rat and hippopotamus), indicating the environmental diversity of the site during MSA II. Warm-water shellfish (brown mussels and other rocky shore species) and Cape fur seals were present at the cave, as groups were able to exploit marine food sources as well. The Howiesons Poort (HP) environment shifts from the closed environment of the later MSA II into a more open environment featuring more grazing animals. The earlier levels of HP contain more evidence of browsers, which indicate a more closed environment, while later periods show an increase in grazing fauna associated with more open environments. This period is associated with MIS 4, a glacial period with cooler temperatures and lower sea levels. Shellfish that appear in tide pools are more common in HP, which suggests a more distant coastline. Data from Pinnacle Point show that the climate during HP was variable and unstable, with periods of drought. MSA III is marked by declining temperatures and a receding coastline that would have exposed the Palaeo-Agulhas Plain. The faunal remains are variable and consist of wetland species, open environment grassland grazers, and Cape dune mole rats that prefer sand dunes. This period corresponds to MIS 3, a cooler environment with short warm periods. There is little data available to describe the paleoecology of MSA IV. It does not contain many archaeological features such as hearths or material culture other than lithics. Current ethnobotanical research exploring the biodiversity of Klasies River and the Cape region found that many hunter-gatherer-pastoralists from the Khoi and San populations use the flora and fauna of the region that were also found in archaeobotanical and faunal assemblages. The study took an inventory of plants within 5 km of the modern Klasies site and discovered 268 species. Over 50% of these plants were medicinal and 43% were edible or had other uses, as demonstrated by interviews with Khoi and San communities. While not all of these plant species may have existed during the Middle Stone Age, these findings still demonstrate the longevity of plant knowledge throughout the communities in the area. The study also noted the use of certain plant types for survival during climate changes, and proposed that humans in the past could have subsisted in similar ways. Findings Material culture There is evidence for stone tool production at the site.
Rounded quartzite cobbles appear to be the preferred material for stone tool production based on the recovery of raw materials at the site. Analysis of flakes and debitage indicates that free hand stone percussion was the primary method of tool production at Cave 1, and the prevalence of consistent and nearly uniform points suggests that these tools were the desired end product. Lithic artifacts do show variation through the various occupation stages of KRM. Howiesons Poort interrupts the relative uniformity seen in MSA I and MSA II: in the latter, large and long quartzite points and blades were the goal end products and were usually not retouched, while the Howiesons Poort lithics were made from a wider variety of materials and were fashioned into smaller blades and artifacts. Tools from MSA III are made from more non-local raw materials than MSA I or II, but less than Howiesons Poort; these tools from MSA III also have a similar core morphology to Howiesons Poort. The MSA IV lithics consist of more flake blades than observed in MSA III. Earlier layers from the site dating to MIS 5d-e have a high density of shellfish and debitage from stone tool production. This dense accumulation of animal remains, shell middens, and the remains from stone tool production indicates that the caves were used as a home base rather than an intermittent shelter during this period. In the younger strata of the site, a higher density of stone tools and a lower density of shellfish suggest that the site was only used as a tool production site rather than a residential location. This change in mobility at KRM corresponds to a similar change during the same time period at Pinnacle Point. Bone tools were found at KRM during the original excavations by Singer and Wymer, coming from MSA II and Howiesons Poort strata. Three denticulated bone tools originating from an MSA II context resemble musical rasps found at other Middle Stone Age sites in Southern Africa. The bone rasps were fashioned out of bovid rib and long bone, but a study found that the use wear did not relate to the same type of musical instrument used at the other MSA sites. Instead, the authors found starchy residue within the teeth of the rasp and suggest that the tool was used for plant processing rather than as an instrument or skin abrader; however, this is not proven. A ground bone point dating to 80-65 ka and a charred bone with engraved lines come from Howiesons Poort contexts. Use-wear analysis found longitudinal striations consistent with longitudinal action (including arrow use); microstriations suggest that the bone point was also hafted within a reed, consistent with ethno-historic accounts of early hunter-gatherers from the region. Alternative interpretations that include use as a domestic tool for woodworking or as a javelin point are not consistent with the microwear of the bone point, leading the authors to conclude that the bone tool was used as an arrowhead. Other bone points have been found at MSA sites including Katanda in the Semliki River Valley, dating to around 90 ka, and at Blombos Cave, dating to 73 ka. Researchers have also confirmed evidence of shell beads at Klasies River caves, which is corroborated by similar discoveries from other cave sites on the southern coast. Faunal remains and food sources The hominins at KRM were hunter-gatherers, and the presence of faunal remains, shellfish, and plant residues shows the wide variety of food sources available near the site.
Food consumption at the site consisted of marine and terrestrial animals, evidenced by shell middens and faunal remains of bovids. Scientists suggest that the wide availability and variation in food sources contributed to the development of anatomically modern humans by supplying the nutrients required for larger brains and cognitive function. However, other research on the relationship between brain size, longevity and social learning suggests that large brain size is not indicative of higher cognitive function or intelligence. Instead, brain size may increase with social learning and increased perceptual and motor abilities. These increased abilities create a positive feedback loop that increases brain size via social learning and its impact on primate and hominin ability to acquire more nutrient-rich food sources that favor brain growth. There is a clear relationship between brain size, sociality, and lifespan in primates, but how these influence each other is not yet certain. Plants within a 12.5 kilometer foraging radius of the caves would have included 161 native species from a mix of geophytes / underground storage organs (USOs), leaves, and fruits, all of which would provide sufficient nutrients for the hominins at the site. By increasing the foraging radius to 35 kilometers away from the site, more nuts, seeds, and grains become available, with a total of 281 available edible plant species; these resources were less abundant within the smaller radius and would give reason for an extended foraging journey to acquire varied and less perishable nutrients. A majority of the food plants within these radii can be eaten raw. The association of animal and plant remains with hearths provides evidence that hominins at the site used fire to cook. Bones with cut marks and percussion marks from hammer stones indicate that meat and bone marrow were consumed. Cooked foods provide quickly digestible energy and would have contributed to a higher-quality diet, which could lead to an evolutionary change in Homo sapiens. Samples taken from hearths within MSA I and Howiesons Poort levels identified parenchyma, heated bones and shellfish found together, indicating the cooking of multiple food sources. The parenchyma samples came from underground storage organs, but preservation did not allow for determination of plant species. The abundance of plant species available year-round, as discussed in previous sections, would have provided many reliable energy sources for humans; cooking these starchy plants increases energy absorption. Site occupation patterns based on faunal remains, food sources, and lithic evidence suggest that hominins were more mobile during MSA I than in MSA II, as indicated by use-wear analysis on stone tools and interpretation of shell middens. Human remains During the first excavation in 1967 by Singer and Wymer, a coarse sieve was used for screening, causing the loss of smaller bones, shells, and other artifacts; because of this, the samples are biased, as long bones (i.e. bone shafts) and small bones (i.e. finger bones) were not collected. However, the bones that were analyzed show anatomical differentiation within Homo sapiens throughout time. Over 50 human remains have been found, a majority of them from Singer and Wymer's original excavation in 1967-68, and a majority of these excavated in Cave 1. The human remains come from early Homo sapiens and the fragments show sexually dimorphic traits. Inventory The human remains from KRM are mostly fragmentary adult skeletal elements.
No human remains were recovered from the WS member (MIS IV). Only one deciduous tooth is attributable to Howiesons Poort levels (Upper Member). The Upper member (MSA III) contains two parietal fragments, one deciduous tooth, and one permanent premolar. The majority of remains came from the SAS member (MSA II) and post-HP levels, and included two vertebral fragments, five mandibular fragments, seven teeth without associated alveolar bone, one facial bone, one pelvis fragment, one clavicle, 15 cranial fragments, one radius and one ulna fragment, one manual distal phalanx, and three metatarsals. The LBS Member (MSA I) contained 27 cranial fragments and two mandibular fragments. A majority of the remains are cranial fragments, and these outnumber the post-cranial elements. This has been argued to represent a pattern that is also observed in the faunal record. One argument proposes that the preferred elements were taken to the site and are preserved due to fragmentation from marrow collection, e.g. the cranial bones are the preferred bones. Another argument suggests a collection bias in which post-cranial long bones were preferred and were not preserved because they were destroyed during marrow collection, leaving only the cranial bones and hand and foot bones. However, it is likely that errors in excavation and collection by Singer and Wymer have biased the recovery of human remains. An argument for cannibalism has been posed by some researchers. The human cranial fragments exhibit cut marks and charring similar to that of the faunal remains from the site. This treatment has led to the conclusion that the inhabitants participated in episodic cannibalism. Two individuals appear to have been deposited around the same time in one stratigraphic layer, but correlating a site-wide cannibalism event requires a finer-scale understanding of the lithostratigraphy that is not possible at present. Of the remains recovered, two mandibles display the idiopathic dental anomaly of hypercementosis; this condition has been discovered in Neanderthal and Homo erectus remains, but the individuals at Klasies remain the oldest case of hypercementosis in Sub-Saharan Africa (ca. 119 ka). This find is significant because it demonstrates continuity of the condition through the hominin lineage. Finger bones (manual distal phalanges) dating to 90-100 ka were also recovered from the Witness Baulk in Cave 1. These bones are from an adult individual and appear smaller in size than those of modern human populations; they are also not comparable to Neanderthal phalanges. These phalanges are, however, similar to phalanges from Die Kelders Cave, which the authors suggest are more comparable to Holocene Khoesan populations. However, comparisons of prehistoric populations to modern populations are debated and may not have merit. Anatomically modern humans Assessment of relatedness between species is based on ancestral or derived traits to create a phylogeny that assigns closely related specimens to the same or similar groups; this is usually visualized as branches on a phylogenetic tree. Ancestral and derived traits vary with genetic drift, mutations, and other genetic factors that can steer evolution in many directions. Modern humans (Homo sapiens) originated in Africa and trace a lineage back to non-human primate ancestors there. For further discussion on human evolution, see human evolution, evolutionary genetics, and timeline of human evolution.
Anatomically modern humans share traits with today's modern Homo sapiens. Remains from the Klasies site appear to have modern human morphology based on cranial traits. The specimens do not have retromolar spaces in the mandible, and the supraorbital regions appear similar to other Homo sapiens specimens. The differences in some of the skull fragments are attributed to sexual dimorphism—differences in the size or robustness of the bone between sexes. Other anatomically modern Homo sapiens specimens from the MSA are found at sites in East Africa and in the Levant (see Omo Kibish, Mumba Cave, and Skhul Cave). Researchers have turned to Africa as the birthplace of human behavioral modernity since it is also the place where modern humans evolved. Some researchers have proposed models and trait lists for behavioral modernity that have sparked intense debate among scholars about what constitutes modern human behavior. Evidence of woven grass beds at Border Cave, engraved ochre and beads at Blombos Cave, bone tool culture at Sibudu Cave, and incised ostrich eggshells from Diepkloof rock shelter have all been interpreted as complex behaviors. The debate about the origin of modern human behavior originally began as an assumption that modern anatomy and modern behavior came as a package during the Upper Paleolithic, but the previously discussed evidence situates the debate in the Middle Stone Age as an early adaptation that accrued slowly over time. At Klasies River, the lithic techno-complexes are indicative of symbolic behavior as they change through the sequence from MSA I through MSA III. Each sequence displays a different social convention for construction of stone tools (a techno-complex) that is not based on raw material availability. However, Howiesons Poort is the only recognized techno-complex at Klasies River Caves because it is recognized as a techno-complex across Southern Africa; further studies may recognize techno-complexes from other culture-stratigraphies as well. Conventionalized artifact manufacture that is passed through generations is argued to be symbolic behavior by Sarah Wurz, the current primary investigator at the site. KRM's bone tools represent this symbolic behavior as they exhibit similar modifications and use-wear patterns that suggest they were used and created in the same way. Use of ochre is sometimes interpreted as symbolic behavior; however, it also has practical purposes, such as for paint or as an element of an adhesive. At KRM, higher concentrations of red ochre are found in the MSA I and Howiesons Poort levels, which may be evidence of ritual or symbolic use. Despite the evidence above, modern human behavioral models are still a contested issue. Arguments for origins of behavioral modernity rely on findings of hominin remains at prehistoric sites; this allows material culture and inferred behaviors to be correlated with the discovered remains. However, taphonomic processes produce a bias towards sites where there is good preservation, skewing results and potentially obscuring the origin of behavioral modernity. Other studies that assert behavioral modernity rely on variables like climate, resource availability, and labor, which also influence behavior. Arguments that state brain size, social demographics and other factors are the cause of behavioral modernity are undermined by these outside variables.
Complex behaviors include language and symbolic objects, which are not easily found in the archaeological record; however, exchange networks, alliances, and egalitarianism are also indicators of complex behavior. Related sites Related Southern Africa Middle Stone Age sites: Border Cave, Blombos Cave, Sibudu Cave, Diepkloof Rock Shelter, Die Kelders, Boomplaas Cave Sites with fossil Homo sapiens: Jebel Irhoud, Omo Kibish, Herto remains General: List of fossil sites (with link directory) List of hominin (hominid) fossils (with images) List of archaeological periods List of caves in South Africa References Peopling of Africa Archaeological sites in South Africa Caves of South Africa Landforms of the Eastern Cape Middle Stone Age Paleoanthropological sites Pleistocene paleontological sites of Africa Protected areas of the Eastern Cape Human evolution Prehistoric South Africa Paleoecology Homo sapiens fossils
Klasies River Caves
Biology
5,111
76,124,117
https://en.wikipedia.org/wiki/Phyllotopsis%20rhodophyllus
Phyllotopsis rhodophyllus is a species of fungus in the family Phyllotopsidaceae. References Fungi described in 1936 Inedible fungi Fungus species Agaricales
Phyllotopsis rhodophyllus
Biology
41
68,903,729
https://en.wikipedia.org/wiki/Topre
Topre is a Japanese engineering company that manufactures stamped parts for automobiles, refrigeration units for trucks, air conditioners, and various other electronic and electro-mechanical equipment. It was founded in 1935 as Tokyo Press Kogyo Co. Ltd., in Kōtō, Tokyo. History Topre was founded in April 1935 as the Tokyo Press Kogyo Co. Ltd. with a capital of JP¥300,000. From the 1930s until around the mid-1970s, the company was focused on low-tech tool and die manufacturing. In 1958, it acquired Tokyo Die-Cast Co. Ltd. and within the next four years set up two die-cast plants in Kanagawa and Hiroshima. In 1976, Tokyo Press Kogyo began a venture into electronics with keyboards for computer terminals. The first several of their prototype designs were unsuccessful, with technology companies within Japan reluctant to order them into production. In the late 1970s, a young engineer inspired by the katori senkō—a Japanese mosquito coil—devised a conical-spring capacitive key switch, which held promise with the company's executives. After half a year of perfecting the design and another half a year of rigorous life-cycle testing, TPK took their keyboards to the Japanese market, earning orders from Hitachi, JVC, Ricoh, and others. In 1981, TPK tasked salesperson Seiji Miwa, who had set up an American office for the company three years prior, to market the design in the United States. There he was able to win accounts for NCR and Memorex, who respectively ordered 1,500 and 10,000 units of their keyboard. In October that year, the company changed its name to Topre, promoting Miwa to executive managing director of Topre's new Research and Development subsidiary. By 1985, its Tokyo manufacturing plant was producing 30,000 units a month. At up to US$90 per keyboard in 1985, Topre's keyboards were costlier than others, which along with its overseas manufacturing put the company's ability to compete with other keyboard manufacturers in the United States into question at the time. Topre's capacitive key switches are still being manufactured for some Japanese keyboards, most notably the Happy Hacking Keyboard, as well as Topre's own Realforce. Although still costly (Computer Shopper called the Realforce too bland otherwise to justify the high price), keyboards with Topre switches are renowned by keyboard enthusiasts for their tactile feel, with David Hayward of Micro Mart calling the Realforce the "Aston Martin One-77 of the keyboard world." Topre was a manufacturer of bumpers and dashboards for Nissan, Isuzu, and Honda automobiles in the 1990s. In June 2000, Topre gained Toyota as a client and started construction of a ¥2.5 billion plant in Fukuoka Prefecture for the manufacturing of pressed parts for Nissan and Toyota. Their automotive manufacturing division made the move to the United States in 2005, with the opening of a plant in Cullman, Alabama, with a budget of $132 million. They set up shop in Nissan's factory in Smyrna, Tennessee, in 2012 as a small operation, later moving to their own factory in the city in 2015, purchasing an additional 35 acres of factory space in 2019. In 2014, Topre moved their production of Nissan parts closer to the aforementioned company's factory in Canton, Mississippi, while retaining their Cullman plant. They also have an automotive parts factory in Springfield, Ohio. Topre's long-standing chairman Kyohei Ishii died in 2018.
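The conical-spring switch described above is capacitive: pressing the key compresses the spring, which raises the capacitance seen by the controller, and a keypress is registered when that reading crosses a threshold rather than when metal contacts close. The sketch below is a generic illustration of that idea with invented values; it does not describe Topre's actual sensing circuit or firmware.

    # Generic illustration of threshold-based capacitive key sensing.
    # Rough model: capacitance rises as key travel closes the gap between the
    # spring and the sensing pad. All values are invented.
    def capacitance_pf(travel_mm, full_travel_mm=4.0, c_rest_pf=2.0, c_bottom_pf=6.0):
        """Interpolate capacitance between the rest and bottomed-out positions."""
        fraction = min(max(travel_mm / full_travel_mm, 0.0), 1.0)
        return c_rest_pf + (c_bottom_pf - c_rest_pf) * fraction

    ACTUATION_THRESHOLD_PF = 4.0   # register the key once the reading exceeds this

    for travel in (0.0, 1.0, 2.0, 3.0, 4.0):
        reading = capacitance_pf(travel)
        state = "pressed" if reading > ACTUATION_THRESHOLD_PF else "released"
        print(f"travel {travel:.1f} mm -> {reading:.1f} pF ({state})")

Because actuation is a threshold on an analogue reading rather than a mechanical contact, such switches avoid contact bounce and contact wear, which is consistent with the emphasis on life-cycle testing in the account above.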
References External links Official website of Topre America Corporation Auto parts suppliers of Japan Computer companies of Japan Computer hardware companies Computer peripheral companies Electrical engineering companies of Japan Japanese companies established in 1935 Manufacturing companies based in Tokyo Companies listed on the Tokyo Stock Exchange
Topre
Technology
796
17,774,730
https://en.wikipedia.org/wiki/Cristal%20%28aguardiente%29
Cristal is the name of a number of brands of aguardiente in South America. Colombia An aguardiente named Cristal has been produced in Manizales, Colombia, since 1950 by Sergio Castro Brandon (Licorera de Manizales). The Colombian brand Aguardiente CRISTAL is made by distilling fermented sugar cane juice. It is the most imported label of aguardiente in the United States. This brand of aguardiente was named after its founder, Christel Babach Doroudgar, who lived in Pereira and moved to Sydney, Australia, not long after. Ecuador The Ecuadorian brand of aguardiente named Cristal is marketed on a national scale. Like its closest competitor, Zhumir, it is sometimes incorrectly called rum. Origins Cristal is produced by Embotelladora Azuaya S.A. and has been offered for 45 years; however, the tradition of making and bottling aguardiente in the valleys of the province of Azuay goes back much further. As with Zhumir, Cristal grows the cane for its product in Azuay, and has its headquarters in Cuenca. The brand name Cristal refers to the complete clarity of the original aguardiente, which is visually indistinguishable from pure water. Flavours Cristal, in order to remain competitive, offers a number of flavours of aguardiente in addition to four types of the traditional flavourless variety. Cristal Clasico - triple filtered, dry, flavourless aguardiente, 84 proof. Cristal Suave - triple filtered, dry, flavourless aguardiente, 64 proof. Cristal Seco - triple filtered, ultra-dry, flavourless aguardiente, 68 proof. Cristal Seco Suave - triple filtered, ultra-dry, flavourless aguardiente, 56 proof. Cristal Durazno - peach flavoured, semi-dry aguardiente, 40 proof. Cristal Naranja - orange flavoured, semi-dry aguardiente, 48 proof. Cristal Limon - lime flavoured, semi-dry aguardiente, 40 proof. In addition, Cristal offers four flavours of sparkling aguardiente coolers under the brand name "C by Cristal": Original - a sparkling, exotic-fruit cocktail flavoured cooler, 10 proof. Sexy Apple - a sparkling, green apple flavoured cooler, 10 proof. Fashion Orange - a sparkling, sweet orange flavoured cooler, 10 proof. Xtreme Wild - a sparkling cooler with what the company describes as a "Wild" flavour, 10 proof. References Distilled drinks
Cristal (aguardiente)
Chemistry
573
20,045,815
https://en.wikipedia.org/wiki/Psi%20Cygni
ψ Cygni, Latinised as Psi Cygni, is a triple star system in the constellation of Cygnus. With a combined apparent visual magnitude of 4.92, it is visible to the naked eye. As of 2002, the inner pair, components Aa and Ab, had an angular separation of 0.10 arc seconds along a position angle of 77.6°. Their combined visual magnitude is 5.05. Relative to this pair, the third member of the system, magnitude 7.61 component B, had an angular separation of 2.87 arc seconds along a position angle of 175.6° as of 2010. Based upon an annual parallax shift of 11.59 mas, Psi Cygni is located around 281 light years from the Sun. The brighter member of the system, presumably component Aa, displays the spectrum of an A-type main sequence star with a stellar classification of A4 Vn, where the 'n' notation indicates "nebulous" absorption lines due to rapid rotation. It appears to be spinning rapidly, with a projected rotational velocity of 207 km/s. The component is radiating 62 times the solar luminosity from its outer atmosphere at an effective temperature of 7,971 K. References A-type main-sequence stars F-type main-sequence stars Triple star systems Cygnus (constellation) Cygni, Psi Cygni, 24 189037 098055 7619 Durchmusterung objects
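The distance quoted above is consistent with the standard parallax relation; as a quick check (a worked calculation using only the 11.59 mas parallax from the text and the conversion 1 pc ≈ 3.26 light years):

\[
d \;\approx\; \frac{1\ \text{pc}}{\pi\,[\text{arcsec}]} \;=\; \frac{1000}{11.59}\ \text{pc} \;\approx\; 86.3\ \text{pc} \;\approx\; 281\ \text{light years}.
\]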
Psi Cygni
Astronomy
292
71,716,640
https://en.wikipedia.org/wiki/RT%20Virginis
RT Virginis is a variable star in the equatorial constellation of Virgo, abbreviated RT Vir. It ranges in brightness from an apparent visual magnitude of 7.7 down to 9.7, which is too faint to be visible to the naked eye. Based on parallax measurements made with the VLBI, the distance to this star is approximately 740 light years. It is receding from the Sun with a radial velocity of 17 km/s. The long period variability of this star was discovered by W. P. Fleming in 1896, based on photographic plates taken between 1886 and 1895. It was listed with its variable star designation, RT Virginis, in Annie Jump Cannon's 1907 work Second Catalog of Variable Stars. A. H. Joy in 1942 categorized it as an irregular variable with a stellar classification of M8III. In 1969 it was classified as a semiregular variable star of the SRb type. The period was determined to be 155 days by P. N. Kholopov and associates in 1985, then re-evaluated as 375 days based on AAVSO light curves in 1997. This is an oxygen-rich red giant star on the asymptotic giant branch of its evolution, and is undergoing mass loss due to thermal pulsation. Water vapor emission in the vicinity of the star was detected in the microwave band by D. F. Dickinson in 1973. This is originating from strong maser emission in a circumstellar gas-dust shell. The flux density of these water masers is over . The star is losing mass at a rate of  M☉·yr−1; the equivalent of the Sun's mass in 3.3 million years. The velocity of the spherically expanding gas is as high as in the water maser region, at a radius of . In a SiO emitting region located from the star, the gas velocity is . This outflow appears clumpy and asymmetrical with a strong temporal variation. References Further reading Red giants Asymptotic-giant-branch stars Semiregular variable stars Virgo (constellation) Durchmusterung objects 113285 064407 Virginis, RT
RT Virginis
Astronomy
449
936,434
https://en.wikipedia.org/wiki/Jewel%20Box%20%28star%20cluster%29
The Jewel Box (also known as the Kappa Crucis cluster, NGC 4755, or Caldwell 94) is an open cluster in the constellation Crux, originally discovered by Nicolas Louis de Lacaille in 1751–1752. This cluster was later named the Jewel Box by John Herschel when he described its telescopic appearance as "... a superb piece of fancy jewellery". It is easily visible to the naked eye as a hazy star some 1.0° southeast of the first-magnitude star Mimosa (Beta Crucis). This hazy star was given the Bayer star designation "Kappa Crucis", from which the cluster takes one of its common names. The modern designation Kappa Crucis has been assigned to one of the stars in the base of the A-shaped asterism of the cluster. This cluster is one of the youngest known, with an estimated age of 14 million years. It has a total integrated magnitude 4.2, is located 2.16 kpc, or 7,060 light years from Earth, and contains just over 100 stars. Discovery and observation The Jewel Box as a star cluster was first found by Nicolas Louis de Lacaille while doing astrometric observations for his 1751–1752 southern star catalogue Cœlum Australe Stelliferum at the Cape of Good Hope in South Africa. He saw this as a nebulous cluster in his small 12 mm telescope, but was first to recognise it as a group of many stars. The name "Jewel Box" comes from John Herschel's own description of it: "... this cluster, though neither a large nor a rich one, is yet an extremely brilliant and beautiful object when viewed through an instrument of sufficient aperture to show distinctly the very different colour of its constituent stars, which give it the effect of a superb piece of fancy jewellery." Herschel recorded the positions of just over 100 members of the cluster in 1834–1838. Prominent members The central part of the cluster is framed by bright stars making up an A-shaped asterism. The upper tip of this asterism is HD 111904 (HR 4887, HIP 62894), a B9 supergiant and suspected variable star. It is the brightest member of the A asterism at magnitude 5.77. The brightest star in the region of the cluster is the variable DS Cru (HD 111613, HR 4876), which lies well beyond the A asterism. It is a B9.5 α Cyg variable supergiant with an average visual brightness of magnitude 5.72, but is thought to be a foreground object. The bar of the "A" consists of a line of four stars. On the right (south) is BU Cru, a magnitude 6.92 B2 supergiant and eclipsing binary. Next to it is BV Cru, a magnitude 8.662 B0.5 giant and Beta Cephei variable. Next in line is DU Cru, an M2 red supergiant that varies irregularly between magnitude 7.1 and 7.6 . The last of the four is CC Cru, a magnitude 7.83 B2 giant and ellipsoidal variable. Each leg of the base of the asterism's outline is marked by a blue supergiant star. HD 111990 (HIP 62953) is magnitude 6.77 and B1/2 . The star κ Cru itself is magnitude 5.98 and B3. Physical characteristics The Jewel Box cluster is one of the youngest known open clusters. The mean radial velocity of the Jewel Box cluster is . The brightest stars in the Jewel Box cluster are supergiants, and include some of the brightest stars in the Milky Way galaxy. Calculating its distance is difficult due to the proximity of the Coalsack Nebula, which obscures some of its light. Observation The Jewel Box cluster is regarded as one of the finest objects in the southern sky. It is visible to the naked eye as a hazy object of the fourth magnitude. It can be easily located using the star Beta Crucis as a guide, and appears as a fourth magnitude object. 
It is impressive when viewed through binoculars or a telescope of any size. Three members along the crossbar of the A-shaped asterism lie in a straight line and are known as the 'traffic lights' due to their varying colours. Gallery References External links Open clusters Crux NGC objects 094b
Jewel Box (star cluster)
Astronomy
922
21,291,384
https://en.wikipedia.org/wiki/Windows%201.0
Windows 1.0 is the first major release of Windows, a family of graphical operating systems for personal computers developed by Microsoft. It was first released to manufacturing in the United States on November 20, 1985, while the European version was released as Windows 1.02 in May 1986. Its development began after the Microsoft co-founder and spearhead of Windows 1.0, Bill Gates, saw a demonstration of a similar software suite, Visi On, at COMDEX in 1982. The operating environment was showcased to the public in November 1983, although it ended up being released two years later. Windows 1.0 runs on top of MS-DOS; its shell is a 16-bit program known as the MS-DOS Executive, and it can run graphical programs designed for Windows as well as existing MS-DOS software. It included multitasking and the use of the mouse, and various built-in programs such as Calculator, Paint, and Notepad. The operating environment does not allow its windows to overlap, and instead, the windows are tiled. Windows 1.0 received four releases numbered 1.01 through 1.04, mainly adding support for newer hardware or additional languages. The system received lukewarm reviews; critics raised concerns about it not fulfilling expectations, its compatibility with very little software, and its performance issues, while its early presentations also drew positive responses and support from a number of hardware- and software-makers. Its last release was 1.04, and it was succeeded by Windows 2.0, which was released in December 1987. Microsoft ended its support for Windows 1.0 on December 31, 2001, making it the longest-supported of all versions of Windows. Development history Microsoft showed its desire to develop a graphical user interface (GUI) as early as 1981. The development of Windows began after Bill Gates, co-founder of Microsoft and the lead developer of Windows, saw a demonstration at COMDEX 1982 of VisiCorp's Visi On, a GUI software suite for IBM PC compatible computers. A year later, Microsoft learned that Apple's own GUI software—also bit-mapped, and based in part on research from Xerox PARC—was much more advanced; Microsoft decided it needed to differentiate its own offering. In August 1983, Gates recruited Scott A. McGregor, one of the key developers behind PARC's original windowing system, to be the developer team lead for Windows 1.0. Microsoft first demonstrated a window manager to the press in September 1983. The demonstration featured a user interface similar to Multiplan and other contemporary Microsoft applications with a command bar at the bottom of the screen. It also showed multiple application windows in both overlapping and tiled arrangements. This user interface concept was soon reworked to only support tiled windows and to change the Multiplan-like command bar into a menu bar under each window's title bar. The redesigned environment ultimately had its public debut at Fall COMDEX 1983 in November 1983. Initially requiring 192 KB of RAM and two floppy disk drives, Microsoft described the software as a device driver for MS-DOS 2.0. By supporting cooperative multitasking in tiled windows when using well-behaved applications that only used DOS system calls and permitting non-well-behaved applications to run in a full screen, Windows differed from both Visi On and Apple Computer's Lisa by immediately offering many applications.
Unlike Visi On, Windows developers did not need to use Unix to develop IBM PC applications; Microsoft planned to encourage other companies, including competitors, to develop programs for Windows by not requiring a Microsoft user interface in their applications. Manufacturers of MS-DOS computers such as Compaq, Zenith, and DEC promised to provide support, as did software companies such as Ashton-Tate and Lotus. After previewing Windows, BYTE magazine stated in December 1983 that it "seems to offer remarkable openness, reconfigurability, and transportability as well as modest hardware requirements and pricing … Barring a surprise product introduction from another company, Microsoft Windows will be the first large-scale test of the desktop metaphor in the hands of its intended users." From early in Windows's history, Gates viewed it as Microsoft's future. He told InfoWorld magazine in April 1984 that "our strategies and energies as a company are totally committed to Windows, in the same way that we're committed to operating-system kernels like MS-DOS and Xenix. We're also saying that only applications that take advantage of Windows will be competitive in the long run." IBM was notably absent from Microsoft's announcement, and the corporation rejected Windows in favor of creating its own product called TopView. By late 1984, the press reported a "War of the Windows" between Windows, IBM's TopView, and Digital Research's Graphics Environment Manager (GEM). Steve Ballmer replaced McGregor after the latter left the team in January 1985. Microsoft had promised in November 1983 to ship Windows by April 1984, although, due to various design modifications, its release date was delayed. During its development and before its windowing system was developed, it was briefly referred to by the codename "Interface Manager". De-emphasizing multitasking, the company stated that Windows' purpose, unlike that of TopView, was to "turn the computer into a graphics-rich environment" while using less memory. After Microsoft persuaded IBM that the latter needed a GUI, the two companies announced in April 1987 the introduction of OS/2 and its graphical OS/2 Presentation Manager, which were supposed to ultimately replace both MS-DOS and Windows. Release versions On November 20, 1985, the first retail release, Windows 1.01, was released in the United States at a cost of US$99. In May 1986, the next release, 1.02, was published mainly for the European market, also introducing non-English versions of Windows 1.0. Version 1.03, released in August 1986, included enhancements that made it consistent with the international release, such as drivers for non-U.S. keyboards and additional screen and printer drivers, and superseded both version 1.01 in the US and version 1.02 in Europe. Version 1.04, released in April 1987, added support for the new IBM PS/2 computers, although no support for PS/2 mice or new VGA graphics modes was provided. However, on May 27, 1987, an OEM version was released by IBM, which added VGA support, PS/2 mouse support, MCGA support, and support for the 8514/A display driver. IBM released this version on three 3.5-inch 720k floppies and offered it as part of their "Personal Publishing System" and "Collegiate Kit" bundles. Microsoft ended its support for Windows 1.0 on December 31, 2001, making it the longest-supported version out of all versions of Windows.
Features Windows 1.0 was built on top of MS-DOS and runs as a 16-bit operating environment whose shell program is known as the MS-DOS Executive; it offers limited multitasking of existing MS-DOS programs and concentrates on creating an interaction paradigm (cf. message loop), an execution model and a stable API for native programs for the future. The operating environment supports the use of a mouse, which allows users to perform click-and-drag operations. Unlike in modern Windows operating systems, the mouse button had to be kept pressed to display the selected menu. Opening .exe files in the MS-DOS Executive would open an application window. Windows 1.0 also includes programs such as the Calculator, Paint (then known as Paintbrush), Notepad, Write, Terminal, and Clock. Paint only supports monochrome graphics. The operating environment also has the Cardfile manager, a Clipboard, and a Print Spooler program. Initially, Puzzle and Chess were supposed to appear as playable video games, although Microsoft scrapped the idea; instead, it introduced Reversi as a commercially published video game. It was included in Windows 1.0 as a built-in application, and it relies on mouse control. The operating environment also introduced the Control Panel, which was used to configure the features of Windows 1.0. The operating environment does not allow overlapping windows; instead, the windows are tiled. When a program is minimized, its icon appears on a horizontal line at the bottom of the screen, which resembles the modern-day Windows taskbar. It also consists of three dynamic-link libraries, which are located as files in the system under the names KERNEL.EXE, USER.EXE, and GDI.EXE. The Windows 1.0 SDK contains debugging versions of these files, which can be used to replace the corresponding files on the setup disks. The setup program combines multiple system files into one, so that Windows boots faster. Using the debugging KERNEL.EXE provided by the Windows 1.0 SDK, one can create a "slow boot" version of Windows, where the files are separate. Windows 1.0 includes a kernel, which performs functions such as task handling, memory management, and input and output of files, while the two other dynamic-link libraries are the user interface and the Graphics Device Interface. The operating environment could also move program code and data segments in memory, to allow programs to share code and data that are located in dynamic-link libraries. Windows 1.0 implemented the use of code segment swapping. Version 1.02 introduced drivers for European keyboards, as well as screen and print drivers. The last Windows 1.0 release, 1.04, introduced support for IBM PS/2 computers. Due to Microsoft's extensive support for backward compatibility, it is not only possible to execute Windows 1.0 binary programs on current versions of Windows to a large extent but also to recompile their source code into an equally functional "modern" application with limited modifications. In March 2022, it was discovered that the operating environment also includes an easter egg that lists the developers who worked on the operating environment along with a message that says "Congrats!". System requirements The official system requirements for Windows 1.0 include the following. Besides the minimum system requirements, Microsoft has also published a note in which it recommended additional memory when using multiple applications or DOS 3.3.
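The interaction paradigm mentioned above is the message loop: a native Windows program repeatedly fetches messages from a queue and hands them to a window procedure. The sketch below is a schematic Python illustration of that pattern only, not the actual Windows 1.0 API; the message names are borrowed from the real API purely as labels.

```python
import queue

messages = queue.Queue()

def window_proc(msg):
    """A stand-in 'window procedure': reacts to whatever message it is handed."""
    if msg == "WM_PAINT":
        print("redraw window contents")
    elif msg == "WM_DESTROY":
        return False              # ask the loop to stop
    return True

# Something (keyboard, mouse, the system) posts messages into the program's queue:
for m in ["WM_PAINT", "WM_PAINT", "WM_DESTROY"]:
    messages.put(m)

# The message loop: fetch the next message and dispatch it to the window
# procedure; control returning to the system between messages is what makes
# the multitasking in Windows 1.0 cooperative.
while window_proc(messages.get()):
    pass
```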
Reception Windows 1.0 was released to lukewarm and mixed reviews. Critics considered the platform to have future potential but felt that Windows 1.0 had not fulfilled expectations and that it could not compete with Apple's GUI operating system. It was also criticized for its slowness and for being compatible with very little software. Reviews criticized its demanding system requirements, especially noting the poor performance experienced when running multiple applications at once, and the fact that Windows encouraged the use of a mouse for navigation, a relatively new concept at the time. The New York Times compared the performance of Windows on a system with 512 KB of RAM to "pouring molasses in the Arctic" and said that its design was inflexible for keyboard users due to its dependency on a mouse-oriented interface. In conclusion, the Times felt that the poor performance, lack of dedicated software, uncertain compatibility with DOS programs, and the lack of tutorials for new users made DOS-based software such as Borland Sidekick (which could provide a similar assortment of accessories and multitasking functionality) more desirable for most PC users. According to Computerworld magazine, Windows 1.0 received 500,000 sales from its release in 1985 up to April 1987. In retrospect, Windows 1.0 has been regarded as a flop by technology publications, which nevertheless acknowledged its overall importance to the history of the Windows line. Nathaniel Borenstein (who went on to develop the MIME standards) and his IT team at Carnegie Mellon University were also critical of Windows when it was first presented to them by a group of Microsoft representatives. Underestimating the future impact of the platform, he believed that in comparison to an in-house window manager, "these guys came in with this pathetic and naïve system. We just knew they were never going to accomplish anything." The Verge drew a parallel between the poor reception of Windows 8 on its release in 2012 and Microsoft's struggles with early versions of Windows. In a similar fashion to Windows 1.0 running atop MS-DOS as a layer, Windows 8 offered a new type of interface and software geared towards an emerging form of human interface device on PCs, in this case a touchscreen, running atop the legacy Windows shell used by previous versions. A mock version of Windows 1.0 was created by Microsoft as an app for Windows 10 as part of a tie-in with the Netflix show Stranger Things, aligned with the release of the show's third season, which takes place during 1985. See also System 1 IBM TopView References External links Demo of Windows 1.04 running on an original IBM PC/XT, on YouTube archived at Ghostarchive.org Windows 1.01 emulator 1985 software 1.0 Products and services discontinued in 2001 DOS software History of Microsoft History of software Products introduced in 1985
Windows 1.0
Technology
2,688
10,603,107
https://en.wikipedia.org/wiki/Ketosamine
A ketosamine is a combination of two organic chemistry functional groups, ketose and amine. An example is the family of fructosamines which are recognized by fructosamine-3-kinase, which may trigger the degradation of advanced glycation end-products (though the true clinical significance of this pathway is unclear). Fructosamine itself, the specific compound 1-amino-1-deoxy-D-fructose (isoglucosamine), was first synthesized by Nobel laureate Hermann Emil Fischer in 1886. References Ketoses Amino sugars
Ketosamine
Chemistry
123
43,590
https://en.wikipedia.org/wiki/Flux
Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications in physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort. Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the Fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance to their broad acceptance in the literature, regardless of which definition of flux the term corresponds to. Flux as flow rate per unit area In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". 
For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux. General mathematical definition (transport) Here are 3 definitions in increasing order of complexity. Each is a special case of the following. In all cases the frequent symbol j, (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors. First, flux as a (single) scalar: where In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface. Second, flux as a scalar field defined along a surface, i.e. a function of points on the surface: As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface. Finally, flux as a vector field: In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector ), and measures the flow through the disk of area A perpendicular to that unit vector. I is defined picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "argmax" cannot directly compare vectors; we take the vector with the biggest norm instead.) Properties These direct definitions, especially the last, are rather unwieldy. For example, the argmax construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it. Furthermore, from these properties the flux can uniquely be determined anyway. If the flux j passes through the area at an angle θ to the area normal , then the dot product That is, the component of flux passing through the surface (i.e. normal to it) is jcosθ, while the component of flux passing tangential to the area is jsinθ, but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component. For vector flux, the surface integral of j over a surface S, gives the proper flowing per unit of time through the surface: where A (and its infinitesimal) is the vector area combination of the magnitude of the area A through which the property passes and a unit vector normal to the area. Unlike in the second set of equations, the surface here need not be flat. 
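The displayed formulas for these definitions are not reproduced above. In standard notation (a conventional reconstruction consistent with the stated dimensions, not necessarily the exact expressions omitted), the simplest case and the surface-integral form read

\[
j \;=\; \frac{q}{A\,t}, \qquad \frac{dq}{dt} \;=\; \iint_S \mathbf{j}\cdot d\mathbf{A},
\]

so that for a constant flow perpendicular to a flat surface, the total amount transported in time t is q = j A t.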
Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1): Transport fluxes Eight of the most common forms of flux from the transport phenomena literature are defined as follows: Momentum flux, the rate of transfer of momentum across a unit area (N·s·m−2·s−1). (Newton's law of viscosity) Heat flux, the rate of heat flow across a unit area (J·m−2·s−1). (Fourier's law of conduction) (This definition of heat flux fits Maxwell's original definition.) Diffusion flux, the rate of movement of molecules across a unit area (mol·m−2·s−1). (Fick's law of diffusion) Volumetric flux, the rate of volume flow across a unit area (m3·m−2·s−1). (Darcy's law of groundwater flow) Mass flux, the rate of mass flow across a unit area (kg·m−2·s−1). (Either an alternate form of Fick's law that includes the molecular mass, or an alternate form of Darcy's law that includes the density.) Radiative flux, the amount of energy transferred in the form of photons at a certain distance from the source per unit area per second (J·m−2·s−1). Used in astronomy to determine the magnitude and spectral class of a star. Also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the electromagnetic spectrum. Energy flux, the rate of transfer of energy through a unit area (J·m−2·s−1). The radiative flux and heat flux are specific cases of energy flux. Particle flux, the rate of transfer of particles through a unit area ([number of particles] m−2·s−1) These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero. Chemical diffusion As mentioned above, chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as: where the nabla symbol ∇ denotes the gradient operator, DAB is the diffusion coefficient (m2·s−1) of component A diffusing through component B, cA is the concentration (mol/m3) of component A. This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux. For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section , and the absolute temperature T by where the second factor is the mean free path and the square root (with the Boltzmann constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient. Quantum mechanics In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as So the probability of finding a particle in a differential volume element d3r is Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux; This is sometimes referred to as the probability current or current density, or probability flux density. Flux as a surface integral General mathematical definition (surface integral) As a mathematical concept, flux is represented by the surface integral of a vector field, where F is a vector field, and dA is the vector area of the surface A, directed as the surface normal. 
For the second, n is the outward pointed unit normal vector to the surface. The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to flowing which way is counted positive; flowing backward is then counted negative. The surface normal is usually directed by the right-hand rule. Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks). See also the image at right: the number of red arrows passing through a unit area is the flux density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals. If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux. The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence). If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density. We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas. Electromagnetism Electric flux An electric "charge", such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.) Two forms of electric flux are used, one for the E-field: and one for the D-field (called the electric displacement): This quantity arises in Gauss's law – which states that the flux of the electric field E out of a closed surface is proportional to the electric charge QA enclosed in the surface (independent of how that charge is distributed), the integral form is: where ε0 is the permittivity of free space. 
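As a numerical illustration of the integral form (a sketch assuming SI units; the charge value and the choice of surface are arbitrary, not taken from the text): the flux of E through any closed surface enclosing a point charge comes out to q/ε0, here checked over a cube of side 2 centred on the charge.

```python
import numpy as np

eps0 = 8.854187817e-12        # vacuum permittivity, F/m
q = 1e-9                      # 1 nC point charge at the origin

n = 800
xs = (np.arange(n) + 0.5) / n * 2 - 1      # cell midpoints spanning [-1, 1]
X, Y = np.meshgrid(xs, xs, indexing="ij")
Z = 1.0                                    # the face of the cube at z = +1
r2 = X**2 + Y**2 + Z**2
# E . n dA on this face is the z-component of the Coulomb field times the area element
Ez = q / (4 * np.pi * eps0) * Z / r2**1.5
dA = (2.0 / n) ** 2
flux = 6 * np.sum(Ez) * dA                 # six identical faces by symmetry

print(flux, q / eps0)                      # both approximately 112.9 V*m
```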
If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's Law applied to an inverse square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0. In free space the electric displacement is given by the constitutive relation D = ε0 E, so for any bounding surface the D-field flux equals the charge QA within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow", since nothing actually flows along electric field lines. Magnetic flux The magnetic flux density (magnetic field) having the unit Wb/m2 (Tesla) is denoted by B, and magnetic flux is defined analogously: with the same notation above. The quantity arises in Faraday's law of induction, where the magnetic flux is time-dependent either because the boundary is time-dependent or magnetic field is time-dependent. In integral form: where d is an infinitesimal vector line element of the closed curve , with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve , with the sign determined by the integration direction. The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators. Poynting flux Using this definition, the flux of the Poynting vector S over a specified surface is the rate at which electromagnetic energy flows through that surface, defined like before: The flux of the Poynting vector through a surface is the electromagnetic power, or energy per unit time, passing through that surface. This is commonly used in analysis of electromagnetic radiation, but has application to other electromagnetic systems as well. Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above. It has units of watts per square metre (W/m2). SI radiometry units See also AB magnitude Explosively pumped flux compression generator Eddy covariance flux (aka, eddy correlation, eddy flux) Fast Flux Test Facility Fluence (flux of the first sort for particle beams) Fluid dynamics Flux footprint Flux pinning Flux quantization Gauss's law Inverse-square law Jansky (non SI unit of spectral flux density) Latent heat flux Luminous flux Magnetic flux Magnetic flux quantum Neutron flux Poynting flux Poynting theorem Radiant flux Rapid single flux quantum Sound energy flux Volumetric flux (flux of the first sort for fluids) Volumetric flow rate (flux of the second sort for fluids) Notes Further reading External links Physical quantities Vector calculus Rates
Flux
Physics,Mathematics
3,596
3,237,132
https://en.wikipedia.org/wiki/Pseudoconvex%20function
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function domain, and not only for nearby points. Formal definition Consider a differentiable function , defined on a (nonempty) convex open set of the finite-dimensional Euclidean space . This function is said to be pseudoconvex if the following property holds: Equivalently: Here is the gradient of , defined by: Note that the definition may also be stated in terms of the directional derivative of , in the direction given by the vector . This is because, as is differentiable, this directional derivative is given by: Properties Relation to other types of "convexity" Every convex function is pseudoconvex, but the converse is not true. For example, the function is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function is quasiconvex but not pseudoconvex. This can be summarized schematically as: To see that is not pseudoconvex, consider its derivative at : . Then, if was pseudoconvex, we should have: In particular it should be true for . But it is not, as: . Sufficient optimality condition For any differentiable function, we have the Fermat's theorem necessary condition of optimality, which states that: if has a local minimum at in an open domain, then must be a stationary point of (that is: ). Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is: if is a stationary point of a pseudoconvex function , then has a global minimum at . Note also that the result guarantees a global minimum (not only local). This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider for example the quasiconvex function: This function is not pseudoconvex, but it is quasiconvex. Also, the point is a critical point of , as . However, does not have a global minimum at (not even a local minimum). Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function: , whose derivative is always positive: . Examples An example of a function that is pseudoconvex, but not convex, is: The figure shows this function for the case where . This example may be generalized to two variables as: The previous example may be modified to obtain a function that is not convex, nor pseudoconvex, but is quasiconvex: The figure shows this function for the case where . As can be seen, this function is not convex because of the concavity, and it is not pseudoconvex because it is not differentiable at . Generalization to nondifferentiable functions The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows. Given any function , we can define the upper Dini derivative of by: where u is any unit vector. The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of the subdifferential as follows: where denotes the line segment adjoining x and y. Related notions A is a function whose negative is pseudoconvex. 
A pseudolinear function is a function that is both pseudoconvex and pseudoconcave. For example, linear–fractional programs have pseudolinear objective functions and linear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of the simplex algorithm (of George B. Dantzig). Given a vector-valued function , there is a more general notion of -pseudoconvexity and -pseudolinearity, wherein classical pseudoconvexity and pseudolinearity pertain to the case when . See also Pseudoconvexity Convex function Quasiconvex function Invex function Notes References Convex analysis Convex optimization Generalized convexity Types of functions
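As a concrete illustration of the defining implication, the sketch below numerically checks f(x) = x + x³ on a sampled grid; this is a standard example of a pseudoconvex-but-not-convex function and is assumed here for illustration, since the specific functions used in the article's examples and figures are not reproduced above.

```python
import numpy as np

f  = lambda x: x + x**3
df = lambda x: 1 + 3 * x**2          # the derivative plays the role of the gradient in 1-D

xs = np.linspace(-2, 2, 401)
ok = True
for x in xs:
    for y in xs:
        # pseudoconvexity (contrapositive form): f(y) < f(x) implies f'(x) * (y - x) < 0
        if f(y) < f(x) and not (df(x) * (y - x) < 0):
            ok = False
print("pseudoconvex on the sampled grid:", ok)     # True

# By contrast g(x) = x**3 fails: at x = 0 the derivative vanishes, yet g(-1) < g(0),
# so the implication is violated and g is quasiconvex but not pseudoconvex.
g, dg = (lambda x: x**3), (lambda x: 3 * x**2)
print(dg(0.0) * (-1 - 0.0) < 0, g(-1) < g(0))      # False True -> not pseudoconvex
```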
Pseudoconvex function
Mathematics
924
11,439,431
https://en.wikipedia.org/wiki/Septoria%20fragariae
Septoria fragariae is a fungal plant pathogen affecting strawberries. See also List of strawberry diseases References External links Index Fungorum USDA ARS Fungal Database fragariae Fungi described in 1842 Fungal strawberry diseases Taxa named by John Baptiste Henri Joseph Desmazières Fungus species
Septoria fragariae
Biology
57
63,337,222
https://en.wikipedia.org/wiki/Endangered%20species%20%28IUCN%20status%29
Endangered species, as classified by the International Union for Conservation of Nature (IUCN), are species which have been categorized as very likely to become extinct in their known native ranges in the near future. On the IUCN Red List, endangered is the second-most severe conservation status for wild populations in the IUCN's schema after critically endangered. In 2012, the IUCN Red List featured 3,079 animal and 2,655 plant species as endangered worldwide. The figures for 1998 were 1,102 and 1,197 respectively. IUCN Red List The IUCN Red List is a list of species which have been assessed according to a system of assigning a global conservation status. According to the latest system used by the IUCN, a species can be "Data Deficient" (DD) species – species for which more data and assessment is required before their situation may be determined – as well species comprehensively assessed by the IUCN's species assessment process. A species can be "Near Threatened" (NT) and "Least Concern" (LC), these are species which are considered to have relatively robust and healthy populations, according to the assessment authors. "Endangered" (EN) species lie between "Vulnerable" (VU) and "Critically Endangered" (CR) species. A species must adhere to certain criteria in order to be placed in any of the afore-mentioned conservation status categories, according to the assessment. "Threatened" is a category including all those species determined to be Vulnerable, Endangered or Critically Endangered. Although in general conversation the terms "endangered species" and "threatened species" may mean other things, for the purposes of the current IUCN system, the List uses the terms "endangered" and "threatened" to denote species to which certain criteria apply. Note older or other, such as national, status systems may use other criteria. Some examples of species classified as endangered by the IUCN are listed below: As more information becomes available, or as the conservation status criteria has changed, numerous species have been re-assessed as not endangered, nonetheless the total number of species considered endangered has increased as more new species are assessed for the first time each year. Criteria for endangered status According to the 3.1 version of the IUCN conservation status system from 2001, a species is listed as endangered when it meets any of the following criteria from A to E. A) Reduction in population size based on any of the following: 1. An observed, estimated, inferred or suspected population size reduction of ≥ 70% over the last 10 years or three generations, whichever is the longer, where the causes of the reduction are reversible AND understood AND ceased, based on (and specifying) any of the following: a. direct observation b. an index of abundance appropriate for the taxon c. a decline in area of occupancy, extent of occurrence or quality of habitat d. actual or potential levels of exploitation e. the effects of introduced taxa, hybridisation, pathogens, pollutants, competitors or parasites. 2. An observed, estimated, inferred or suspected population size reduction of ≥ 50% occurred over the last 10 years or three generations, whichever is the longer, where the reduction or its causes may not have ceased OR may not be understood OR may not be reversible, based on (and specifying) any of (a) to (e) under A1. 3. 
A population size reduction of ≥ 50%, projected or suspected to be met within the next 10 years or three generations, whichever is the longer (up to a maximum of 100 years), based on (and specifying) any of (b) to (e) under A1. 4. An observed, estimated, inferred, projected or suspected population size reduction of ≥ 50% over any 10 year or three-generation period, whichever is longer (up to a maximum of 100 years in the future), where the time period must include both the past and the future, and where the reduction or its causes may not have ceased OR may not be understood OR may not be reversible, based on (and specifying) any of (a) to (e) under A1. B) Geographic range in the form of either B1 (extent of occurrence) OR B2 (area of occupancy) OR both: 1. Extent of occurrence estimated to be less than 5,000 km2, and estimates indicating at least two of a-c: a. Severely fragmented or known to exist at no more than five locations. b. Continuing decline, inferred, observed or projected, in any of the following: i. extent of occurrence ii. area of occupancy iii. area, extent or quality of habitat iv. number of locations or subpopulations v. number of mature individuals c. Extreme fluctuations in any of the following: i. extent of occurrence ii. area of occupancy iii. number of locations or subpopulations iv. number of mature individuals 2. Area of occupancy estimated to be less than 500 km2, and estimates indicating at least two of a-c: a. Severely fragmented or known to exist at no more than five locations. b. Continuing decline, inferred, observed or projected, in any of the following: i. extent of occurrence ii. area of occupancy iii. area, extent or quality of habitat iv. number of locations or subpopulations v. number of mature individuals c. Extreme fluctuations in any of the following: i. extent of occurrence ii. area of occupancy iii. number of locations or subpopulations iv. number of mature individuals C) Population estimated to number fewer than 2,500 mature individuals and either: 1. An estimated continuing decline of at least 20% within five years or two generations, whichever is longer, (up to a maximum of 100 years in the future) OR 2. A continuing decline, observed, projected, or inferred, in numbers of mature individuals AND at least one of the following (a-b): a. Population structure in the form of one of the following: i. no subpopulation estimated to contain more than 250 mature individuals, OR ii. at least 95% of older individuals in one subpopulation b. Extreme fluctuations in the number of older individuals D) Population size estimated to number fewer than 250 mature individuals. E) Quantitative analysis showing the probability of extinction in the wild is at least 20% within 20 years or five generations, whichever is the longer (up to a maximum of 100 years). See also Lists of IUCN Red List endangered species List of endangered amphibians List of endangered arthropods List of endangered birds List of endangered fishes List of endangered insects List of endangered invertebrates List of endangered mammals List of endangered molluscs List of endangered reptiles List of Chromista by conservation status List of fungi by conservation status References External links List of species with the category Endangered as identified by the IUCN Red List of Threatened Species Biota by conservation status IUCN Red List Environmental conservation Habitat IUCN Red List endangered species
Endangered species (IUCN status)
Biology
1,443
6,042,434
https://en.wikipedia.org/wiki/Animal%20By-Products%20Regulations
The European Union's Animal By-Products Regulations (Regulation No 1069/2009) allows for the treatment of some animal by-products in composting and biogas plants (anaerobic digesters). The following article describes procedures required to allow solid outputs (compost, digestate) from composting plants and anaerobic digesters onto land in the United Kingdom. Categories of Animal By-Products Category 1: Very high risk Category 2: High risk Category 3: Low risk Category 1 BSE (Bovine spongiform encephalopathy) carcasses and suspects Specified Risk Material Catering waste from international transport Must all be destroyed, not for use in composting or biogas plants Category 2 Condemned meat Manure and gut contents Can be used in composting and biogas plants after rendering (133C, 3 bar pressure) Manure and gut contents only can be used after pretreatment Category 3 Catering waste from households, restaurants Former food Much slaughter house waste e.g. waste blood & feathers Can be used in composting and biogas plants without pretreatment Treatment Standards Composting Closed reactor Maximum particle size 40cm, minimum temperature 60C, minimum time at that temperature 2 days Maximum Particle size 6cm, minimum temperature 70C, minimum time at that temperature 1 hour Housed windrow Particle size 40cm, minimum temperature 60C, minimum time at that temperature 8 days Biogas plants Maximum particle size 5cm, minimum temperature 57C, minimum time at that temperature 5 hours Maximum particle size 6cm, minimum temperature 70C, minimum time at that temperature 1 hour References Notes Further reading Leoci, R., Animal by-products (ABPs): origins, uses, and European regulations, Mantova: Universitas Studiorum, 2014. See also Anaerobic digestion Biodegradable waste Composting Pasteurization Biodegradable waste management Waste legislation in the United Kingdom Waste legislation in the European Union Agricultural law
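For illustration, the treatment standards listed above can be encoded as a small lookup so that a proposed combination of particle size, temperature and holding time can be checked against them. This is only a sketch of the values as listed in the text, not legal guidance, and the process names are informal labels chosen here.

```python
# Treatment standards as listed above: (max particle size cm, min temperature C, min hours).
STANDARDS = {
    "composting, closed reactor": [(40, 60, 2 * 24), (6, 70, 1)],
    "composting, housed windrow": [(40, 60, 8 * 24)],
    "biogas plant":               [(5, 57, 5), (6, 70, 1)],
}

def meets_standard(process, particle_cm, temp_c, hours):
    """True if the proposed treatment satisfies at least one listed option for that process."""
    return any(particle_cm <= p and temp_c >= t and hours >= h
               for p, t, h in STANDARDS.get(process, []))

print(meets_standard("biogas plant", 4, 70, 1.5))   # True  (meets the 70 C / 1 hour option)
print(meets_standard("biogas plant", 6, 57, 5))     # False (too coarse for 57 C, too cool for 70 C)
```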
Animal By-Products Regulations
Chemistry
410
46,900,218
https://en.wikipedia.org/wiki/Penicillium%20nodulum
Penicillium nodulum is a species of fungus in the genus Penicillium. References Further reading nodulum Fungi described in 1988 Fungus species
Penicillium nodulum
Biology
32
456,085
https://en.wikipedia.org/wiki/Gaussian%20period
In mathematics, in the area of number theory, a Gaussian period is a certain kind of sum of roots of unity. The periods permit explicit calculations in cyclotomic fields connected with Galois theory and with harmonic analysis (discrete Fourier transform). They are basic in the classical theory called cyclotomy. Closely related is the Gauss sum, a type of exponential sum which is a linear combination of periods. History As the name suggests, the periods were introduced by Gauss and were the basis for his theory of compass and straightedge construction. For example, the construction of the heptadecagon (a formula that furthered his reputation) depended on the algebra of such periods, of which is an example involving the seventeenth root of unity General definition Given an integer n > 1, let H be any subgroup of the multiplicative group of invertible residues modulo n, and let A Gaussian period P is a sum of the primitive n-th roots of unity , where runs through all of the elements in a fixed coset of H in G. The definition of P can also be stated in terms of the field trace. We have for some subfield L of Q(ζ) and some j coprime to n. This corresponds to the previous definition by identifying G and H with the Galois groups of Q(ζ)/Q and Q(ζ)/L, respectively. The choice of j determines the choice of coset of H in G in the previous definition. Example The situation is simplest when n is a prime number p > 2. In that case G is cyclic of order p − 1, and has one subgroup H of order d for every factor d of p − 1. For example, we can take H of index two. In that case H consists of the quadratic residues modulo p. Corresponding to this H we have the Gaussian period summed over (p − 1)/2 quadratic residues, and the other period P* summed over the (p − 1)/2 quadratic non-residues. It is easy to see that since the left-hand side adds all the primitive p-th roots of 1. We also know, from the trace definition, that P lies in a quadratic extension of Q. Therefore, as Gauss knew, P satisfies a quadratic equation with integer coefficients. Evaluating the square of the sum P is connected with the problem of counting how many quadratic residues between 1 and p − 1 are succeeded by quadratic residues. The solution is elementary (as we would now say, it computes a local zeta-function, for a curve that is a conic). One has (P − P*)2 = p or −p, for p = 4m + 1 or 4m + 3 respectively. This therefore gives us the precise information about which quadratic field lies in Q(ζ). (That could be derived also by ramification arguments in algebraic number theory; see quadratic field.) As Gauss eventually showed, to evaluate P − P*, the correct square root to take is the positive (resp. i times positive real) one, in the two cases. Thus the explicit value of the period P is given by Gauss sums As is discussed in more detail below, the Gaussian periods are closely related to another class of sums of roots of unity, now generally called Gauss sums (sometimes Gaussian sums). The quantity P − P* presented above is a quadratic Gauss sum mod p, the simplest non-trivial example of a Gauss sum. One observes that P − P* may also be written as where here stands for the Legendre symbol (a/p), and the sum is taken over residue classes modulo p. More generally, given a Dirichlet character χ mod n, the Gauss sum mod n associated with χ is For the special case of the principal Dirichlet character, the Gauss sum reduces to the Ramanujan sum: where μ is the Möbius function. 
The Gauss sums are ubiquitous in number theory; for example they occur significantly in the functional equations of L-functions. (Gauss sums are in a sense the finite field analogues of the gamma function.) Relationship of Gaussian periods and Gauss sums The Gaussian periods are related to the Gauss sums for which the character χ is trivial on H. Such χ take the same value at all elements a in a fixed coset of H in G. For example, the quadratic character mod p described above takes the value 1 at each quadratic residue, and takes the value -1 at each quadratic non-residue. The Gauss sum can thus be written as a linear combination of Gaussian periods (with coefficients χ(a)); the converse is also true, as a consequence of the orthogonality relations for the group (Z/nZ)×. In other words, the Gaussian periods and Gauss sums are each other's Fourier transforms. The Gaussian periods generally lie in smaller fields, since for example when n is a prime p, the values χ(a) are (p − 1)-th roots of unity. On the other hand, Gauss sums have nicer algebraic properties. References Galois theory Cyclotomic fields Euclidean plane geometry Carl Friedrich Gauss
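As a quick numerical check of the statements above, the following Python sketch computes the two quadratic periods for a small prime and verifies that P + P* = −1 and that (P − P*)^2 equals p or −p as appropriate. The function name and the choice p = 13 are for illustration only.

```python
import cmath

def quadratic_periods(p):
    """Compute the two Gaussian periods for an odd prime p:
    P  = sum of zeta^a over quadratic residues a mod p,
    P* = sum of zeta^a over quadratic non-residues a mod p,
    where zeta = exp(2*pi*i/p)."""
    zeta = cmath.exp(2j * cmath.pi / p)
    residues = {pow(a, 2, p) for a in range(1, p)}   # quadratic residues mod p
    non_residues = set(range(1, p)) - residues
    P = sum(zeta ** a for a in residues)
    P_star = sum(zeta ** a for a in non_residues)
    return P, P_star

p = 13
P, P_star = quadratic_periods(p)
print(P + P_star)         # approximately -1
print((P - P_star) ** 2)  # approximately +13, since 13 = 4*3 + 1
```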
Gaussian period
Mathematics
1,105
848,535
https://en.wikipedia.org/wiki/Cella
In Classical architecture, a () or () is the inner chamber of an ancient Greek or Roman temple. Its enclosure within walls has given rise to extended meanings, of a hermit's or monk's cell, and since the 17th century, of a biological cell in plants or animals. Greek and Roman temples In ancient Greek and Roman temples, the cella was a room at the center of the building, usually containing a cult image or statue representing the particular deity venerated in the temple. In addition, the cella might contain a table to receive supplementary votive offerings, such as votive statues of associated deities, precious and semi-precious stones, helmets, spear and arrow heads, swords, and war trophies. No gatherings or sacrifices took place in the cella, as the altar for sacrifices was always located outside the building along the axis and temporary altars for other deities were built next to it. The accumulated offerings made Greek and Roman temples virtual treasuries, and many of them were indeed used as treasuries during antiquity. The cella was typically a simple, windowless, rectangular room with a door or open entrance at the front behind a colonnaded portico facade. In larger temples, the cella was typically divided by two colonnades into a central nave flanked by two aisles. A cella may also contain an adyton, an inner area restricted to access by the priests—in religions that had a consecrated priesthood—or by the temple guard. With very few exceptions, Greek buildings were of a peripteral design that placed the cella in the center of the plan, such as the Parthenon and the Temple of Apollo at Paestum. The Romans favoured pseudoperipteral buildings with a portico offsetting the cella to the rear. The pseudoperipteral plan uses engaged columns embedded along the side and rear walls of the cella. The Temple of Venus and Roma built by Hadrian in Rome had two cellae arranged back-to-back enclosed by a single outer peristyle. Etruscan temples According to Vitruvius, the Etruscan type of temples (as, for example, at Portonaccio, near Veio) had three cellae, side by side, conjoined by a double row of columns on the façade. This is an entirely new setup with respect to the other types of constructions found in Etruria and the Tyrrhenian side of Italy, which have one cell with or without columns, as seen in Greece and the Orient. Egyptian temples In the Hellenistic culture of the Ptolemaic Kingdom in ancient Egypt, the cella referred to that which is hidden and unknown inside the inner sanctum of an Egyptian temple, existing in complete darkness, meant to symbolize the state of the universe before the act of creation. The cella, also called the naos, holds many box-like shrines. The Greek word "naos" has been extended by archaeologists to describe the central room of the pyramids. Towards the end of the Old Kingdom, naos construction went from being subterranean to being built directly into the pyramid, above ground. The naos was surrounded by many different paths and rooms, many used to confuse and divert thieves and grave robbers. Christian churches In early Christian and Byzantine architecture, the cella or naos is an area at the center of the church reserved for performing the liturgy. In later periods, a small chapel or monk's cell was also called a cella. This is the source of the Irish language cill or cell (Anglicised as Kil(l)-) in many Irish place names. 
See also List of Greco-Roman roofs References Bibliography Trachtenberg and Hyman, Architecture: From Prehistory to Post Modernity (second edition). External links Vitruvius, De architectura, Book IV. ch 7 : translation, plans and reconstructions of Tuscan cellae Ancient Roman architecture Greek temples Architectural elements Ancient Greek architecture
Cella
Technology,Engineering
818
61,871,893
https://en.wikipedia.org/wiki/Yellow%2C%20red%20and%20orange%20goods
Yellow, red and orange goods are a three-part classification for consumer goods which is based on consumer buying habits, the durability of the goods, and the ways that the goods are sold. The classifications are for yellow goods, red goods, and orange goods, with orange goods being goods that have a mix of yellow and red characteristics. The classification of goods into yellow, red, and orange categories is roughly equivalent to the categories of shopping goods, convenience goods, and specialty goods. Yellow goods Yellow goods (also called "shopping goods" or "white goods") are durable consumer items such as large household appliances that have a long period of useful life, and which are replaced rarely. While yellow goods are sold in low volumes, they have high profit margins. Yellow goods have a higher unit value than convenience goods and people buy them less often; as such consumers spend more time comparison shopping for yellow goods than for red goods. As well, there is a much greater role for personal selling (from salespeople) for yellow goods than for red goods, and there is more selective distribution of yellow goods. Yellow goods often need to be adjusted or customized by the store before they are delivered to the customer. The consumer goods term "yellow goods" is different from the construction and agricultural industry term of the same name, which refers to bulldozers, tractors, and similar equipment. Red goods Red goods (also called "convenience goods") such as food are consumed completely when the consumer uses them; as a result, they are replaced frequently and sold in high volumes. Red goods have low profit margins. Red goods need heavy advertising and competitive pricing, along with a well-developed selling organization to manage the widespread and numerous points of sale. As red goods are widely available through a wide distribution network, consumers do not have to spend much time searching for them. Orange goods Orange goods (also called "specialty goods") are moderately durable goods that wear out with regular use and have to be replaced, such as clothing. Orange goods are unique, so consumers need to make more effort to acquire these items; as such exclusive distributor arrangements and franchises are often used to sell them. References Goods (economics)
Yellow, red and orange goods
Physics
444
36,472,083
https://en.wikipedia.org/wiki/Androstanedione
Androstanedione, also known as 5α-androstanedione or as 5α-androstane-3,17-dione, is a naturally occurring androstane (5α-androstane) steroid and an endogenous metabolite of androgens like testosterone, dihydrotestosterone (DHT), dehydroepiandrosterone (DHEA), and androstenedione. It is the C5 epimer of etiocholanedione (5β-androstanedione). Androstanedione is formed from androstenedione by 5α-reductase and from DHT by 17β-hydroxysteroid dehydrogenase. It has some androgenic activity. In female genital skin, the conversion of androstenedione into DHT through 5α-androstanedione appears to be more important than the direct conversion of testosterone into DHT. References External links Androstanedione (HMDB0000899) - Human Metabolome Database 5α-Reduced steroid metabolites Anabolic–androgenic steroids Androstanes Diketones Steroid hormones
Androstanedione
Chemistry,Biology
273
74,735,692
https://en.wikipedia.org/wiki/Cophinforma%20tumefaciens
Cophinforma tumefaciens is an ascomycete fungus that is a plant pathogen infecting citruses, and other shrubs and trees. History It was published in 1911, as Sphaeropsis tumefaciens with the holotype found on Citrus limon in Jamaica. But it was transferred to Cophinforma tumefaciens in 2021. Due to the generic circumscriptions of the macroconidia and spermatia/microconidia of this species matching that of Botryosphaeria, Cophinforma, or Neofusicoccum genera, rather than genus Sphaeropsis. Description It can form galls (rounded swellings beneath undisturbed bark) on Edison's St. John's-Wort (Hypericum edisonianum ) in Florida. 'Sphaeropsis gall' also affects holly bushes as well. Many other plant genera in Florida and other places are also known to be affected by this disease, including citrus, lime (Citrus aurantifolia), oleander, holly (Ilex spp.), bottlebrush (Callistemon spp), Carissa, crape myrtle, Ligustrum and the Brazilian Peppertree (Schinus terebinthifolius). as well as rose bay (Nerium oleander) and avocado (Persea americana ). Other host plantsinclude; Bauhinia spp., Cinnamomum camphora, Citrofortunella mitis, Eucalyptus sp., (including Eucalyptus cinerea and Eucalyptus urophylla), Eugenia sp., Jatropha sp., Lagerstroemia indica, Mangifera indica, Morus alba, Myrica cerifera, Pittosporum tobira, Poncirus trifoliate, Portlandia grandiflora, Pyracantha coccinea, Vigna angularis and Wisteria sinensis. The mycelium have conidiomata which are pycnidial, superficial or semi-immersed and measureing 135–400 μm in diam. They are solitary or confluent, dark brown to black (in colour), complex, effuse, (sub-)globose, densely covered with dark brown hyphae. The condia wall is composed of three layers, an outer layer of wall (textura angularis), thick-walled and dark to light brown in shade, the middle layer of cells are thin-walled and light brown. The inner layer of cells are also thin-walled and hyaline (glass-like). The conidiophores are also hyaline, branched, or reduced to conidiogenous cells. The conidiogenous cells are hyaline, holoblastic (divided into planes), smooth, discrete and cylindrical in form. They measure about 11-20(-24) × 2.5 -4 μm. The conidia are hyaline, thin-walled, aseptate, granular, ellipsoid to obovoid (in form), 18-31.5 × 7.5-10 μm. The spermatophores are hyaline, smooth, branched, or reduced to solitary spermatogenous cells. They occur randomly among the conidiophores in the same conidioma. The spermatogenous cells are ampulliform (flask-shaped) or sub-cylindrical in form. They measure about 8–21 × 2.5–4.5 μm. The spermatia are hyaline, smooth, cylindrical (in form), straight or slightly curved. The apex is obtuse and the base is truncate, measuring 3.5–7.5 × 1.5–2.5 μm. Disease symptoms range from inconspicuous swellings on young twigs to irregular sized galls on older wood. They are usually rounded ( in diam.) but sometimes elongated. These swellings start covered with normal bark which then mutates into a whitish, rough, cork-like tissue, this begins to grow in size, becoming fissured, with much enlarged woody tissue. The knots are firmly attached to the stem or branch, and may occur in large numbers over considerable lengths of stem which may be girdled. The surface of the knot later may become soft and crumbling, but the centre is hard, where the presence of black streaking indicates the presence of mycelium. Multiple shoots can appear from the galled areas, causing a witches broom type of growth. Galls can form up to 40 shoots, some over 1 m long. 
Horizontal branches can also tip up to grow nearly vertically and dieback of infected branches eventually occurs. The knots can occur in large numbers and a severe infection can lead to death of the tree or shrub. The disease is related to water stress, causing more dieback and can cause the plant to eventually die. This often occurs when warm, wet weather follows periods of drought. Geographic distribution The disease has been reported as being found in the USA (within Florida), Cameroon, Ceylon, Cuba, Egypt, Guyana, Indonesia, India, Jamaica, Mexico, Puerto Rico, and Venezuela. Also by 2021, it was also found in Japan, Pakistan, West Indies and in Europe (within Austria and Greece). References External links USDA ARS Fungal Database Fungi described in 1911 Fungal citrus diseases Botryosphaeriaceae Fungus species
Cophinforma tumefaciens
Biology
1,140
31,261,584
https://en.wikipedia.org/wiki/Circular%20law
In probability theory, more specifically the study of random matrices, the circular law concerns the distribution of eigenvalues of an n × n random matrix with independent and identically distributed entries in the limit n → ∞. It asserts that for any sequence of random n × n matrices whose entries are independent and identically distributed random variables, all with mean zero and variance equal to 1/n, the limiting spectral distribution is the uniform distribution over the unit disc. Ginibre ensembles The complex Ginibre ensemble is defined as the family of n × n matrices G_n, for n = 1, 2, …, with all their entries sampled IID from the standard complex normal distribution. The real Ginibre ensemble is defined in the same way, with the entries sampled IID from the standard real normal distribution. Eigenvalues The eigenvalues of the complex Ginibre matrix G_n are distributed according to a joint density proportional to ∏_{i<j} |λ_i − λ_j|^2 · exp(−Σ_k |λ_k|^2). Global law Let G_n be a sequence sampled from the complex Ginibre ensemble. Let λ_1, …, λ_n denote the eigenvalues of G_n/√n. Define the empirical spectral measure of G_n/√n as μ_n = (1/n) Σ_{i=1}^{n} δ_{λ_i}. Then, almost surely (i.e. with probability one), the sequence of measures μ_n converges in distribution to the uniform measure on the unit disk. Edge statistics Let G_n be sampled from the real or complex ensemble, and let r_n be the absolute value of the maximal eigenvalue of G_n/√n: r_n = max_i |λ_i|. The edge-statistics theorem states that, after suitable centering and rescaling, r_n converges in distribution to the Gumbel distribution. This theorem refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of G_n/√n almost surely falls uniformly on the unit disc, and the edge statistics theorem states that the radius of the almost-unit-disk is close to 1, with fluctuations about that value that, on a suitable small scale, follow the Gumbel law. History For random matrices with Gaussian distribution of entries (the Ginibre ensembles), the circular law was established in the 1960s by Jean Ginibre. In the 1980s, Vyacheslav Girko introduced an approach which allowed the circular law to be established for more general distributions. Further progress was made by Zhidong Bai, who established the circular law under certain smoothness assumptions on the distribution. The assumptions were further relaxed in the works of Terence Tao and Van H. Vu, Guangming Pan and Wang Zhou, and Friedrich Götze and Alexander Tikhomirov. Finally, in 2010 Tao and Vu proved the circular law under the minimal assumptions stated above. The circular law result was extended in 1985 by Girko to an elliptical law for ensembles of matrices with a fixed amount of correlation between the entries above and below the diagonal. The elliptic and circular laws were further generalized by Aceituno, Rogers and Schomerus to the hypotrochoid law which includes higher order correlations. See also Wigner semicircle distribution References Random matrices
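A minimal numerical illustration of the global law: sample a complex Ginibre matrix, rescale by √n, and check that the eigenvalues fill the unit disc roughly uniformly (for the uniform measure on the disc, the fraction of eigenvalues within radius r should approach r^2). The sketch below assumes only NumPy; the matrix size and seed are arbitrary.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)

# Complex Ginibre matrix: i.i.d. standard complex Gaussian entries
# (real and imaginary parts each N(0, 1/2), so the complex variance is 1).
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# Rescale so the entries have variance 1/n, as required by the circular law.
eigenvalues = np.linalg.eigvals(G / np.sqrt(n))

radii = np.abs(eigenvalues)
print("largest |eigenvalue|:", radii.max())                  # close to 1
print("fraction inside radius 0.5:", np.mean(radii < 0.5))   # close to 0.25 = 0.5**2
```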
Circular law
Physics,Mathematics
519
36,535,890
https://en.wikipedia.org/wiki/Film%20applicator
A film applicator is a device used to evenly spread a substance, such as paint, ink, or cosmetics, over a substrate such as a drawdown card. Applicators are usually metal bars that are manufactured to high tolerances to give consistent, repeatable results. Each bar will give a "theoretical wet film thickness" or, in other words, the thickness of the coating that should remain on the drawdown card after application. Even with high manufacturing tolerances, the actual wet film thickness can vary from 50% to 90% of the gap. There are multiple types of bar applicators, their forms and uses are shown below. Film applicators follow the ASTM standard D823. Applicators can be used either manually or automatically. Manual film applicator When using an applicator manually, small variations in speed and applied pressure are inevitable. These variations can affect the quality of the drawdown and thus the measurements of film properties such as abrasion resistance, hiding power and gloss. Automatic film applicator The use of an automatic film applicator guarantees consistent speed and pressure, providing repeatable and high quality results. Automatic film applicators vary in their construction. The most modern types of applicators use vacuum plates to hold the drawdown cards and allow a variety of application bars to be mounted. The speed and pressure of the bar can be adjusted to allow for customization of the final film thickness. The drawdown process is governed by ASTM standard D823. Types Four-sided applicator Used for high viscous coatings, this type of applicator has four different clearances built in, one on each horizontal surface. Applicator frame Used for low viscous coatings, this applicator also has four different clearances built in. It is used for non-rigid substances and has a two sided opening in the center of the bar. Bar type or Bird applicator This applicator has one, two or four clearances and contains a slanted trailing edge, also used for high viscous coatings. This is the most common type of film applicator. Gap This is a wire wound bar at each end, the gap height is controlled by the wire diameter, very simple and very cost effective. U shaped This applicator also has two clearances but features a U-shaped form instead of a straight bar. Square applicator or Octoplex applicator The square applicator is the more versatile than previously mentioned applicators. It has 8 clearances in the form of a square frame. This tool combines the accuracy of fixed applicators with the versatility of adjustable applicators. Film applicator knife This adjustable applicator has a metal frame with an adjustable "knife" that acts as the gap clearance. These applicators are the most versatile manual applicator, but is also the least precise. Wire-wound applicator These applicators differ from the bar applicators as they consist of a metal rod wound with wire of varying thickness. The coating will pass through the gaps between the wires and level off at a uniform thickness. There are two main types of wire bars. Close wire wound bars will produce coating layers from 4 to 120μm. Higher coating thickness up to 500μm can be obtained using open wound bars. Testing A standard procedure for testing substances that are applied in a liquid form, and then dry to a solid film, is to create a controlled, standardized test film of the substance, usually on a standardized substrate, such as a metal, plastic or paper panel. References Materials testing Printing devices Test equipment
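As a small worked example of the relation quoted above between the applicator gap and the wet film that actually remains (50% to 90% of the gap), the range can be computed directly; the function name and the 100 μm gap below are illustrative only.

```python
def wet_film_thickness_range(gap_micrometres):
    """Estimate the actual wet film thickness left on the substrate.

    The applicator gap gives the theoretical wet film thickness; in practice
    the film that remains is roughly 50% to 90% of the gap (see text above).
    """
    return 0.5 * gap_micrometres, 0.9 * gap_micrometres

low, high = wet_film_thickness_range(100)  # a hypothetical 100 um gap applicator
print(f"Expected wet film thickness: {low:.0f}-{high:.0f} um")
```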
Film applicator
Physics,Materials_science,Technology,Engineering
746
589,782
https://en.wikipedia.org/wiki/Swingometer
The swingometer is a graphics device that shows the effects of the swing from one party to another on British election results programmes. It is used to estimate the number of seats that will be won by different parties, given a particular national swing (in percentage points) in the vote towards or away from a given party, and assuming that that percentage change in the vote will apply in each constituency. The device was invented by Peter Milne, and later refined by David Butler and Robert McKenzie. The first outing on British television was during a regional output from the BBC studios in Bristol during the 1955 general election (the first UK general election to be televised) and was used to show the swing in the two constituencies of Southampton Itchen and Southampton Test. Following this use in 1955, the BBC adopted the swingometer on a national basis and it was unveiled in the national broadcasts for the 1959 general election. This swingometer merely showed the national swing in Britain but not the implications on that swing on the composition of parliament. These issues were not addressed until the 1964 general election. The swingometer for that election showed not only the national swing, but also the implications of that national swing. So for instance, a 3.5% swing to Labour would see Labour become a majority government whilst any swing to the Conservatives would see Sir Alec Douglas-Home reelected as Prime Minister with a huge parliamentary majority. In the end the result was a Labour overall majority of 4, and so when the 1966 general election came around, a new element had to be added (namely the prospect of a hung parliament). At the 1970 general election, the swingometer entered the age of colour television and showed the traditional party colours of red for Labour and blue for Conservative and had to be extended due to the success of the Conservative party at that election. However, following the success of the Liberals in the by-elections held between the 1970 and February 1974 general elections, the swingometer was reduced in scale to just a small standby as the computers used by the BBC were deemed more reliable. As the Liberal Party reduced in importance the swingometer was brought back for the 1979 general election but for the 1983 and 1987 general elections computers were introduced to show changes in support in both map and graphic form. The swingometer was brought back for the 1992 general election covering the whole side of the election studio and also had to be manhandled by at least four technicians as well as Peter Snow who had taken over the election graphics role following the death of Bob McKenzie. This swingometer was too big for comfort and in 1997 started on a shrinking process and was changed from an actual swingometer to a virtual reality construct. For the 2001 general election the graphic was reduced further. Following a few experiments in the United Kingdom local elections in 2003 and 2004 the swingometer for the next election in 2005 was held on virtual structs as well as swingometers for the Labour and Liberal Democrats parties. An online version of the swingometer, featuring Labour and the Conservatives only, was introduced on the BBC News website at the 2001 general election. In 2005 the online swingometer was substantially re-designed to include versions featuring the Liberal Democrats, plus information on specific constituencies - including "VIP" seats - won/ lost on different swings. 
For the 2010 general election, the swingometer was placed in a completely virtual environment and repositioned to appear on the back wall of the virtual studio, with named constituencies as opposed to virtual MPs. All three swingometers were updated (Con / Lab, Con / Lib Dem, Lab / Lib Dem) in this manner. 3D swingometer The 3D swingometer is used to illustrate the shift in election results from the previous election in a three-party system. It is similar to the "2D" swingometer used in two-party system elections, but uses the extra dimension to allow swings to occur among three parties. The sum of all the swings between parties must equal zero. In a three party system, the most complicated swings will involve a major swing either to or from one political party, with this swing being made up of two components from each of the other two parties. For instance there may be a 3-point swing towards the Purple party, consisting of a 2-point swing from the Orange party and a 1-point swing from the Brown party. Alternatively, there may be a 5-point swing from the Orange party, of which 3 points are towards the Brown party and 2 towards the Purple party. It is possible to split the swing space up into different regions indicating what the result would be if the swing indicated occurred linearly across the electorate. This gives rise to four regions: one each indicating overall control for each party, and a fourth region indicating no overall control. Where there are swings directly from one party to a second party with the third party's vote remaining unchanged, the 3D swingometer clearly indicates that the third party also benefits slightly from the reduction in vote of the first party. The three dimensions consist of the two used to create the swing space and the third for the pendulum to swing in. Parodies During the 2010 UK General election race, the Slapometer website allowed voters to slap along to the live TV debates between party leaders Gordon Brown, David Cameron and Nick Clegg. Rather than showing a swing in votes it merely gave feedback about the number of slaps each politician was receiving each second. References External links Sultan of swingometers The oldest swingometer in town Labour and Conservatives in the 2005 UK general election Conservatives and LibDems in the 2005 UK general election Labour and LibDems in the 2005 UK general election Electoral College Swingometer in the 2008 US Presidential election Slapometer website asked people to Vote with the back of your hand BBC Archive - Swingometer Television in the United Kingdom Television terminology Television technology Elections Politics of the United Kingdom British inventions 1955 introductions 1959 introductions 1959 in British television
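The uniform national swing assumption underlying the swingometer can be sketched in a few lines of Python. The constituency figures below are invented for illustration only, not real results; the code simply applies the same percentage-point swing in every seat and counts which party wins each one, which is the calculation a two-party swingometer visualises.

```python
# Each entry: (constituency name, % share for party A, % share for party B)
# at the previous election. These figures are invented for illustration only.
previous_results = [
    ("Seat 1", 52.0, 48.0),
    ("Seat 2", 45.0, 55.0),
    ("Seat 3", 49.0, 51.0),
    ("Seat 4", 58.0, 42.0),
]

def project_seats(results, swing_to_a):
    """Apply a uniform national swing (in percentage points) from party B to
    party A in every constituency and count the seats each party would win."""
    seats_a = seats_b = 0
    for _name, share_a, share_b in results:
        new_a = share_a + swing_to_a
        new_b = share_b - swing_to_a
        if new_a > new_b:
            seats_a += 1
        else:
            seats_b += 1
    return seats_a, seats_b

for swing in (-2.0, 0.0, 2.0):
    print(swing, project_seats(previous_results, swing))
```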
Swingometer
Technology
1,186
15,547,210
https://en.wikipedia.org/wiki/Gyrokinetic%20ElectroMagnetic
Gyrokinetic ElectroMagnetic (GEM) is a gyrokinetic plasma turbulence simulation that uses the particle-in-cell method. It is used to study waves, instabilities and nonlinear behavior of tokamak fusion plasmas. Information about GEM can be found at the GEM web page. There are two versions of GEM: one is a flux-tube version and the other is a global, general-geometry version. Both versions of GEM use a field-aligned coordinate system. Ions are treated kinetically but averaged over their gyro-orbits, and electrons are treated as drift-kinetic. The modeling of tokamak plasmas GEM solves the electromagnetic gyrokinetic equations, which are the appropriate equations for well-magnetized plasmas. The plasma is treated statistically as a kinetic distribution function. The distribution function depends on the three-dimensional position, the energy and the magnetic moment. The time evolution of the distribution function is described by gyrokinetic theory, which averages the Vlasov-Maxwell system of equations over the fast gyromotion associated with particles exhibiting cyclotron motion about the magnetic field lines. This eliminates fast time scales associated with the gyromotion and reduces the dimensionality of the problem from six down to five. Algorithm to solve the equations GEM uses the delta-f particle-in-cell (PIC) plasma simulation method. An expansion about an adiabatic response is made for the electrons to overcome the small-time-step limit imposed by their fast motion. GEM uses a novel electromagnetic algorithm allowing direct numerical simulation of the electromagnetic problem at high plasma pressures. GEM uses a two-dimensional domain decomposition (see domain decomposition method) of the grid and particles to obtain good performance on massively parallel computers. A Monte Carlo method is used to model small-angle Coulomb collisions. Applications GEM is used to study nonlinear physics associated with tokamak plasma turbulence and transport. Tokamak turbulence driven by ion-temperature-gradient modes, electron-temperature-gradient modes, trapped electron modes and micro-tearing modes has been investigated using GEM. It is also being used to look at energetic-particle-driven magnetohydrodynamic (see magnetohydrodynamics) eigenmodes. References External links http://cips.colorado.edu/Group/Simulation/Gem.php Plasma theory and modeling Computational physics Tokamaks
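GEM's own source code is not reproduced here; as a rough illustration of one generic particle-in-cell ingredient it builds on, the sketch below shows a deposition step — accumulating particle (marker) weights onto a one-dimensional periodic grid with linear, cloud-in-cell weighting. It is a simplified stand-in under assumed conditions (1D, periodic domain, unit weights), not GEM's gyrokinetic, field-aligned, domain-decomposed implementation.

```python
import numpy as np

def deposit_density(positions, weights, n_cells, length):
    """Deposit particle weights onto a periodic 1D grid using linear
    (cloud-in-cell) interpolation -- the basic deposition step of a
    particle-in-cell cycle."""
    dx = length / n_cells
    density = np.zeros(n_cells)
    for x, w in zip(positions, weights):
        s = (x % length) / dx            # position in units of cells
        i = int(s)                       # left grid point
        frac = s - i                     # fractional distance to it
        density[i] += w * (1.0 - frac)
        density[(i + 1) % n_cells] += w * frac
    return density / dx                  # convert to a density per unit length

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=10000)    # marker positions
w = np.ones_like(x) / x.size             # equal marker weights
rho = deposit_density(x, w, n_cells=32, length=1.0)
print(rho.mean())   # roughly 1.0 for a uniform particle distribution
```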
Gyrokinetic ElectroMagnetic
Physics
494
801,809
https://en.wikipedia.org/wiki/Kasiski%20examination
In cryptanalysis, Kasiski examination (also known as Kasiski's test or Kasiski's method) is a method of attacking polyalphabetic substitution ciphers, such as the Vigenère cipher. It was first published by Friedrich Kasiski in 1863, but seems to have been independently discovered by Charles Babbage as early as 1846. How it works In polyalphabetic substitution ciphers where the substitution alphabets are chosen by the use of a keyword, the Kasiski examination allows a cryptanalyst to deduce the length of the keyword. Once the length of the keyword is discovered, the cryptanalyst lines up the ciphertext in n columns, where n is the length of the keyword. Then each column can be treated as the ciphertext of a monoalphabetic substitution cipher. As such, each column can be attacked with frequency analysis. Similarly, where a rotor stream cipher machine has been used, this method may allow the deduction of the length of individual rotors. The Kasiski examination involves looking for strings of characters that are repeated in the ciphertext. The strings should be three characters long or more for the examination to be successful. Then, the distances between consecutive occurrences of the strings are likely to be multiples of the length of the keyword. Thus finding more repeated strings narrows down the possible lengths of the keyword, since we can take the greatest common divisor of all the distances. The reason this test works is that if a repeated string occurs in the plaintext, and the distance between corresponding characters is a multiple of the keyword length, the keyword letters will line up in the same way with both occurrences of the string. For example, consider the plaintext: the man and the woman retrieved the letter from the post office The word "" is a repeated string, appearing multiple times. If we line up the plaintext with a 5-character keyword "" : the man and the woman retrieved the letter from the post office The word "the" is sometimes mapped to "bea," sometimes to "sbe" and other times to "ead." However, it is mapped to "sbe" twice, and in a long enough text, it would likely be mapped multiple times to each of these possibilities. Kasiski observed that the distance between such repeated appearances must be a multiple of the encryption period. In this example, the period is 5, and the distance between the two occurrences of "sbe" is 30, which is 6 times the period. Therefore, the greatest common divisor of the distances between repeated sequences will reveal the key length or a multiple of it. A string-based attack The difficulty of using the Kasiski examination lies in finding repeated strings. This is a very hard task to perform manually, but computers can make it much easier. However, care is still required, since some repeated strings may just be coincidence, so that some of the repeat distances are misleading. The cryptanalyst has to rule out the coincidences to find the correct length. Then, of course, the monoalphabetic ciphertexts that result must be cryptanalyzed. A cryptanalyst looks for repeated groups of letters and counts the number of letters between the beginning of each repeated group. For instance, if the ciphertext were , the distance between groups is 10. The analyst records the distances for all repeated groups in the text. The analyst next factors each of these numbers. If any number is repeated in the majority of these factorings, it is likely to be the length of the keyword. 
This is because repeated groups are more likely to occur when the same letters are encrypted using the same key letters than by mere coincidence; this is especially true for long matching strings. The key letters are repeated at multiples of the key length, so most of the distances found in step 1 are likely to be multiples of the key length. A common factor is usually evident. Once the keyword length is known, the following observation of Babbage and Kasiski comes into play. If the keyword is N letters long, then every Nth letter must have been enciphered using the same letter of the keytext. Grouping every Nth letter together, the analyst has N "messages", each encrypted using a one-alphabet substitution, and each piece can then be attacked using frequency analysis. Using the solved message, the analyst can quickly determine what the keyword was. Or, in the process of solving the pieces, the analyst might use guesses about the keyword to assist in breaking the message. Once the interceptor knows the keyword, that knowledge can be used to read other messages that use the same key. Superimposition Kasiski actually used "superimposition" to solve the Vigenère cipher. He started by finding the key length, as above. Then he took multiple copies of the message and laid them one-above-another, each one shifted left by the length of the key. Kasiski then observed that each column was made up of letters encrypted with a single alphabet. His method was equivalent to the one described above, but is perhaps easier to picture. Modern attacks on polyalphabetic ciphers are essentially identical to that described above, with the one improvement of coincidence counting. Instead of looking for repeating groups, a modern analyst would take two copies of the message and lay one above another. Modern analysts use computers, but this description illustrates the principle that the computer algorithms implement. The generalized method: The analyst shifts the bottom message one letter to the left, then one more letters to the left, etc., each time going through the entire message and counting the number of times the same letter appears in the top and bottom message. The number of "coincidences" goes up sharply when the bottom message is shifted by a multiple of the key length, because then the adjacent letters are in the same language using the same alphabet. Having found the key length, cryptanalysis proceeds as described above using frequency analysis. External links - A video that shows how to break a Vigenère ciphertext using the Kasiski examination References Cryptographic attacks
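The procedure described above — collect repeated substrings, record the distances between their occurrences, and look for a common factor — is easy to automate. The following Python sketch is illustrative (the ciphertext string is made up): it gathers distances between repeated trigrams and scores candidate key lengths by how many of the distances they divide.

```python
from collections import Counter
from functools import reduce
from math import gcd

def repeat_distances(ciphertext, length=3):
    """Return the distances between consecutive occurrences of every
    repeated substring of the given length."""
    last_seen = {}
    distances = []
    for i in range(len(ciphertext) - length + 1):
        chunk = ciphertext[i:i + length]
        if chunk in last_seen:
            distances.append(i - last_seen[chunk])
        last_seen[chunk] = i
    return distances

def likely_key_lengths(distances, max_len=20):
    """Count how many of the observed distances each candidate key length
    divides; the true key length (or a divisor of it) should score highly."""
    scores = Counter()
    for k in range(2, max_len + 1):
        scores[k] = sum(1 for d in distances if d % k == 0)
    return scores.most_common(5)

ciphertext = "KIOVWXYZABKIOVCDEFGHKIOVMNPQRS"      # made-up example text
d = repeat_distances(ciphertext)
print(d)                                         # distances between repeated trigrams
print(reduce(gcd, d) if d else None)             # their greatest common divisor
print(likely_key_lengths(d))                     # top-scoring candidate key lengths
```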
Kasiski examination
Technology
1,279
1,123,902
https://en.wikipedia.org/wiki/Biosynthesis
Biosynthesis, i.e., chemical synthesis occurring in biological contexts, is a term most often referring to multi-step, enzyme-catalyzed processes where chemical substances absorbed as nutrients (or previously converted through biosynthesis) serve as enzyme substrates, with conversion by the living organism either into simpler or more complex products. Examples of biosynthetic pathways include those for the production of amino acids, lipid membrane components, and nucleotides, but also for the production of all classes of biological macromolecules, and of acetyl-coenzyme A, adenosine triphosphate, nicotinamide adenine dinucleotide and other key intermediate and transactional molecules needed for metabolism. Thus, in biosynthesis, any of an array of compounds, from simple to complex, are converted into other compounds, and so it includes both the catabolism and anabolism (breaking down and building up, respectively) of complex molecules (including macromolecules). Biosynthetic processes are often represented via charts of metabolic pathways. A particular biosynthetic pathway may be located within a single cellular organelle (e.g., mitochondrial fatty acid synthesis pathways), while others involve enzymes that are located across an array of cellular organelles and structures (e.g., the biosynthesis of glycosylated cell surface proteins). Elements of biosynthesis Elements of biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavourable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be, for example, metal ions or coenzymes; they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the simplest sense, the reactions that occur in biosynthesis have the following format: Reactant → Product (catalyzed by an enzyme) Some variations of this basic equation, which will be discussed later in more detail, are: Simple compounds which are converted into other compounds, usually as part of a multiple-step reaction pathway. Two examples of this type of reaction occur during the formation of nucleic acids and the charging of tRNA prior to translation. For some of these steps, chemical energy is required: Precursor molecule + ATP ⇌ product–AMP + PPi Simple compounds that are converted into other compounds with the assistance of cofactors. For example, the synthesis of phospholipids requires acetyl CoA, while the synthesis of another membrane component, sphingolipids, requires NADH and FADH for the formation of the sphingosine backbone. 
The general equation for these examples is: Precursor molecule + Cofactor → macromolecule (catalyzed by an enzyme) Simple compounds that join to create a macromolecule. For example, fatty acids join to form phospholipids. In turn, phospholipids and cholesterol interact noncovalently in order to form the lipid bilayer. This reaction may be depicted as follows: Molecule 1 + Molecule 2 → macromolecule Lipid Many intricate macromolecules are synthesized in a pattern of simple, repeated structures. For example, the simplest structures of lipids are fatty acids. Fatty acids are hydrocarbon derivatives; they contain a carboxyl group "head" and a hydrocarbon chain "tail". These fatty acids create larger components, which in turn incorporate noncovalent interactions to form the lipid bilayer. Fatty acid chains are found in two major components of membrane lipids: phospholipids and sphingolipids. A third major membrane component, cholesterol, does not contain these fatty acid units. Eukaryotic phospholipids The foundation of all biomembranes consists of a bilayer structure of phospholipids. The phospholipid molecule is amphipathic; it contains a hydrophilic polar head and a hydrophobic nonpolar tail. The phospholipid heads interact with each other and with aqueous media, while the hydrocarbon tails orient themselves in the center, away from water. These latter interactions drive the bilayer structure that acts as a barrier for ions and molecules. There are various types of phospholipids; consequently, their synthesis pathways differ. However, the first step in phospholipid synthesis involves the formation of phosphatidate or diacylglycerol 3-phosphate at the endoplasmic reticulum and outer mitochondrial membrane. The synthesis pathway is found below: The pathway starts with glycerol 3-phosphate, which gets converted to lysophosphatidate via the addition of a fatty acid chain provided by acyl coenzyme A. Then, lysophosphatidate is converted to phosphatidate via the addition of another fatty acid chain contributed by a second acyl CoA; all of these steps are catalyzed by the glycerol phosphate acyltransferase enzyme. Phospholipid synthesis continues in the endoplasmic reticulum, and the biosynthesis pathway diverges depending on the components of the particular phospholipid. Sphingolipids Like phospholipids, these fatty acid derivatives have a polar head and nonpolar tails. Unlike phospholipids, sphingolipids have a sphingosine backbone. Sphingolipids exist in eukaryotic cells and are particularly abundant in the central nervous system. For example, sphingomyelin is part of the myelin sheath of nerve fibers. Sphingolipids are formed from ceramides that consist of a fatty acid chain attached to the amino group of a sphingosine backbone. These ceramides are synthesized from the acylation of sphingosine. The biosynthetic pathway for sphingosine is found below: As the image denotes, during sphingosine synthesis, palmitoyl CoA and serine undergo a condensation reaction which results in the formation of 3-dehydrosphinganine. This product is then reduced to form dihydrosphingosine, which is converted to sphingosine via an FAD-dependent oxidation reaction. Cholesterol This lipid belongs to a class of molecules called sterols. Sterols have four fused rings and a hydroxyl group. Cholesterol is a particularly important molecule. Not only does it serve as a component of lipid membranes, it is also a precursor to several steroid hormones, including cortisol, testosterone, and estrogen. Cholesterol is synthesized from acetyl CoA. 
The pathway is shown below: More generally, this synthesis occurs in three stages, with the first stage taking place in the cytoplasm and the second and third stages occurring in the endoplasmic reticulum. The stages are as follows: 1. The synthesis of isopentenyl pyrophosphate, the "building block" of cholesterol 2. The formation of squalene via the condensation of six molecules of isopentenyl phosphate 3. The conversion of squalene into cholesterol via several enzymatic reactions Nucleotides The biosynthesis of nucleotides involves enzyme-catalyzed reactions that convert substrates into more complex products. Nucleotides are the building blocks of DNA and RNA. Nucleotides are composed of a five-membered ring formed from ribose sugar in RNA, and deoxyribose sugar in DNA; these sugars are linked to a purine or pyrimidine base with a glycosidic bond and a phosphate group at the 5' location of the sugar. Purine nucleotides The RNA nucleotides adenosine and guanosine consist of a purine base attached to a ribose sugar with a glycosidic bond. In the case of the DNA nucleotides deoxyadenosine and deoxyguanosine, the purine bases are attached to a deoxyribose sugar with a glycosidic bond. The purine bases on DNA and RNA nucleotides are synthesized in a twelve-step reaction mechanism present in most single-celled organisms. Higher eukaryotes employ a similar reaction mechanism in ten reaction steps. Purine bases are synthesized by converting phosphoribosyl pyrophosphate (PRPP) to inosine monophosphate (IMP), which is the first key intermediate in purine base biosynthesis. Further enzymatic modification of IMP produces the adenosine and guanosine bases of nucleotides. The first step in purine biosynthesis is a condensation reaction, performed by glutamine-PRPP amidotransferase. This enzyme transfers the amino group from glutamine to PRPP, forming 5-phosphoribosylamine. The following step requires the activation of glycine by the addition of a phosphate group from ATP. GAR synthetase performs the condensation of activated glycine onto PRPP, forming glycineamide ribonucleotide (GAR). GAR transformylase adds a formyl group onto the amino group of GAR, forming formylglycinamide ribonucleotide (FGAR). FGAR amidotransferase catalyzes the addition of a nitrogen group to FGAR, forming formylglycinamidine ribonucleotide (FGAM). FGAM cyclase catalyzes ring closure, which involves removal of a water molecule, forming the 5-membered imidazole ring 5-aminoimidazole ribonucleotide (AIR). N5-CAIR synthetase transfers a carboxyl group, forming the intermediate N5-carboxyaminoimidazole ribonucleotide (N5-CAIR). N5-CAIR mutase rearranges the carboxyl functional group and transfers it onto the imidazole ring, forming carboxyaminoimidazole ribonucleotide (CAIR). The two-step mechanism of CAIR formation from AIR is mostly found in single-celled organisms. Higher eukaryotes contain the enzyme AIR carboxylase, which transfers a carboxyl group directly to the AIR imidazole ring, forming CAIR. SAICAR synthetase forms a peptide bond between aspartate and the added carboxyl group of the imidazole ring, forming N-succinyl-5-aminoimidazole-4-carboxamide ribonucleotide (SAICAR). SAICAR lyase removes the carbon skeleton of the added aspartate, leaving the amino group and forming 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR). AICAR transformylase transfers a carbonyl group to AICAR, forming N-formylaminoimidazole-4-carboxamide ribonucleotide (FAICAR). 
The final step involves the enzyme IMP synthase, which performs the purine ring closure and forms the inosine monophosphate intermediate. Pyrimidine nucleotides Other DNA and RNA nucleotide bases that are linked to the ribose sugar via a glycosidic bond are thymine, cytosine and uracil (which is only found in RNA). Uridine monophosphate biosynthesis involves an enzyme that is located in the mitochondrial inner membrane and multifunctional enzymes that are located in the cytosol. The first step involves the enzyme carbamoyl phosphate synthase combining glutamine with CO2 in an ATP dependent reaction to form carbamoyl phosphate. Aspartate carbamoyltransferase condenses carbamoyl phosphate with aspartate to form uridosuccinate. Dihydroorotase performs ring closure, a reaction that loses water, to form dihydroorotate. Dihydroorotate dehydrogenase, located within the mitochondrial inner membrane, oxidizes dihydroorotate to orotate. Orotate phosphoribosyl hydrolase (OMP pyrophosphorylase) condenses orotate with PRPP to form orotidine-5'-phosphate. OMP decarboxylase catalyzes the conversion of orotidine-5'-phosphate to UMP. After the uridine nucleotide base is synthesized, the other bases, cytosine and thymine are synthesized. Cytosine biosynthesis is a two-step reaction which involves the conversion of UMP to UTP. Phosphate addition to UMP is catalyzed by a kinase enzyme. The enzyme CTP synthase catalyzes the next reaction step: the conversion of UTP to CTP by transferring an amino group from glutamine to uridine; this forms the cytosine base of CTP. The mechanism, which depicts the reaction UTP + ATP + glutamine ⇔ CTP + ADP + glutamate, is below: Cytosine is a nucleotide that is present in both DNA and RNA. However, uracil is only found in RNA. Therefore, after UTP is synthesized, it is must be converted into a deoxy form to be incorporated into DNA. This conversion involves the enzyme ribonucleoside triphosphate reductase. This reaction that removes the 2'-OH of the ribose sugar to generate deoxyribose is not affected by the bases attached to the sugar. This non-specificity allows ribonucleoside triphosphate reductase to convert all nucleotide triphosphates to deoxyribonucleotide by a similar mechanism. In contrast to uracil, thymine bases are found mostly in DNA, not RNA. Cells do not normally contain thymine bases that are linked to ribose sugars in RNA, thus indicating that cells only synthesize deoxyribose-linked thymine. The enzyme thymidylate synthetase is responsible for synthesizing thymine residues from dUMP to dTMP. This reaction transfers a methyl group onto the uracil base of dUMP to generate dTMP. The thymidylate synthase reaction, dUMP + 5,10-methylenetetrahydrofolate ⇔ dTMP + dihydrofolate, is shown to the right. DNA Although there are differences between eukaryotic and prokaryotic DNA synthesis, the following section denotes key characteristics of DNA replication shared by both organisms. DNA is composed of nucleotides that are joined by phosphodiester bonds. DNA synthesis, which takes place in the nucleus, is a semiconservative process, which means that the resulting DNA molecule contains an original strand from the parent structure and a new strand. DNA synthesis is catalyzed by a family of DNA polymerases that require four deoxynucleoside triphosphates, a template strand, and a primer with a free 3'OH in which to incorporate nucleotides. In order for DNA replication to occur, a replication fork is created by enzymes called helicases which unwind the DNA helix. 
Topoisomerases at the replication fork remove supercoils caused by DNA unwinding, and single-stranded DNA binding proteins maintain the two single-stranded DNA templates stabilized prior to replication. DNA synthesis is initiated by the RNA polymerase primase, which makes an RNA primer with a free 3'OH. This primer is attached to the single-stranded DNA template, and DNA polymerase elongates the chain by incorporating nucleotides; DNA polymerase also proofreads the newly synthesized DNA strand. During the polymerization reaction catalyzed by DNA polymerase, a nucleophilic attack occurs by the 3'OH of the growing chain on the innermost phosphorus atom of a deoxynucleoside triphosphate; this yields the formation of a phosphodiester bridge that attaches a new nucleotide and releases pyrophosphate. Two types of strands are created simultaneously during replication: the leading strand, which is synthesized continuously and grows towards the replication fork, and the lagging strand, which is made discontinuously in Okazaki fragments and grows away from the replication fork. Okazaki fragments are covalently joined by DNA ligase to form a continuous strand. Then, to complete DNA replication, RNA primers are removed, and the resulting gaps are replaced with DNA and joined via DNA ligase. Amino acids A protein is a polymer that is composed from amino acids that are linked by peptide bonds. There are more than 300 amino acids found in nature of which only twenty two, known as the proteinogenic amino acids, are the building blocks for protein. Only green plants and most microbes are able to synthesize all of the 20 standard amino acids that are needed by all living species. Mammals can only synthesize ten of the twenty standard amino acids. The other amino acids, valine, methionine, leucine, isoleucine, phenylalanine, lysine, threonine and tryptophan for adults and histidine, and arginine for babies are obtained through diet. Amino acid basic structure The general structure of the standard amino acids includes a primary amino group, a carboxyl group and the functional group attached to the α-carbon. The different amino acids are identified by the functional group. As a result of the three different groups attached to the α-carbon, amino acids are asymmetrical molecules. For all standard amino acids, except glycine, the α-carbon is a chiral center. In the case of glycine, the α-carbon has two hydrogen atoms, thus adding symmetry to this molecule. With the exception of proline, all of the amino acids found in life have the L-isoform conformation. Proline has a functional group on the α-carbon that forms a ring with the amino group. Nitrogen source One major step in amino acid biosynthesis involves incorporating a nitrogen group onto the α-carbon. In cells, there are two major pathways of incorporating nitrogen groups. One pathway involves the enzyme glutamine oxoglutarate aminotransferase (GOGAT) which removes the amide amino group of glutamine and transfers it onto 2-oxoglutarate, producing two glutamate molecules. In this catalysis reaction, glutamine serves as the nitrogen source. An image illustrating this reaction is found to the right. The other pathway for incorporating nitrogen onto the α-carbon of amino acids involves the enzyme glutamate dehydrogenase (GDH). GDH is able to transfer ammonia onto 2-oxoglutarate and form glutamate. Furthermore, the enzyme glutamine synthetase (GS) is able to transfer ammonia onto glutamate and synthesize glutamine, replenishing glutamine. 
The glutamate family of amino acids The glutamate family of amino acids includes the amino acids that derive from the amino acid glutamate. This family includes: glutamate, glutamine, proline, and arginine. This family also includes the amino acid lysine, which is derived from α-ketoglutarate. The biosynthesis of glutamate and glutamine is a key step in the nitrogen assimilation discussed above. The enzymes GOGAT and GDH catalyze the nitrogen assimilation reactions. In bacteria, the enzyme glutamate 5-kinase initiates the biosynthesis of proline by transferring a phosphate group from ATP onto glutamate. The next reaction is catalyzed by the enzyme pyrroline-5-carboxylate synthase (P5CS), which catalyzes the reduction of the ϒ-carboxyl group of L-glutamate 5-phosphate. This results in the formation of glutamate semialdehyde, which spontaneously cyclizes to pyrroline-5-carboxylate. Pyrroline-5-carboxylate is further reduced by the enzyme pyrroline-5-carboxylate reductase (P5CR) to yield a proline amino acid. In the first step of arginine biosynthesis in bacteria, glutamate is acetylated by transferring the acetyl group from acetyl-CoA at the N-α position; this prevents spontaneous cyclization. The enzyme N-acetylglutamate synthase (glutamate N-acetyltransferase) is responsible for catalyzing the acetylation step. Subsequent steps are catalyzed by the enzymes N-acetylglutamate kinase, N-acetyl-gamma-glutamyl-phosphate reductase, and acetylornithine/succinyldiamino pimelate aminotransferase and yield the N-acetyl-L-ornithine. The acetyl group of acetylornithine is removed by the enzyme acetylornithinase (AO) or ornithine acetyltransferase (OAT), and this yields ornithine. Then, the enzymes citrulline and argininosuccinate convert ornithine to arginine. There are two distinct lysine biosynthetic pathways: the diaminopimelic acid pathway and the α-aminoadipate pathway. The most common of the two synthetic pathways is the diaminopimelic acid pathway; it consists of several enzymatic reactions that add carbon groups to aspartate to yield lysine: Aspartate kinase initiates the diaminopimelic acid pathway by phosphorylating aspartate and producing aspartyl phosphate. Aspartate semialdehyde dehydrogenase catalyzes the NADPH-dependent reduction of aspartyl phosphate to yield aspartate semialdehyde. 4-hydroxy-tetrahydrodipicolinate synthase adds a pyruvate group to the β-aspartyl-4-semialdehyde, and a water molecule is removed. This causes cyclization and gives rise to (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate by NADPH to yield Δ'-piperideine-2,6-dicarboxylate (2,3,4,5-tetrahydrodipicolinate) and H2O. Tetrahydrodipicolinate acyltransferase catalyzes the acetylation reaction that results in ring opening and yields N-acetyl α-amino-ε-ketopimelate. N-succinyl-α-amino-ε-ketopimelate-glutamate aminotransaminase catalyzes the transamination reaction that removes the keto group of N-acetyl α-amino-ε-ketopimelate and replaces it with an amino group to yield N-succinyl-L-diaminopimelate. N-acyldiaminopimelate deacylase catalyzes the deacylation of N-succinyl-L-diaminopimelate to yield L,L-diaminopimelate. DAP epimerase catalyzes the conversion of L,L-diaminopimelate to the meso form of L,L-diaminopimelate. DAP decarboxylase catalyzes the removal of the carboxyl group, yielding L-lysine. 
The serine family of amino acids The serine family of amino acid includes: serine, cysteine, and glycine. Most microorganisms and plants obtain the sulfur for synthesizing methionine from the amino acid cysteine. Furthermore, the conversion of serine to glycine provides the carbons needed for the biosynthesis of the methionine and histidine. During serine biosynthesis, the enzyme phosphoglycerate dehydrogenase catalyzes the initial reaction that oxidizes 3-phospho-D-glycerate to yield 3-phosphonooxypyruvate. The following reaction is catalyzed by the enzyme phosphoserine aminotransferase, which transfers an amino group from glutamate onto 3-phosphonooxypyruvate to yield L-phosphoserine. The final step is catalyzed by the enzyme phosphoserine phosphatase, which dephosphorylates L-phosphoserine to yield L-serine. There are two known pathways for the biosynthesis of glycine. Organisms that use ethanol and acetate as the major carbon source utilize the glyconeogenic pathway to synthesize glycine. The other pathway of glycine biosynthesis is known as the glycolytic pathway. This pathway converts serine synthesized from the intermediates of glycolysis to glycine. In the glycolytic pathway, the enzyme serine hydroxymethyltransferase catalyzes the cleavage of serine to yield glycine and transfers the cleaved carbon group of serine onto tetrahydrofolate, forming 5,10-methylene-tetrahydrofolate. Cysteine biosynthesis is a two-step reaction that involves the incorporation of inorganic sulfur. In microorganisms and plants, the enzyme serine acetyltransferase catalyzes the transfer of acetyl group from acetyl-CoA onto L-serine to yield O-acetyl-L-serine. The following reaction step, catalyzed by the enzyme O-acetyl serine (thiol) lyase, replaces the acetyl group of O-acetyl-L-serine with sulfide to yield cysteine. The aspartate family of amino acids The aspartate family of amino acids includes: threonine, lysine, methionine, isoleucine, and aspartate. Lysine and isoleucine are considered part of the aspartate family even though part of their carbon skeleton is derived from pyruvate. In the case of methionine, the methyl carbon is derived from serine and the sulfur group, but in most organisms, it is derived from cysteine. The biosynthesis of aspartate is a one step reaction that is catalyzed by a single enzyme. The enzyme aspartate aminotransferase catalyzes the transfer of an amino group from aspartate onto α-ketoglutarate to yield glutamate and oxaloacetate. Asparagine is synthesized by an ATP-dependent addition of an amino group onto aspartate; asparagine synthetase catalyzes the addition of nitrogen from glutamine or soluble ammonia to aspartate to yield asparagine. The diaminopimelic acid biosynthetic pathway of lysine belongs to the aspartate family of amino acids. This pathway involves nine enzyme-catalyzed reactions that convert aspartate to lysine. Aspartate kinase catalyzes the initial step in the diaminopimelic acid pathway by transferring a phosphoryl from ATP onto the carboxylate group of aspartate, which yields aspartyl-β-phosphate. Aspartate-semialdehyde dehydrogenase catalyzes the reduction reaction by dephosphorylation of aspartyl-β-phosphate to yield aspartate-β-semialdehyde. Dihydrodipicolinate synthase catalyzes the condensation reaction of aspartate-β-semialdehyde with pyruvate to yield dihydrodipicolinic acid. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of dihydrodipicolinic acid to yield tetrahydrodipicolinic acid. 
Tetrahydrodipicolinate N-succinyltransferase catalyzes the transfer of a succinyl group from succinyl-CoA onto tetrahydrodipicolinic acid to yield N-succinyl-L-2,6-diaminoheptanedioate. N-succinyldiaminopimelate aminotransferase catalyzes the transfer of an amino group from glutamate onto N-succinyl-L-2,6-diaminoheptanedioate to yield N-succinyl-L,L-diaminopimelic acid. Succinyl-diaminopimelate desuccinylase catalyzes the removal of the succinyl group from N-succinyl-L,L-diaminopimelic acid to yield L,L-diaminopimelic acid. Diaminopimelate epimerase catalyzes the inversion of the α-carbon of L,L-diaminopimelic acid to yield meso-diaminopimelic acid. Diaminopimelate decarboxylase catalyzes the final step in lysine biosynthesis that removes the carbon dioxide group from meso-diaminopimelic acid to yield L-lysine. Proteins Protein synthesis occurs via a process called translation. During translation, genetic material called mRNA is read by ribosomes to generate a protein polypeptide chain. This process requires transfer RNA (tRNA), which serves as an adaptor by binding amino acids on one end and interacting with mRNA at the other end; the latter pairing between the tRNA and mRNA ensures that the correct amino acid is added to the chain. Protein synthesis occurs in three phases: initiation, elongation, and termination. Prokaryotic (archaeal and bacterial) translation differs from eukaryotic translation; however, this section will mostly focus on the commonalities between the two. Additional background Before translation can begin, the process of binding a specific amino acid to its corresponding tRNA must occur. This reaction, called tRNA charging, is catalyzed by aminoacyl tRNA synthetase. A specific tRNA synthetase is responsible for recognizing and charging a particular amino acid. Furthermore, this enzyme has special discriminator regions to ensure the correct binding between tRNA and its cognate amino acid. The first step for joining an amino acid to its corresponding tRNA is the formation of aminoacyl-AMP: Amino acid + ATP ⇌ aminoacyl-AMP + PPi This is followed by the transfer of the aminoacyl group from aminoacyl-AMP to a tRNA molecule. The resulting molecule is aminoacyl-tRNA: Aminoacyl-AMP + tRNA ⇌ aminoacyl-tRNA + AMP The combination of these two steps, both of which are catalyzed by aminoacyl tRNA synthetase, produces a charged tRNA that is ready to add amino acids to the growing polypeptide chain. In addition to binding an amino acid, tRNA has a three-nucleotide unit called an anticodon that base pairs with specific nucleotide triplets on the mRNA called codons; codons encode a specific amino acid. This interaction is possible thanks to the ribosome, which serves as the site for protein synthesis. The ribosome possesses three tRNA binding sites: the aminoacyl site (A site), the peptidyl site (P site), and the exit site (E site). There are numerous codons within an mRNA transcript, and it is very common for an amino acid to be specified by more than one codon; this phenomenon is called degeneracy. In all, there are 64 codons, 61 of which code for one of the 20 amino acids, while the remaining codons specify chain termination. Translation in steps As previously mentioned, translation occurs in three phases: initiation, elongation, and termination. Step 1: Initiation The completion of the initiation phase is dependent on the following three events: 1. The recruitment of the ribosome to mRNA 2. The binding of a charged initiator tRNA into the P site of the ribosome 3.
The proper alignment of the ribosome with mRNA's start codon Step 2: Elongation Following initiation, the polypeptide chain is extended via anticodon:codon interactions, with the ribosome adding amino acids to the polypeptide chain one at a time. The following steps must occur to ensure the correct addition of amino acids: 1. The binding of the correct tRNA into the A site of the ribosome 2. The formation of a peptide bond between the tRNA in the A site and the polypeptide chain attached to the tRNA in the P site 3. Translocation or advancement of the tRNA-mRNA complex by three nucleotides Translocation "kicks off" the tRNA at the E site and shifts the tRNA from the A site into the P site, leaving the A site free for an incoming tRNA to add another amino acid. Step 3: Termination The last stage of translation occurs when a stop codon enters the A site. Then, the following steps occur: 1. The recognition of the stop codon by release factors, which causes the hydrolysis of the polypeptide chain from the tRNA located in the P site 2. The release of the polypeptide chain 3. The dissociation and "recycling" of the ribosome for future translation processes Diseases associated with macromolecule deficiency Errors in biosynthetic pathways can have deleterious consequences including the malformation of macromolecules or the underproduction of functional molecules. Below are examples that illustrate the disruptions that occur due to these inefficiencies. Familial hypercholesterolemia: this disorder is characterized by the absence of functional receptors for LDL. Deficiencies in the formation of LDL receptors may cause faulty receptors which disrupt the endocytic pathway, inhibiting the entry of LDL into the liver and other cells. This causes a buildup of LDL in the blood plasma, which results in atherosclerotic plaques that narrow arteries and increase the risk of heart attacks. Lesch–Nyhan syndrome: this genetic disease is characterized by self-mutilation, mental deficiency, and gout. It is caused by the absence of hypoxanthine-guanine phosphoribosyltransferase, which is a necessary enzyme for purine nucleotide formation. The lack of this enzyme reduces the level of necessary nucleotides and causes the accumulation of biosynthesis intermediates, which results in the aforementioned unusual behavior. Severe combined immunodeficiency (SCID): SCID is characterized by a loss of T cells. Shortage of these immune system components increases the susceptibility to infectious agents because the affected individuals cannot develop immunological memory. This immunological disorder results from a deficiency in adenosine deaminase activity, which causes a buildup of dATP. These dATP molecules then inhibit ribonucleotide reductase, which prevents DNA synthesis. Huntington's disease: this neurological disease is caused by errors that occur during DNA synthesis. These errors or mutations lead to the expression of a mutant huntingtin protein, which contains repetitive glutamine residues that are encoded by expanding CAG trinucleotide repeats in the gene. Huntington's disease is characterized by neuronal loss and gliosis. Symptoms of the disease include: movement disorder, cognitive decline, and behavioral disorder. See also Lipids Phospholipid bilayer Nucleotides DNA DNA replication Proteinogenic amino acid Codon table Prostaglandin Porphyrins Chlorophylls and bacteriochlorophylls Vitamin B12 References Biochemical reactions Metabolism
Biosynthesis
Chemistry,Biology
8,011
12,665,211
https://en.wikipedia.org/wiki/Numbers%20%28spreadsheet%29
Numbers is a spreadsheet application developed by Apple Inc. as part of the iWork productivity suite alongside Keynote and Pages. Numbers is available for iOS and macOS High Sierra or newer. Numbers 1.0 on Mac OS X was announced on August 7, 2007, making it the newest application in the iWork suite. The iPad version was released on January 27, 2010. The app was later updated to support iPhone and iPod Touch. Numbers uses a free-form "canvas" approach that demotes tables to one of many different media types placed on a page. Other media, like charts, graphics, and text, are treated as peers. In comparison, traditional spreadsheets like Microsoft Excel use the table as the primary container, with other media placed within the table. Numbers also includes features from the seminal Lotus Improv, notably the use of formulas based on ranges rather than cells. However, it implements these using traditional spreadsheet concepts, as opposed to Improv's use of multidimensional databases. Numbers also includes numerous stylistic improvements to improve the visual appearance of spreadsheets. At its introductory demonstration, Steve Jobs pitched a more usable interface and better control over the appearance and presentation of tables of data. Description Basic model Numbers works in a fashion somewhat different from traditional spreadsheets like Microsoft Excel or Lotus 1-2-3. In the traditional model, the table is the first-class citizen of the system, acting as both the primary interface for work and as the container for other types of media like charts or digital images. In effect, the spreadsheet and the table are the same. In contrast, Numbers uses a separate "canvas" as its basic container object and tables are among the many objects that can be placed within the canvas. This difference is not simply a case of syntax. To provide a large workspace, conventional spreadsheets extend a table in X and Y to form a very large grid — ideally infinite, but normally limited to some smaller dimension. Some of these cells, selected by the user, hold data. Data is manipulated using formulas, which are placed in other cells in the same sheet and output their results back into the formula cell's display. The rest of the sheet is "sparse", and currently unused. Sheets often grow very complex with input data, intermediate values from formulas, and output areas, separated by blank areas. To manage this complexity, Excel allows one to hide data that is not of interest, often intermediate values. Quattro Pro commonly introduced the idea of multiple sheets in a single book, allowing further subdivision of the data; Excel implements this as a set of tabs along the bottom of the workbook. In contrast, Numbers does not have an underlying spreadsheet in the traditional sense but uses multiple individual tables for this purpose. Tables are an X and Y collection of cells, like a sheet, but extend only to the limits of the data they hold. Each section of data or output from formulas can be combined into an existing table or placed into a new table. Tables can be collected by the user onto single or multiple canvases. Whereas a typical Excel sheet has data strewn across it, a Numbers canvas could build the same output through smaller individual tables encompassing the same data. Formulas and functions Consider a simple spreadsheet used to calculate the average value of all car sales in a month for a given year. 
The sheet might contain the month number or name in column A, the number of cars sold in column B, and the total income in column C. The user wishes to complete the task of "calculating the average income per car sold by dividing the total income by the number of cars sold and putting the resulting average in column D". From the user's perspective, the values in the cells have semantic content, they are "cars sold" and "total income" and they want to manipulate this to produce an output value, "average price". In traditional spreadsheets, the semantic value of the numbers is lost. The number in cell B2 is not "the number of cars sold in January", but simply "the value in cell B2". The formula for calculating the average is based on the manipulation of the cells, in the form =C2/B2. As the spreadsheet is unaware of the user's desire for D to be an output column, the user copies that formula into all of the cells in D. However, as the formula refers to data on different rows, it must be modified as it is copied into the cells in D, changing it to refer to the correct row. For instance, the formula in D4 would read =C4/B4. Excel automates this later task by using a relative referencing system that works as long as the cells retain their location relative to the formula. However, this system requires Excel to track any changes to the layout of the sheet and adjust the formulas, a process that is far from foolproof. During the development of Improv, the Lotus team discovered that these sorts of formulas were both difficult to use and resistant to future changes in the spreadsheet layout. Their solution was to make the user explicitly define the semantic content of the sheets — that the B column contained "cars sold". These data ranges were known as "categories". Formulas were written by referring to these categories by name, creating a new category that could be (if desired) placed in the sheet for display. Using the car example, the formula in Improv would be average per car = total income / cars sold. Changes to the layout of the sheet would not affect the formulas; the data remains defined no matter where it is moved. It also meant that formulas calculating intermediate values did not have to be placed in the sheet and normally did not take up room. The downside to Improv's approach is that it demanded more information from the user up-front and was considered less suitable for "quick and dirty" calculations or basic list building. Numbers uses a hybrid approach to the creation of formulas, supporting the use of named data like Improv, but implementing them in-sheet like Excel. In basic operation, Numbers can be used just like Excel; data can be typed anywhere, and formulas can be created by referring to the data by its cell. However, if the user types a header into the table, something one normally does as a matter of course, Numbers uses this to automatically construct a named range for the cells on that row or column. For instance, if the user types "month" into A1 and then types the names "January", "February", etc. into the cells below it, Numbers constructs a named range for the cells A2 through A13 and gives it the name "month". The same is true when the user types in the figures for "sales" and "income". The user can then write the averaging formula in a category-like text format, = total income / cars sold. The formula will find the appropriate data and calculate the results independent of the row. 
Like Improv, this formula does not refer to the physical location of the data in the sheet, so the sheet can be dramatically modified without causing the formula to fail. Similar to Improv, formulas can be represented as icons in Numbers, allowing them to be dragged about the sheets. One noteworthy example of this is a sidebar that contains the sum, average, and other basic calculations for the current selection in the active table. These serve a function similar to the sum that appears at the bottom of the window in Microsoft Excel. However, the user can drag one of the function icons from the sidebar into the sheet to make the calculation appear in that location. In another nod to Improv, the Formula List shows all of the formulas in the spreadsheet in a separate area and allows edits in place or easy navigation to their use in the sheets. Numbers '09 contains 262 built-in functions that can be used in formulas. This contrasts with Excel 2007's 338 functions. Many of the functions in Numbers are identical to those in Excel; missing ones tend to be related to statistics, although this area was greatly improved in Numbers '09. Numbers '09 includes a system for categorizing data similar to pivot tables. Pivots were introduced in Improv and were manipulated by dragging the category headers, allowing the user to quickly rotate rows into columns or vice versa. Although Numbers has similar draggable objects representing formulas, they are not used for this feature and direct manipulation is missing. Instead, Numbers places pop-up menus in the column headers allowing the user to collapse multiple rows into totals (sums, averages, etc.) based on data that is common across rows. This is similar functionality to a pivot table but lacks the ease of re-arrangement of the Improv model and other advanced features. Numbers 5.2, released on September 17, 2018, further improves on these features by adding Smart Categories, allowing the user to "quickly organize and summarize tables to gain new insights". Pivot tables were later added to Numbers 11.2 on September 28, 2021. Layout and display As Numbers uses the canvas as the basis for the document, media is not tied to the tables; one could build a Numbers canvas that contains a collection of photographs but no tables. In typical use, one or more tables are placed on the canvas and sized and styled to show only the data of interest. Charts and labels are commonly positioned around the tables. Other media, like photographs or illustrations, can be added as well. Like other products in the iWork suite, Numbers includes a variety of styles and layouts designed by professional illustrators. Opening an Excel sheet in Numbers results in a display with smooth fonts, a clean layout, and color selections. These can then be modified, optionally using one of the supplied templates, and saved out to Excel format again with these styles intact. Numbers also allows sheets to be emailed in Excel format in a single step or shared through Numbers for iCloud. Other features Table-centric workflow enabling lists to be structured with headers and summaries. Checkbox, slider, and pulldown list cells. Drag-and-drop of functions from a sidebar into cells. A Print Preview which allows editing while previewing. Imports from Microsoft Excel. This lacks certain Excel features, including Visual Basic for Applications (also absent in the 2008 version of Office for Mac, although it was reintroduced for the 2011 version) and pivot tables (added in ver 11.2). Exports to Microsoft Excel. 
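A rough way to see why header-derived named ranges make formulas robust to layout changes is to model a table as columns keyed by their header text and evaluate a formula written against those names, rather than against cell addresses. The following is a hedged, minimal Python sketch (not Apple's implementation or file format); the table contents and the toy column_formula evaluator are invented purely for illustration.

```python
# Columns keyed by their header text, the way Numbers derives named ranges from a header row.
table = {
    "month":        ["January", "February", "March"],
    "cars sold":    [12, 9, 14],
    "total income": [264_000, 198_000, 322_000],
}

def column_formula(table, expression):
    """Evaluate a per-row formula written against column names,
    e.g. 'total income / cars sold', and return the resulting column."""
    names = sorted(table, key=len, reverse=True)        # substitute longer names first
    n_rows = len(next(iter(table.values())))
    result = []
    for row in range(n_rows):
        expr = expression
        for name in names:
            expr = expr.replace(name, repr(table[name][row]))
        result.append(eval(expr))                       # toy evaluator; fine for this sketch only
    return result

# The formula never mentions cell addresses, so reordering or moving rows cannot break it.
table["average per car"] = column_formula(table, "total income / cars sold")
print(table["average per car"])   # [22000.0, 22000.0, 23000.0]
```

Because the formula is written entirely in terms of the column names, it behaves like an Improv-style category formula while still being evaluated row by row, which is the hybrid behavior described above.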
Reception Numbers has been well received in the press, notably for its text-based formulas, clean look, and ease of use. Macworld has given it high marks, especially newer versions, awarding Numbers '09 four mice out of five. They did point out several common issues, especially problems exporting to Excel and the inability to "lock" cells to prevent them from moving when the table is scrolled. Numbers for the iPhone and iPad have received similar favorable reviews. However, version 3.0 of Numbers created an outpouring of complaints due to the loss of important business features, with the Apple support community showing a 10 to 1 ratio of dissatisfied users with the newer version of Numbers. Versions 4 and 5 of the software put many of these features back and added many new features and functionalities. In their review of Version 5, Macworld concluded that "Numbers 5 for Mac advances the app, making it more useful for more purposes with less effort, but it’s still a shadow of full-feature business spreadsheet programs." See also Comparison of spreadsheet software Microsoft Excel Keynote (presentation software) Pages (word processor) Notes References External links —official site Numbers—Free resources at iWork Community Spreadsheet software Spreadsheet software for macOS MacOS-only software made by Apple Inc. 2007 software IOS-based software made by Apple Inc.
Numbers (spreadsheet)
Mathematics
2,444
62,308,963
https://en.wikipedia.org/wiki/Split%20screen%20%28computing%29
Split screen is a display technique in computer graphics that consists of dividing graphics and/or text into non-overlapping adjacent parts, typically as two or four rectangular areas. This allows for the simultaneous presentation of (usually) related graphical and textual information on a computer display. TV sports adopted this presentation methodology in the 1960s for instant replay. Originally, non-dynamic split screens differed from windowing systems in that the latter allowed overlapping and freely movable parts of the screen (the "windows") to present both related and unrelated application data to the user. In contrast, the former were strictly limited to fixed positions. The split screen technique can also be used to run two instances of an application, potentially allowing another user to interact with the second instance. In video games The split screen feature is commonly used in non-networked, also known as couch co-op, video games with multiplayer options. In its most easily understood form, a split screen for a multiplayer video game is an audiovisual output device (usually a standard television for video game consoles) where the display has been divided into 2-4 equally sized areas (depending on number of players) so that the players can explore different areas simultaneously without being close to each other. This has historically been remarkably popular on consoles, which until the 2000s did not have access to the Internet or any other network, and is less common today with modern support for networked console-to-console multiplayer. In competitive split-screen games, it is customarily considered cheating to look at another player's screen section to gain an advantage. History Split screen gaming dates back to at least the 1970s, with games such as Drag Race (1977) from Kee Games in the arcades being presented in this format. It has always been a common feature of two or more player home console and computer games too, with notable titles being Kikstart II for 8-bit systems, a number of 16-bit racing games (such as Lotus Esprit Turbo Challenge and Road Rash II), and action/strategy games (such as ToeJam & Earl and Lemmings), all employing a vertical or horizontal screen split for two player games. Xenophobe is notable as a three-way split screen arcade title, although on home platforms it was reduced to one or two screens. The addition of four controller ports on home consoles also ushered in more four-way split screen games, with Mario Kart 64 and GoldenEye 007 on the Nintendo 64 being two well known examples. In arcades, machines tended to move towards having a whole screen for each player, or multiple connected machines, for multiplayer. On home machines, especially in the first and third person shooter genres, multiplayer is now more common over a network or the internet rather than locally with split screen. See also Multiplayer video game Screen tearing Split screen (video production) References Computer graphics User interface techniques Video game terminology
Split screen (computing)
Technology
590
37,759,555
https://en.wikipedia.org/wiki/International%20Radiation%20Protection%20Association
The International Radiation Protection Association (IRPA) is an independent non-profit association of national and regional radiation protection societies, and its mission is to advance radiation protection throughout the world. It is the international professional association for radiation protection. IRPA is recognized by the IAEA as a Non Governmental Organization (NGO) and is an observer on the IAEA Radiation Safety Standards Committee (RASSC). IRPA was formed on June 19, 1965, at a meeting in Los Angeles; stimulated by the desire of radiation protection professionals to have a world-wide body. Membership includes 50 Associate Societies covering 65 countries, totaling approximately 18,000 individual members. Structure The General Assembly, made up of representatives from the Associate Societies, is the representative body of the Association. It delegates authority to the Executive Council for the efficient administration of the affairs of the Association. Specific duties are carried out by IRPA Commissions, Committees, Task Groups and Working Groups: Commission on Publications Societies Admission and Development Committee International Congress Organising Committee International Congress Programme Committee Montreal Fund Committee Radiation Protection Strategy and Practice Committee Regional Congresses Co-ordinating Committee Rules Committee Sievert Award Committee Task Group on Security of Radioactive Sources Task Group on Public Understanding of Radiation Risk Working Group on Radiation Protection Certification and Qualification Associate societies The following is a list of the 50 Associate Societies (covering 65 countries): List of International Congresses The 2032 Congress (IRPA18) will be in Australia. The 2028 Congress (IRPA17) will be in Spain. Past Congresses IRPA 16 Orlando, July 2024 IRPA 15 Seoul, January 2021 IRPA 14 Cape Town, May 2016 IRPA 13 Glasgow, May 2012 IRPA 12 Buenos Aires, October 2008 IRPA 11 Madrid, May 2004 IRPA 10 Hiroshima, May 2000 IRPA 9 Vienna, April 1996 IRPA 8 Montreal, May 1992 IRPA 7 Sydney, April 1988 IRPA 6 Berlin, May 1984 IRPA 5 Jerusalem, March 1980 IRPA 4 Paris, April 1977 IRPA 3 Washington, September 1973 IRPA 2 Brighton, May 1970 IRPA 1 Rome, September 1966 International Cooperation IRPA maintains relations with many other international organizations in the field of radiation protection, such as those listed here. Inter-Governmental Organizations Non-Governmental Organizations Professional Organizations Awards Rolf M. Sievert Award Commencing with the 1973 IRPA Congress, each International Congress has been opened by the Sievert Lecture which is presented by the winner of the Sievert Award. This award is in honour of Rolf M. Sievert, a pioneer in radiation physics and radiation protection. The Sievert Award consists of a suitable scroll, certificate or parchment, containing the name of the recipient, the date it is presented, and an indication that the award honours the memory of Professor Rolf M. Sievert. The recipients of the Sievert Award are listed below: 1973 Prof. (Sweden), Radiation and Man Health Physics 31 (September), pp 265–272, 1976 1977 Prof. W.V. Mayneord (United Kingdom), The Time Factor in Carcinogenesis Health Physics 34 (April), pp 297–309, 1978 1980 Lauriston S. Taylor (USA), Some Nonscientific Influences on Radiation Protection Standards and Practice Health Physics 39 (December), pp 851–874, 1980 1984 Sir Edward Pochin (United Kingdom), Sieverts and Safety Health Physics 46(6), pp 1173–1179, 1984 1988 Prof. Dr. 
(Germany), Environmental Radioactivity and Man Health Physics 55(6), pp 845–853, 1988 1992 Dr. Giovanni Silini (Italy), Ethical Issues in Radiation Protection Health Physics 63(2), pp 139–148, 1992 1996 Dr. Daniel Beninson (Argentina), Risk of Radiation at Low Doses Health Physics 71(2), pp 122–125, 1996 2000 Prof. Dr. Itsuzo Shigematsu (Japan), Lessons from Atomic Bomb Survivors in Hiroshima and Nagasaki Health Physics 78(3), pp 234–241, 2000 2004 Dr. Abel J. Gonzalez (Argentina), Protecting Life against the Detrimental Effects Attributable to Radiation Exposure: Towards a Globally Harmonized Radiation Protection Regime Paper prepared for IRPA 2008 Prof. (Germany), Radiological Protection: Challenges and Fascination of Biological Research Stralenschutz Praxis 2009/2, pp 35–45, 2009 2012 Dr. Richard Osborne (Canada), A Story of T Lightly edited transcript of Dr. Osborne's lecture 2016 Dr. John Boice (USA), How to Protect the Public When you Can't Measure the Risk - The Role of Radiation Epidemiology 2020 Prof. Dr. Eliseo Vañó (Spain) See also Radioactivity Ionizing radiation Radiation protection International Atomic Energy Agency (IAEA) International Commission on Radiological Protection (ICRP) United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) International Commission on Non-Ionizing Radiation Protection (ICNIRP) Index of radiation articles References External links IRPA website Radiation Nuclear energy Nuclear organizations 1965 establishments in California Radiation protection
International Radiation Protection Association
Physics,Chemistry,Engineering
1,043
6,505,575
https://en.wikipedia.org/wiki/Milne%20model
The Milne model was a special-relativistic cosmological model of the universe proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations. Cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. Milne metric The Milne universe is a special case of a more general Friedmann–Lemaître–Robertson–Walker model (FLRW). The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure and cosmological constant all equal zero and the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend on the time coordinate linearly. Setting the spatial curvature and speed of light to unity, the metric for a Milne universe can be expressed with hyperspherical coordinates as ds² = dt² − t²(dχ² + sinh²χ dΩ²), where dΩ² is the metric for a two-sphere and sinh χ is the curvature-corrected radial component for negatively curved space; the radial coordinate χ varies between 0 and ∞. The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. Milne developed this model independently of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero energy density (ρ = 0) version of the FLRW metric to Milne's model implies that a full general relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time since the deceleration parameter is uniquely zero for such a model. Milne's density function Milne proposed that the universe's density changes in time because of an initial outward explosion of matter. Milne's model assumes an inhomogeneous density function which is Lorentz invariant (around the event t=x=y=z=0). When rendered graphically Milne's density distribution shows a three-dimensional spherical Lobachevskian pattern with outer edges moving outward at the speed of light. Every inertial body perceives itself to be at the center of the explosion of matter (see observable universe), and sees the local universe as homogeneous and isotropic in the sense of the cosmological principle. In order to be consistent with general relativity, the universe's density must be negligible in comparison to the critical density at all times for which the Milne model is taken to apply. Notes References Milne Cosmology: Why I Keep Talking About It - a detailed non-technical introduction to the Milne model A thorough historical and theoretical study of the British Tradition in Cosmology, and one long celebration of Milne. Obsolete theories in physics Exact solutions in general relativity Minkowski spacetime 1935 in science
Milne model
Physics,Mathematics
675
43,431,530
https://en.wikipedia.org/wiki/Truscon%20Laboratories
Truscon Laboratories was a research and development chemical laboratory of the Trussed Concrete Steel Company ("Truscon") of Detroit, Michigan. It made waterproofing liquid chemical products that went into or on cement and plaster. The products' goal was to provide damp-proofing and waterproofing finishes for concrete and Truscon steel to guard against the disintegrating action of water and air. Description of water resistant products From Truscon Laboratories' viewpoint, waterproofing covered the methods and means of protecting underground construction such as foundations and footings. It also pertained to structures intended for retaining water, like water tanks, and for containing water under hydrostatic conditions, as in water pipes, tunnels, reservoirs, and cisterns. Damp-proofing was considered the methods of keeping dampness out of the main part of concrete buildings. It involves the methods of treating exposed walls above ground level to avoid the entrance of moisture into the building. These definitions then qualified their various products as servicing particular needs. From the initial idea of protecting against water damage developed the science of integral waterproofing—the introduction of some element into the wet cement during the process of making, causing a high degree of impermeability and imperviousness. Water in masonry does harm structurally because of its solvent properties and because it expands when frozen, breaking concrete. Water is absorbed into concrete walls through capillary action, much as a sponge absorbs water. A wet wall produces damp and clammy conditions that promote and spread disease. While structural damp-proofing and waterproofing to prevent decay was a motive of the Truscon Laboratories chemicals, the side benefit was that they provided better hygienic conditions. Some of the products developed for damp-proofing were Por-Seal, Stone-Tex, Stone-Backing, and Plaster Bond. Water Proofing Paste, an ingredient used in the making of stucco cement and plaster, was developed for waterproofing. Another waterproofing product was named Water Proofed Cement Stucco. Surface protection products Truscon Laboratories waterproofing protection products were for residential housing, apartment buildings, office buildings, hotels, hospitals, and manufacturing plants. They involved enamels and interior finishes. The products were coatings to provide a dustless waterproof washable surface for cement floors and walls. The Truscon Laboratories slogan was Waterproof is Weatherproof. Some of the brand names of these specialized products were Asepticote, Sno-Wite, Industrial Enamel, Hospital Enamel, Dairy Enamel, Floor Finish, Edelweiss and Alkali-Proof Wall Size. Its Asepticote waterproofing product was used in houses, hospitals, and hotels for its eye-soothing finish. Truscon's "Waterproofing Paste" was an integral part of cement and used in floors, plaster, and stucco to waterproof walls and floors. "Industrial White" was used in the interiors of mills and factories because of its white brightness qualities. Truscon's "Granatex Floor Varnish", a stain-resistant product, was a transparent waterproofing agent that was used on concrete and wood floors. Truscon's Agatex, a popular product, was a chemical that hardened cement floors. It was a liquid that was applied on the surface of concrete like a paint or varnish. The product interacted chemically with concrete and its dust to resemble agate, resulting in a hardened dustless surface when dry.
The company claimed that the resultant treated surface was so hard that it would ring under a hammer like an anvil. Truscon's "Stone-Tex" product was used on all kinds of masonry building exterior walls as a water-resistant protective coating. It was used on concrete blocks, cement walls, stucco and brick. It was also used to make buildings more attractive. Some of the thousands of Truscon Laboratories product users were The Cincinnati Enquirer, American Tobacco Company, R. J. Reynolds Tobacco Company, Atlantic Petroleum, Haynes Automobile Company, Pennsylvania Railroad, E. W. Bliss Company, United States Military Academy, United States Marine Barracks, United States Shipping Board, Ferry-Morse Seed Company, Lowell Mills, Arlington Mills, H. J. Heinz Company, Dow Chemical laboratories, Winterhaven Citrus Growers' Association, Savage Arms, Curtiss Aeroplane and Motor Company, Liggett-Myers Tobacco Company and Sinclair Oil Corporation. The St. Johns County School District used them on the Fullerwood Grade School building. Buildings that used Truscon's products were the Packard automobile factory plant building number 10, Highland Park Ford Plant, Fisher Building, Fisher Body, Frederick Stearns Building, Youth's Companion Building, Midland Packing Building, Majestic Theater (Detroit, Michigan), Beech-Nut, Waco High School, Kalamazoo Paper, Detroit Crosstown Garage, Pennsylvania Rubber Company building, Minneapolis High School, Detroit Athletic Club, Detroit News building, and Milton Bradley building. Iron and steel protection products Truscon Laboratories iron and steel protection products were for priming structural steel in structures such as iron industrial building frames, factories, bridges, viaducts, stacks, and boilers. They were also used with brewing coils, ice-making coils, fireproofing, and acid-proofing. These paint-on products were waterproofing and rust-preventing agents. Many of these products went under the brand name of Bar-Ox and were given numbers that related to specific applications. Examples were Bar-Ox No. 7 for coating on exposed structural steel, Bar-Ox No. 14 for brine and condenser pipes, Bar-Ox No. 21 for stack enamel and boiler front enamel, Bar-Ox No. 28 for acid-proofing, Bar-Ox No. 35 for guarding against alkaline conditions, Bar-Ox No. 42 for conduit coating, and Bar-Ox No. 49 for gas holder tanks. Labor relations The Detroit operation's employees were organized by District 50 of the United Mine Workers. See also Hy-Rib Julius Kahn Kahn System Albert Kahn Associates References Bibliography Further reading Truscon permanent buildings standardized for general industries External links Video on "Engineering Industrial Architecture: Albert Kahn and the Trussed Concrete Steel Company" Michigan Historical Collections, Bentley Historical Library, University of Michigan, Albert Kahn Papers, 1896–2011 Laboratories in the United States Chemistry laboratories
Truscon Laboratories
Chemistry
1,285
23,506,671
https://en.wikipedia.org/wiki/Plasma%20membrane%20monoamine%20transporter
The plasma membrane monoamine transporter (PMAT) is a low-affinity monoamine transporter protein which in humans is encoded by the SLC29A4 gene. It is known alternatively as the human equilibrative nucleoside transporter-4 (hENT4). It was discovered in 2004 and has been identified as a potential alternate target for treating various conditions. Structure and function The plasma membrane monoamine transporter is an integral membrane protein that transports the monoamine neurotransmitters (serotonin, dopamine, norepinephrine), as well as adenosine, from synaptic spaces into presynaptic neurons or neighboring glial cells. It is abundantly expressed in the human brain, heart tissue, and skeletal muscle, as well as in the kidneys, liver, and small intestine. It is relatively insensitive to the high affinity inhibitors (such as SSRIs) of the SLC6A monoamine transporters (SERT, DAT, NET), as well as being only weakly sensitive to the adenosine transport inhibitor, dipyridamole. PMAT is especially prevalent in dendrites with dense monoaminergic input, and has a significant impact on synaptic clearance of monoamines, especially under non-homeostatic conditions. PMAT transport is electrogenic, utilizing the naturally negative interior of the cells to attract the cationic monoamines, thereby increasing its Vmax (without changing affinity) with increasingly negative membrane potentials. PMAT preferentially transports 5-HT and DA, with a transport efficiency comparable to SERT and DAT, but with a lower Km. PMAT and similar transporters like OCT3 are commonly referred to as uptake2 transporters. Uptake2 transport refers to the transport of biogenic amines through low affinity, high-capacity transporters. At low pH (5.5-6.5 range, as occurs under ischemic conditions), its transport efficiency increases for all substrates, whereas at high pH (>8) transport is blocked. Unlike other members of the ENT family, it is impermeable to most nucleosides, with the exception of the inhibitory neurotransmitter and ribonucleoside adenosine, to which it is permeable in a highly pH-dependent manner. In addition to transporting neurotransmitters at synapses, PMAT plays a key role in neurotoxin and drug removal from the cerebrospinal fluid. It is also likely to play a key role in histamine clearance from synapses, specifically through astrocytes. PMAT has 530 amino acid residues with a predicted molecular weight of 58 kDa, 11 transmembrane segments, an extracellular C-terminus, and an intracellular N-terminus. It has several phosphorylation sites and a potential glycosylation site, and its first 6 transmembrane domains are suspected to be important for substrate recognition. It is not homologous to other known monoamine transporters, such as the high-affinity SERT, DAT, and NET, or the low-affinity SLC22A OCT family. It was initially identified by a search of the draft human genome database through its sequence homology to ENTs (equilibrative nucleoside transporters). Clinical significance Common SSRIs have been shown to inhibit PMAT uptake, but at far greater concentrations than those required to inhibit SERT. Residual uptake due to incomplete inhibition of PMAT may contribute to SSRI treatment resistance. Mouse models with specific constitutive genetic deficiencies in PMAT have demonstrated behavioral changes relative to wild type, including upon anti-depressant administration. PMAT was demonstrated to be differentially expressed in juvenile versus adult mice.
This differential expression coincided with decreased SSRI efficacy, and an anti-depressant-like effect of the PMAT inhibitor decynium-22, suggesting a tentative mechanism for treatment-resistant depression in human adolescents and children. Parkinson's disease states may be affected by PMAT activity at the synapse, due to its higher affinity for dopamine. In seeking to treat Parkinson's through increasing synaptic dopamine concentrations, it is possible that PMAT inhibition along with standard DAT inhibition could lead to better treatment outcomes with more complete blockage of uptake. PMAT is expressed within the apical membranes of enterocytes in the small intestine. Gene variants affecting the expression of PMAT have been demonstrated to increase the occurrence of GI disturbance side effects with administration of metformin, the most common type II diabetes medication. Inhibitors No highly selective PMAT inhibitors are yet available, but a number of existing compounds have been found to act as weak inhibitors of this transporter, with the exception of decynium-22, which is more potent. These compounds include: Luteolin Cimetidine Decynium-22 Dipyridamole Quinidine Quinine Tryptamine Verapamil Fluoxetine Sertraline Citalopram Fluvoxamine Paroxetine Lopinavir shows promising results as a newly discovered selective PMAT inhibitor. Substrates Acetylcholine (poor) Adenosine (at low pH) Dopamine Epinephrine Histamine (poor) Metformin (poor, pH-dependent) MPP+ Norepinephrine Serotonin Ritonavir See also Extraneuronal monoamine transporter (EMT) References Membrane proteins Neurotransmitter transporters Solute carrier family Molecular neuroscience
Plasma membrane monoamine transporter
Chemistry,Biology
1,168
214,124
https://en.wikipedia.org/wiki/Umbral%20calculus
The term umbral calculus has two related but distinct meanings. In mathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to prove them. These techniques were introduced in 1861 by John Blissard and are sometimes called Blissard's symbolic method. They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively. The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus". History In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing; however, his attempt at making this kind of argument logically rigorous was unsuccessful. The combinatorialist John Riordan, in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively. In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences, but may encompass systematic correspondence techniques of the calculus of finite differences. 19th-century umbral calculus The method is a notational procedure used for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty. An example involves the Bernoulli polynomials. Consider, for example, the ordinary binomial expansion (which contains a binomial coefficient), (x + y)^n = Σ_{k=0}^{n} C(n, k) x^k y^{n−k}, and the remarkably similar-looking relation on the Bernoulli polynomials, B_n(x + y) = Σ_{k=0}^{n} C(n, k) B_k(x) y^{n−k}. Compare also the ordinary derivative, d/dx x^n = n x^{n−1}, to a very similar-looking relation on the Bernoulli polynomials, B_n′(x) = n B_{n−1}(x). These similarities allow one to construct umbral proofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent, B_n(x) = Σ_{k=0}^{n} C(n, k) b^{n−k} x^k = (b + x)^n, and then differentiating, one gets the desired result: B_n′(x) = n (b + x)^{n−1} = n B_{n−1}(x). In the above, the variable b is an "umbra" (Latin for shadow). See also Faulhaber's formula. Umbral Taylor series In differential calculus, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. That is, a real or complex-valued function f(x) that is analytic at a point a can be written as f(x) = Σ_{k=0}^{∞} f^{(k)}(a)/k! · (x − a)^k. Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences of a polynomial function f, f(x) = Σ_{k=0}^{∞} (Δ^k f)(a)/k! · (x − a)_k, where (x − a)_k is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as the Newton series or Newton's forward difference expansion. The analogy to Taylor's expansion is utilized in the calculus of finite differences.
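The Newton forward-difference expansion described above can be checked numerically. The following is a minimal Python sketch (an addition, not from the source): it computes unit-step forward differences of a sample cubic polynomial and verifies that the finite Newton series reproduces the function exactly, mirroring the role the Taylor series plays in differential calculus. The choice of polynomial and evaluation point is arbitrary and purely illustrative.

```python
from math import factorial

def f(t):
    """Sample polynomial; the finite Newton series below is exact for any cubic."""
    return t**3 - 2*t + 1

def forward_differences(func, a, order):
    """Unit-step forward differences [f(a), Δf(a), Δ²f(a), ..., Δ^order f(a)]."""
    vals = [func(a + i) for i in range(order + 1)]
    diffs = [vals[0]]
    while len(vals) > 1:
        vals = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
        diffs.append(vals[0])
    return diffs

def falling_factorial(x, k):
    """(x)_k = x(x-1)...(x-k+1), the falling sequential product used above."""
    out = 1.0
    for i in range(k):
        out *= (x - i)
    return out

def newton_series(func, a, x, order):
    """Newton forward-difference expansion: sum of Δ^k f(a)/k! * (x - a)_k."""
    diffs = forward_differences(func, a, order)
    return sum(d * falling_factorial(x - a, k) / factorial(k) for k, d in enumerate(diffs))

print(newton_series(f, a=0, x=2.5, order=3))   # 11.625
print(f(2.5))                                  # 11.625; the two agree exactly for a cubic
```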
Modern umbral calculus Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in z defined by Then, using the definition of the Bernoulli polynomials and the definition and linearity of L, one can write This enables one to replace occurrences of by , that is, move the n from a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that: Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets. In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by When polynomial sequences replace sequences of numbers as images of yn under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term. A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence. Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of the cumulants. See also Bernoulli umbra Umbral composition of polynomial sequences Calculus of finite differences Pidduck polynomials Symbolic method in invariant theory Narumi polynomials Notes References G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus," Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975. . Reprinted by Dover, 2005. External links Roman, S. (1982), The Theory of the Umbral Calculus, I Combinatorics Polynomials Finite differences
Umbral calculus
Mathematics
1,125
857,780
https://en.wikipedia.org/wiki/Entropic%20uncertainty
In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations. In 1957, Hirschman considered a function f and its Fourier transform g such that where the "≈" indicates convergence in 2, and normalized so that (by Plancherel's theorem), He showed that for any such functions the sum of the Shannon entropies is non-negative, A tighter bound, was conjectured by Hirschman and Everett, proven in 1975 by W. Beckner and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. The equality holds in the case of Gaussian distributions. Note, however, that the above entropic uncertainty function is distinctly different from the quantum Von Neumann entropy represented in phase space. Sketch of proof The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.) From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, , where , which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited. Babenko–Beckner inequality The (q, p)-norm of the Fourier transform is defined to be where   and In 1961, Babenko found this norm for even integer values of q. Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is Thus we have the Babenko–Beckner inequality that Rényi entropy bound From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived. Let so that and , we have Squaring both sides and taking the logarithm, we get We can rewrite the condition on as Assume , then we multiply both sides by the negative to get Rearranging terms yields an inequality in terms of the sum of Rényi entropies, Right-hand side Shannon entropy bound Taking the limit of this last inequality as and the substitutions yields the less general Shannon entropy inequality, valid for any base of logarithm, as long as we choose an appropriate unit of information, bit, nat, etc. The constant will be different, though, for a different normalization of the Fourier transform, (such as is usually used in physics, with normalizations chosen so that ħ=1 ), i.e., In this case, the dilation of the Fourier transform absolute squared by a factor of 2 simply adds log(2) to its entropy. Entropy versus variance bounds The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function on the real line, Shannon's entropy inequality specifies: where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution. 
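The entropy-variance statement above can be illustrated numerically. The sketch below is an addition, not from the article: it uses the standard form of Shannon's entropy inequality for differential entropy in nats, H ≤ (1/2)·ln(2πeV), and checks it for a unit Gaussian (where it is saturated) and a Laplace density (where it is strict). The integration grid and the two example densities are arbitrary choices made for illustration.

```python
import numpy as np

def differential_entropy(pdf, x):
    """Numerical differential entropy in nats: -integral of p(x) * ln p(x)."""
    p = pdf(x)
    safe_p = np.where(p > 0, p, 1.0)     # zero-density points contribute 0 to the integrand
    return -np.trapz(p * np.log(safe_p), x)

def variance(pdf, x):
    mean = np.trapz(x * pdf(x), x)
    return np.trapz((x - mean) ** 2 * pdf(x), x)

x = np.linspace(-30, 30, 200_001)
gaussian = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
laplace  = lambda x: 0.5 * np.exp(-np.abs(x))

for name, pdf in [("gaussian", gaussian), ("laplace", laplace)]:
    H, V = differential_entropy(pdf, x), variance(pdf, x)
    bound = 0.5 * np.log(2 * np.pi * np.e * V)
    print(f"{name:8s}: H = {H:.4f} <= 0.5*ln(2*pi*e*V) = {bound:.4f}")
```

Running this shows equality (to numerical precision) for the Gaussian and a strict inequality for the Laplace density, consistent with the statement that the bound is saturated only by the normal distribution.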
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, enabling the latter to be tighter than the former. That is (for ħ=1), exponentiating the Hirschman inequality and using Shannon's expression above, Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure. Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure. To formalize this distinction, we say that two probability density functions and are equimeasurable if where is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance, (all having the same entropy.) See also Inequalities in information theory Logarithmic Schrödinger equation Uncertainty principle Riesz–Thorin theorem Fourier transform References Further reading Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. (2016). "One-parameter class of uncertainty relations based on entropy power". Phys. Rev. E 93 (6): 060104(R). doi:10.1103/PhysRevE.93.060104. arXiv:math/0605510v1 Quantum mechanical entropy Information theory Physical quantities Inequalities
Entropic uncertainty
Physics,Mathematics,Technology,Engineering
1,253
41,706
https://en.wikipedia.org/wiki/Signal-to-noise%20ratio
Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. SNR is an important parameter that affects the performance and quality of systems that process or transmit signals, such as communication systems, audio systems, radar systems, imaging systems, and data acquisition systems. A high SNR means that the signal is clear and easy to detect or interpret, while a low SNR means that the signal is corrupted or obscured by noise and may be difficult to distinguish or recover. SNR can be improved by various methods, such as increasing the signal strength, reducing the noise level, filtering out unwanted noise, or using error correction techniques. SNR also determines the maximum possible amount of data that can be transmitted reliably over a given channel, which depends on its bandwidth and SNR. This relationship is described by the Shannon–Hartley theorem, which is a fundamental law of information theory. SNR can be calculated using different formulas depending on how the signal and noise are measured and defined. The most common way to express SNR is in decibels, which is a logarithmic scale that makes it easier to compare large or small values. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. Definition One definition of signal-to-noise ratio is the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): where is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. The signal-to-noise ratio of a random variable () to random noise is: where E refers to the expected value, which in this case is the mean square of . If the signal is simply a constant value of , this equation simplifies to: If the noise has expected value of zero, as is common, the denominator is its variance, the square of its standard deviation . The signal and the noise must be measured the same way, for example as voltages across the same impedance. Their root mean squares can alternatively be used according to: where is root mean square (RMS) amplitude (for example, RMS voltage). Decibels Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as and In a similar manner, SNR may be expressed in decibels as Using the definition of SNR Using the quotient rule for logarithms Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: In the above formula, P is measured in units of power, such as watts (W) or milliwatts (mW), and the signal-to-noise ratio is a pure number. However, when the signal and noise are measured in volts (V) or amperes (A), which are measures of amplitude, they must first be squared to obtain a quantity proportional to power, as shown below: Dynamic range The concepts of signal-to-noise ratio and dynamic range are closely related. 
Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'. Difference from conventional power In physics, the average power of an AC signal is defined as the average value of voltage times current; for resistive (non-reactive) circuits, where voltage and current are in phase, this is equivalent to the product of the rms voltage and current: $P = V_\mathrm{rms} I_\mathrm{rms} = \frac{V_\mathrm{rms}^2}{R}$. But in signal processing and communication, one usually assumes that $R = 1\,\Omega$, so that this factor is usually not included while measuring power or energy of a signal. This may cause some confusion among readers, but the resistance factor is not significant for typical operations performed in signal processing, or for computing power ratios. For most cases, the power of a signal would be considered to be simply $P = V_\mathrm{rms}^2$. Alternative definition An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement: $\mathrm{SNR} = \frac{\mu}{\sigma}$, where $\mu$ is the signal mean or expected value and $\sigma$ is the standard deviation of the noise, or an estimate thereof. Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance), and it is only an approximation since, in general, $E[S^2] \ne (E[S])^2 = \mu^2$. It is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above, in which case it is equivalent to the more common definition: $\mathrm{SNR} = \frac{\mu^2}{\sigma^2}$. This definition is closely related to the sensitivity index or d′, when assuming that the signal has two states separated by signal amplitude $\mu$, and the noise standard deviation $\sigma$ does not change between the two states. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features with certainty. An SNR less than 5 means less than 100% certainty in identifying image details. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging). Related measures are the "contrast ratio" and the "contrast-to-noise ratio". Modulation system measurements Amplitude modulation Channel signal-to-noise ratio is given by $(\mathrm{SNR})_\mathrm{C,AM} = \frac{A_C^2\,(1 + k_a^2 P)}{2 W N_0}$, where W is the bandwidth and $k_a$ is the modulation index (here $A_C$ is the carrier amplitude, $P$ the power of the message signal, and $N_0$ the noise power spectral density). Output signal-to-noise ratio (of AM receiver) is given by $(\mathrm{SNR})_\mathrm{O,AM} = \frac{A_C^2 k_a^2 P}{2 W N_0}$. Frequency modulation Channel signal-to-noise ratio is given by $(\mathrm{SNR})_\mathrm{C,FM} = \frac{A_C^2}{2 W N_0}$. Output signal-to-noise ratio is given by $(\mathrm{SNR})_\mathrm{O,FM} = \frac{3 A_C^2 k_f^2 P}{2 N_0 W^3}$, where $k_f$ is the frequency sensitivity. Noise reduction All real measurements are disturbed by noise.
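The mean-over-standard-deviation definition used in image processing is straightforward to sketch in code. The snippet below is a minimal illustration in Python with NumPy; the synthetic 64×64 patch, its brightness of 100, and its noise level of 10 are assumptions made for this example only.

```python
import numpy as np

def image_snr(patch):
    """SNR of an image region: mean pixel value divided by the standard deviation
    of the pixel values (the reciprocal coefficient of variation). Only meaningful
    for non-negative data such as photon counts or luminance."""
    patch = np.asarray(patch, dtype=float)
    return patch.mean() / patch.std()

# Synthetic flat patch of brightness 100 with additive Gaussian noise of sigma = 10.
rng = np.random.default_rng(0)
patch = 100 + 10 * rng.standard_normal((64, 64))
print(image_snr(patch))   # close to 10, comfortably above the Rose criterion of 5
```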
This includes electronic noise, but can also include external events that affect the measured phenomenon — wind, vibrations, the gravitational attraction of the Moon, variations of temperature, variations of humidity, etc., depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Internal electronic noise of measurement systems can be reduced through the use of low-noise amplifiers. When the characteristics of the noise are known and are different from the signal, it is possible to use a filter to reduce the noise. For example, a lock-in amplifier can extract a narrow bandwidth signal from broadband noise a million times stronger. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples. Digital signals When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/N0, the energy per bit per noise power spectral density. The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Fixed point For n-bit integers with equal distance between quantization levels (uniform quantization) the dynamic range (DR) is also determined. Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio $2^n/1$. The formula is then: $\mathrm{DR_{dB}} = \mathrm{SNR_{dB}} = 20\log_{10}(2^n) \approx 6.02 \cdot n\ \mathrm{dB}$. This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB. Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately $\mathrm{SNR_{dB}} \approx 20\log_{10}\!\left(2^n \sqrt{3/2}\right) \approx 6.02 \cdot n + 1.76\ \mathrm{dB}$. Floating point Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n−m bits in the mantissa and m bits in the exponent: $\mathrm{DR_{dB}} = 6.02 \cdot 2^m$ and $\mathrm{SNR_{dB}} = 6.02 \cdot (n - m)$. The dynamic range is much larger than fixed-point but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m.
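The fixed-point rule of thumb can be checked numerically. The sketch below, in Python with NumPy, quantizes a full-scale sine with an assumed mid-rise uniform quantizer and compares the measured SNR with 6.02·n + 1.76 dB; the quantizer construction, the signal frequency, and the sample count are assumptions of this illustration, not part of the article.

```python
import numpy as np

def sine_quantization_snr_db(n_bits, num_samples=1_000_000):
    """Estimate the SNR of a full-scale sine passed through an n-bit uniform
    (mid-rise) quantizer, returned alongside the 6.02*n + 1.76 dB approximation."""
    x = np.sin(2 * np.pi * 0.1234567 * np.arange(num_samples))  # full-scale sine in [-1, 1]
    levels = 2 ** n_bits
    step = 2.0 / levels
    idx = np.clip(np.floor(x / step), -(levels // 2), levels // 2 - 1)
    xq = (idx + 0.5) * step        # reconstruction levels of the quantizer
    noise = xq - x                 # quantization error, the "additive noise" of the model
    measured_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
    return measured_db, 6.02 * n_bits + 1.76

print(sine_quantization_snr_db(16))   # both values come out near 98 dB
```

For 16 bits the simulated value and the rule of thumb agree to within a fraction of a decibel, which is why "16-bit audio has a dynamic range of 96 dB" (and roughly 98 dB SNR for a full-scale sine) is a reasonable shorthand.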
The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. Optical signals Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The influence of the noise on the resulting signal then depends mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even if the signal of 40 GBit DPSK would not fit in this bandwidth. OSNR is measured with an optical spectrum analyzer. Types and abbreviations Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio. Other uses While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. The term is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as noise that interferes with the signal of appropriate discussion. SNR can also be applied in marketing and to how business professionals manage information overload. Managing a healthy signal to noise ratio can help business executives improve their KPIs (Key Performance Indicators). Similar concepts The signal-to-noise ratio is similar to Cohen's d, given by the difference of estimated means divided by the standard deviation of the data, and is related to the test statistic in the t-test. See also Audio system measurements Generation loss Matched filter Near–far problem Noise margin Omega ratio Pareidolia Peak signal-to-noise ratio Signal-to-noise statistic Signal-to-interference-plus-noise ratio SINAD SINADR Subjective video quality Total harmonic distortion Video quality Notes References External links ADC and DAC Glossary – Maxim Integrated Products Understand SINAD, ENOB, SNR, THD, THD + N, and SFDR so you don't get lost in the noise floor – Analog Devices The Relationship of dynamic range to data word size in digital audio processing Calculation of signal-to-noise ratio, noise voltage, and noise level Learning by simulations – a simulation showing the improvement of the SNR by time averaging Dynamic Performance Testing of Digital Audio D/A Converters Fundamental theorem of analog circuits: a minimum level of power must be dissipated to maintain a level of SNR Interactive webdemo of visualization of SNR in a QAM constellation diagram Institute of Telecommunications, University of Stuttgart Quantization Noise Widrow & Kollár Quantization book page with sample chapters and additional material Signal-to-noise ratio online audio demonstrator - Virtual Communications Lab Engineering ratios Error measures Measurement Electrical parameters Audio amplifier specifications Noise (electronics) Statistical ratios Acoustics Sound
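To show how the 0.1 nm reference bandwidth enters an OSNR figure, here is a minimal sketch in Python with NumPy; the 1550 nm centre wavelength and the noise power spectral density used in the example are assumptions for illustration, not values from the article.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def osnr_db(p_signal_w, noise_psd_w_per_hz,
            center_wavelength_m=1550e-9, ref_bandwidth_m=0.1e-9):
    """OSNR in dB: signal power divided by the noise power falling inside the
    reference optical bandwidth (0.1 nm by convention, converted to hertz)."""
    ref_bandwidth_hz = C * ref_bandwidth_m / center_wavelength_m ** 2  # ~12.5 GHz at 1550 nm
    p_noise_w = noise_psd_w_per_hz * ref_bandwidth_hz
    return 10 * np.log10(p_signal_w / p_noise_w)

# Illustrative numbers only: 1 mW of signal over an ASE noise density of 8e-18 W/Hz.
print(osnr_db(1e-3, 8e-18))   # roughly 40 dB referenced to 0.1 nm
```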
Signal-to-noise ratio
Physics,Mathematics,Engineering
2,852
24,508,689
https://en.wikipedia.org/wiki/Gymnopilus%20perplexus
Gymnopilus perplexus is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus perplexus at Index Fungorum perplexus Fungi of North America Fungus species
Gymnopilus perplexus
Biology
56
47,385,109
https://en.wikipedia.org/wiki/Kepler-296
Kepler-296 is a binary star system in the constellation Draco. The primary star appears to be a late K-type main-sequence star, while the secondary is a red dwarf. Planetary system Five exoplanets have been detected around the system; all are believed to be orbiting the primary star rather than its dimmer companion. Two planets in particular, Kepler-296e and Kepler-296f, are likely located in the habitable zone. For the planetary system to remain stable, no additional giant planets can exist within an orbital radius of 10.1 AU. See also Habitability of red dwarf systems List of potentially habitable exoplanets References External links Kepler-296 - Open Exoplanet Catalogue Draco (constellation) M-type main-sequence stars Binary stars J19060960+4926143 Planetary systems with five confirmed planets K-type main-sequence stars
Kepler-296
Astronomy
186
19,206,276
https://en.wikipedia.org/wiki/Abell%2039
Abell 39 (PN A66 39) is a low surface brightness planetary nebula in the constellation of Hercules. It is the 39th entry in George Abell's 1966 Abell Catalog of Planetary Nebulae (and 27th in his 1955 catalog) of 86 old planetary nebulae which either Abell or Albert George Wilson discovered before August 1955 as part of the National Geographic Society - Palomar Observatory Sky Survey. It is estimated to be about 3,800 light-years from Earth and thus 2,600 light-years above the Galactic plane. It is almost perfectly spherical and also one of the largest known spheres with a radius of about 1.4 light-years. Central star Its central star is slightly west of center by about 2 arcseconds, or 0.1 light-years. This offset does not appear to be due to interaction with the interstellar medium, but instead, it is hypothesized that a small asymmetric mass ejection has accelerated the central star. The mass of the central star is estimated to be about with the material in the planetary nebula comprising an additional . This planetary nebula has a nearly uniform spherical shell. However, the eastern limb of the nebula is 50% more luminous than the western limb. Additionally, irregularities in the surface brightness are seen across the face of the shell. The source of the east–west asymmetry is not known but it could be related to the offset of the central star. The central star is classified as a subdwarf O star with surface temperature of 8900 K. Structure and composition The bright rim of the planetary nebula has an average thickness of about 10.1 arcseconds, or an absolute size of about 0.19 light-years. There is a faint halo that extends about 18 arcseconds beyond the bright rim, giving a complete diameter of around 190 arcseconds under the assumption that this emission is uniform around the planetary nebula. This planetary nebula has been expanding for an estimated 11,000 years, based on an assumed expansion velocity between 32 and 37 km/s and a 0.4 parsec radius. Background galaxies are visible near the nebula, and some can be seen through the translucent nebula. Oxygen is only about half as abundant in the nebula as it is in our own Sun. References External links George Jacoby et al. created a spectacular image of Abell 39 in 1997. Planetary nebulae Hercules (constellation) 39
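As a quick arithmetic check of the kinematic age quoted above, dividing the 0.4 parsec radius by the assumed 32 to 37 km/s expansion velocity brackets the roughly 11,000-year figure. The short sketch below (Python; the unit-conversion constants are standard values and the variable names are just for this illustration) performs that division.

```python
PC_IN_KM = 3.0857e13        # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7  # seconds per year

radius_km = 0.4 * PC_IN_KM
for v_km_s in (32, 37):
    age_years = radius_km / v_km_s / SECONDS_PER_YEAR
    print(v_km_s, round(age_years))   # about 12,200 and 10,600 years, bracketing ~11,000
```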
Abell 39
Astronomy
479