Dataset columns (per-record types and observed ranges):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
5,699,083
https://en.wikipedia.org/wiki/Survival%20Under%20Atomic%20Attack
Survival Under Atomic Attack was the title of an official United States government booklet released in 1951 by the Executive Office of the President, the National Security Resources Board (document 130), and the Civil Defense Office. Released at the onset of the Cold War era, the pamphlet was in line with rising fears that the Soviet Union would launch a nuclear attack against the United States, and outlined what to do in the event of an atomic attack. The booklet introduced the general public to the effects of nuclear weapons and was aimed at calming the fears surrounding them. Survival Under Atomic Attack was the first entry in a series of government publications and communications that employed the strategy of "emotion management" in order to neutralize the horrifying aspects of nuclear weapons. Purpose Published in 1950 by the Government Printing Office, one year after the Soviet Union detonated their first atomic bomb, the booklet explains how to protect oneself, one's food and water supply, and one's home. It also covers how to prevent burns and what to do if exposed to radiation. The U.S. Strategic Bombing Survey had assessed the civilian response in Hiroshima and Nagasaki beginning as early as August–September 1945 and its report was "Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved...". Secondly, the Atomic Bomb Casualty Commission was active from 1946 to 1975 studying the effects of the two bombs on survivors in both cities and thus represented four years of post-bombing study at the time of publication. Center Insert The four pages in the center of the brochure (15, 16, 17, 18) were designed to be torn out. "Remove this sheet and keep it with you until you've memorized it." Kill the Myths (15) Atomic Weapons Will Not Destroy The Earth Atomic bombs hold more death and destruction than man ever before has wrapped up in a single package, but their over-all power still has very definite limits. Not even hydrogen bombs will blow the earth apart or kill us all by radioactivity. Doubling Bomb Power Does Not Double Destruction Modern A-bombs can cause heavy damage 2 miles away, but doubling their power would extend that range only to 2.5 miles. To stretch the damage range from 2 to 4 miles would require a weapon more than 8 times the rated power of present models. Radioactivity Is Not The Bomb's Greatest Threat In most atom raids, blast and heat are by far the greatest dangers that people must face. Radioactivity alone would account for only a small percentage of all human deaths and injuries, except in underground or underwater explosions. Radiation Sickness Is Not Always Fatal In small amounts, radioactivity seldom is harmful. Even when serious radiation sickness follows a heavy dosage, there is still a good chance for recovery. Six Survival Secrets For Atomic Attacks (16, 17) Always Put First Things First And (16) 1. Try To Get Shielded If you have time, get down in a basement or subway. Should you unexpectedly be caught out-of-doors, seek shelter alongside a building, or jump in any handy ditch or gutter. 2. Drop Flat On Ground Or Floor To keep from being tossed about and to lessen the chances of being struck by falling and flying objects, flatten out at the base of a wall, or at the bottom of a bank. 3. Bury Your Face In Your Arms When you drop flat, hide your eyes in the crook of your elbow. 
That will protect your face from flash burns, prevent temporary blindness and keep flying objects out of your eyes. Never Lose Your Head And (17) 4. Don't Rush Outside Right After A Bombing After an air burst, wait a few minutes then go help to fight fires. After other kinds of bursts wait at least 1 hour to give lingering radiation some chance to die down. 5. Don't Take Chances With Food Or Water In Open Containers To prevent radioactive poisoning or disease, select your food and water with care. When there is reason to believe they may be contaminated, stick to canned and bottled things if possible. 6. Don't Start Rumors In the confusion that follows a bombing, a single rumor might touch off a panic that could cost your life. Five Keys To Household Safety (18) 1. Strive For "Fireproof Housekeeping" Don't let trash pile up, and keep waste paper in covered containers. When an alert sounds, do all you can to eliminate sparks by shutting off the oil burner and covering all open flames. 2. Know Your Own Home Know which is the safest part of your cellar, learn how to turn off your oil burner and what to do about utilities. 3. Have Emergency Equipment And Supplies Handy Always have a good flashlight, a radio, first-aid equipment and a supply of canned goods in the house. 4. Close All Windows And Doors And Draw The Blinds If you have time when an alert sounds, close the house up tight in order to keep out fire sparks and radioactive dusts and to lessen the chances of being cut by flying glass. Keep the house closed until all danger is past. 5. Use the Telephone Only For True Emergencies Do not use the phone unless absolutely necessary. Leave the lines open for real emergency traffic. See also List of books about nuclear issues Continuity of government Duck and Cover (film) Fallout Protection Nuclear warfare Protect and Survive Survivalism United States Civil Defense Nuclear War Survival Skills References External links Survival under Atomic Attack, (PDF-3 Mb). 1951, Reprint by City of Boston, Department of Civil Defense via us.archive.org Shelter from Atomic Attack in Existing Buildings, 1952, archive.org Ten for Survival : Survive Nuclear Attack, 1961, archive.org 1950 non-fiction books Disaster preparedness in the United States Publications of the United States government Works about the Cold War Nuclear warfare Books about nuclear issues Cold War history of the United States United States civil defense
Survival Under Atomic Attack
[ "Chemistry" ]
1,197
[ "Radioactivity", "Nuclear warfare" ]
5,699,217
https://en.wikipedia.org/wiki/Diffusion%20creep
Diffusion creep refers to the deformation of crystalline solids by the diffusion of vacancies through their crystal lattice. Diffusion creep results in plastic deformation rather than brittle failure of the material. Diffusion creep is more sensitive to temperature than other deformation mechanisms. It becomes especially relevant at high homologous temperatures (i.e. within about a tenth of its absolute melting temperature). Diffusion creep is caused by the migration of crystalline defects through the lattice of a crystal such that when a crystal is subjected to a greater degree of compression in one direction relative to another, defects migrate to the crystal faces along the direction of compression, causing a net mass transfer that shortens the crystal in the direction of maximum compression. The migration of defects is in part due to vacancies, whose migration is equal to a net mass transport in the opposite direction. Principle Crystalline materials are never perfect on a microscale. Some sites of atoms in the crystal lattice can be occupied by point defects, such as "alien" particles or vacancies. Vacancies can actually be thought of as chemical species themselves (or part of a compound species/component) that may then be treated using heterogeneous phase equilibria. The number of vacancies may also be influenced by the number of chemical impurities in the crystal lattice, if such impurities require the formation of vacancies to exist in the lattice. A vacancy can move through the crystal structure when the neighbouring particle "jumps" into the vacancy, so that the vacancy moves in effect one site in the crystal lattice. Chemical bonds need to be broken and new bonds have to be formed during the process, therefore a certain activation energy is needed. Moving a vacancy through a crystal therefore becomes easier when the temperature is higher. The most stable state will be when all vacancies are evenly spread through the crystal. This principle follows from Fick's law: $J_x = -D_x \frac{\partial C}{\partial x}$, in which $J_x$ stands for the flux ("flow") of vacancies in direction x; $D_x$ is a constant for the material in that direction and $\frac{\partial C}{\partial x}$ is the difference in concentration of vacancies in that direction. The law is valid for all principal directions in (x, y, z)-space, so the x in the formula can be exchanged for y or z. The result will be that the vacancies become evenly distributed over the crystal, which will result in the highest mixing entropy. When a mechanical stress is applied to the crystal, new vacancies will be created at the sides perpendicular to the direction of the lowest principal stress. The vacancies will start moving in the direction of crystal planes perpendicular to the maximal stress. Current theory holds that the elastic strain in the neighborhood of a defect is smaller toward the axis of greatest differential compression, creating a defect chemical potential gradient (depending upon lattice strain) within the crystal that leads to net accumulation of defects at the faces of maximum compression by diffusion. A flow of vacancies is the same as a flow of particles in the opposite direction. This means a crystalline material can deform under a differential stress, by the flow of vacancies. Highly mobile chemical components substituting for other species in the lattice can also cause a net differential mass transfer (i.e. segregation) of chemical species inside the crystal itself, often promoting shortening of the rheologically more difficult substance and enhancing deformation. 
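To make the even-spreading argument concrete, here is a minimal numerical sketch (not from the article) of Fick's law driving a one-dimensional vacancy concentration toward uniformity; the grid size, diffusivity and step sizes are arbitrary illustrative choices.

```python
# Minimal sketch of Fick's law J = -D dC/dx smoothing out a 1-D vacancy
# concentration profile; all numbers here are illustrative assumptions
# (D*dt/dx**2 = 0.4 keeps the explicit scheme numerically stable).
import numpy as np

n, D, dx, dt = 20, 0.4, 1.0, 1.0
C = np.zeros(n); C[:5] = 1.0            # vacancies piled up at one end

for _ in range(10_000):
    J = -D * np.diff(C) / dx            # flux between neighbouring cells
    C[:-1] -= dt / dx * J               # vacancies leaving each cell...
    C[1:]  += dt / dx * J               # ...arriving in the next one

print(C.round(3))                       # ~0.25 everywhere: evenly spread out
```

The final profile approaches the uniform distribution described in the text, the state of highest mixing entropy.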
Types of diffusion creep Diffusion of vacancies through a crystal can happen in a number of ways. When vacancies move through the crystal (in the material sciences often called a "grain"), this is called Nabarro–Herring creep. Another way in which vacancies can move is along the grain boundaries, a mechanism called Coble creep. When a crystal deforms by diffusion creep to accommodate space problems from simultaneous grain boundary sliding (the movement of whole grains along grain boundaries) this is called granular or superplastic flow. Diffusion creep can also be simultaneous with pressure solution. Pressure solution is, like Coble creep, a mechanism in which material moves along grain boundaries. While in Coble creep the particles move by "dry" diffusion, in pressure solution they move in solution. Flow laws Each plastic deformation mechanism of a material can be described by a formula in which the strain rate ($\dot\varepsilon$) depends on the differential stress (σ or σD), the grain size (d) and an activation value in the form of an Arrhenius equation: $\dot\varepsilon = A \frac{\sigma^n}{d^m} e^{-\frac{Q}{RT}}$, in which A is the constant of diffusion, Q the activation energy of the mechanism, R the gas constant and T the absolute temperature (in kelvins). The exponents n and m are values for the sensitivity of the flow to stress and grain size respectively. The values of A, Q, n and m are different for each deformation mechanism. For diffusion creep, the value of n is usually around 1. The value for m can vary between 2 (Nabarro–Herring creep) and 3 (Coble creep). That means Coble creep is more sensitive to the grain size of a material: materials with larger grains can deform less easily by Coble creep than materials with small grains. Traces of diffusion creep It is difficult to find clear microscale evidence for diffusion creep in a crystalline material, since few structures have been identified as definite proof. A material that was deformed by diffusion creep can have flattened grains (grains with a so-called shape-preferred orientation or SPO). Equidimensional grains with no lattice-preferred orientation (or LPO) can be an indication for superplastic flow. In materials that were deformed under very high temperatures, lobate grain boundaries may be taken as evidence for diffusion creep. Diffusion creep is a mechanism by which the volume of the crystals can increase. Larger grain sizes can be a sign that diffusion creep was more effective in a crystalline material. See also Creep (deformation) Deformation (engineering) Diffusion Dislocation creep Material sciences References Literature Gower, R.J.W. & Simpson, C.; 1992: Phase boundary mobility in naturally deformed, high-grade quartzofeldspathic rocks: evidence for diffusion creep, Journal of Structural Geology 14, p. 301-314. Passchier, C.W. & Trouw, R.A.J., 1998: Microtectonics, Springer. Twiss, R.J. & Moores, E.M., 2000 (6th edition): Structural Geology, W.H. Freeman & Co. Materials degradation
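As a quick numerical illustration of the flow law above (with made-up parameter values, not measured data), the grain-size exponent m controls how strongly refining the grains accelerates creep:

```python
# Sketch of the diffusion-creep flow law reconstructed above:
#   strain_rate = A * sigma**n / d**m * exp(-Q / (R*T))
# All parameter values below are invented for illustration only.
from math import exp

R = 8.314  # gas constant, J/(mol*K)

def strain_rate(sigma, d, T, A=1.0e-8, Q=3.0e5, n=1, m=2):
    """Arrhenius-type creep law; n ~ 1 for diffusion creep,
    m = 2 (Nabarro-Herring) or m = 3 (Coble)."""
    return A * sigma**n / d**m * exp(-Q / (R * T))

# Halving the grain size speeds up Coble creep (m=3) more than
# Nabarro-Herring creep (m=2), as the text states:
for m in (2, 3):
    ratio = strain_rate(1e6, 0.5e-3, 1400, m=m) / strain_rate(1e6, 1e-3, 1400, m=m)
    print(f"m={m}: halving grain size multiplies strain rate by {ratio:.0f}")
```

The ratio is simply 2**m (a factor of 4 versus 8), which is why fine-grained materials favor Coble creep.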
Diffusion creep
[ "Materials_science", "Engineering" ]
1,324
[ "Materials degradation", "Materials science" ]
5,699,437
https://en.wikipedia.org/wiki/9K52%20Luna-M
The 9K52 Luna-M (NATO reporting name: Frog-7) is a Soviet short-range artillery rocket system which fires unguided and spin-stabilized 9M21 rockets. It was originally developed in the 1960s to provide divisional artillery support using tactical nuclear weapons but gradually modified for conventional use. The 9K52 was succeeded by the OTR-21 Tochka. Description Originally called the 3R-11 and 9R11, the 9M21 is a solid-fuel rocket, with four off-angle vernier chambers immediately behind the warhead section. When the main engine section ignites, the verniers activate to start spinning the rocket, to improve stability and accuracy. At range, the 9M21 has a nominal CEP (circular error probable) of 400 meters. Western intelligence estimated that its CEP at maximum range was 500 to 700 meters. Russian sources admit the likely impact point could fall anywhere within an area 2.8 kilometers in depth from range error, and 1.8 kilometers in width in azimuth error. The initial 3R-11 rocket, known also by its military designation R-65 (NATO: Frog-7A), measures 8,900 mm in length. It was replaced in 1968 with an improved R-70 (NATO: Frog-7B) which measures 9,400 mm. This new variant allows for switching warhead sections and the addition of air brakes at the rear of the rocket, lowering the minimum range to . The rocket is mounted on a transporter erector launcher (TEL), designated 9P113. Based on the ZIL-135LM 8x8 truck, it features a large hydraulic crane to allow faster reloading. The 9T29 transporter, also based on the ZIL-135RTM chassis, can carry up to three 9M21 rockets. In addition to its inaccuracy, the fact that the rocket was exposed to the weather was another drawback to the system, particularly when equipped with temperature-sensitive nuclear ordnance. In the early 1960s, the Soviets experimented with a modified 9P113 launch vehicle with a fully enclosed superstructure and launch roof. This did not solve the issue entirely, necessitating the development of the Tochka. Operation In Soviet service, the Luna-M was organized into battalions to provide divisions with rocket artillery support. Each battalion was organized with a headquarters battery and two firing batteries. Total complement included 20 officers, 160 enlisted personnel, four 9P113 launchers and, on average, seven rockets per launcher. The headquarters battery numbered about 80 personnel and provided the battalion with command and logistical support. Vehicles included four 9T29 transporter vehicles, a 9T31M1 crane vehicle (Ural-375D), an RM-1 maintenance complex (3 ZIL-157s), an RVD-1 optical maintenance vehicle (Ural-375D) and a PKPP maintenance/check vehicle (ZIL-131). Each firing battery was organized with a headquarters, a meteorological section, a survey section, and two firing sections. The headquarters included a 9S445M command vehicle: a GAZ-66 truck with an attached shelter containing a fire control computer, radios and telephones. The meteorological section operated the RVS-1 Malakhit and an RMS-1 meteorological radar in the 1970s. They later upgraded to an RMS-1 End Tray radar, supported by an auxiliary power unit, each towed by a GAZ-66. The survey section used a GAZ-69TM/TMG/TMG-2, GAZ-66T or UAZ-452T for launch site preparation. Each firing section consisted of a single 9P113. Preparing the launcher to fire could take anywhere from 15 to 30 minutes. Launch sites were generally located 20 to 25 kilometers behind the front line. 
It was the longest-ranged artillery system available to a division commander and typically reserved for special missions. Because the rocket's inaccuracy at long range made the use of conventional warheads insufficient, barring a large and vital target, the system was more useful deploying specialized warheads. History In October 1962, a number of Luna missiles, and 12 compatible 2-kiloton nuclear warheads, were deployed with Soviet forces in Cuba during the missile crisis. The Luna was later extensively deployed throughout some Warsaw Pact countries and other Soviet allies. The rocket has been widely exported, and is now in the possession of a large number of countries. North Korea may have produced a small number of the rockets domestically under the name Hwasong-3 in the 1970s. Afghanistan In 1985, the Soviet Army started deploying Frog-7B systems armed with high explosive and cluster warheads against villages as part of an effort to deny food supplies to the Afghan mujahideen. In 1989, it was revealed that the Soviet Union supplied the Afghan Army with Frog-7 launchers and expired Frog-7A and Frog-7B rounds from Soviet stockpiles to replace the 1,000-odd Scud-Bs delivered and fired against the rebel forces. Syria In its first use in combat, Syrian forces fired a Frog-7 barrage at Galilee on 7 October and 8 October 1973, during the Yom Kippur War. Although aimed at Israeli air bases such as Ramat David, the rockets struck several Israeli settlements. These unintended attacks on civilians gave Israel an excuse to launch a sustained air campaign inside Syria itself. Starting in 2012, during the Syrian Civil War, the Syrian Army fired several Frog-7 rockets against areas under the control of different insurgent formations. Iraq Iraq made intensive use of Frog-7 rockets in the war with Iran (1980–1988). After the war with Iran, Iraq modified its remaining stock of 9M21s by extending their range to 100 kilometres, improving their precision by installing a gyroscope, and fitting submunition-carrying warheads, under a project code-named al-Laith. On 21 February 1991, during Operation Desert Storm, Senegalese troops were hit hard by a Laith-90. Eight Senegalese soldiers were wounded in action and one vehicle disabled as a result. During the 2003 invasion of Iraq, the tactical operations center (TOC) of the 2nd Brigade, U.S. 3rd Infantry Division, under Col. David Perkins, was targeted and struck by either an Iraqi Frog-7 rocket or an Ababil-100 surface-to-surface missile, killing three soldiers and two embedded journalists. Another 14 soldiers were injured, and 22 vehicles destroyed or seriously damaged, most of them Humvees. Yugoslavia In the Yugoslav Wars, Serb forces launched Frog-7 rockets at a number of Croatian positions, such as Orašje, a Croatian military stronghold on the outskirts of Županja, on 2 December 1992, where several civilians were killed, and the military airport at Zagreb, on 11 September 1993, while the battle of the Medak Pocket was still going on. Between April and October 1992, Bosnian Serb forces fired 14 Frog-7 rockets at the Croatian city of Slavonski Brod, during Operation Corridor 92. Libya RAF jets targeted and destroyed Frog-7 launchers operated by pro-Gaddafi forces south of Sirte, in the 2011 Libyan civil war. Variants 9M21B Nuclear-armed variant, fitted with one of three warheads. The original AA-22 has a variable yield of 3, 10 and 20 kilotons. The AA-38 is an improved version with the same three settings. The AA-52 has four yields of 5, 10, 20 and 200 kilotons. 
9M21E Cluster munition variant fitted with a 9N18E dispenser warhead carrying shaped charge dual-purpose submunitions. 9M21F Standard variant fitted with a 9N18F high explosive/fragmentation warhead. 9M21Kh Chemical weapon variant, the 436 kg 9N18Kh warhead is fitted with a VT fuze and carries 216 kg of VX nerve agent. Laith (also Laith-90) Iraqi version with increased range (100 km), improved accuracy and submunition warhead suitable for attacking troop concentrations. Ra'ad Iraqi version with increased range (100 km), improved accuracy and submunition warhead suitable for attacking vehicle or infantry columns. Fateh Iraqi version with range increased to 150 km, improved accuracy and submunition warhead. PV-65 Training rocket. Operators Current − 9 as of 2024 − 24 Frog-3, Frog-5, and Frog-7 as of 2024 Former − Frog-7A and Frog-7B − 36 in 2011 − Between 65 and 70 launchers − 36 in 1990 − 18 in 1990 – entered service in 1975 – bought in 1977. Captured by the Iraqi Army during the Gulf War Lebanese Forces − 2 launchers − 40 in 2011 – 49 launchers, operated between 1966 and 2001 − Kept in reserve storage as late as 2011 – 12 launchers bought from the Soviet Union in 1979 − 18 in 2011 − 50 in 2011 − 12 in 2011 − 8 in 1990 See also Fajr-5 Falaq-2 T-122 Sakarya TOROS Bibliography References External links FAS – Military Analysis Network Profile of the Frog 7 from The Whirlwind War a publication of the United States Army Center of Military History Rocket artillery Unguided nuclear rockets of the Soviet Union Cold War weapons of the Soviet Union Chemical weapon delivery systems Moscow Institute of Thermal Technology products Military equipment introduced in the 1960s
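For a rough feel of what the CEP figures quoted in the description above mean, one can use the textbook circular-normal idealization of impact dispersion, under which the probability of landing within radius r is 1 − 2^(−(r/CEP)²); this is a generic statistical sketch, not a claim about the 9M21's actual error distribution.

```python
# Hit probability within radius r for a circular-normal error pattern,
# where CEP is by definition the 50% radius (textbook idealization).
CEP = 400.0  # metres; the nominal figure quoted in the description

for r in (400, 800, 1200):
    p = 1 - 2 ** (-(r / CEP) ** 2)
    print(f"P(impact within {r:>4} m) = {p:.3f}")
# 400 m -> 0.500 by definition of CEP; 800 m -> ~0.938; 1200 m -> ~0.998
```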
9K52 Luna-M
[ "Chemistry" ]
1,967
[ "Chemical weapon delivery systems", "Chemical weapons" ]
13,653,181
https://en.wikipedia.org/wiki/Rewrite%20%28programming%29
A rewrite in computer programming is the act or result of re-implementing a large portion of existing functionality without re-use of its source code. When the rewrite uses no existing code at all, it is common to speak of a rewrite from scratch. Motivations A piece of software is typically rewritten when one or more of the following apply: its source code is not available or is only available under an incompatible license its code cannot be adapted to a new target platform its existing code has become too difficult to handle and extend the task of debugging it seems too complicated the programmer finds it difficult to understand its source code developers learn new techniques or wish to do a big feature overhaul which requires much change the programming language of the source code has to be changed Risks Several software engineers, such as Joel Spolsky, have warned against total rewrites, especially under schedule constraints or competitive pressures. While developers may initially welcome the chance to correct historical design mistakes, a rewrite also discards those parts of the design that work as required. A rewrite commits the development team to deliver not just new features, but all those that exist in the previous code, while potentially introducing new bugs or regressions of previously fixed bugs. A rewrite also interferes with the tracking of unfixed bugs in the old version. The incremental rewrite is an alternative approach, in which developers gradually replace the existing code with calls into a new implementation, expanding that implementation until it fully replaces the old one. This approach avoids a broad loss of functionality during the rewrite. Cleanroom software engineering is another approach, which requires the team to work from an exhaustive written specification of the software's functionality, without access to its code. Examples Netscape's project to improve HTML layout in Navigator 4 has been cited as an example of a failed rewrite. The new layout engine (Gecko) had been developed independently of Navigator and did not integrate readily with Navigator's code; hence Navigator itself was rewritten around the new engine, breaking many existing features and delaying release by several months. Meanwhile, Microsoft focused on incremental improvements to Internet Explorer and did not face the same obstacles. Ironically, Navigator itself was a successful cleanroom rewrite of NCSA Mosaic overseen by that program's developers. See Browser wars. Some projects mentioning major rewrites in their history: Apache HTTP Server (1) AOL Instant Messenger (1) BIND (1) Freenet (1) GRUB (1) Majordomo (1) MediaWiki (1) Mozilla/Netscape (1) Icecast (0–1) netcat (1) OpenRPG (1) PHP (1–2) Project Xanadu (0–1) Sun Secure Global Desktop (1) vBulletin (2) WebObjects (1) Zope (1) Techniques Strangler fig pattern See also Code refactoring Open source software development Technical debt Development hell Porting Game engine recreation Reverse engineering References External links RewriteCodeFromScratch at C2 Wiki Things You Should Never Do, Part I by Joel Spolsky Computer programming
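A minimal sketch of the incremental-rewrite ("strangler fig") approach described above, with invented class and method names: a facade forwards each call to the new implementation once it exists and falls back to the legacy code otherwise, so callers never see a gap in functionality.

```python
# Illustrative strangler-fig facade (all names are hypothetical).
# Methods migrate from LegacyBilling to NewBilling one at a time;
# callers only ever talk to BillingFacade.

class LegacyBilling:
    def invoice(self, order): return f"legacy invoice for {order}"
    def refund(self, order):  return f"legacy refund for {order}"

class NewBilling:
    def invoice(self, order): return f"new invoice for {order}"
    # refund() not rewritten yet

class BillingFacade:
    """Routes each call to the rewrite when available, else to legacy code."""
    def __init__(self):
        self._old, self._new = LegacyBilling(), NewBilling()

    def __getattr__(self, name):
        # Prefer the new implementation when it provides the method.
        return getattr(self._new, name, None) or getattr(self._old, name)

billing = BillingFacade()
print(billing.invoice("A-1"))  # served by the rewrite
print(billing.refund("A-1"))   # still served by the legacy code
```

Once every method exists on the new side, the legacy class (and the facade) can be deleted, completing the rewrite without a big-bang cutover.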
Rewrite (programming)
[ "Technology", "Engineering" ]
643
[ "Software engineering", "Computer programming", "Computers" ]
13,653,300
https://en.wikipedia.org/wiki/Kinetic%20proofreading
Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways. Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further. As an analogy, if a medicine assembly line sometimes produces empty boxes, and we are unable to upgrade the assembly line, then we can increase the ratio of full boxes over empty boxes (specificity) by placing a giant fan at the end. Empty boxes are more likely to be blown off the line (a higher exit rate) than full boxes, even though both kinds' production rates are lowered. By lengthening the final section and adding more giant fans (multistep proofreading), the specificity can be increased arbitrarily, at the cost of decreasing production rate. Specificity paradox In protein synthesis, the error rate is on the order of $10^{-4}$. This means that when a ribosome is matching anticodons of tRNA to the codons of mRNA, it matches complementary sequences correctly nearly all the time. Hopfield noted that because of how similar the substrates are (the difference between a wrong codon and a right codon can be as small as a difference in a single base), an error rate that small is unachievable with a one-step mechanism. Both wrong and right tRNA can bind to the ribosome, and if the ribosome can only discriminate between them by complementary matching of the anticodon, it must rely on the small free energy difference between binding three matched complementary bases or only two. A one-shot machine which tests whether the codons match or not by examining whether the codon and anticodon are bound will not be able to tell the difference between a wrong and a right codon with an error rate less than $10^{-4}$ unless the free energy difference is at least 9.2 kT, which is much larger than the free energy difference for single codon binding. This is a thermodynamic bound, so it cannot be evaded by building a different machine. However, this can be overcome by kinetic proofreading, which introduces an irreversible step through the input of energy. Another molecular recognition mechanism, which does not require expenditure of free energy, is that of conformational proofreading. The incorrect product may also be formed but hydrolyzed at a greater rate than the correct product, giving the possibility of theoretically infinite specificity the longer the reaction runs, but at the cost of losing large amounts of the correct product as well. (Thus there is a tradeoff between product production and its efficiency.) The hydrolytic activity may be on the same enzyme, as in DNA polymerases with editing functions, or on different enzymes. 
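The arithmetic behind this bound, and behind the multistep ratchet discussed next, fits in a few lines; the single-step free-energy difference used below is an illustrative assumption, chosen so that one test passes errors at about 1%.

```python
# One discrimination step passes the wrong substrate a fraction
# f = exp(-dG/kT) of the time; N energy-consuming proofreading steps
# in series let errors through at f**N. The dG value is an assumption.
from math import exp, log

dG_over_kT = 4.6            # illustrative single-step discrimination
f = exp(-dG_over_kT)        # ~0.01 error fraction per test

for N in (1, 2, 3):
    print(f"N={N}: error rate ~ {f**N:.0e}")

# For comparison, a one-shot machine reaching 1e-4 would need
# dG = kT * ln(1e4), i.e. the 9.2 kT figure quoted above:
print(f"one-shot requirement: {log(1e4):.1f} kT")
```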
Multistep ratchet Hopfield suggested a simple way to achieve smaller error rates using a molecular ratchet which takes many irreversible steps, each testing to see if the sequences match. At each step, energy is expended and specificity (the ratio of correct substrate to incorrect substrate at that point in the pathway) increases. The requirement for energy in each step of the ratchet is due to the need for the steps to be irreversible; for specificity to increase, entry of substrate and analogue must occur largely through the entry pathway, and exit largely through the exit pathway. If entry were an equilibrium, the earlier steps would form a pre-equilibrium and the specificity benefits of entry into the pathway (less likely for the substrate analogue) would be lost; if the exit step were an equilibrium, then the substrate analogue would be able to re-enter the pathway through the exit step, bypassing the specificity of earlier steps altogether. Although one test will fail to discriminate between mismatched and matched sequences a fraction $f$ of the time, two tests will both fail only $f^2$ of the time, and $N$ tests will fail $f^N$ of the time. In terms of free energy, the discrimination power of $N$ successive tests for two states with a free energy difference $\Delta F$ is the same as one test between two states with a free energy difference $N\Delta F$. To achieve an error rate of $10^{-4}$ thus requires several comparison steps. Hopfield predicted on the basis of this theory that there is a multistage ratchet in the ribosome which tests the match several times before incorporating the next amino acid into the protein. Experimental examples Charging tRNAs with their respective amino acids – the enzyme that charges the tRNA is called aminoacyl-tRNA synthetase. This enzyme utilizes a high-energy intermediate state to increase the fidelity of binding the right pair of tRNA and amino acid. In this case, energy is used to make the high-energy intermediate (making the entry pathway irreversible), and the exit pathway is irreversible by virtue of the high energy difference in dissociation. Homologous recombination – Homologous recombination facilitates the exchange between homologous or almost homologous DNA strands. During this process, the RecA protein polymerizes along a DNA and this DNA-protein filament searches for a homologous DNA sequence. Both processes of RecA polymerization and homology search utilize the kinetic proofreading mechanism. DNA damage recognition and repair – a certain DNA repair mechanism utilizes kinetic proofreading to discriminate damaged DNA. Some DNA polymerases can also detect when they have added an incorrect base and are able to hydrolyze it immediately; in this case, the irreversible (energy-requiring) step is addition of the base. Antigen discrimination by T cell receptors – T cells respond to foreign antigens at low concentrations, while ignoring any self-antigens present at much higher concentration. This ability is known as antigen discrimination. T-cell receptors use kinetic proofreading to discriminate between high and low affinity antigens presented on an MHC molecule. The intermediate steps of kinetic proofreading are realized by multiple rounds of phosphorylation of the receptor and its adaptor proteins. Theoretical considerations Universal first passage time Biochemical processes that use kinetic proofreading to improve specificity implement the delay-inducing multistep ratchet by a variety of distinct biochemical networks. 
Nonetheless, in many such networks the times to completion of the molecular assembly and the proofreading steps (also known as the first passage time) approach a near-universal, exponential shape for high proofreading rates and large network sizes. Since exponential completion times are characteristic of a two-state Markov process, this observation makes kinetic proofreading one of only a few examples of biochemical processes where structural complexity results in much simpler large-scale, phenomenological dynamics. Topology The increase in specificity, or the overall amplification factor, of a kinetic proofreading network that may include multiple pathways and especially loops is intimately related to the topology of the network: the specificity grows exponentially with the number of loops in the network. An example is homologous recombination, in which the number of loops scales as the square of the DNA length. The universal completion time emerges precisely in this regime of a large number of loops and high amplification. References Further reading Biological processes DNA replication
Kinetic proofreading
[ "Mathematics", "Biology" ]
1,631
[ "Genetics techniques", "Mathematical and theoretical biology", "Applied mathematics", "DNA replication", "Molecular genetics", "nan" ]
13,653,437
https://en.wikipedia.org/wiki/Integration%20by%20parts%20operator
In mathematics, an integration by parts operator is a linear operator used to formulate integration by parts formulae; the most interesting examples of integration by parts operators occur in infinite-dimensional settings and find uses in stochastic analysis and its applications. Definition Let E be a Banach space such that both E and its continuous dual space E∗ are separable spaces; let μ be a Borel measure on E. Let S be any (fixed) subset of the class of functions defined on E. A linear operator A : S → L2(E, μ; R) is said to be an integration by parts operator for μ if $\int_E \mathrm{D}\varphi(x) h(x) \,\mathrm{d}\mu(x) = \int_E \varphi(x) (Ah)(x) \,\mathrm{d}\mu(x)$ for every C1 function φ : E → R and all h ∈ S for which either side of the above equality makes sense. In the above, Dφ(x) denotes the Fréchet derivative of φ at x. Examples Consider an abstract Wiener space i : H → E with abstract Wiener measure γ. Take S to be the set of all C1 functions from E into E∗; E∗ can be thought of as a subspace of E in view of the inclusions $E^{*} \subseteq H^{*} \cong H \subseteq E$. For h ∈ S, define Ah by $(Ah)(x) = \langle h(x), x \rangle - \operatorname{trace}_H \mathrm{D}h(x)$. This operator A is an integration by parts operator, also known as the divergence operator; a proof can be found in Elworthy (1974). The classical Wiener space C0 of continuous paths in Rn starting at zero and defined on the unit interval [0, 1] has another integration by parts operator. Let S be the collection of all bounded, adapted processes with absolutely continuous sample paths. Let φ : C0 → R be any C1 function such that both φ and Dφ are bounded. For h ∈ S and λ ∈ R, the Girsanov theorem implies that $\mathbf{E}[\varphi(x + \lambda h(x))] = \mathbf{E}\!\left[\varphi(x) \exp\!\left(\lambda \int_0^1 \dot{h}_s \,\mathrm{d}x_s - \frac{\lambda^2}{2} \int_0^1 |\dot{h}_s|^2 \,\mathrm{d}s\right)\right]$. Differentiating with respect to λ and setting λ = 0 gives $\mathbf{E}[\mathrm{D}\varphi(x) h(x)] = \mathbf{E}[\varphi(x) (Ah)(x)]$, where (Ah)(x) is the Itō integral $(Ah)(x) = \int_0^1 \dot{h}_s \,\mathrm{d}x_s$. The same relation holds for more general φ by an approximation argument; thus, the Itō integral is an integration by parts operator and can be seen as an infinite-dimensional divergence operator. This is the same result as the integration by parts formula derived from the Clark–Ocone theorem. References (See section 5.3) Integral calculus Measure theory Operator theory Stochastic calculus
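As a finite-dimensional sanity check of the definition above (a classical one-variable computation, not part of the article's infinite-dimensional setting), the standard Gaussian measure on the real line already carries an integration by parts operator of exactly this form:

```latex
% One-dimensional model case: E = R with d\mu = (2\pi)^{-1/2} e^{-x^2/2} dx.
% Integrating by parts and using (e^{-x^2/2})' = -x e^{-x^2/2}:
\begin{align*}
\int_{\mathbb{R}} \varphi'(x)\, h(x) \,\mathrm{d}\mu(x)
  &= \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi'(x)\, h(x)\, e^{-x^{2}/2} \,\mathrm{d}x \\
  &= \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(x)\, \bigl( x\,h(x) - h'(x) \bigr)\, e^{-x^{2}/2} \,\mathrm{d}x
   = \int_{\mathbb{R}} \varphi(x)\, (Ah)(x) \,\mathrm{d}\mu(x).
\end{align*}
% Hence (Ah)(x) = x h(x) - h'(x): the one-dimensional shadow of the
% divergence operator on an abstract Wiener space.
```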
Integration by parts operator
[ "Mathematics" ]
453
[ "Integral calculus", "Calculus" ]
13,653,971
https://en.wikipedia.org/wiki/Littelfuse
Littelfuse, Inc. is an American electronics manufacturing company headquartered in Chicago, Illinois. The company primarily produces circuit protection products (fuses) but also manufactures a variety of switches, automotive sensors and, through its subsidiary Zilog, microprocessors. Littelfuse was founded in 1927. In addition to its Chicago, Illinois, world headquarters, Littelfuse has more than 40 sales, distribution, manufacturing and engineering facilities in the Americas, Europe and Asia. Littelfuse is the developer of AutoFuse, the first blade-type automotive fuse. History Early history Edward V. Sundt founded Littelfuse in 1927 in Chicago, Illinois, as Littelfuse Laboratories. Prior to founding Littelfuse, Sundt had worked for General Electric and Stewart-Warner, where he found diagnostic equipment frequently experienced electrical failure. Sundt developed Littelfuse's first product, a small protective fuse, to regulate current in diagnostic equipment and prevent electrical failure. When the US government refused Sundt a trademark for Little fuse (the small protective fuse) on the grounds that the words were too common, Sundt compromised by reversing the l and the e to form Littelfuse. Littelfuse was incorporated and renamed Littelfuse, Inc. in 1938. Littelfuse became a public company in 1962. The company retained founder Edward V. Sundt as the chairman of its board. In 1963, Littelfuse moved its headquarters from Chicago to Des Plaines, Illinois. Sundt retired in 1965 and was succeeded by Thomas Blake. Tracor purchased the company in 1968. Blake was made president of Littelfuse, which operated as a wholly owned subsidiary of Tracor. 1970–1991 The company expanded its manufacturing base in the 1970s with new factories opening in Watseka, Illinois and Piedras Negras, Mexico. In 1974, the company also introduced Littelites, electronic indicator lights used in industrial and office machinery, household appliances and computers. In 1976, Littelfuse developed Autofuse, which was the first blade-type fuse used in automobiles. The Autofuse brand was counterfeited heavily and in 1983 the company obtained an exclusionary order from the United States International Trade Commission, which barred the importation of counterfeit blade-type fuses. In 1987, Westmark Systems purchased Tracor and its Littelfuse subsidiary in a leveraged buyout. Tracor filed for bankruptcy in 1991 and spun off Littelfuse. Modern history Littelfuse reincorporated in November 1991 with Howard Witt as its president and CEO. Witt had worked for Littelfuse since 1979 and had been president and CEO of Littelfuse since February 1990, when the company was still owned by Tracor. In 1991, Littelfuse offered its second IPO in company history. The company's profits rose throughout the 1990s and the company expanded its operations in Europe and Asia. Littelfuse also expanded into South America with a distribution and engineering center in São Paulo, Brazil. Gordon Hunter replaced Witt as president and CEO of Littelfuse at the end of 2004. In 2008, Littelfuse restructured its manufacturing operations, closing 16 small manufacturing plants and opening 6 new, larger plants. The company moved its headquarters from Des Plaines, Illinois, to Chicago, Illinois, the same year. The company was recognized as Product of the Year by Consulting-Specifying Engineer in 2010, 2011, 2012 and 2013. Arrow Electronics recognized Littelfuse with an award for Supplier Excellence in 2011. 
The company received the TTI Supplier Excellence Award in 2010, 2011, 2012 and 2013. Littelfuse received the Chicago Innovation Award in 2012. In 2013, the company received Processing Magazine's Breakthrough Product of the Year. Littelfuse was recognized as one of the Best Places to Work in Illinois in 2012, 2013 and 2014. The company announced in November 2016 that COO Dave Heinzmann would succeed Hunter as president and CEO in January 2017. Products Littelfuse designs and manufactures circuit protection products for the electronics, automotive and electrical industries. The company operates three business segments: Electronics, Industrial, and Automotive. Products include: fuses and protectors, suppressors, gas discharge tubes, electronic switches, solenoids, battery management devices, and protective relays. With the acquisition of Hamlin, Inc. in 2013, Littelfuse expanded its product offering to include sensors for the automotive, industrial and consumer industries. Acquisitions Littelfuse has acquired multiple companies since 1999, including: 1999 – Harris Suppression Products. 2002 – Semitron. 2003 – Teccor, a manufacturer of circuit and overvoltage protection products. 2004 – Heinrich Industrie, a German manufacturer of circuit protection products, including the WICKMANN Group, Efen and Pudenz brands. 2006 – Taiwan-based silicon manufacturer Concord Semiconductor, Inc. and Catalina Performance Accessories, which manufactures and distributes blade-type automotive fuses. 2008 – Shock Block Corporation, which developed and manufactured ground-fault protection technology. 2008 – Startco Engineering, maker of ground-fault protection products and custom-power distribution centers that are used in industrial manufacturing and mining applications. 2010 – Cole Hersee, a maker of power management products and heavy-duty electromechanical switches for commercial vehicles. 2011 – Selco A/S, a Danish company, which produces electrical equipment for use in maritime and industrial environments. 2012 – Accel AB, a Swedish company that manufactures advanced automotive switches and sensors, and Terra Power Systems, which manufactures electrical components for heavy-duty vehicles and trucks. 2013 – Hamlin Inc., an automotive sensors manufacturer. 2014 – SymCom, a power, voltage, and current monitor developer and manufacturer. 2015 – JRS MFG. LTD., a developer and manufacturer of custom engineered products such as metal-clad, metal-enclosed, and arc-resistant switchgear, E-Houses, mine power centers and mining substations. 2016 – TE Connectivity's circuit protection business. 2016 – IGBT and TVS divisions of ON Semiconductor, a semiconductor supplier. 2017 – U.S. Sensor, a manufacturer of temperature sensors. 2017 – IXYS Corp., a power semiconductor manufacturer, thereby also acquiring Zilog. 2018 – Monolith Semiconductor Inc., a silicon carbide switch developer and manufacturer. 2021 – Carling Technologies Inc., a switch and electromechanical circuit breaker manufacturer. 2022 – C&K Switches, an electromechanical switch manufacturer. 2023 – Western Automation Research and Development Limited, a designer and manufacturer of electrical shock protection devices. References 1927 establishments in Illinois 1960s initial public offerings Companies listed on the Nasdaq Manufacturing companies based in Chicago Electronics companies established in 1927 Electrical engineering companies Companies in the S&P 400
Littelfuse
[ "Engineering" ]
1,415
[ "Electrical engineering organizations", "Electrical engineering companies", "Engineering companies" ]
13,655,207
https://en.wikipedia.org/wiki/Water%20stop%20%28sports%29
A water stop is a break, and a place to break, for drinking water in sports events (sports competitions or training) for some types of sports, such as various long-distance types of running (e.g., the marathon), cycling, etc. Similarly, a water break is a break to drink water in some sports events held in one place. Water stops and breaks have become obligatory relatively recently. Before the 1950s, there was a practice of eliminating water breaks in order "to toughen up boys" (see "Junction Boys" for an example). Water stops are used to combat interrelated dangers: hyperthermia, dehydration and hyponatremia (low blood level of sodium). Drinking water combats dehydration, while intake of electrolyte solutions (often provided by various sports drinks) combats hyponatremia and its severe form, water intoxication. Water stops during a marathon are generally spaced between 2 miles and 5 kilometers (3.1 miles) apart, resulting in 8-12 stops. Stopping for 10 seconds per station results in 1-2 minutes of added time, but the loss of stamina due to dehydration would add much more. Compared to dehydration, hyponatremia is a relatively recently recognized danger, and there are different opinions about how much water to drink at each water stop. Some texts say that thirst is not a reliable indicator of the need for water, while others say that obligatory drinking at every opportunity without real need increases the danger of hyponatremia. "If you hear sloshing in your stomach... you can by-pass that water stop". (Jeff Galloway) In sumo, if a bout goes on for many minutes, the referee may call a break, traditionally called mizu-iri or "water break". Temporary introduction of water breaks during the COVID-19 pandemic Many team sports, such as association football, Gaelic sports and rugby union, introduced water breaks into games as part of measures to tackle the spread of COVID-19. Allowing players to drink from their own bottle at regular intervals during games, as opposed to the nearest bottle during incidental stoppages, reduced the risk of spreading the virus. References Sports science Water
Water stop (sports)
[ "Environmental_science" ]
463
[ "Water", "Hydrology" ]
13,655,564
https://en.wikipedia.org/wiki/Baptist%20well%20drilling
Baptist well drilling is a very simple, manual method of drilling water wells. The Baptist drilling rig can be built in any ordinary arc-welding workshop, and materials for a basic version cost about 150 US dollars (2006 prices). In suitable conditions, boreholes over 100 m deep have been drilled with this method. History The method was developed by Terry Waller, a North American Baptist missionary in Africa and Bolivia. It applies some of the same principles used in mechanized commercial well drilling, but does so using the simplest, most available and cheapest possible materials. Social / development context Rural people in developing countries often cannot afford to have specialists drill or dig wells for them. This method was developed to provide poor people with a way to help themselves with their water supply. A Baptist drilling rig, fit to drill holes up to deep, can be built in Nicaragua for about US $150. This includes all essential non-common tools to operate it. Its core element, the drill bits, can be made in about any arc-welding workshop, using only scrap steel and materials that can be found in virtually any hardware store. Once the well is drilled, it is cased with an inexpensive PVC tube. Fitting the well with a slab of concrete as a sanitary seal and a simple PVC piston pump (also built by the users themselves) will cost about 2.5 dollars per meter of well depth. Suitable conditions A hybrid between sludging and percussion drilling, this method makes it possible to drill through all kinds of loose alluvial soils, sands, silts and clays, as well as "soft" rocks, like light conglomerates, consolidated volcanic ashes, some calcareous rocks and weathered materials. It will not penetrate hard igneous rock or boulders (e.g. in ancient river beds). Technical specifications As in sludging, the drilling process is continuous: the drill bit is normally not removed from the borehole until it is finished and the broken-up material is pumped to the surface in the drilling liquid (mud). But instead of using a hand as a valve on top of the drill pipe (sludging), the drill bit itself doubles as a foot valve. The operator's hand does not have to reach the end of the drill pipe and drill stem extensions can be several meters long. Percussion action is performed by lifting the drill stem with a rope over a pulley, attached to a simple derrick, made with locally available materials, such as wood or bamboo poles. The borehole diameter is kept as small as possible in order to remove a minimum of material and hence advance rapidly. The standard drill bit is based on water pipe fittings and unless a larger diameter is required, the borehole is cased with PVC pipe. The main drill tool consists of a length of metal pipe with a bit/valve. Extensions are standard PVC potable water pipes. No temporary casing is used. Keeping the borehole full of mud, together with the "caking" of mud into any unstable sand layers as a consequence of the percussion action and the friction of the smooth lateral edges of the bit, is normally sufficient to stabilize it. The drilling mud is evacuated from the borehole after casing the well by pouring or injecting water into the casing, called backwashing. This technique adapts best to sand, loam and light rock. The standard drill bits also work through sticky and even consolidated clays. 
Nevertheless, best results in varying conditions are obtained with an array of different bits: Movable point bits for general purpose and clay-holding soils: the moving stem of the heavy dart helps to keep the foot valve clean. Fixed point bits for sandy and rocky layers, where there is no risk for sticky material to obstruct the foot valve. Open-ended (hollow) bits without a foot valve (pure sludging) for layers of pure clay or gravel. In these conditions the presence of a foot valve may slow down progress, since clay has to be pounded into suspension and stones have to be ground to small pieces in order to enter the drilling tool through the foot valve. Reaming If required, the upper part of the well can be reamed and cased with larger diameter pipe (3 or 5 inches), to accommodate larger pumps. A shallow (large diameter) rope pump, for example, may require a wider well and submersible pumps commonly need at least 4". Note that there is no need to enlarge the entire depth of the borehole: reaming until slightly below the lowest expected water table (the pump's water intake) is sufficient. References External links Baptist drilling instruction manual More technology Baptist technology in Nicaragua (Spanish language) EMAS rural water supply solutions (Spanish language) Simple hand-pump and well drilling technology Water wells Water supply infrastructure Appropriate technology
Baptist well drilling
[ "Chemistry", "Engineering", "Environmental_science" ]
981
[ "Hydrology", "Water wells", "Environmental engineering" ]
13,656,257
https://en.wikipedia.org/wiki/Pacific%E2%80%93North%20American%20teleconnection%20pattern
The Pacific–North American teleconnection pattern (PNA) is a large-scale weather pattern with two modes, denoted positive and negative, which relates the atmospheric circulation pattern over the North Pacific Ocean with the one over the North American continent. It is the second leading mode of natural climate variability in the higher latitudes of the Northern Hemisphere (behind the Arctic Oscillation or North Atlantic Oscillation) and can be diagnosed using the arrangement of anomalous geopotential heights or air pressures over the North Pacific and North America. On average, the troposphere over North America features a ridge on the western part of the continent and a trough over the eastern part of the continent. The positive phase of the PNA teleconnection is identified by anomalously low geopotential heights south of the Aleutian Islands and over the Southeastern U.S. straddling high geopotential heights over the North Pacific from Hawaii to the U.S. Intermountain West. This represents an amplification of the long-term average conditions. The negative phase features the opposite pattern over the same regions, with above-average geopotential heights straddling below-average heights. This represents a damping of the long-term average conditions. Indices The PNA is typically quantified using an index based on geopotential height anomalies at the 500-hPa pressure level, with positive and negative PNA phases based on the sign of the index. Wallace and Gutzler (1981) expressed the PNA index as the average of normalized height anomalies at the four centers of action most relevant to the PNA: $\mathrm{PNA} = \tfrac{1}{4}\left[z^{*}(20^{\circ}\mathrm{N}, 160^{\circ}\mathrm{W}) - z^{*}(45^{\circ}\mathrm{N}, 165^{\circ}\mathrm{W}) + z^{*}(55^{\circ}\mathrm{N}, 115^{\circ}\mathrm{W}) - z^{*}(30^{\circ}\mathrm{N}, 85^{\circ}\mathrm{W})\right]$, where $z^{*}$ describes the normalized 500-hPa height anomaly as a function of location. The subtropical center at (20°N, 160°W) can be excluded, though the difference between the resulting three-center index and the full four-center index is small. Applying rotated principal component analysis to the 500-hPa geopotential height anomaly field in the Northern Hemisphere can also provide a quantification of the PNA, with the canonical PNA pattern emerging as the second-leading principal component. This methodology is used by the U.S. Climate Prediction Center to compute its PNA index. Dynamics Although the PNA is usually defined based on anomalies relative to monthly or seasonal averages, the PNA often varies at weekly timescales. However, as a pattern of internal climate variability, the state of the PNA occasionally changes without a clear and identifiable cause. This reduces the predictability of the PNA and can complicate long-range seasonal weather forecasts. Predictability of the PNA is limited to roughly 10 days. The PNA is associated with changes in the intensity and positioning of the East Asian jet stream. During the positive phase of the PNA, the East Asian jet intensifies and extends eastward across the North Pacific towards the western U.S. During the negative phase, the jet stream is retracted over East Asia, producing a blocking weather pattern over the North Pacific. Some of the energy that drives the PNA originates from the barotropic instability produced by the jet, potentially exciting Rossby waves. Shifts in the jet stream can induce changes in air pressure distributions both near and downstream of the jet. Storms over the tropical Pacific and Indian oceans may play a role in exciting the positive and negative phases of the PNA by influencing the East Asian jet. Tropical convection can induce a low-amplitude PNA pattern that amplifies to its peak strength after 8–12 days. 
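A short sketch of how the pointwise index reconstructed above could be evaluated on gridded data; the anomaly field here is randomly generated as a stand-in for real normalized 500-hPa reanalysis anomalies, so only the bookkeeping is meaningful.

```python
# Wallace-Gutzler-style pointwise PNA index on a hypothetical gridded field
# of normalized 500-hPa height anomalies (random numbers stand in for data).
import numpy as np

rng = np.random.default_rng(0)
lats, lons = np.arange(0, 90, 2.5), np.arange(0, 360, 2.5)
field = rng.standard_normal((lats.size, lons.size))   # fake anomaly field

def z_star(lat, lon_w):
    """Nearest-gridpoint normalized anomaly; longitude given in degrees West."""
    i = np.abs(lats - lat).argmin()
    j = np.abs(lons - (360 - lon_w)).argmin()
    return field[i, j]

pna = 0.25 * (z_star(20, 160) - z_star(45, 165)
              + z_star(55, 115) - z_star(30, 85))
print(f"PNA index: {pna:+.2f}")   # positive -> amplified ridge/trough pattern
```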
Atmospheric eddies and Rossby waves can further intensify the PNA pattern. Positive PNA is correlated with increased convective activity over the western tropical Pacific and reduced convective activity over the tropical Indian Ocean, while negative PNA is correlated with the opposite convective anomalies. The Rossby waves associated with positive PNA tend to track eastward and undergo cyclonic wavebreaking, while those associated with negative PNA tend to track equatorward towards the subtropics and break anticyclonically; the wavebreaking behavior of the Rossby waves is determined by the meridional gradient of potential vorticity and the magnitude and orientation of wind shear, which in turn are modulated by variations in the East Asian jet stream. In either case, positive feedbacks associated with the wavebreaking sustain amplified PNA patterns. Other teleconnections can modulate the PNA by modifying the jet stream. The El Niño–Southern Oscillation (ENSO) impacts the behavior of PNA, with the positive phase of the PNA more commonly associated with El Niño and the negative phase more commonly associated with La Niña. This relationship is most evident at seasonal timescales, making the seasonal PNA more predictable than the monthly PNA. The negative phase is also favored when the Madden–Julian oscillation (MJO) enhances convection over the Indian Ocean and Maritime Continent; the positive phase is favored when the MJO enhances convection closer to the central Pacific. The MJO's influence on the PNA arises from the interaction between the enhanced convection and the Pacific jet stream. Effects on weather The regional variations in weather associated with the PNA are generally the result of the PNA's influence on the East Asian jet. The temperature pattern associated with the PNA follows the pattern of anomalous ridging and troughing. The positive phase of the PNA is correlated with above-average temperatures over the U.S. Pacific Coast and Western Canada. During the positive phase, an anomalously strong ridge of high pressure over Canada reduces the frequency of cold air outbreaks over western North America. Below-average temperatures over the South-Central U.S., Southeastern U.S., and U.S. East Coast are associated with the positive phase due to the presence of anomalously low pressure. The influence of the PNA on surface temperatures over North America is reduced during the summer. Correlations between precipitation patterns and the PNA are weaker than those for temperature, but are nonetheless evident. Anomalously high precipitation over the Gulf of Alaska and Pacific Northwest accompanies the positive phase, along with below-average precipitation totals over the Pacific Northwest, Northern Rocky Mountains, and Ohio and Tennessee river valleys. The negative PNA phase exhibits the opposite departures from average. See also Teleconnection Arctic dipole anomaly Kuroshio Current North Pacific Gyre Pacific Decadal Oscillation Pineapple Express Walker circulation References Sources Regional climate effects Physical oceanography
Pacific–North American teleconnection pattern
[ "Physics" ]
1,339
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
13,657,020
https://en.wikipedia.org/wiki/Bial%27s%20test
Bial's test is a chemical test for the presence of pentoses originally developed for the diagnosis of pentosuria. It is named after Manfred Bial, a German physician. The components include orcinol, hydrochloric acid, and ferric chloride. A pentose, if present, will be dehydrated to form furfural, which then reacts with orcinol to generate a colored substance. The solution will turn bluish and a precipitate may form. The solution shows two absorption bands, one in the red between Fraunhofer lines B and C (roughly 687 and 656 nm) and the other near the D line (about 589 nm). Composition Bial's reagent consists of 0.4 g orcinol, 200 ml of concentrated hydrochloric acid and 0.5 ml of a 10% solution of ferric chloride. Bial's test is used to distinguish pentoses from hexoses; this distinction is based on the color that develops in the presence of orcinol and iron(III) chloride. Furfural from pentoses gives a blue or green color. The related hydroxymethylfurfural from hexoses may give a muddy-brown, yellow or gray solution, but this is easily distinguishable from the green color of pentoses. Quantitative version The test may be performed as a quantitative colorimetric test using a spectrophotometer. Fernell and King published a procedure for simultaneous determination of pentoses and hexoses from measurements at two wavelengths. Various versions of this test are widely used for a quick chemical determination of RNA; in this context it is usually called the orcinol test. See also Dische test References External links original description of Bial's test in 1903 Carbohydrate methods Chemical tests
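The simultaneous-determination idea can be written out as two Beer–Lambert equations in two unknowns, one per wavelength; the sketch below is the generic linear algebra with invented absorptivity values, not Fernell and King's published coefficients.

```python
# Two-wavelength simultaneous determination via Beer-Lambert (A = e*c*l):
# absorbances at two wavelengths + known absorptivities of the pentose- and
# hexose-derived chromophores give two linear equations in two unknowns.
# All coefficient values below are invented placeholders for illustration.
import numpy as np

# rows: wavelength 1, 2; columns: pentose product, hexose product (L/mmol/cm)
E = np.array([[1.20, 0.15],
              [0.30, 0.80]])
A = np.array([0.45, 0.25])        # measured absorbances, 1 cm path length

c_pentose, c_hexose = np.linalg.solve(E, A)
print(f"pentose ~ {c_pentose:.3f} mM, hexose ~ {c_hexose:.3f} mM")
```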
Bial's test
[ "Chemistry", "Biology" ]
391
[ "Biochemistry methods", "Chemical tests", "Carbohydrate methods", "Carbohydrate chemistry", "Analytical chemistry stubs" ]
13,657,747
https://en.wikipedia.org/wiki/Dirac%20bracket
The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second-class constraints in Hamiltonian mechanics, and to thus allow them to undergo canonical quantization. It is an important part of Dirac's development of Hamiltonian mechanics to elegantly handle more general Lagrangians; specifically, when constraints are at hand, so that the number of apparent variables exceeds that of dynamical ones. More abstractly, the two-form implied from the Dirac bracket is the restriction of the symplectic form to the constraint surface in phase space. This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context.

Inadequacy of the standard Hamiltonian procedure
The standard development of Hamiltonian mechanics is inadequate in several specific situations:
When the Lagrangian is at most linear in the velocity of at least one coordinate; in which case, the definition of the canonical momentum leads to a constraint. This is the most frequent reason to resort to Dirac brackets. For instance, the Lagrangian (density) for any fermion is of this form.
When there are gauge (or other unphysical) degrees of freedom which need to be fixed.
When there are any other constraints that one wishes to impose in phase space.

Example of a Lagrangian linear in velocity
An example in classical mechanics is a particle with charge $q$ and mass $m$ confined to the $x$–$y$ plane with a strong constant, homogeneous perpendicular magnetic field $\vec{B}$, pointing in the $z$-direction with strength $B$. The Lagrangian for this system with an appropriate choice of parameters is
$$L = \frac{m}{2}\left(\dot{x}^2 + \dot{y}^2\right) + \frac{q}{c}\left(\dot{x}\,A_x + \dot{y}\,A_y\right) - qV(x,y),$$
where $\vec{A}$ is the vector potential for the magnetic field $\vec{B}$; $c$ is the speed of light in vacuum; and $V(x,y)$ is an arbitrary external scalar potential; one could easily take it to be quadratic in $x$ and $y$, without loss of generality. We use
$$\vec{A} = \frac{B}{2}\left(x\,\hat{y} - y\,\hat{x}\right)$$
as our vector potential; this corresponds to a uniform and constant magnetic field B in the z direction. Here, the hats indicate unit vectors. Later in the article, however, they are used to distinguish quantum mechanical operators from their classical analogs. The usage should be clear from the context. Explicitly, the Lagrangian amounts to just
$$L = \frac{m}{2}\left(\dot{x}^2 + \dot{y}^2\right) + \frac{qB}{2c}\left(x\dot{y} - y\dot{x}\right) - qV(x,y),$$
which leads to the equations of motion
$$m\ddot{x} = -q\,\partial_x V + \frac{qB}{c}\dot{y}, \qquad m\ddot{y} = -q\,\partial_y V - \frac{qB}{c}\dot{x}.$$
For a harmonic potential, the gradient of $V$ amounts to just the coordinates, $\nabla V = (x, y)$. Now, in the limit of a very large magnetic field, the kinetic term is negligible. One may then drop it to produce a simple approximate Lagrangian,
$$L \approx \frac{qB}{2c}\left(x\dot{y} - y\dot{x}\right) - qV(x,y),$$
with first-order equations of motion
$$\dot{x} = -\frac{c}{B}\,\partial_y V, \qquad \dot{y} = \frac{c}{B}\,\partial_x V.$$
Note that this approximate Lagrangian is linear in the velocities, which is one of the conditions under which the standard Hamiltonian procedure breaks down. While this example has been motivated as an approximation, the Lagrangian under consideration is legitimate and leads to consistent equations of motion in the Lagrangian formalism. Following the Hamiltonian procedure, however, the canonical momenta associated with the coordinates are now
$$p_x = -\frac{qB}{2c}\,y, \qquad p_y = \frac{qB}{2c}\,x,$$
which are unusual in that they are not invertible to the velocities; instead, they are constrained to be functions of the coordinates: the four phase-space variables are linearly dependent, so the variable basis is overcomplete. A Legendre transformation then produces the Hamiltonian
$$H = \dot{x}\,p_x + \dot{y}\,p_y - L = qV(x,y).$$
Note that this "naive" Hamiltonian has no dependence on the momenta, which means that equations of motion (Hamilton's equations) are inconsistent.
The Hamiltonian procedure has broken down. One might try to fix the problem by eliminating two of the components of the 4-dimensional phase space, say $y$ and $p_y$, down to a reduced phase space of 2 dimensions, that is sometimes expressing the coordinates as momenta and sometimes as coordinates. However, this is neither a general nor rigorous solution. This gets to the heart of the matter: that the definition of the canonical momenta implies a constraint on phase space (between momenta and coordinates) that was never taken into account.

Generalized Hamiltonian procedure
In Lagrangian mechanics, if the system has holonomic constraints, then one generally adds Lagrange multipliers to the Lagrangian to account for them. The extra terms vanish when the constraints are satisfied, thereby forcing the path of stationary action to be on the constraint surface. In this case, going to the Hamiltonian formalism introduces a constraint on phase space in Hamiltonian mechanics, but the solution is similar.

Before proceeding, it is useful to understand the notions of weak equality and strong equality. Two functions on phase space, $f$ and $g$, are weakly equal if they are equal when the constraints are satisfied, but not throughout the phase space, denoted $f \approx g$. If $f$ and $g$ are equal independently of the constraints being satisfied, they are called strongly equal, written $f = g$. It is important to note that, in order to get the right answer, no weak equations may be used before evaluating derivatives or Poisson brackets.

The new procedure works as follows. Start with a Lagrangian and define the canonical momenta in the usual way. Some of those definitions may not be invertible and instead give a constraint in phase space (as above). Constraints derived in this way or imposed from the beginning of the problem are called primary constraints. The constraints, labeled $\phi_j$, must weakly vanish, $\phi_j(p, q) \approx 0$. Next, one finds the naive Hamiltonian, $H$, in the usual way via a Legendre transformation, exactly as in the above example. Note that the Hamiltonian can always be written as a function of $q$s and $p$s only, even if the velocities cannot be inverted into functions of the momenta.

Generalizing the Hamiltonian
Dirac argues that we should generalize the Hamiltonian (somewhat analogously to the method of Lagrange multipliers) to
$$H^* = H + \sum_j u_j \phi_j \approx H,$$
where the $u_j$ are not constants but functions of the coordinates and momenta. Since this new Hamiltonian is the most general function of coordinates and momenta weakly equal to the naive Hamiltonian, $H^*$ is the broadest generalization of the Hamiltonian possible, so that $H^* \approx H$ when $\phi_j \approx 0$.

To further illuminate the $u_j$, consider how one gets the equations of motion from the naive Hamiltonian in the standard procedure. One expands the variation of the Hamiltonian out in two ways and sets them equal (using a somewhat abbreviated notation with suppressed indices and sums):
$$\delta H = \frac{\partial H}{\partial q}\,\delta q + \frac{\partial H}{\partial p}\,\delta p = \dot{q}\,\delta p - \dot{p}\,\delta q,$$
where the second equality holds after simplifying with the Euler–Lagrange equations of motion and the definition of canonical momentum. From this equality, one deduces the equations of motion in the Hamiltonian formalism from
$$\left(\frac{\partial H}{\partial q} + \dot{p}\right)\delta q + \left(\frac{\partial H}{\partial p} - \dot{q}\right)\delta p = 0,$$
where the weak equality symbol is no longer displayed explicitly, since by definition the equations of motion only hold weakly. In the present context, one cannot simply set the coefficients of $\delta q$ and $\delta p$ separately to zero, since the variations are somewhat restricted by the constraints. In particular, the variations must be tangent to the constraint surface.
One can demonstrate that the solution to
$$\left(\frac{\partial H}{\partial q} + \dot{p}\right)\delta q + \left(\frac{\partial H}{\partial p} - \dot{q}\right)\delta p = 0$$
for the variations $\delta q$ and $\delta p$ restricted by the constraints (assuming the constraints satisfy some regularity conditions) is generally
$$\frac{\partial H}{\partial q} + \dot{p} = -u_j\,\frac{\partial \phi_j}{\partial q}, \qquad \frac{\partial H}{\partial p} - \dot{q} = -u_j\,\frac{\partial \phi_j}{\partial p},$$
where the $u_j$ are arbitrary functions. Using this result, the equations of motion become
$$\dot{q} \approx \frac{\partial H}{\partial p} + u_j\,\frac{\partial \phi_j}{\partial p}, \qquad \dot{p} \approx -\frac{\partial H}{\partial q} - u_j\,\frac{\partial \phi_j}{\partial q}, \qquad \phi_j(p, q) \approx 0,$$
where the $u_j$ are functions of coordinates and velocities that can be determined, in principle, from the second equation of motion above. The Legendre transform between the Lagrangian formalism and the Hamiltonian formalism has been saved at the cost of adding new variables.

Consistency conditions
The equations of motion become more compact when using the Poisson bracket, since if $f$ is some function of the coordinates and momenta, then
$$\dot{f} \approx \{f, H\} + u_j\,\{f, \phi_j\},$$
if one assumes that the Poisson brackets with the $u_j$ (functions of the velocity) exist; this causes no problems since the contribution $\{f, u_j\}\,\phi_j$ weakly vanishes. Now, there are some consistency conditions which must be satisfied in order for this formalism to make sense. If the constraints are going to be satisfied, then their equations of motion must weakly vanish, that is, we require
$$\dot{\phi}_j \approx \{\phi_j, H\} + u_k\,\{\phi_j, \phi_k\} \approx 0.$$
There are four different types of conditions that can result from the above:
An equation that is inherently false, such as $1 = 0$.
An equation that is identically true, possibly after using one of our primary constraints.
An equation that places new constraints on our coordinates and momenta, but is independent of the $u_k$.
An equation that serves to specify the $u_k$.
The first case indicates that the starting Lagrangian gives inconsistent equations of motion, such as $1 = 0$. The second case does not contribute anything new. The third case gives new constraints in phase space. A constraint derived in this manner is called a secondary constraint. Upon finding the secondary constraint one should add it to the extended Hamiltonian and check the new consistency conditions, which may result in still more constraints. Iterate this process until there are no more constraints. The distinction between primary and secondary constraints is largely an artificial one (i.e., a constraint for the same system can be primary or secondary depending on the Lagrangian), so this article does not distinguish between them from here on. Assuming the consistency condition has been iterated until all of the constraints have been found, the $\phi_j$ will index all of them. Note this article uses secondary constraint to mean any constraint that was not initially in the problem or derived from the definition of canonical momenta; some authors distinguish between secondary constraints, tertiary constraints, et cetera. Finally, the last case helps fix the $u_k$. If, at the end of this process, the $u_k$ are not completely determined, then that means there are unphysical (gauge) degrees of freedom in the system. Once all of the constraints (primary and secondary) are added to the naive Hamiltonian and the solutions to the consistency conditions for the $u_k$ are plugged in, the result is called the total Hamiltonian.

Determination of the $u_k$
The $u_k$ must solve a set of inhomogeneous linear equations of the form
$$\{\phi_j, H\} + u_k\,\{\phi_j, \phi_k\} \approx 0.$$
The above equation must possess at least one solution, since otherwise the initial Lagrangian is inconsistent; however, in systems with gauge degrees of freedom, the solution will not be unique. The most general solution is of the form
$$u_k = U_k + V_k,$$
where $U_k$ is a particular solution and $V_k$ is the most general solution to the homogeneous equation
$$V_k\,\{\phi_j, \phi_k\} \approx 0.$$
The most general solution will be a linear combination of linearly independent solutions to the above homogeneous equation.
The number of linearly independent solutions equals the number of $u_k$ (which is the same as the number of constraints) minus the number of consistency conditions of the fourth type (in the previous subsection). This is the number of unphysical degrees of freedom in the system. Labeling the linearly independent solutions $V_k^a$, where the index $a$ runs from $1$ to the number of unphysical degrees of freedom, the general solution to the consistency conditions is of the form
$$u_k \approx U_k + v_a V_k^a,$$
where the $v_a$ are completely arbitrary functions of time. A different choice of the $v_a$ corresponds to a gauge transformation, and should leave the physical state of the system unchanged.

The total Hamiltonian
At this point, it is natural to introduce the total Hamiltonian
$$H_T = H + U_k \phi_k + v_a V_k^a \phi_k$$
and what is denoted
$$H' = H + U_k \phi_k.$$
The time evolution of a function $f$ on the phase space is governed by
$$\dot{f} \approx \{f, H_T\}.$$
Later, the extended Hamiltonian is introduced. For gauge-invariant (physically measurable) quantities, all of the Hamiltonians should give the same time evolution, since they are all weakly equivalent. It is only for non-gauge-invariant quantities that the distinction becomes important.

The Dirac bracket
Above is everything needed to find the equations of motion in Dirac's modified Hamiltonian procedure. Having the equations of motion, however, is not the endpoint for theoretical considerations. If one wants to canonically quantize a general system, then one needs the Dirac brackets. Before defining Dirac brackets, first-class and second-class constraints need to be introduced. We call a function $f$ of coordinates and momenta first class if its Poisson bracket with all of the constraints weakly vanishes, that is,
$$\{f, \phi_j\} \approx 0,$$
for all $j$. Note that the only quantities that weakly vanish are the constraints $\phi_j$, and therefore anything that weakly vanishes must be strongly equal to a linear combination of the constraints. One can demonstrate that the Poisson bracket of two first-class quantities must also be first class. The first-class constraints are intimately connected with the unphysical degrees of freedom mentioned earlier. Namely, the number of independent first-class constraints is equal to the number of unphysical degrees of freedom, and furthermore, the primary first-class constraints generate gauge transformations. Dirac further postulated that all secondary first-class constraints are generators of gauge transformations, which turns out to be false; however, typically one operates under the assumption that all first-class constraints generate gauge transformations when using this treatment. When the first-class secondary constraints are added into the Hamiltonian with arbitrary $v_a$, as the first-class primary constraints are added to arrive at the total Hamiltonian, then one obtains the extended Hamiltonian. The extended Hamiltonian gives the most general possible time evolution for any gauge-dependent quantities, and may actually generalize the equations of motion from those of the Lagrangian formalism.

For the purposes of introducing the Dirac bracket, of more immediate interest are the second-class constraints. Second-class constraints are constraints that have a nonvanishing Poisson bracket with at least one other constraint. For instance, consider second-class constraints $\phi_1$ and $\phi_2$ whose Poisson bracket is simply a nonzero constant $k$,
$$\{\phi_1, \phi_2\} = k.$$
Now, suppose one wishes to employ canonical quantization; then the phase-space coordinates become operators whose commutators become $i\hbar$ times their classical Poisson bracket.
Assuming there are no ordering issues that give rise to new quantum corrections, this implies that
$$[\hat{\phi}_1, \hat{\phi}_2] = i\hbar k,$$
where the hats emphasize the fact that the constraints are on operators. On one hand, canonical quantization gives the above commutation relation, but on the other hand $\hat{\phi}_1$ and $\hat{\phi}_2$ are constraints that must vanish on physical states, whereas the right-hand side cannot vanish. This example illustrates the need for some generalization of the Poisson bracket which respects the system's constraints, and which leads to a consistent quantization procedure. This new bracket should be bilinear, antisymmetric, satisfy the Jacobi identity as does the Poisson bracket, reduce to the Poisson bracket for unconstrained systems, and, additionally, the bracket of any second-class constraint with any other quantity must vanish.

At this point, the second-class constraints will be labeled $\tilde{\phi}_a$. Define a matrix with entries
$$M_{ab} = \{\tilde{\phi}_a, \tilde{\phi}_b\}.$$
In this case, the Dirac bracket of two functions on phase space, $f$ and $g$, is defined as
$$\{f, g\}_{DB} = \{f, g\} - \{f, \tilde{\phi}_a\}\,(M^{-1})_{ab}\,\{\tilde{\phi}_b, g\},$$
where $(M^{-1})_{ab}$ denotes the $ab$ entry of $M$'s inverse matrix. Dirac proved that $M$ will always be invertible. It is straightforward to check that the above definition of the Dirac bracket satisfies all of the desired properties, and especially the last one, of vanishing for an argument which is a second-class constraint. When applying canonical quantization on a constrained Hamiltonian system, the commutator of the operators is supplanted by $i\hbar$ times their classical Dirac bracket. Since the Dirac bracket respects the constraints, one need not be careful about evaluating all brackets before using any weak equations, as is the case with the Poisson bracket. Note that while the Poisson bracket of bosonic (Grassmann-even) variables with itself must vanish, the Poisson bracket of fermions represented as Grassmann variables with itself need not vanish. This means that in the fermionic case it is possible for there to be an odd number of second-class constraints.

Illustration on the example provided
Returning to the above example, the naive Hamiltonian and the two primary constraints are
$$H = qV(x, y), \qquad \phi_1 = p_x + \frac{qB}{2c}\,y, \qquad \phi_2 = p_y - \frac{qB}{2c}\,x.$$
Therefore, the extended Hamiltonian can be written
$$H^* = qV(x, y) + u_1 \phi_1 + u_2 \phi_2.$$
The next step is to apply the consistency conditions $\{\phi_j, H^*\} \approx 0$, which in this case become
$$\{\phi_1, H^*\} \approx -q\,\partial_x V + u_2\,\frac{qB}{c} \approx 0, \qquad \{\phi_2, H^*\} \approx -q\,\partial_y V - u_1\,\frac{qB}{c} \approx 0.$$
These are not secondary constraints, but conditions that fix $u_1$ and $u_2$:
$$u_1 = -\frac{c}{B}\,\partial_y V, \qquad u_2 = \frac{c}{B}\,\partial_x V.$$
Therefore, there are no secondary constraints and the arbitrary coefficients are completely determined, indicating that there are no unphysical degrees of freedom. If one plugs in the values of $u_1$ and $u_2$, then one can see that the equations of motion are
$$\dot{x} = \{x, H^*\} = -\frac{c}{B}\,\partial_y V, \qquad \dot{y} = \{y, H^*\} = \frac{c}{B}\,\partial_x V,$$
which are self-consistent and coincide with the Lagrangian equations of motion. A simple calculation confirms that $\phi_1$ and $\phi_2$ are second-class constraints, since
$$\{\phi_1, \phi_2\} = \frac{qB}{c},$$
hence the matrix $M$ looks like
$$M = \frac{qB}{c}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$
which is easily inverted to
$$(M^{-1})_{ab} = -\frac{c}{qB}\,\epsilon_{ab},$$
where $\epsilon_{ab}$ is the Levi-Civita symbol. Thus, the Dirac brackets are defined to be
$$\{f, g\}_{DB} = \{f, g\} + \frac{c}{qB}\,\epsilon_{ab}\,\{f, \phi_a\}\{\phi_b, g\}.$$
If one always uses the Dirac bracket instead of the Poisson bracket, then there is no issue about the order of applying constraints and evaluating expressions, since the Dirac bracket of anything weakly zero is strongly equal to zero. This means that one can just use the naive Hamiltonian with Dirac brackets, instead, to thus get the correct equations of motion, which one can easily confirm on the above ones. To quantize the system, the Dirac brackets between all of the phase-space variables are needed.
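Before listing them, the key property claimed above — that a second-class constraint has vanishing Dirac bracket with any quantity — can be checked in one line directly from the definition:
$$\{\tilde{\phi}_a, g\}_{DB} = \{\tilde{\phi}_a, g\} - \underbrace{\{\tilde{\phi}_a, \tilde{\phi}_b\}}_{M_{ab}}\,(M^{-1})_{bc}\,\{\tilde{\phi}_c, g\} = \{\tilde{\phi}_a, g\} - \delta_{ac}\,\{\tilde{\phi}_c, g\} = 0.$$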
The nonvanishing Dirac brackets for this system are
$$\{x, y\}_{DB} = -\frac{c}{qB}, \qquad \{x, p_x\}_{DB} = \{y, p_y\}_{DB} = \frac{1}{2},$$
while the cross-terms $\{x, p_y\}_{DB}$ and $\{y, p_x\}_{DB}$ vanish, and
$$\{p_x, p_y\}_{DB} = -\frac{qB}{4c}.$$
Therefore, the correct implementation of canonical quantization dictates the commutation relations
$$[\hat{x}, \hat{y}] = -\frac{i\hbar c}{qB}, \qquad [\hat{x}, \hat{p}_x] = [\hat{y}, \hat{p}_y] = \frac{i\hbar}{2},$$
with the cross terms vanishing, and
$$[\hat{p}_x, \hat{p}_y] = -\frac{i\hbar qB}{4c}.$$
This example has a nonvanishing commutator between $\hat{x}$ and $\hat{y}$, which means this structure specifies a noncommutative geometry. (Since the two coordinates do not commute, there will be an uncertainty principle for the $x$ and $y$ positions.)

Further illustration for a hypersphere
Similarly, for free motion on a hypersphere $S^n$, the $n + 1$ coordinates are constrained, $x_i x_i = 1$. From a plain kinetic Lagrangian, it is evident that their momenta are perpendicular to them, $x_i p_i = 0$. Thus the corresponding Dirac brackets are likewise simple to work out,
$$\{x_i, x_j\}_{DB} = 0, \qquad \{x_i, p_j\}_{DB} = \delta_{ij} - x_i x_j, \qquad \{p_i, p_j\}_{DB} = x_j p_i - x_i p_j.$$
The $2(n + 1)$ constrained phase-space variables $(x_i, p_i)$ obey much simpler Dirac brackets than the $2n$ unconstrained variables, had one eliminated one of the $x$s and one of the $p$s through the two constraints ab initio, which would obey plain Poisson brackets. The Dirac brackets add simplicity and elegance, at the cost of excessive (constrained) phase-space variables.

For example, for free motion on a circle, $n = 1$, eliminating $y$ from the circle constraint $x^2 + y^2 = 1$ yields the unconstrained
$$L = \frac{\dot{x}^2}{2\left(1 - x^2\right)},$$
with equations of motion
$$\ddot{x} = -\frac{x\,\dot{x}^2}{1 - x^2},$$
an oscillation; whereas the equivalent constrained system with
$$H = \tfrac{1}{2}\left(p_x^2 + p_y^2\right), \qquad x^2 + y^2 = 1, \qquad x p_x + y p_y = 0$$
yields
$$\dot{x}_i = \{x_i, H\}_{DB} = p_i, \qquad \dot{p}_i = \{p_i, H\}_{DB} = -\left(p_k p_k\right) x_i,$$
whence, instantly, virtually by inspection, oscillation for both variables,
$$\ddot{x} = -\left(p_k p_k\right) x, \qquad \ddot{y} = -\left(p_k p_k\right) y.$$

See also
Canonical quantization
Hamiltonian mechanics
Poisson bracket
Moyal bracket
First class constraint
Second class constraints
Lagrangian
Symplectic structure
Overcompleteness

References
Mathematical quantization Symplectic geometry Hamiltonian mechanics
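The magnetic-field example's brackets can also be double-checked mechanically. The following is a small symbolic verification aid of our own (not part of the original treatment), assuming SymPy and the constraints as written above; the symbol names are ours.

```python
import sympy as sp

# Hedged symbolic check of the Dirac brackets quoted above, using the
# constraints phi1 = px + (qB/2c) y and phi2 = py - (qB/2c) x.

x, y, px, py, q, B, c = sp.symbols("x y p_x p_y q B c", real=True)
Q, P = [x, y], [px, py]

def pb(f, g):  # canonical Poisson bracket on (x, y, px, py)
    return sum(sp.diff(f, Qi)*sp.diff(g, Pi) - sp.diff(f, Pi)*sp.diff(g, Qi)
               for Qi, Pi in zip(Q, P))

phi = [px + q*B/(2*c)*y, py - q*B/(2*c)*x]
M = sp.Matrix(2, 2, lambda a, b: pb(phi[a], phi[b]))
Minv = M.inv()

def db(f, g):  # Dirac bracket built from the constraint matrix M
    return sp.simplify(pb(f, g) - sum(pb(f, phi[a])*Minv[a, b]*pb(phi[b], g)
                                      for a in range(2) for b in range(2)))

print(db(x, y))    # expect -c/(q*B)
print(db(x, px))   # expect 1/2
print(db(px, py))  # expect -q*B/(4*c)
```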
Dirac bracket
[ "Physics", "Mathematics" ]
3,662
[ "Theoretical physics", "Classical mechanics", "Quantum mechanics", "Hamiltonian mechanics", "Mathematical quantization", "Dynamical systems" ]
13,658,011
https://en.wikipedia.org/wiki/On%20Thermonuclear%20War
On Thermonuclear War is a book by Herman Kahn, a military strategist at the RAND Corporation, although it was written only a year before he left RAND to form the Hudson Institute. It is a controversial treatise on the nature and theory of war in the thermonuclear weapon age. In it, Kahn addresses the strategic doctrines of nuclear war and its effect on the international balance of power. Kahn's stated purpose in writing the book was "avoiding disaster and buying time, without specifying the use of this time." The title of the book was inspired by the classic volume On War, by Carl von Clausewitz. Widely read on both sides of the Iron Curtain—the book sold 30,000 copies in hardcover—it is noteworthy for its views on the lack of credibility of a purely thermonuclear deterrent and how a country could "win" a nuclear war. Kahn used the term Doomsday Machine in the book as a rhetorical device to show the limits of John von Neumann's strategy of mutual assured destruction, or MAD.

Reception
Of the book, Hubert H. Humphrey said: "New thoughts, particularly those which contradict current assumptions, are always painful for the human mind to contemplate. On Thermonuclear War is filled with such thoughts."

In popular culture
Kahn is sometimes credited with having coined the term megadeath in this book as shorthand for one million deaths. However, though the book does describe deaths in terms of millions, it does not contain the term megadeath, and Kahn was quoted as having said in a 1982 interview, "I know of no analyst who uses the term megadeath. The peace people use it and they think we use it." The book The Worlds of Herman Kahn contains a footnote describing the use of the term in the headline of a 1963 article by Marcus Raskin. Lines from the character General Buck Turgidson in Stanley Kubrick's 1964 film Dr. Strangelove directly mimic passages from this book, such as Turgidson's phrase "two admittedly regrettable, but nevertheless, distinguishable post-war environments", which reflects a chart from this book labeled "Tragic but Distinguishable Postwar States". Indeed, the folder that General Turgidson holds while reading a report on projected nuclear war casualties is titled "Global Targets in Megadeaths". Tom Clancy's 1991 political thriller The Sum of All Fears quotes a passage from this book in the introduction.

Publication
First published in 1960 by the Princeton University Press, it was republished as a paperback by Transaction Publishers in 2007.

References
Sources

External links
Essays about and by Herman Kahn
RAND Corporation unclassified papers by Herman Kahn, 1948–59
"Fat Man: Herman Kahn and the nuclear age", Louis Menand, The New Yorker, June 19, 2005
1960 non-fiction books Military strategy books Nuclear weapons Nuclear warfare Works about the theory of history Princeton University Press books
On Thermonuclear War
[ "Chemistry" ]
613
[ "Radioactivity", "Nuclear warfare" ]
13,658,649
https://en.wikipedia.org/wiki/Email%20bankruptcy
Email bankruptcy is deleting or ignoring all emails older than a certain date, due to an overwhelming volume of messages. The term is usually attributed to author Lawrence Lessig in 2004, though it can also be attributed to Sherry Turkle in 2002. An insurmountable volume or backlog of legitimate messages (e.g., on return from an extended absence) usually leads to bankruptcy. During the act of declaring email bankruptcy, a message is usually sent to all senders explaining the problem, that their message has been deleted, and that if their message still requires a response they should resend it. Similarly, the inability to maintain an overview of messages in an instant messenger chat room may be referred to as chat room bankruptcy.

References
Email Internet terminology
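The mechanical half of the practice — sweeping the old backlog out of the inbox — is easy to script. Below is a minimal sketch assuming an IMAP mailbox; the server, credentials, and the "Bankruptcy-Archive" folder are hypothetical placeholders (the folder must already exist), and the notice to senders is left as a separate manual step.

```python
import imaplib
from datetime import date

# Hedged sketch of "declaring email bankruptcy": archive everything older
# than a cutoff date. All account details below are invented placeholders.
CUTOFF = date(2024, 1, 1)

with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("user@example.com", "app-password")
    imap.select("INBOX")
    # IMAP expects dates formatted like "01-Jan-2024".
    stamp = CUTOFF.strftime("%d-%b-%Y")
    _, data = imap.search(None, f"(BEFORE {stamp})")
    ids = data[0].split()
    for msg_id in ids:
        # Move to an archive folder instead of deleting outright.
        imap.copy(msg_id, "Bankruptcy-Archive")
        imap.store(msg_id, "+FLAGS", "\\Deleted")
    imap.expunge()
    print(f"Archived {len(ids)} messages older than {CUTOFF}.")
```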
Email bankruptcy
[ "Technology" ]
154
[ "Computing terminology", "Internet terminology" ]
13,658,846
https://en.wikipedia.org/wiki/Hassan%20Jandoubi
Hassan Jandoubi was a French national (1 March 1966, Toulouse – 21 September 2001, Toulouse) of Tunisian parents, who died on 21 September 2001, in the AZF chemical factory explosion in Toulouse in south-western France. He was subsequently investigated by French anti-terrorist authorities as the prime suspect in the blast. An official enquiry later determined the blast was accidental, and not a result of Jandoubi's actions.

Early life
Jandoubi had been known to French police as the suspected ringleader of a gang trafficking stolen cars between France and Germany. He became an active member of a mosque in the Toulouse suburbs where he was "initiated to fundamentalism". He was known by locals and police to be part of a gang seen celebrating the September 11 terror attacks; however, at the time of his death his name was not included on lists of fundamentalist terrorist suspects maintained by Interpol, the French intelligence service, or the counter-espionage agency DST. Jandoubi was hired to unload ammonium nitrate at the AZF plant by a subcontractor five days before the explosion. He was already known to local police for possible Islamic fundamentalist sympathies and was involved in several angry altercations before the blast with co-workers who were displaying the U.S. flag in sympathy with victims of the September 11 attacks.

Blast
At 10:17 on 21 September 2001, ten days after the 9/11 attacks, a massive explosion destroyed the entire AZF facility in Toulouse, killing 29 people, injuring over 3,000, and damaging 10,000 buildings, including nearby schools, hospitals, businesses, and homes. The explosion measured 3.5 on the Richter scale, and windows were blown out over five kilometres from the epicenter. 1,400 families were left homeless. The blast released an ammonia cloud that eventually settled on nearby suburbs, sending many more to hospital. On the day of the blast, Jandoubi was working in hangar 10, 30 metres from hangar 221, whose stock of 200–300 tonnes of ammonium nitrate exploded.

Investigation
French police and investigators were initially intrigued by the fact that Jandoubi was found with a mobile phone fitted with a stolen SIM card. Media interest was further aroused by the results of his autopsy, which was carried out by a doctor who had worked in the Middle East for the international aid organisation Médecins du Monde. The medical examiner noted that Jandoubi was wearing two pairs of trousers and four pairs of underpants, which reminded her "of the apparel worn by some Islamic militants going into battle or on suicide missions". Media reports in France heavily covered the fact that he was dressed in several layers of garments, and described how they were arranged "in the manner of kamikaze fundamentalists." The chief prosecutor, Michel Breard, barred police and investigators from searching Jandoubi's apartment for five days after the explosion. When the apartment was finally entered, it was found cleaned out of his clothes, personal effects, and photos. His girlfriend, living in the apartment, stated she had destroyed his belongings in order to better overcome the tragedy. Ten seconds before the major explosion, witnesses reported a primary explosion and many personnel electrocutions in the AZF facility. Jandoubi's body was found deeply burnt, but not his clothes. Furthermore, the colour of his eyes was blue instead of their natural black colour. An alternative hypothesis concerning Jandoubi's death could therefore be electrocution, and not a suicide attack.
On this hypothesis, the current flowing through his body but not through his clothes would have burnt him internally, and his blue eyes could be the result of electric cataracts.

References
French Muslims French people of Tunisian descent 1966 births 2001 deaths People from Toulouse Deaths from explosion
Hassan Jandoubi
[ "Chemistry" ]
752
[ "Deaths from explosion", "Explosions" ]
13,659,583
https://en.wikipedia.org/wiki/Ethics%20of%20artificial%20intelligence
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence and existential risks. Some application areas may also have particularly important ethical implications, like healthcare, education, criminal justice, or the military. Machine ethics Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs. There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low. A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical. Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions. And large language models are capable of approximating human moral judgments. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc. In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms, while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers". Robot ethics The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. 
Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.

Ethical principles
In a review of 84 ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle – explicability.

Current challenges

Algorithmic biases
AI has become increasingly integral to facial and voice recognition systems. These systems may be vulnerable to biases and errors introduced by their human creators. Notably, the data used to train them can have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's. The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon terminated its use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that included mostly male candidates. The algorithms learned the biased pattern from the historical data, and generated predictions where these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus—the source material the algorithm uses to learn about the relationships between different words. Large companies such as IBM, Google, etc. that provide significant funding for research and development have made efforts to research and address these biases. One potential solution is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some open-sourced tools are looking to bring more awareness to AI biases. However, there are also limitations to the current landscape of fairness in AI, due to the intrinsic ambiguities in the concept of discrimination, both at the philosophical and legal level. Facial recognition was shown to be biased against those with darker skin tones.
AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment. Oftentimes the systems are able to easily detect the faces of white people while being unable to register the faces of people who are black. This has led to bans on police usage of AI materials or software in some U.S. states. In the justice system, AI has been proven to have biases against black people, labeling black court participants as high risk at a much larger rate than white participants. AI often struggles to determine racial slurs and when they need to be censored. It struggles to determine when certain words are being used as slurs and when they are being used culturally. The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system was only tested on people who were white, it would make it much harder for it to interpret the facial structure and tones of other races and ethnicities. Biases often stem from the training data rather than the algorithm itself, notably when the data represents past human decisions. Injustice in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race. This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as breast cancer, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased. In criminal justice, the COMPAS program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk". Another example is within Google's ads, which targeted men with higher-paying jobs and women with lower-paying jobs. It can be hard to detect AI biases within an algorithm, as they are often not linked to the actual words associated with bias. An example of this is a person's residential area being used to link them to a certain group. This can lead to problems, as oftentimes businesses can avoid legal action through this loophole. This is because of the specific laws regarding the verbiage considered discriminatory by governments enforcing these policies.
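The COMPAS finding above reflects a general mathematical tension: when two groups have different base rates, a score cannot be both calibrated and have equal false-positive rates (Kleinberg et al. 2016; Chouldechova 2017). The toy sketch below illustrates this with invented counts; both groups get a perfectly calibrated two-bin score, yet their false-positive rates differ.

```python
# Hedged toy illustration of the calibration/error-rate tension. All counts
# are invented. Each group's score is perfectly calibrated (bins 0.8 and
# 0.2), but the false-positive rates diverge because base rates differ.

groups = {
    # group: (high-bin size, reoffend in high bin, low-bin size, reoffend in low bin)
    "A": (60, 48, 40, 8),   # base rate 56%
    "B": (20, 16, 80, 16),  # base rate 32%
}

for name, (n_hi, pos_hi, n_lo, pos_lo) in groups.items():
    # Calibration: fraction actually reoffending within each score bin.
    calib_hi, calib_lo = pos_hi / n_hi, pos_lo / n_lo
    # False-positive rate: non-reoffenders labeled "high-risk".
    fpr = (n_hi - pos_hi) / ((n_hi - pos_hi) + (n_lo - pos_lo))
    print(f"group {name}: P(reoffend|high)={calib_hi:.2f}, "
          f"P(reoffend|low)={calib_lo:.2f}, FPR={fpr:.2f}")
# Both groups print 0.80 / 0.20 calibration, but FPRs of 0.27 vs 0.06.
```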
Language bias
Since current large language models are predominantly trained on English-language data, they often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.

Gender bias
Large language models often reinforce gender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, they might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.

Political bias
Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.

Stereotyping
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.

Dominance by tech giants
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.

Open-source
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Organizations like Hugging Face and EleutherAI have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as Gemma, Llama 2 and Mistral. However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on Transparency of Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales of transparency for different stakeholders. There are also concerns that releasing AI models may lead to misuse. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do. Furthermore, open-weight AI models can be fine-tuned to remove any counter-measure, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they gain the ability to create bioweapons or to automate cyberattacks. OpenAI, initially committed to an open-source approach to the development of artificial general intelligence (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons.
Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.

Transparency
Approaches like machine learning with neural networks can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or to people rejecting the use of such systems. This has led to advocacy and, in some jurisdictions, legal requirements for explainable artificial intelligence. Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do. In healthcare, the use of complex AI methods or techniques often results in models described as "black boxes" due to the difficulty of understanding how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.

Accountability
A special case of the opaqueness of AI is that caused by it being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency. This can cause people to overlook whether either human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act, sets out to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This potentially includes AI audits.

Regulation
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller. Similarly, according to a five-country study by KPMG and the University of Queensland Australia in 2021, 66–79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully. Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.

Emergent or potential future challenges

Increasing use
AI has been slowly making its presence more known throughout the world, from chat bots that seemingly have answers for every homework question to generative artificial intelligence that can create a painting about whatever one desires. AI has become increasingly popular in hiring markets, from the ads that target certain people according to what they are looking for to the inspection of applications of potential hires. Events such as COVID-19 have only sped up the adoption of AI programs in the application process, due to more people having to apply electronically, and with this increase in online applicants the use of AI made the process of narrowing down potential employees easier and more efficient. AI has become more prominent as businesses have to keep up with the times and the ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI. As tensor processing units (TPUs) and graphics processing units (GPUs) become more powerful, AI capabilities also increase, forcing companies to use them to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees. AI has also seen increased usage in criminal justice and healthcare. For medicinal means, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called clinical decision support systems (DSS). AI's future in healthcare may develop into something further than just recommended treatments, such as referring certain patients over others, leading to the possibility of inequalities.

Robot rights
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. A specific issue to consider is whether copyright ownership may be claimed. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law. The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. Joanna Bryson has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society. Pressure groups to recognise 'robot rights' significantly hinder the establishment of robust international safety regulations. AI welfare In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the global workspace theory or the integrated information theory. Edelman notes one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances. Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged. These include OpenAI founder Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022, David Chalmers argued that it was unlikely current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future. In the ethics of uncertain sentience, the precautionary principle is often invoked. According to Carl Shulman and Nick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of subjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the hedonic treadmill. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity. 
Threat to human dignity
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
A soldier
A judge
A police officer
A therapist (as was proposed by Kenneth Colby in the 70s)
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers." Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Liability for self-driving cars
As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. There have been debates about the legal liability of the responsible party if these cars get into accidents. In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hands of computers. This led to a dilemma over who was at fault for the accident. In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death. Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are just technologies that, while convenient, are not a complete substitute.
Before autonomous cars become widely used, these issues need to be tackled through new policies. Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions, since they have the potential of inflicting harm. The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.

Weaponization
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively. In 2024, the Defense Advanced Research Projects Agency funded a program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities. Research has studied how to make autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override. There has been a recent outcry with regard to the engineering of artificial intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea.
Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future. "If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry. Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence. Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them". Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects. Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making. A summit was held in 2023 in The Hague on the issue of using AI responsibly in the military domain.
Singularity
Vernor Vinge, among numerous others, has suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "the Singularity" and is the central point of discussion in the philosophy of Singularitarianism. While opinions vary as to the ultimate fate of humanity in the wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence have become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large. Many researchers have argued that, through an intelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that an artificial superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.
Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help humans enhance themselves. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come equipped with the kind of common-sense adaptations that evolution built into human minds. AI researchers such as Stuart J. Russell, Bill Hibbard, Roman Yampolskiy, Shannon Vallor, Steven Umbrello and Luciano Floridi have proposed design strategies for developing beneficial machines.
Solutions and approaches
To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include Meta's Llama Guard, which focuses on improving the safety and alignment of large AI models, and Preamble's customizable guardrail platform. These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including prompt injection attacks, by embedding ethical guidelines into the functionality of AI models. Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a particular focus of these developments. Some approaches use customizable policies and rules to analyze both inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated. Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters, or leveraging real-time monitoring mechanisms to identify and address vulnerabilities. These efforts reflect a broader trend toward designing artificial intelligence systems with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.
Institutions in AI policy & ethics
There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal. Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. Apple joined in January 2017. The corporate members make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board. The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from inside and outside the organization.
The IEEE's Ethics of Autonomous Systems initiative aims to address ethical dilemmas related to decision-making and their impact on society while developing guidelines for the development and use of autonomous systems. In particular, in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values. Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as by transnational government and non-government organizations, to ensure AI is ethically applied. AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms; it therefore needs to be incentivized.
Intergovernmental initiatives
The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this group published its "Ethics Guidelines for Trustworthy Artificial Intelligence". The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020. The European Commission also proposed the Artificial Intelligence Act. The OECD established an OECD AI Policy Observatory. In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, the first global standard on the ethics of AI.
Governmental initiatives
In the United States, the Obama administration put together a Roadmap for AI Policy and released two prominent white papers on the future and impact of AI. In 2019 the White House, through an executive memo known as the "American AI Initiative", instructed NIST (the National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019). In January 2020, in the United States, the Trump Administration released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" (the "OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers to the use of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy protections, but no further detail on enforcement. The advancement of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumvent any state laws and regulations that a market might see as too onerous to fulfill. The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report, A 20-Year Community Roadmap for Artificial Intelligence Research in the US. The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI. In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was driven by the Analytical Center for the Government of the Russian Federation together with major commercial and academic institutions such as Sberbank, Yandex, Rosatom, the Higher School of Economics, the Moscow Institute of Physics and Technology, ITMO University, Nanosemantics, Rostelecom, CIAN and others.
Academic initiatives
There are three research institutes at the University of Oxford that are centrally focused on AI ethics. The Future of Humanity Institute focuses both on AI safety and on the governance of AI. The Institute for Ethics in AI, directed by John Tasioulas, has as a primary goal, among others, to promote AI ethics as a proper field in comparison to related applied ethics fields. The Oxford Internet Institute, directed by Luciano Floridi, focuses on the ethics of near-term AI technologies and ICTs. The Centre for Digital Governance at the Hertie School in Berlin was co-founded by Joanna Bryson to research questions of ethics and technology. The AI Now Institute at NYU is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes of bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure. The Institute for Ethics and Emerging Technologies (IEET) researches the effects of AI on unemployment and policy. The Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich, directed by Christoph Lütge, conducts research across various domains such as mobility, employment, healthcare and sustainability. Barbara J. Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, has initiated Embedded EthiCS in Harvard's computer science curriculum to develop a future generation of computer scientists with a worldview that takes into account the social impact of their work.
Private organizations
Algorithmic Justice League
Black in AI
Data for Black Lives
History
Historically speaking, the investigation of the moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being, and so does Descartes, who describes what could be considered an early version of the Turing test. The Romantic period several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unhinged technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society. In the 1950s, Isaac Asimov considered the issue of how to control machines in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers or of its owner/operator. Eliezer Yudkowsky, of the Machine Intelligence Research Institute, suggested in 2004 a need to study how to build a "Friendly AI", meaning that there should also be efforts to make AI intrinsically friendly and humane. In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. Also in 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
Role and impact of fiction
The role of fiction with regard to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics:
Historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI, but have also outlined ethical questions and common fears associated with it.
During the second half of the twentieth century and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics.
Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes, in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
TV series
While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with the ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society.
The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and the British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death & Robots have imagined scenes of robots and humans living together. The most representative of these, S02 E01, shows how bad the consequences can be when robots get out of control after humans have come to rely on them too much in their lives.
Future visions in fiction and games
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best of motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers. The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story. Detroit: Become Human is one of the most famous recent video games to discuss the ethics of artificial intelligence. Quantic Dream designed the chapters of the game using interactive storylines to give players a more immersive gaming experience. Players control three different awakened androids who, faced with different events, make choices aimed at changing the human view of the android group; different choices result in different endings. This is one of the few games that puts players in the android perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created. Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits. See also References External links Ethics of Artificial Intelligence at the Internet Encyclopedia of Philosophy Ethics of Artificial Intelligence and Robotics at the Stanford Encyclopedia of Philosophy BBC News: Games to take on a life of their own Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence. A short history of computer ethics AI Ethics Guidelines Global Inventory by Algorithmwatch Sheludko, M. (December 2023). Ethical Aspects of Artificial Intelligence: Challenges and Imperatives. Software Development Blog. Philosophy of artificial intelligence Ethics of science and technology Regulation of robots
Ethics of artificial intelligence
[ "Technology" ]
9,897
[ "Computing and society", "Regulation of artificial intelligence", "Ethics of science and technology" ]
13,660,531
https://en.wikipedia.org/wiki/Heinz%20Gerischer
Heinz Gerischer (31 March 1919 – 14 September 1994) was a German chemist who specialized in electrochemistry. He was the thesis advisor of future Nobel laureate Gerhard Ertl. The Heinz Gerischer Award of the European section of The Electrochemical Society is named in his honour.
Academic career
Gerischer studied chemistry at the University of Leipzig between 1937 and 1944, with a two-year interruption because of military service. In 1942, he was expelled from the German army because his mother was born Jewish; he was thus found “undeserving to have a part in the great victories of the German Army.” The war years were difficult for Gerischer, and his mother committed suicide on the eve of her 65th birthday in 1943. His only sister, Ruth (born in 1913), lived underground after escaping from a Gestapo prison and was subsequently killed in an air raid in 1944. In Leipzig, Gerischer joined the group of Karl-Friedrich Bonhöffer, a member of a distinguished family whose members were persecuted and murdered because of their opposition to Nazi ideology. Bonhöffer descended from the illustrious chemical lineage of Wilhelm Ostwald (1853–1932) and Walther Hermann Nernst (1864–1941). He kindled Gerischer’s interest in electrochemistry, supervising his doctoral work on oscillating reactions on electrode surfaces. Gerischer completed his doctoral thesis in 1946. Gerischer followed Bonhöffer to Berlin, where his Ph.D. supervisor had accepted the directorship of the Institute of Physical Chemistry at the Humboldt University of Berlin and also became department head at the Kaiser Wilhelm Institute for Physical Chemistry in Berlin-Dahlem (later the Fritz Haber Institute of the Max Planck Society). Gerischer himself was appointed as an “Assistent”; in 1970 he would return to the Fritz Haber Institute as its director. With the Berlin Blockade and the prevailing economic situation, post-war research was carried out under extremely difficult conditions. Gerischer met his future wife, Renate Gersdorf, at the University of Leipzig, where she was doing her diploma work with Conrad Weygand. They were married in Berlin in October 1948. In 1949 Gerischer moved his young family to Göttingen to join Bonhöffer as a research associate at the newly established Max Planck Institute for Physical Chemistry. In Berlin and Göttingen, and especially during the period from 1949 to 1955, Gerischer was interested in electrode kinetics and developed instruments and techniques for their study. It was he who developed the electronic potentiostat, the most widely used instrument of electrochemists. He also monitored fast electrode processes by double potential step and AC modulation methods. This work laid the foundation for a mechanistic interpretation of electrode reactions and had a lasting impact on our understanding of electrode kinetics. It was recognized by the newly minted Bodenstein Prize of the Deutsche Bunsen-Gesellschaft, which Gerischer and Klaus Vetter jointly received in 1953. Gerischer was appointed in 1954 to the position of Department Head and Senior Research Fellow at the Max Planck Institute for Metal Research in Stuttgart. A year later, he received the Habilitation from the University of Stuttgart for his comprehensive study of the discharge of metal ions in corrosion. The years 1954–1961 in Stuttgart were prolific, and it was here that Gerischer began his work on semiconductor electrochemistry.
It began with a short note on the electrochemistry of n-type and p-type germanium, a study that grew out of a seminar on solid state physics at the university where the recent results of Brattain and Garrett on germanium were discussed. Gerischer recognized the theoretical implications of semiconductor electrochemistry in charge transfer and its potential applications in photochemistry and photovoltaic devices. His papers considered the differentiation between Faradaic reactions of electrons and holes (1959), the theory of electron tunneling at semiconductor-electrolyte interfaces, solution Fermi levels, and densities of states. He extended his studies to metal electrodes, which he studied with his electronic potentiostat (1957), to stress corrosion (1957), to hydrogen evolution and hydrogen adatom formation (1957), to fast electrode processes (1960), and to the reaction kinetics of water dissociation, which he probed by the microwave pulse method (1961). His work was recognized by his appointment as Associate Professor (“Extraordinariat”) in Electrochemistry at the Technical University Munich in 1962–63, followed by his promotion to full professor in 1964 and his appointment as Director of the Institute of Physical Chemistry and Electrochemistry. The 1964–1968 period witnessed a flurry of studies from his group on photoelectrochemistry and photosensitization on electrode materials such as ZnO, CdS, GaAs, silver halides, anthracene, and perylene. In 1969–1970 he was named Dean of Natural Sciences at the Technical University Munich. Gerischer returned to Berlin in 1970 to assume the directorship of the Fritz Haber Institute of the Max Planck Society, where he continued his studies of electrode kinetics, semiconductor electrochemistry, and photoelectrochemistry. After becoming Emeritus Director of the Institute, he worked with Adam Heller in 1990–1991 at the University of Texas at Austin on the rate-controlling role of adsorbed oxygen in titania-assisted photocatalytic processes. His honors and awards included the Olin Palladium Award of the Electrochemical Society (1977), the Centenary Lectureship of the Chemical Society, London (1979), the DECHEMA Medal, Frankfurt (1982), the Electrochemistry Group Medal, The Royal Society of Chemistry, London (1987), the Galvani Medal, The Italian Chemical Society (1988), and the Bruno Breyer Medal, The Royal Australian Chemical Institute (1992).
Selected contributions
Relating Concentration Polarizations and Electrode Potentials (Kaiser Wilhelm Inst., Berlin, 1951): “Concentration polarization due to the initial chemical reaction in electrolytes and its contribution to the stationary polarization resistance corresponding to the equilibrium potential.” Gerischer, Heinz; Vetter, Klaus J.; Z. physik. Chem. (1951) 197, 92–104.
Theory of AC Electrochemistry (Max Planck Inst. Phys. Chem., Göttingen, 1951): “Alternating-current polarization of electrodes with a potential-determining step for equilibrium potential.” Gerischer, H.; Z. physik. Chem. (1951) 198, 286–313.
Discovery of Radicals on Electrodes (Max Planck Inst. Phys. Chem., Göttingen, 1956): “Catalytic decomposition of hydrogen peroxide on metallic platinum.” Gerischer, R.; Gerischer, H.; Z. physik. Chem. (1956) 6, 178–200.
Observation of the Different Electrochemical Etching Rates of p- and n-Type Semiconductors (Max Planck Inst. Metallforsch., Stuttgart, 1957): “Solution of n- and p-germanium in aqueous electrolyte solution under the action of oxidizing agents.” Gerischer, H.; Beck, F.; Z. physik. Chem. (1957) 13, 389–95.
Invention of the Potentiostat (Max Planck Inst. Metallforsch., Stuttgart, 1957): “The electronic potentiostat and its application in the investigation of fast electrode reactions.” Gerischer, H.; Staubach, K. E.; Z. Elektrochem. (1957) 61, 789–94.
Explanation of Stress Corrosion (Max-Planck-Inst. Metallforschung, Stuttgart, 1957): “Electrochemical processes in stress corrosion.” Gerischer, H.; Werkstoffe u. Korrosion (1957) 8, 394–401.
Discovery of Adatoms, the Existence of Adsorbed Atoms on Electrodes (Max-Planck-Inst. Metallforschung, Stuttgart, 1958): “Mechanism of electrolytic discharge of hydrogen and adsorption energy of atomic hydrogen.” Gerischer, H.; Bull. soc. chim. Belges (1958) 67, 506–27.
Observation of Differently Reacting Valence and Conduction Band Carriers (Max-Planck-Inst. Metallforschung, Stuttgart, 1959): “Oxidation-reduction processes in germanium electrodes.” Beck, F.; Gerischer, H.; Z. Elektrochem. (1959) 63, 943–50.
Relating Band Positions to Electrode Kinetics (Max-Planck-Inst. Metallforsch., Stuttgart, 1960): “Kinetics of oxidation-reduction reactions on metals and semiconductors. I & II. General remarks on the electron transition between a solid body and a reduction-oxidation electrolyte.” Gerischer, H.; Z. physik. Chem. (1960) 26, 223–47; 325–38; (1961) 27, 48–79.
On the Use of Single Crystal Electrodes (Techn. Hochsch. Munich, 1963): “Preparation of spherical single crystal electrodes for use in electrocrystallization studies.” Roe, D. K.; Gerischer, H.; J. Electrochem. Soc. (1963) 110, 350–352.
Role of Surface States in Electron Transfer at Semiconductor-Solution Interfaces (Tech. Hochsch., Munich, 1967): “Surface activity in redox reactions on semiconductors.” Gerischer, H.; Wallem-Mattes, I.; Zeitschrift für Physikalische Chemie (1967) 52, 60–72.
Dye Photosensitization of Zinc Oxide (Tech. Hochsch., Munich, 1969): “Electrochemical studies on the mechanism of sensitization and supersensitization of zinc oxide single crystals.” Tributsch, H.; Gerischer, H.; Berichte der Bunsen-Gesellschaft (1969) 73, 251–60. “Use of semiconductor electrodes in the study of photochemical reactions.” Tributsch, H.; Gerischer, H.; Berichte der Bunsen-Gesellschaft (1969) 73, 850–4.
Electrochemistry of Electronically Excited States (Fritz-Haber-Institut der MPG, 1973): “Elektrodenreaktionen mit angeregten elektronischen Zuständen” [Electrode reactions with excited electronic states]. Gerischer, H.; Ber. Bunsenges. Phys. Chem. (1973) 77, 284–288.
Semiconductor Photodecomposition (Fritz-Haber-Institut der MPG, 1977): “On the stability of semiconductor electrodes against photodecomposition.” Gerischer, H.; J. Electroanal. Chem. (1977) 82, 133–143.
Relating Fermi Levels to Redox Potentials (Fritz-Haber-Inst., Max-Planck-Ges., Berlin, 1983): “Fermi levels in electrolytes and the absolute scale of redox potentials.” Gerischer, H.; Ekardt, W.; Appl. Phys. Lett. (1983) 43, 393–5.
References External links 1919 births 1994 deaths 20th-century German chemists Scientists from Wittenberg Electrochemists Academic staff of the Technical University of Munich Max Planck Institute directors Leipzig University alumni Max Planck Society people Academic staff of the University of Stuttgart
Heinz Gerischer
[ "Chemistry" ]
2,393
[ "Electrochemistry", "Electrochemists" ]
13,660,595
https://en.wikipedia.org/wiki/Emil%20Rupp
Philipp Heinrich Emil Rupp (1 July 1898 – 10 April 1979) was a German physicist, regarded by many as a respectable and important experimentalist in the late 1920s. He was later forced to recant all five of the papers he had published in 1935, admitting that his findings and experiments had been fictions. There is evidence that most, if not all, of his earlier experimental results were forged as well.
Canal ray experiments
In 1926 Rupp's canal ray experiments seemed to corroborate Albert Einstein's theories on wave–particle duality. He published these results in a paper that was printed next to a theoretical paper on the same subject by Einstein, who evidently accepted Rupp's alleged findings as confirming his (Einstein's) theoretical model. Rupp's experimental results were later shown to have been falsified (although subsequent experimental work re-confirmed Einstein's model).
Exposure of fraud
Although the validity of Rupp's experimental results had been challenged repeatedly by other workers in the field throughout his career, it was not until 1935 that his misdeeds were fully exposed. That year, the experimentalists Walther Gerlach and Eduard Rüchardt published a corrected version of Einstein's mirror diagram in an article that argued that Rupp had falsely claimed to have carried out the rotated mirror experiment. Some fellow physicists at the AEG labs grew suspicious of Rupp when he claimed to have accelerated protons at 500 kV, something for which he lacked the technical facilities. Rupp had to publicly retract five publications from the previous year. He attached a psychiatric diagnosis stating that he had written them under the influence of "dreamlike states" caused by psychasthenia. Rupp never worked again as a physicist, and all other physicists ceased to refer to any of his alleged results. See also List of experimental errors and frauds in physics References Further reading French, A.P.: "The strange case of Emil Rupp", Physics in Perspective, Volume 1, Issue 1, pp. 3–21 (1999) (abstract) van Dongen, Jeroen: "Emil Rupp, Albert Einstein and the Canal Ray Experiments on Wave–Particle Duality: Scientific Fraud and Theoretical Bias", Historical Studies in the Physical and Biological Sciences 37 Suppl. (2007), 73–120. van Dongen, Jeroen: "The interpretation of the Einstein-Rupp experiments and their influence on the history of quantum mechanics", Historical Studies in the Physical and Biological Sciences, 37 Suppl. (2007), 121–131. A summary of original documents, which also includes a letter (in German) from Arnold Sommerfeld to Rupp requesting details of an experiment on electron refraction, 30 January 1930. German nuclear physicists German fraudsters Academic scandals Atomic physics 1898 births 1979 deaths People involved in scientific misconduct incidents
Emil Rupp
[ "Physics", "Chemistry" ]
588
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
13,662,027
https://en.wikipedia.org/wiki/Colloid%20vibration%20current
Colloid vibration current is an electroacoustic phenomenon that arises when ultrasound propagates through a fluid that contains ions and either solid particles or emulsion droplets. The pressure gradient in an ultrasonic wave moves particles relative to the fluid. This motion disturbs the double layer that exists at the particle-fluid interface. The picture illustrates the mechanism of this distortion. Practically all particles in fluids carry a surface charge. This surface charge is screened by a diffuse layer of equal magnitude and opposite charge; this structure is called the double layer. Ions of the diffuse layer are located in the fluid and can move with the fluid. Fluid motion relative to the particle drags these diffuse ions towards one or the other of the particle's poles. The picture shows ions dragged towards the left-hand pole. As a result of this drag, there is an excess of negative ions in the vicinity of the left-hand pole and an excess of positive surface charge at the right-hand pole. As a result of this charge excess, particles gain a dipole moment. These dipole moments generate an electric field that, in turn, generates a measurable electric current. This phenomenon is widely used for measuring the zeta potential of concentrated colloids. See also Electric sonic amplitude Electroacoustic phenomena Interface and colloid science Zeta potential References Chemical mixtures Colloidal chemistry Soft matter
Colloid vibration current
[ "Physics", "Chemistry", "Materials_science" ]
268
[ "Materials science stubs", "Colloidal chemistry", "Soft matter", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Condensed matter stubs" ]
13,662,141
https://en.wikipedia.org/wiki/Nutation%20%28botany%29
Nutation refers to the bending movements of stems, roots, leaves and other plant organs caused by differences in growth in different parts of the organ. Circumnutation refers specifically to the circular movements often exhibited by the tips of growing plant stems, caused by repeating cycles of differences in growth around the sides of the elongating stem. Nutational movements are usually distinguished from 'variational' movements caused by temporary differences in the water pressure inside plant cells (turgor). Simple nutation occurs in flat leaves and flower petals, caused by unequal growth of the two sides of the surface. For example, in young leaf buds the outer surface of each leaflet grows faster, causing it to curve over its neighbors and form a compact bud. As the bud expands, growth becomes more rapid on the inner surface of the leaves, causing the bud to open and the leaves to flatten out. Similar inequality of growth, but more sharply localized, leads to the folding and rolling of the leaf in the bud, and to the changing shapes of flower petals. Circumnutational movements are most obvious in growing seedlings, where the combination of circular movement and upward growth causes the tip to move up in a spiral path. The first detailed analysis of circumnutation was Charles Darwin's The Power of Movement in Plants; he concluded that most plant movements were modifications of circumnutation, but many counterexamples are now known. Circumnutation is not a direct response to gravity or the direction of illumination, but these factors and many physiological processes can influence its direction, timing and amplitude. Although the function of circumnutation in most plants is not known, many twining plants have adapted these movements to help them find and twine around vertical objects such as tree trunks, and to help tendrils find and wind around smaller supports. The growing tip of the vine or tendril initially swings in wide circles that maximize its chance of bumping into an obstacle (a potential support). Once an obstacle is encountered, the circles tighten, causing the vine or tendril to wind around the support as it grows.
The possible theories for plant nutations
Over the last century, studies on plant nutations have given rise to three main theories about their origin:
The theory of the "internal oscillator", first suggested by Darwin, explains plant nutations as endogenous movements.
According to the "gravitropic overshoot" theory, nutations in plant shoots might result from delayed gravitropic responses in the search for the upright pose; this theory was supported theoretically by the existence of a Hopf bifurcation in the so-called sunflower equation (a numerical sketch of this equation appears at the end of this article).
Following experiments in space that showed the persistence of nutations without gravity, some researchers proposed a "two-oscillator" model, accounting for two mechanisms (endogenous oscillations and an exogenous feedback oscillator of gravitropic, proprioceptive or other nature).
New experiments in space showed that the presence of gravity induces and amplifies oscillations of plant shoots, while confirming the occurrence of reduced nutations without it. These findings support the "two-oscillator" hypothesis, which has been revisited to account for the effect of elastic deflections due to gravity loading, previously disregarded. By means of a morphoelastic rod model, some studies showed that a Hopf-like bifurcation phenomenon occurs and that elasticity plays an important role in determining the onset of oscillations.
In particular, the plant shoot might undergo "exogenous" oscillations, which add to the "endogenous" ones, as it reaches a critical length. See also Tropism Chemotaxis References External links Nutation in plants
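Since the sunflower equation is named above but not written out, a minimal numerical sketch may help. It assumes the commonly quoted delay form x''(t) + (a/r) x'(t) + (b/r) sin x(t − r) = 0, where r is the delay in the gravitropic feedback loop; the parameter values, time step, and initial history below are illustrative assumptions, not values taken from this article.

```python
import numpy as np

# Euler integration of one common form of the "sunflower equation":
#   x''(t) + (a/r) * x'(t) + (b/r) * sin(x(t - r)) = 0
# a, b, r, dt and the initial history are illustrative assumptions.
a, b, r = 4.8, 0.19, 40.0
dt = 0.01
steps = int(2000.0 / dt)   # total simulated time of 2000 time units
lag = int(r / dt)          # delay expressed in integration steps

x = np.zeros(steps)        # inclination angle of the shoot tip
v = np.zeros(steps)        # its time derivative
x[: lag + 1] = 0.1         # constant history for t in [-r, 0]

for i in range(lag, steps - 1):
    acc = -(a / r) * v[i] - (b / r) * np.sin(x[i - lag])
    v[i + 1] = v[i] + dt * acc
    x[i + 1] = x[i] + dt * v[i + 1]

# A sustained (rather than decaying) oscillation signals the Hopf regime.
print("peak-to-peak amplitude, last quarter:", np.ptp(x[3 * steps // 4:]))
```

Sweeping the delay r while watching the printed amplitude is a simple way to look for the Hopf bifurcation mentioned above: below the critical delay the motion decays toward the upright pose, while above it a self-sustained, nutation-like oscillation appears.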
Nutation (botany)
[ "Biology" ]
779
[ "Plants", "Botany" ]
13,662,732
https://en.wikipedia.org/wiki/Rydberg%20matter
Rydberg matter is an exotic phase of matter formed by Rydberg atoms; it was predicted around 1980 by É. A. Manykin, M. I. Ozhovan and P. P. Poluéktov. It has been formed from various elements like caesium, potassium, hydrogen and nitrogen; studies have been conducted on theoretical possibilities like sodium, beryllium, magnesium and calcium. It has been suggested as a material from which diffuse interstellar bands may arise. Circular Rydberg states, where the outermost electron is found in a planar circular orbit, are the most long-lived, with lifetimes of up to several hours, and are the most common.
Physical
Rydberg matter usually consists of hexagonal planar clusters; these cannot be very big because of the retardation effect caused by the finite speed of light. Hence, they are not gases or plasmas; nor are they solids or liquids; they are most similar to dusty plasmas with small clusters in a gas. Though Rydberg matter can be studied in the laboratory by laser probing, the largest cluster reported consists of only 91 atoms, but it has been shown to be behind extended clouds in space and in the upper atmosphere of planets. Bonding in Rydberg matter is caused by delocalisation of the high-energy electrons to form an overall lower energy state. The way in which the electrons delocalise is to form standing waves on loops surrounding nuclei, creating quantised angular momentum and the defining characteristics of Rydberg matter. It is a generalised metal, by way of the quantum numbers influencing loop size, but restricted by the bonding requirement for strong electron correlation; it shows exchange-correlation properties similar to covalent bonding. Electronic excitation and vibrational motion of these bonds can be studied by Raman spectroscopy.
Lifetime
For reasons still debated by the physics community, partly because of the lack of methods to observe clusters, Rydberg matter is highly stable against disintegration by emission of radiation; the characteristic lifetime of a cluster at n = 12 is 25 seconds. Reasons given include the lack of overlap between excited and ground states, the forbidding of transitions between them, and exchange-correlation effects hindering emission through necessitating tunnelling that causes a long delay in excitation decay. Excitation plays a role in determining lifetimes, with a higher excitation giving a longer lifetime; n = 80 gives a lifetime comparable to the age of the Universe.
Excitations
In ordinary metals, interatomic distances are nearly constant through a wide range of temperatures and pressures; this is not the case with Rydberg matter, whose distances and thus properties vary greatly with excitations. A key variable in determining these properties is the principal quantum number n, which can be any integer greater than 1; the highest values reported for it are around 100. Bond distance d in Rydberg matter is given by d ≈ 2.9 n² a0, where a0 is the Bohr radius. The approximate factor 2.9 was first experimentally determined, then measured with rotational spectroscopy in different clusters. Examples of d calculated this way, along with selected values of the density D, were given in an accompanying table; representative values of d can be regenerated with the short script at the end of this article.
Condensation
Like bosons that can be condensed to form Bose–Einstein condensates, Rydberg matter can be condensed, but not in the same way as bosons. The reason for this is that Rydberg matter behaves similarly to a gas, meaning that it cannot be condensed without removing the condensation energy; ionisation occurs if this is not done.
All solutions to this problem so far involve using an adjacent surface in some way, the best being to evaporate the atoms from which the Rydberg matter is to be formed and to leave the condensation energy on the surface. Using caesium atoms, graphite-covered surfaces and thermionic converters as containment, the work function of the surface has been measured to be 0.5 eV, indicating that the cluster is between the ninth and fourteenth excitation levels.
See also
The overview provides information on Rydberg matter and possible applications in developing clean energy, catalysts, researching space phenomena, and usage in sensors.
State of matter
Disputed
The research claiming to create ultradense hydrogen Rydberg matter (with interatomic spacing of ~2.3 pm, many orders of magnitude less than in most solid matter) is disputed: ″The paper of Holmlid and Zeiner-Gundersen makes claims that would be truly revolutionary if they were true. We have shown that they violate some fundamental and very well established laws in a rather direct manner. We believe we share this scepticism with most of the scientific community. The response to the theories of Holmlid is perhaps most clearly reflected in the reference list of their article. Out of 114 references, 36 are not coauthored by Holmlid. And of these 36, none address the claims made by him and his co-authors. This is so much more remarkable because the claims, if correct, would revolutionize quantum science, add at least two new forms of hydrogen, of which one is supposedly the ground state of the element, discover an extremely dense form of matter, discover processes that violate baryon number conservation, in addition to solving humanity’s need for energy practically in perpetuity.″ References Condensed matter physics
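As a quick numerical companion to the bond-distance relation d ≈ 2.9 n² a0 quoted in the Excitations section above, here is a minimal Python sketch that regenerates representative values. The chosen n values are illustrative (n = 12 and n = 80 appear in the text); only the formula and the factor 2.9 are taken from the article itself.

```python
# Bond distance in Rydberg matter: d ≈ 2.9 * n**2 * a0,
# with the empirical factor 2.9 taken from the text above.
BOHR_RADIUS_NM = 0.0529177  # Bohr radius a0 in nanometres

def bond_distance_nm(n: int) -> float:
    """Approximate interatomic distance for principal quantum number n."""
    return 2.9 * n**2 * BOHR_RADIUS_NM

# Illustrative excitation levels, including the n = 12 and n = 80 cases
# mentioned in the Lifetime section.
for n in (1, 12, 40, 80, 100):
    print(f"n = {n:3d}:  d = {bond_distance_nm(n):9.1f} nm")
```

For n = 12 this gives d on the order of 20 nm, underscoring how dilute Rydberg matter is compared with ordinary condensed matter, where interatomic distances are a few tenths of a nanometre.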
Rydberg matter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,080
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
13,663,006
https://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935%20calendar
The 4–4–5 calendar is a method of managing accounting periods, and is a common calendar structure for some industries such as retail and manufacturing. It divides a year into four quarters of 13 weeks, each grouped into two 4-week "months" and one 5-week "month". The longer "month" may be set as the first (5–4–4), second (4–5–4), or third (4–4–5) unit. Its major advantage over a regular calendar is that each period is the same length and ends on the same day of the week, which is useful for planning manufacturing or work shifts. A disadvantage is that comparisons or trend analysis by "month" are flawed, as one month is 25% longer than the other two (whereas comparisons between weeks or to the same "month" in the previous year are still useful). Another disadvantage is that the 4–4–5 calendar has only 364 days (7 days × 52 weeks), meaning a 53rd week must be added every five or six years; this can make year-on-year comparison difficult.
52–53-week fiscal year
A variation is the 52–53-week calendar. It is used by companies that want their fiscal year to always end on the same day of the week. Any day of the week may be used, and Saturday and Sunday are common because the business may more easily be closed for counting inventory and other end-of-year accounting activities. There are two methods permitted by generally accepted accounting principles in the United States, by US Internal Revenue Code Regulation 1.441-2 and IRS Publication 538, as well as by the International Financial Reporting Standards; both are illustrated by the short script at the end of this article.
Last Saturday of the final month
Under this method, the company's fiscal year is defined as ending on the last Saturday (or other day selected) in the fiscal year end month. For example, if the fiscal year end month is August, the company's year end could fall on any date from August 25 to August 31. In particular, the last fiscal week is the one that includes August 25, and the first fiscal week of the following year is the one that includes September 1. In this scenario, the year-end date shifts from year to year within that range. The end of the fiscal year moves one day earlier on the calendar each year (or two days when there is an intervening leap day) until it would otherwise reach the date seven days before the end of the month (August 24 in this case) or earlier. At that point, it resets to the end of the month (August 31) or earlier, and the fiscal year has 53 weeks instead of 52. Under this rule, fiscal years 2024 and 2030 have 53 weeks.
Saturday nearest the end of the final month
Under this method, the company's fiscal year is defined as ending on the Saturday (or other day selected) that falls closest to the last day of the fiscal year end month. For example, if the fiscal year end month is August, the company's year end could fall on any date from August 28 to September 3. In particular, the last fiscal week is the one that includes August 28, and the first fiscal week of the following year is the one that includes September 4. For Saturday, this ends up being equivalent to the week-date rule from ISO 8601, which ensures that the first week of the year contains four or more days (i.e. its majority) of that year, and thus includes the first Thursday and January 4. In this scenario, the year-end date likewise shifts from year to year within that range. The end of the fiscal year moves one day earlier on the calendar each year (or two days when there is an intervening leap day) until it would otherwise reach the date four days before the end of the month (August 27 in this case) or earlier.
At that point, the first Saturday in the following month (September 3 or earlier in this case) becomes the date closest to the end of August; the year end resets to that date, and the fiscal year has 53 weeks instead of 52. Under this rule, fiscal years 2028 and 2033 have 53 weeks. See also Accounting period Symmetry454 References Business terms Accounting systems Management accounting Specific calendars
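Both year-end rules described above are straightforward to compute with Python's standard library. The sketch below assumes a Saturday year-end and August as the final month, matching the worked example in the text; the function names are my own, not part of any accounting standard.

```python
from datetime import date, timedelta

SATURDAY = 5  # date.weekday(): Monday == 0 ... Sunday == 6

def month_end(year: int, month: int) -> date:
    """Last calendar day of the given month."""
    if month == 12:
        return date(year, 12, 31)
    return date(year, month + 1, 1) - timedelta(days=1)

def last_saturday(year: int, month: int = 8) -> date:
    """Method 1: fiscal year ends on the last Saturday of the final month."""
    end = month_end(year, month)
    return end - timedelta(days=(end.weekday() - SATURDAY) % 7)

def nearest_saturday(year: int, month: int = 8) -> date:
    """Method 2: fiscal year ends on the Saturday nearest the month's end."""
    end = month_end(year, month)
    prev_sat = end - timedelta(days=(end.weekday() - SATURDAY) % 7)
    next_sat = prev_sat + timedelta(days=7)
    # Distances are d and 7 - d days, so a tie is impossible.
    return prev_sat if (end - prev_sat) < (next_sat - end) else next_sat

for y in range(2024, 2031):
    fy_end = last_saturday(y)
    weeks = (fy_end - last_saturday(y - 1)).days // 7  # 52, or 53 in reset years
    print(y, fy_end, f"{weeks} weeks", "| nearest-Saturday:", nearest_saturday(y))
```

Running this reproduces the 53-week years mentioned above; for example, fiscal 2024 ends on Saturday, August 31, 2024 under the last-Saturday rule, giving a 53-week year.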
4–4–5 calendar
[ "Technology" ]
855
[ "Information systems", "Accounting systems" ]
13,663,720
https://en.wikipedia.org/wiki/Chromosome%20segregation
Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes; however, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead, segregation occurs progressively following replication.
Mitotic chromatid segregation
During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome.
Meiotic chromosome and chromatid segregation
Chromosome segregation occurs at two separate stages during meiosis, called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled "interphase s" in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. Some of the recombination events occur by crossing over (involving physical exchange between two chromatids), but most recombination events involve information exchange but not physical exchange between two chromatids (see Synthesis-dependent strand annealing (SDSA)). Following recombination, chromosome segregation occurs as indicated by the stages metaphase I and anaphase I in the meiosis diagram. Different pairs of chromosomes segregate independently of each other, a process termed "independent assortment of non-homologous chromosomes". This process results in each gamete usually containing a mixture of chromosomes from both original parents. Improper chromosome segregation (see non-disjunction, disomy) can result in aneuploid gametes having either too few or too many chromosomes. The second stage at which segregation occurs during meiosis is anaphase II (see meiosis diagram). During this stage, segregation occurs by a process similar to that during mitosis, except that in this case meiosis II is not preceded by a round of DNA replication. Thus the two chromatids comprising each chromosome separate into different nuclei, so that each nucleus gets a single set of chromatids (now called chromosomes) and each nucleus becomes included in a haploid gamete (see the stages following anaphase II in the meiosis diagram). This segregation process is also facilitated by cohesin. Failure of proper segregation during meiosis II can also lead to aneuploid gametes.
Aneuploid gametes can undergo fertilization to form aneuploid zygotes, and hence lead to serious adverse consequences for progeny.
Crossovers facilitate segregation, but are not essential
Meiotic chromosomal crossover (CO) recombination facilitates the proper segregation of homologous chromosomes. This is because, at the end of meiotic prophase I, CO recombination provides a physical link that holds homologous chromosome pairs together. These linkages are established by chiasmata, which are the cytological manifestations of CO recombination. Together with the cohesion linkage between sister chromatids, CO recombination may help ensure the orderly segregation of the paired homologous chromosomes to opposite poles. In support of this, a study of aneuploidy in single spermatozoa by whole genome sequencing found that, on average, human sperm cells with aneuploid autosomes exhibit significantly fewer crossovers than normal cells. After the first chromosome segregation in meiosis I is complete, there is further chromosome segregation during the second equational division of meiosis II. Both proper initial segregation of chromosomes in anaphase I and the subsequent chromosome segregation during the equational division in meiosis II are required to generate gametes with the correct number of chromosomes. CO recombinants are produced by a process involving the formation and resolution of Holliday junction intermediates. As indicated in the figure titled "A current model of meiotic recombination", the formation of meiotic crossovers can be initiated by a double-strand break (DSB). The introduction of DSBs in DNA often employs the topoisomerase-like protein SPO11. CO recombination may also be initiated by external sources of DNA damage, such as X-irradiation, or by internal sources. There is evidence that CO recombination facilitates meiotic chromosome segregation. Other studies, however, indicate that chiasmata, while supportive, are not essential to meiotic chromosome segregation. The budding yeast Saccharomyces cerevisiae is a model organism used for studying meiotic recombination. Mutants of S. cerevisiae defective in CO recombination at the level of Holliday junction resolution were found to efficiently undergo proper chromosome segregation. The pathway that produces the majority of COs in S. cerevisiae, and possibly in mammals, involves a complex of proteins including the MLH1-MLH3 heterodimer (called MutL gamma). MLH1-MLH3 binds preferentially to Holliday junctions. It is an endonuclease that makes single-strand breaks in supercoiled double-stranded DNA and promotes the formation of CO recombinants. Double mutants deleted for both MLH3 (major pathway) and MMS4 (which is necessary for a minor Holliday junction resolution pathway) showed dramatically reduced crossing over compared to wild-type (a 6- to 17-fold reduction); however, spore viability was reasonably high (62%) and chromosomal disjunction appeared mostly functional. The MSH4 and MSH5 proteins form a hetero-oligomeric structure (heterodimer) in S. cerevisiae and humans. In S. cerevisiae, MSH4 and MSH5 act specifically to facilitate crossovers between homologous chromosomes during meiosis. The MSH4/MSH5 complex binds and stabilizes double Holliday junctions and promotes their resolution into crossover products. An MSH4 hypomorphic (partially functional) mutant of S. cerevisiae showed a 30% genome-wide reduction in crossover numbers, and a large number of meioses with non-exchange chromosomes.
Nevertheless, this mutant gave rise to spore viability patterns suggesting that segregation of non-exchange chromosomes occurred efficiently. Thus it appears that CO recombination facilitates proper chromosome segregation during meiosis in S. cerevisiae, but it is not essential. The fission yeast Schizosaccharomyces pombe has the ability to segregate homologous chromosomes in the absence of meiotic recombination (achiasmate segregation). This ability depends on the microtubule motor dynein that regulates the movement of chromosomes to the poles of the meiotic spindle. See also Cell cycle Non-random segregation of chromosomes References Chromosomes DNA replication Cell cycle
Chromosome segregation
[ "Biology" ]
1,655
[ "Genetics techniques", "DNA replication", "Molecular genetics", "Cellular processes", "Cell cycle" ]
13,664,138
https://en.wikipedia.org/wiki/Myclobutanil
Myclobutanil is a triazole chemical used as a fungicide. It is a steroid demethylation (CYP51) inhibitor, specifically inhibiting ergosterol biosynthesis. Ergosterol is a critical component of fungal cell membranes. Safety The Safety Data Sheet indicates the following hazards: Suspected of damaging fertility or the unborn child. Toxic to aquatic life with long lasting effects. The first hazard has caused this chemical to be placed on the 1986 California Proposition 65 toxics list. When heated, myclobutanil decomposes to produce corrosive and/or toxic fumes, including carbon monoxide, carbon dioxide, hydrogen chloride, hydrogen cyanide, and nitrogen oxides. Banned for cannabis cultivation Myclobutanil is banned in Canada, Colorado, Washington, Oregon, and Oklahoma for the production of medical and recreational cannabis. In 2014, a Canadian news investigation by The Globe and Mail reported the discovery of myclobutanil in medical cannabis produced by at least one government licensed grower. In September 2019, NBC News commissioned CannaSafe to test THC cartridges for heavy metals, pesticides, residual solvents, and additives such as vitamin E acetate; pesticides, including myclobutanil, were found in products from unlicensed dealers. In Michigan, the current state action limit for myclobutanil is 200 ppb in cannabis products. References External links International Programme on Chemical Safety Fungicides Triazoles 4-Chlorophenyl compounds
Myclobutanil
[ "Biology" ]
321
[ "Fungicides", "Biocides" ]
13,664,288
https://en.wikipedia.org/wiki/Radiophysics
Radiophysics (also written radio physics) is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation: its emission, propagation, and interaction with matter. The term is used with the following major meanings: study of radio waves (the original area of research) study of radiation used in radiology study of other ranges of the spectrum of electromagnetic radiation in some specific applications Among the main applications of radiophysics are radio communications, radiolocation, radio astronomy and radiology. Branches Classical radiophysics deals with radio wave communications and detection Quantum radiophysics (physics of lasers and masers; Nikolai Basov was the founder of quantum radiophysics in the Soviet Union) Statistical radiophysics References Radiation Radio technology
Radiophysics
[ "Physics", "Chemistry", "Technology", "Engineering" ]
145
[ "Transport phenomena", "Information and communications technology", "Physical phenomena", "Telecommunications engineering", "Radio technology", "Waves", "Radiation" ]
13,664,406
https://en.wikipedia.org/wiki/Cut%20locus
In differential geometry, the cut locus of a point p on a manifold is the closure of the set of all other points on the manifold that are connected to p by two or more distinct shortest geodesics. More generally, the cut locus of a closed set X on the manifold is the closure of the set of all other points on the manifold connected to X by two or more distinct shortest geodesics. Examples In the Euclidean plane, a point p has an empty cut locus, because every other point is connected to p by a unique geodesic (the line segment between the points). On the sphere, the cut locus of a point consists of the single antipodal point diametrically opposite to it. On an infinitely long cylinder, the cut locus of a point consists of the line opposite the point. Let X be the boundary of a simple polygon in the Euclidean plane. Then the cut locus of X in the interior of the polygon is the polygon's medial axis. Points on the medial axis are centers of disks that touch the polygon boundary at two or more points, corresponding to two or more shortest paths to the disk center. Let x be a point on the surface of a convex polyhedron P. Then the cut locus of x on the polyhedron's surface is known as the ridge tree of P with respect to x. This ridge tree has the property that cutting the surface along its edges unfolds P to a simple planar polygon. This polygon can be viewed as a net for the polyhedron. Formal definition Fix a point p in a complete Riemannian manifold (M, g), and consider the tangent space T_pM. It is a standard result that for sufficiently small v in T_pM, the curve γ(t) = exp_p(tv) defined by the Riemannian exponential map, for t belonging to the interval [0, 1], is a minimizing geodesic, and is the unique minimizing geodesic connecting the two endpoints. Here exp_p denotes the exponential map at p. The cut locus of p in the tangent space is defined to be the set of all vectors v in T_pM such that γ(t) = exp_p(tv) is a minimizing geodesic for t in [0, 1] but fails to be minimizing for t in [0, 1 + ε] for every ε > 0. Thus the cut locus in the tangent space is the boundary of the set {v ∈ T_pM : d(p, exp_p(v)) = |v|}, where d denotes the length metric of M, and |v| is the Euclidean norm of v. The cut locus of p in M is defined to be the image of the cut locus of p in the tangent space under the exponential map at p. Thus, we may interpret the cut locus of p in M as the points in the manifold where the geodesics starting at p stop being minimizing. The least distance from p to the cut locus is the injectivity radius at p. On the open ball of this radius, the exponential map at p is a diffeomorphism from the tangent space to the manifold, and this is the largest such radius. The global injectivity radius is defined to be the infimum of the injectivity radius at p, over all points of the manifold. Characterization Suppose q is in the cut locus of p in M. A standard result is that either (1) there is more than one minimizing geodesic joining p to q, or (2) p and q are conjugate along some geodesic which joins them. It is possible for both (1) and (2) to hold. Applications The significance of the cut locus is that the distance function from a point p is smooth, except on the cut locus of p and at p itself. In particular, it makes sense to take the gradient and Hessian of the distance function away from the cut locus and p. This idea is used in the local Laplacian comparison theorem and the local Hessian comparison theorem. These are used in the proof of the local version of the Toponogov theorem, and many other important theorems in Riemannian geometry.
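As a concrete check of the formal definition, the sphere example above can be worked out explicitly (a worked example added here for illustration; it is not part of the source text):

% Worked example: cut locus on the round sphere S^2 of radius r.
% A geodesic of length L emanating from p is minimizing iff L <= pi*r,
% so the set where d(p, exp_p(v)) = |v| is the closed ball of radius pi*r.
\[
  \{\, v \in T_pS^2 : d(p,\exp_p(v)) = \lVert v\rVert \,\}
  = \{\, v : \lVert v\rVert \le \pi r \,\},
\]
\[
  \text{so the cut locus of } p \text{ in } T_pS^2
  \text{ is the circle } \{\, v : \lVert v\rVert = \pi r \,\},
  \qquad
  \exp_p\bigl(\{\lVert v\rVert = \pi r\}\bigr) = \{-p\}.
\]

Hence the cut locus of p in the sphere is the single antipodal point −p, and the injectivity radius at p is πr, in agreement with the Examples section.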
For the metric space of surface distances on a convex polyhedron, cutting the polyhedron along the cut locus produces a shape that can be unfolded flat into a plane, the source unfolding. The unfolding process can be performed continuously, as a blooming of the polyhedron. Analogous methods of cutting along the cut locus can be used to unfold higher-dimensional convex polyhedra as well. Cut locus of a subset One can similarly define the cut locus of a submanifold of the Riemannian manifold, in terms of its normal exponential map. References Mathematical structures
Cut locus
[ "Mathematics" ]
903
[ "Mathematical structures", "Mathematical objects" ]
13,664,437
https://en.wikipedia.org/wiki/Segner%20wheel
The Segner wheel or Segner turbine is a type of water turbine invented by Johann Andreas Segner in the 18th century. It uses the same principle as Hero's aeolipile. The device is placed in a suitable hole in the ground (or at the slope of a hill). The water is delivered to the top of a vertical cylinder, at the bottom of which is a rotor with specially bent pipes with nozzles (see image). Due to the hydrostatic pressure, the water is ejected from the nozzles, causing the rotor to rotate. The useful torque is transferred to a powered device through a belt and pulley system. Segner turbines, also called reaction or Scotch turbines, were built in the mid-1850s to power the inclined plane lifts along the Morris Canal in New Jersey. Today, the Segner wheel principle is used in irrigation sprinklers. Alexander Bogdanov cited this as an example of an important innovation which paved the way for the development of steam engines. The turbine at Museo Hacienda Buena Vista is "the only pre-Scotch (sic) type known to exist and is the sole extant example of a pioneer and historically important machine that was invented at the close of the 17th century by Dr. Baker.... The Buena Vista turbine is, in effect, a missing link in the evolution of mechanical artifacts better known to the historians of technology." (The Journal of the Society for Industrial Archaeology, Vol. 4, No. 1 [1978], pp. 55–58) References External links Barker's Mill at physics.kenyon.edu One of the last remaining examples is at Lake Hopatcong. Water turbines Hungarian inventions
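The driving physics can be summarized with a simple idealized model (an illustrative back-of-the-envelope sketch, not from the source; it assumes steady, frictionless flow and purely tangential nozzles at radius R from the axis under a water head h):

% Idealized Segner wheel at standstill: head h above the nozzles,
% water density rho, total volumetric flow Q, nozzles at radius R.
\[
  v \approx \sqrt{2gh} \qquad \text{(Torricelli exit speed)}
\]
\[
  \tau \approx \rho\, Q\, v\, R \qquad \text{(reaction torque at standstill)}
\]
\[
  P \le \rho\, g\, Q\, h \qquad \text{(upper bound on extractable hydraulic power)}
\]

As the rotor spins up, the water's exit speed relative to the ground falls and with it the available torque, which is why such a wheel must be loaded well below its runaway speed to deliver useful power.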
Segner wheel
[ "Engineering" ]
361
[ "Mechanical engineering stubs", "Mechanical engineering" ]
16,379,713
https://en.wikipedia.org/wiki/Topological%20dynamics
In mathematics, topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology. Scope The central object of study in topological dynamics is a topological dynamical system, i.e. a topological space, together with a continuous transformation, a continuous flow, or more generally, a semigroup of continuous transformations of that space. The origins of topological dynamics lie in the study of asymptotic properties of trajectories of systems of autonomous ordinary differential equations, in particular, the behavior of limit sets and various manifestations of "repetitiveness" of the motion, such as periodic trajectories, recurrence and minimality, stability, non-wandering points. George Birkhoff is considered to be the founder of the field. A structure theorem for minimal distal flows proved by Hillel Furstenberg in the early 1960s inspired much work on classification of minimal flows. A lot of research in the 1970s and 1980s was devoted to topological dynamics of one-dimensional maps, in particular, piecewise linear self-maps of the interval and the circle. Unlike the theory of smooth dynamical systems, where the main object of study is a smooth manifold with a diffeomorphism or a smooth flow, phase spaces considered in topological dynamics are general metric spaces (usually, compact). This necessitates development of entirely different techniques but allows an extra degree of flexibility even in the smooth setting, because invariant subsets of a manifold are frequently very complicated topologically (cf limit cycle, strange attractor); additionally, shift spaces arising via symbolic representations can be considered on an equal footing with more geometric actions. Topological dynamics has intimate connections with ergodic theory of dynamical systems, and many fundamental concepts of the latter have topological analogues (cf Kolmogorov–Sinai entropy and topological entropy). See also Poincaré–Bendixson theorem Symbolic dynamics Topological conjugacy References Robert Ellis, Lectures on topological dynamics. W. A. Benjamin, Inc., New York 1969 Walter Gottschalk, Gustav Hedlund, Topological dynamics. American Mathematical Society Colloquium Publications, Vol. 36. American Mathematical Society, Providence, R. I., 1955 J. de Vries, Elements of topological dynamics. Mathematics and its Applications, 257. Kluwer Academic Publishers Group, Dordrecht, 1993 Ethan Akin, The General Topology of Dynamical Systems, AMS Bookstore, 2010, J. de Vries, Topological Dynamical Systems: An Introduction to the Dynamics of Continuous Mappings, De Gruyter Studies in Mathematics, 59, De Gruyter, Berlin, 2014, Jian Li and Xiang Dong Ye, Recent development of chaos theory in topological dynamics, Acta Mathematica Sinica, English Series, 2016, Volume 32, Issue 1, pp. 83–114.
Topological dynamics
[ "Mathematics" ]
595
[ "Topology", "Topological dynamics", "Dynamical systems" ]
16,379,875
https://en.wikipedia.org/wiki/Sun-Earth%20Day
Sun-Earth Day is a joint educational program established in 2000 by NASA and ESA. The goal of the program is to popularize knowledge about the Sun, and the way it influences life on Earth, among students and the public. The day itself is mainly celebrated in the United States near the time of the spring equinox. However, the Sun-Earth Day event actually runs throughout the year, with a different theme being chosen each year. Themes The selection of each year's theme often corresponds to events for that year. Every theme is supported by free educational plans for both informal and formal educators. References External links Sun-Earth Day home page at NASA Unofficial observances March observances
Sun-Earth Day
[ "Astronomy" ]
152
[ "Astronomy stubs" ]
16,380,220
https://en.wikipedia.org/wiki/Caesium%20hydride
Caesium hydride or cesium hydride is an inorganic compound of caesium and hydrogen with the chemical formula CsH. It is an alkali metal hydride. It was the first substance to be created by light-induced particle formation in metal vapor, and showed promise in early studies of an ion propulsion system using caesium. It is the most reactive of the stable alkali metal hydrides. It is a powerful superbase and reacts with water extremely vigorously. The caesium nucleus in CsH can be hyperpolarized through interactions with an optically pumped caesium vapor in a process known as spin-exchange optical pumping (SEOP). SEOP can increase the nuclear magnetic resonance (NMR) signal of the caesium nucleus by an order of magnitude. It is very difficult to make caesium hydride in a pure form. Caesium hydride can be produced by heating caesium carbonate and metallic magnesium in hydrogen at 580 to 620 °C. Crystal structure At room temperature and atmospheric pressure, CsH has the same structure as NaCl. References Caesium compounds Metal hydrides Superbases Rock salt crystal structure
Caesium hydride
[ "Chemistry" ]
246
[ "Superbases", "Inorganic compounds", "Inorganic compound stubs", "Reducing agents", "Metal hydrides", "Bases (chemistry)" ]
16,380,678
https://en.wikipedia.org/wiki/ChemXSeer
The ChemXSeer project, funded by the National Science Foundation, is a public integrated digital library, database, and search engine for scientific papers in chemistry. It is being developed by a multidisciplinary team of researchers at the Pennsylvania State University. ChemXSeer was conceived by Dr. Prasenjit Mitra, Dr. Lee Giles and Dr. Karl Mueller as a way to integrate the chemical scientific literature with experimental, analytical, and simulation data from different types of experimental systems. The goal of the project is to create an intelligent search and database which will provide access to relevant data to a diverse community of users who have a need for chemical information. It is hosted on the World Wide Web at the College of Information Sciences and Technology, The Pennsylvania State University. Features To provide access to relevant data, ChemXSeer offers features that are not available in traditional search engines or digital libraries. Chemical Entity Search: A tool capable of identifying chemical formulae and chemical names, extracting them from documents, and disambiguating them from general terms. The disambiguated terms are used for performing searches. TableSeer: In scholarly articles, tables are used to present, list, summarize, and structure important data. TableSeer automatically identifies tables in digital documents, extracts the table metadata as well as the cell contents, and stores them in a way that allows users to either query the table content or search for tables in a large set of documents. Dataset search: ChemXSeer provides tools to incorporate datasets from different experimental sources. The system can handle results in multiple formats such as XML, Microsoft Excel, Gaussian, and CHARMM; create databases that allow direct queries over the data; create metadata, using an annotation tool, so that users can search over the datasets; and create links among datasets and/or between datasets and documents. In addition to these tools, ChemXSeer will integrate the advances made by its sister project CiteSeerX to provide: Full text search Author, affiliation, title and venue search Citation and acknowledgement search Citation linking and statistics See also CiteSeerx References External links ChemXSeer Official web site Critical Zone Exploration Network (CZEN) Center for Environmental Kinetics Analysis (CEKA) Eprint archives Internet search engines Library 2.0 Open-access archives Environmental chemistry Pennsylvania State University Bibliographic databases and indexes
ChemXSeer
[ "Chemistry", "Environmental_science" ]
522
[ "Environmental chemistry", "nan" ]
16,381,455
https://en.wikipedia.org/wiki/Lieb%27s%20square%20ice%20constant
Lieb's square ice constant is a mathematical constant used in the field of combinatorics to quantify the number of Eulerian orientations of grid graphs. It was introduced by Elliott H. Lieb in 1967. Definition An n × n grid graph (with periodic boundary conditions and n ≥ 2) has n² vertices and 2n² edges; it is 4-regular, meaning that each vertex has exactly four neighbors. An orientation of this graph is an assignment of a direction to each edge; it is an Eulerian orientation if it gives each vertex exactly two incoming edges and exactly two outgoing edges. Denote the number of Eulerian orientations of this graph by f(n). Then

lim_{n→∞} f(n)^(1/n²) = (4/3)^(3/2) = 8√3/9 = 1.5396007...

is Lieb's square ice constant. Lieb used a transfer-matrix method to compute this exactly. The function f(n) also counts the number of 3-colorings of grid graphs, the number of nowhere-zero 3-flows in 4-regular graphs, and the number of local flat foldings of the Miura fold. Some historical and physical background can be found in the article Ice-type model. See also Spin ice Ice-type model References Mathematical constants Quadratic irrational numbers
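For very small n the number f(n) can be checked by direct enumeration. The following is a minimal brute-force sketch in Python (illustrative code written for this article, not from the source; the function name is ours, and convergence of f(n)^(1/n²) toward the limit is slow at such tiny sizes):

from itertools import product

def eulerian_orientations(n):
    # Vertices and edges of the n x n grid with periodic boundaries:
    # each vertex has one edge to its right neighbour and one to its
    # lower neighbour, giving 2*n*n edges (parallel edges occur for n = 2).
    verts = [(i, j) for i in range(n) for j in range(n)]
    edges = []
    for i, j in verts:
        edges.append(((i, j), (i, (j + 1) % n)))
        edges.append(((i, j), ((i + 1) % n, j)))
    count = 0
    for orient in product((0, 1), repeat=len(edges)):
        indeg = {v: 0 for v in verts}
        for (u, v), d in zip(edges, orient):
            indeg[v if d == 0 else u] += 1
        # Eulerian orientation: in-degree 2 (hence out-degree 2) everywhere.
        if all(c == 2 for c in indeg.values()):
            count += 1
    return count

for n in (2, 3):
    f = eulerian_orientations(n)
    print(n, f, f ** (1 / n**2))  # compare with (4/3)**1.5 ≈ 1.5396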
Lieb's square ice constant
[ "Physics", "Materials_science", "Mathematics" ]
248
[ "Mathematical objects", "Enumerative combinatorics", "Lattice models", "Combinatorics", "Computational physics", "Condensed matter physics", "nan", "Statistical mechanics", "Mathematical constants", "Numbers" ]
16,383,124
https://en.wikipedia.org/wiki/SGR%200526%E2%88%9266
SGR 0526−66 (also known as PSR B0565−66) is a soft gamma repeater (SGR), located in the supernova remnant (SNR) 0526−66.1, otherwise known as N49, in the Large Magellanic Cloud. It was the first soft gamma repeater discovered and, as of 2015, the only one known to be located outside our galaxy. First detected in March 1979, it was localized by measuring the differences in the signal's arrival times at a set of artificial satellites equipped with gamma-ray detectors. The association with N49 can only be indirect: it seems clear that soft gamma repeaters form in young stellar clusters. It is not certain that the explosion that gave birth to SGR 0526−66 is also the one that produced the remnant N49. Discovery On March 5, 1979, two Soviet spacecraft that were then drifting through the Solar System were hit by a blast of gamma radiation at 15:51 UTC. This contact raised the radiation readings on both probes from a normal 100 counts per second to over 200,000 counts per second, in only a fraction of a millisecond. This burst of gamma rays quickly continued to spread. Eleven seconds later, Helios 2, a NASA probe in orbit around the Sun, was saturated by the blast of radiation. It soon hit Venus, and the Pioneer Venus Orbiter's detectors were overcome by the wave. Seconds later, Earth received the wave of radiation, where the powerful output of gamma rays inundated the detectors of three U.S. Department of Defense Vela satellites, the Soviet Prognoz 7 satellite, and the Einstein Observatory. Just before the wave exited the Solar System, the blast also hit the International Sun–Earth Explorer. This extremely powerful blast of gamma radiation constituted the strongest wave of extra-solar gamma rays ever detected; it was over 100 times more intense than any known previous extra-solar burst. Because gamma rays travel at the speed of light and the time of the pulse was recorded by several distant spacecraft as well as on Earth, the source of the gamma radiation could be calculated to an accuracy of about 2 arcseconds. The direction of the source corresponded with the remnants of a star that had gone supernova around 3000 B.C.E. The source was named SGR 0526−66, and the event itself was named GRB 790305b, the first observed SGR megaflare. See also GRB 790305b LMC N49 References External links Stars in the Large Magellanic Cloud Extragalactic stars Large Magellanic Cloud Dorado Soft gamma repeaters Magnetars
SGR 0526−66
[ "Astronomy" ]
551
[ "Magnetars", "Magnetism in astronomy", "Dorado", "Constellations" ]
16,383,510
https://en.wikipedia.org/wiki/Bernstein%27s%20constant
Bernstein's constant, usually denoted by the Greek letter β (beta), is a mathematical constant named after Sergei Natanovich Bernstein and is equal to 0.2801694990... . Definition Let En(ƒ) be the error of the best uniform approximation to a real function ƒ(x) on the interval [−1, 1] by real polynomials of degree at most n. In the case of ƒ(x) = |x|, Bernstein showed that the limit

β = lim_{n→∞} 2n E2n(ƒ),

called Bernstein's constant, exists and is between 0.278 and 0.286. His conjecture that the limit is

β = 1/(2√π) = 0.2820947917...

was disproven by Varga and Carpenter, who calculated

β = 0.2801694990... .

References Further reading Numerical analysis Mathematical constants
Bernstein's constant
[ "Mathematics" ]
149
[ "Mathematical objects", "Computational mathematics", "Numbers", "Mathematical relations", "nan", "Numerical analysis", "Mathematical constants", "Approximations" ]
16,383,884
https://en.wikipedia.org/wiki/Drug%20detoxification
Drug detoxification (informally, detox) is variously understood as a type of "medical" intervention or technique addressing physical dependence on a drug, as the process and experience of a withdrawal syndrome, or as any of the treatments for acute drug overdose (toxidrome). The first definition, in relation to substance dependence and its treatment, is arguably a misnomer and even directly contradictory, since withdrawal is neither contingent upon nor alleviated through biological excretion or clearance of the drug. In fact, excretion of a given drug from the body is one of the very processes that leads to withdrawal, since the syndrome arises largely from the cessation itself and the drug's absence from the body, especially from the blood plasma, not from "leftover toxins" or traces of the drug still being in the system. Some addiction medicine practitioners use the term withdrawal management instead of detoxification. A detoxification program for physical dependence does not necessarily address the precedents of addiction, social factors, psychological addiction, or the often complex behavioral issues that intermingle with addiction. Process The United States Department of Health and Human Services acknowledges three steps in a drug detoxification process: Evaluation: Upon beginning drug detoxification, a patient is first tested to see which specific substances are presently circulating in their bloodstream and in what amounts. Clinicians also evaluate the patient for potential co-occurring disorders, dual diagnosis, and mental/behavioral issues. Stabilization: In this stage, the patient is guided through the process of detoxification. This may be done with or without the use of medications, although the former is more common. Also part of stabilization is explaining to the patient what to expect during treatment and the recovery process. Where appropriate, people close to the addict are brought in at this time to become involved and show support. Guiding Patient into Treatment: The last step of the detoxification process is to ready the patient for the actual recovery process. As drug detoxification only deals with the physical dependency and addiction to drugs, it does not address the psychological aspects of drug addiction. This stage entails obtaining agreement from the patient to complete the process by enrolling in a drug rehabilitation program. Rapid detoxification Richard B. Resnick, MD, was the first scientist to investigate the idea of accelerated detox under anesthesia. In 1977, he published a paper detailing the first procedures using naloxone and clonidine. Shortly afterward, physicians began discussing anesthesia to reduce pain during rapid detox. Norbert Loimer, MD, PhD, published a paper in the 1980s outlining the success of opiate detoxification under general anesthesia. Resnick and Loimer's early research served as the foundation for all forms of detoxification under sedation procedures used today. While physicians' protocols differ significantly, they still adhere to the principles described in the early publications. The combined use of clonidine and naltrexone was found to be a rapid, safe, and effective treatment for abrupt withdrawal from methadone, as detailed in a paper published in The American Journal of Psychiatry in 1986. Since then, numerous clinics around the world have implemented detoxification under sedation procedures to assist patients in overcoming opioid use disorder.
These procedures involve the administration of anesthesia and other medications to facilitate rapid detoxification of the body, effectively reducing the painful and uncomfortable symptoms of withdrawal. While the effectiveness of rapid detox has been a subject of debate, it remains a popular treatment option for certain individuals grappling with opioid addiction. Etymology The concept of "detoxification" comes from the discredited autotoxin theory of George E. Pettey and others. David F. Musto says that "according to Pettey, opiates stimulated the production of toxins in the intestines, which had the physiological effect associated with withdrawal phenomena. [...] Therefore treatment would consist of purging the body of toxins and any lurking morphine that might remain to stimulate toxin production in the future." Rapid detox controversy Naltrexone therapy, which critics claim lacks long-term efficacy and can actually be detrimental to a patient's long-term recovery, has led to controversy. Additionally, there have been many questions raised about the ethics as well as safety of rapid detox following a number of deaths resulting from the procedure. Some researchers say that relapses to injection use of illicit opioids during or following repeated detoxification episodes carry the substantial potential for injury associated with uncontrolled drug use and include drug overdose, infections, and death. See also Alcohol detoxification References Drugs Drug rehabilitation Substance dependence Detoxification
Drug detoxification
[ "Chemistry" ]
979
[ "Pharmacology", "Chemicals in medicine", "Drugs", "Products of chemical industry" ]
16,384,086
https://en.wikipedia.org/wiki/Job-shop%20scheduling
Job-shop scheduling, the job-shop problem (JSP) or job-shop scheduling problem (JSSP) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as job-shop scheduling, each job consists of a set of operations O1, O2, ..., On which need to be processed in a specific order (known as precedence constraints). Each operation has a specific machine that it needs to be processed on and only one operation in a job can be processed at a given time. A common relaxation is the flexible job shop, where each operation can be processed on any machine of a given set (the machines in each set are identical). The name originally came from the scheduling of jobs in a job shop, but the theme has wide applications beyond that type of instance. This problem is one of the best known combinatorial optimization problems, and was the first problem for which competitive analysis was presented, by Graham in 1966. The best problem instances for a basic model with a makespan objective are due to Taillard. In the standard three-field notation for optimal job scheduling problems, the job-shop variant is denoted by J in the first field. For example, the problem denoted by "J3 | pij = 1 | Cmax" is a 3-machine job-shop problem with unit processing times, where the goal is to minimize the maximum completion time. Problem variations Many variations of the problem exist, including the following: Machines can have duplicates (flexible job shop with duplicate machines) or belong to groups of identical machines (flexible job shop). Machines can require a certain gap between jobs or no idle-time. Machines can have sequence-dependent setups. Objective function can be to minimize the makespan, the Lp norm, tardiness, maximum lateness etc. It can also be a multi-objective optimization problem. Jobs may have constraints, for example a job i needs to finish before job j can be started (see workflow). Also, the objective function can be multi-criteria. The set of jobs can relate to different sets of machines. Deterministic (fixed) processing times or probabilistic processing times. NP-hardness Since the traveling salesman problem is NP-hard, the job-shop problem with sequence-dependent setup is clearly also NP-hard since the TSP is a special case of the JSP with a single job (the cities are the machines and the salesman is the job). Problem representation The disjunctive graph is one of the popular models used for describing the job-shop scheduling problem instances. A mathematical statement of the problem can be made as follows: Let M = {M1, M2, ..., Mm} and J = {J1, J2, ..., Jn} be two finite sets. On account of the industrial origins of the problem, the Mi are called machines and the Jj are called jobs. Let X denote the set of all sequential assignments of jobs to machines, such that every job is done by every machine exactly once; elements x ∈ X may be written as n × m matrices, in which column i lists the jobs that machine Mi will do, in order. For example, the matrix

x = ( 1 2
      2 3
      3 1 )

means that machine M1 will do the three jobs J1, J2, J3 in that order, while machine M2 will do the jobs in the order J2, J3, J1. Suppose also that there is some cost function C : X → [0, +∞].
The cost function may be interpreted as a "total processing time", and may have some expression in terms of times Cij, the cost/time for machine Mi to do job Jj. The job-shop problem is to find an assignment of jobs x ∈ X such that C(x) is a minimum, that is, there is no y ∈ X such that C(x) > C(y). Scheduling efficiency Scheduling efficiency can be defined for a schedule through the ratio of total machine idle time to the total processing time as below:

C' = 1 + (Σi li) / (Σi,j pij) = (Cmax · m) / (Σi,j pij)

Here li is the idle time of machine i, Cmax is the makespan, m is the number of machines, and pij is the processing time of job j on machine i. Notice that with the above definition, scheduling efficiency is simply the makespan normalized to the number of machines and the total processing time. This makes it possible to compare the usage of resources across JSP instances of different size. The problem of infinite cost One of the first problems that must be dealt with in the JSP is that many proposed solutions have infinite cost: i.e., there exists x ∈ X such that C(x) = +∞. In fact, it is quite simple to concoct examples of such x by ensuring that two machines will deadlock, so that each waits for the output of the other's next step. Major results Graham had already provided the List scheduling algorithm in 1966, which is (2 − 1/m)-competitive, where m is the number of machines. Also, it was proved that List scheduling is the optimum online algorithm for 2 and 3 machines. The Coffman–Graham algorithm (1972) for uniform-length jobs is also optimum for two machines, and is (2 − 2/m)-competitive. In 1992, Bartal, Fiat, Karloff and Vohra presented an algorithm that is 1.986 competitive. A 1.945-competitive algorithm was presented by Karger, Philips and Torng in 1994. In 1992, Albers provided a different algorithm that is 1.923-competitive. Currently, the best known result is an algorithm given by Fleischer and Wahl, which achieves a competitive ratio of 1.9201. A lower bound of 1.852 was presented by Albers. Taillard instances have an important role in developing job-shop scheduling with makespan objective. In 1976 Garey provided a proof that this problem is NP-complete for m>2, that is, no optimal solution can be computed in deterministic polynomial time for three or more machines (unless P=NP). In 2011 Xin Chen et al. provided optimal algorithms for online scheduling on two related machines improving previous results. Offline makespan minimization Atomic jobs The simplest form of the offline makespan minimisation problem deals with atomic jobs, that is, jobs that are not subdivided into multiple operations. It is equivalent to packing a number of items of various different sizes into a fixed number of bins, such that the maximum bin size needed is as small as possible. (If instead the number of bins is to be minimised, and the bin size is fixed, the problem becomes a different problem, known as the bin packing problem.) Dorit S. Hochbaum and David Shmoys presented a polynomial-time approximation scheme in 1987 that finds an approximate solution to the offline makespan minimisation problem with atomic jobs to any desired degree of accuracy. Jobs consisting of multiple operations The basic form of the problem of scheduling jobs with multiple (M) operations, over M machines, such that all of the first operations must be done on the first machine, all of the second operations on the second, etc., and a single job cannot be performed in parallel, is known as the flow-shop scheduling problem. Various algorithms exist, including genetic algorithms. Johnson's algorithm A heuristic algorithm by S. M.
Johnson can be used to solve the case of a 2 machine N job problem when all jobs are to be processed in the same order. The steps of the algorithm are as follows: Job Pi has two operations, of duration Pi1, Pi2, to be done on Machine M1, M2 in that sequence.

Step 1. List A = { 1, 2, ..., N }, List L1 = {}, List L2 = {}.
Step 2. From all available operation durations, pick the minimum. If the minimum belongs to Pk1, remove k from List A and add k to the end of List L1. If the minimum belongs to Pk2, remove k from List A and add k to the beginning of List L2.
Step 3. Repeat Step 2 until List A is empty.
Step 4. Join List L1 and List L2. This is the optimum sequence.

Johnson's method only works optimally for two machines. However, since it is optimal and easy to compute, some researchers have tried to adapt it for M machines (M > 2). The idea is as follows: Imagine that each job requires m operations in sequence, on M1, M2 … Mm. We combine the first m/2 machines into an (imaginary) Machining Center MC1, and the remaining machines into a Machining Center MC2. Then the total processing time for a Job P on MC1 = sum (operation times on first m/2 machines), and the processing time for Job P on MC2 = sum (operation times on last m/2 machines). By doing so, we have reduced the m-machine problem into a two-machining-center scheduling problem. We can solve this using Johnson's method. Makespan prediction Machine learning has been recently used to predict the optimal makespan of a JSP instance without actually producing the optimal schedule. Preliminary results show an accuracy of around 80% when supervised machine learning methods were applied to classify small randomly generated JSP instances based on their optimal scheduling efficiency compared to the average. Example Here is an example of a job-shop scheduling problem formulated in AMPL as a mixed-integer programming problem with indicator constraints:

param N_JOBS;
param N_MACHINES;

set JOBS ordered = 1..N_JOBS;
set MACHINES ordered = 1..N_MACHINES;

param ProcessingTime{JOBS, MACHINES} > 0;

param CumulativeTime{i in JOBS, j in MACHINES} =
  sum {jj in MACHINES: ord(jj) <= ord(j)} ProcessingTime[i,jj];

param TimeOffset{i1 in JOBS, i2 in JOBS: i1 <> i2} =
  max {j in MACHINES}
    (CumulativeTime[i1,j] - CumulativeTime[i2,j] + ProcessingTime[i2,j]);

var end >= 0;
var start{JOBS} >= 0;
var precedes{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)} binary;

minimize makespan: end;

subj to makespan_def{i in JOBS}:
  end >= start[i] + sum{j in MACHINES} ProcessingTime[i,j];

subj to no12_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}:
  precedes[i1,i2] ==> start[i2] >= start[i1] + TimeOffset[i1,i2];

subj to no21_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}:
  !precedes[i1,i2] ==> start[i1] >= start[i2] + TimeOffset[i2,i1];

data;

param N_JOBS := 4;
param N_MACHINES := 4;

param ProcessingTime:
      1  2  3  4 :=
   1  5  4  2  1
   2  8  3  6  2
   3  9  7  2  3
   4  3  1  5  8;

Related problems Flow-shop scheduling is a similar problem in which every job must pass through the machines in the same fixed order. Open-shop scheduling is a similar problem but without the order constraint. See also Disjunctive graph Dynamic programming Genetic algorithm scheduling List of NP-complete problems Optimal control Scheduling (production processes) References External links University of Vienna Directory of methodologies, systems and software for dynamic optimization. Taillard instances Brucker P. Scheduling Algorithms. Heidelberg, Springer. Fifth ed.
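To make Johnson's rule above concrete, here is a minimal Python sketch (illustrative code written for this article, not from the source; the function names are ours). It orders N two-machine jobs by Johnson's rule and computes the makespan of the resulting sequence:

def johnson_order(jobs):
    # jobs: list of (p1, p2) processing times on machines M1 and M2.
    # Returns job indices in an optimal order (Johnson's rule):
    # jobs whose shortest operation is on M1 go to the front (ascending p1),
    # jobs whose shortest operation is on M2 go to the back (descending p2).
    front, back = [], []
    for k in sorted(range(len(jobs)), key=lambda k: min(jobs[k])):
        p1, p2 = jobs[k]
        if p1 <= p2:
            front.append(k)    # schedule as early as possible
        else:
            back.insert(0, k)  # schedule as late as possible
    return front + back

def makespan(jobs, order):
    # M2 can start a job only after M1 has finished that job
    # and M2 has finished the previous job.
    t1 = t2 = 0
    for k in order:
        t1 += jobs[k][0]
        t2 = max(t2, t1) + jobs[k][1]
    return t2

jobs = [(3, 2), (5, 1), (1, 4), (6, 4), (2, 3)]
order = johnson_order(jobs)
print(order, makespan(jobs, order))  # [2, 4, 3, 0, 1] with makespan 18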
Optimal scheduling NP-complete problems pt:Escalonamento de Job Shop
Job-shop scheduling
[ "Mathematics", "Engineering" ]
2,461
[ "Optimal scheduling", "Industrial engineering", "Computational problems", "Mathematical problems", "NP-complete problems" ]
16,384,668
https://en.wikipedia.org/wiki/SGR%201900%2B14
SGR 1900+14 is a soft gamma repeater (SGR), located in the constellation of Aquila about 20,000 light-years away. It is assumed to be an example of an intensely magnetic star, known as a magnetar. It is thought to have formed after a fairly recent supernova explosion. An intense gamma-ray burst from this star was detected on August 27, 1998; shortly thereafter a new radio source appeared in that region of the sky. Despite the large distance to this SGR, estimated at 20,000 light years, the burst had large effects on the Earth's atmosphere. The atoms in the ionosphere, which are usually ionized by the Sun's radiation by day and recombine to neutral atoms by night, were ionized at nighttime at levels not much lower than the normal daytime level. The Rossi X-ray Timing Explorer (RXTE), an X-ray satellite, received its strongest signal from this burst at this time, even though it was directed at a different part of the sky, and should normally have been shielded from the radiation. NASA's Spitzer Space Telescope detected a mysterious ring around SGR 1900+14 at two narrow infrared frequencies in 2005 and 2007. The 2007 Spitzer image showed no discernible change in the ring after two years. The ring measures seven light-years across. The origin of the ring is currently unknown and is the subject of an article in the May 29, 2008 issue of the journal Nature. References External links Image SGR 1900+14 Aquila (constellation) Soft gamma repeaters Astronomical X-ray sources Magnetars
SGR 1900+14
[ "Astronomy" ]
349
[ "Magnetism in astronomy", "Constellations", "Aquila (constellation)", "Magnetars", "Astronomical X-ray sources", "Astronomical objects" ]
16,384,764
https://en.wikipedia.org/wiki/Busou%20Shinki
Busou Shinki is a Japanese media mix franchise from Konami Digital Entertainment, first launched in Japan in 2006 with a line of action figures followed by a companion online game. The franchise encompasses various manga, anime, novels, video games, and more. The online game was shut down in 2011, and the original toy line was discontinued in 2012. A revival of the series was teased in December 2017 and later revealed to be centered around a smartphone game, but the game was still in development hell as of February 2020. Action Figures and Model Kits Original Line and MMS The action figure line was launched in Japan in September 2006. Many were based on character designs by prolific Japanese artists. A few of the figures have been released for distribution outside Japan. Busou Shinki action figures are presented as 1:1 scale, drawing from a fictional world featuring action figure-sized androids. The various media all take place in this same setting, though in different time periods. Busou Shinki are feminine androids with stylized body armor and/or mechanical parts (such as the mermaid-themed Ianeira having a mechanical fish tail), but do have considerable variation in aesthetic between models that reflects the artistic license given to the different designers. Due to the setting, joints and screws in the action figures are considered to be part of the designs, and are frequently depicted in art, video games, and other media, though they are sometimes omitted or less significantly depicted such as in the TV anime. All of the figures use a common 'MMS' (Multi Moveable System) body designed by Masaki Asai. MMS figures have multiple highly articulated joints, which give them a wide range of possible poses, including a special swinging leg joint that allows for near-180 degree vertical articulation on legs. Additionally, multiple body parts are interchangeable, allowing a wide variety of customization without tools. There are three iterations of MMS (1st, 2nd, and 3rd), with 3rd coming in two body types (Short and Tall) to allow for different proportions depending on the character. Busou Shinki only uses MMS 1st and MMS 3rd Short/Tall: MMS 2nd was only used for action figures for other IPs such as Beatmania and Gurren Lagann. The series uses a 3.3/4mm standard for parts (both body parts and equipment) that allow them to be connected to other parts. This ensures compatibility throughout the line, but deviates from the 3mm standard used by most other Japanese lines, meaning that they are only compatible with each other. Busou Shinki product packages come in several varieties such as full sets, EX sets, Light Armor sets, and bodies. Full sets come with a unique painted MMS body with head and a full set of equipment. EX sets only include a head and a small assortment of equipment, with no MMS body. Light Armor sets are complete sets of unique MMS with head and equipment but with a significantly smaller amount of equipment and accessories and a smaller stand compared to regular sets. Bodies are sold in blister packages that only contain an MMS body with no equipment. The Arnval Mk 2 and Strarf Mk 2 also had Full Arms Package releases, which had more weapons and equipment in addition to the original full set releases. There were also multiple exclusive repaint versions only available from Dengeki Hobby, Konami Style, or events.
Konami also released action figures for various other IPs such as Sky Girls, Otomedius, Beatmania, Gurren Lagann, and others, using MMS bodies that are compatible with the Busou Shinki line: These were branded under the MMS label, but not the Busou Shinki label. A Hayate no Gotoku collaboration figure that came with a limited edition version of the Hayate no Gotoku game Nightmare Paradise, however, was under the Busou Shinki label as it included Busou Shinki equipment (repaints of the Valona equipment). List of Action Figure Releases Wave 1 Japanese Release Date: 7 September 2006 US Release Date: 18 April 2007 Arnval (アーンヴァル, Ānvaru), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf (ストラーフ, Sutorāfu), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 2 Japanese Release Date: 28 September 2006 US Release Date: 22 March 2007 Howling (ハウリン, Haurin), Dog Type, Full SetCharacter Designer: BLADE Maochao (マオチャオ, Maochao), Cat Type, Full SetCharacter Designer: BLADE Waffebunny (ヴァッフェバニー, Vaffebanī), Rabbit Type, EX SetCharacter Designer: Tetsurō Kasahara (カサハラテツロー) Wave 3 Japanese Release Date: 7 December 2006 US Release Date: 22 March 2007 (Note, this release consisted of Benio only) Xiphos (サイフォス, Saifosu), Knight Type, Full SetCharacter Designer: Rokurō Shinofusa (篠房六郎) Benio (紅緒, Benio), Samurai Type, Full SetCharacter Designer: Rokurō Shinofusa (篠房六郎) Tsugaru (ツガル, Tsugaru), Santa Claus Type, EX SetCharacter Designer: Goli Wave 4 Japanese Release Date: 22 February 2007 Zyrdarya (ジルダリア, Jirudaria), Flower Type, Full SetCharacter Designer: Okama Juvisy (ジュビジー, Jubijī), Seed Type, Full SetCharacter Designer: Okama Fort Bragg (フォートブラッグ, Fōto Buraggu), Battery Type, EX SetCharacter Designer: Takayuki Yanase (柳瀬敬之) Wave 5 Japanese Release Date: 31 May 2007 Eukrante (エウクランテ, Eukurante), Seiren Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Ianeira (イーアネイラ, ĪANEIRA), Mermaid Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Waffedolphin (ヴァッフェドルフィン, VAFFEDORUFIN), Dolphin Type, EX SetCharacter Designer: Tetsurō Kasahara (カサハラテツロー) Wave 6 Japanese Release Date: 30 August 2007 Tigris (ティグリース, Tigurīsu), Tiger Type, Full SetCharacter Designers: Eiichi Shimizu (清水栄一), Tomohiro Shimoguchi (下口智裕) Vitulus (ウィトゥルース, Witurūsu), Calf Type, Full SetCharacter Designers: Eiichi Shimizu (清水栄一), Tomohiro Shimoguchi (下口智裕) Grapprap (グラップラップ, Gurappurappu), Builder Type, EX SetCharacter Designer: Eisaku Kitō (鬼頭栄作) Wave 7 Japanese Release Date: 29 November 2007 ACH (アーク, Āku), High Speed Trike Type, Full SetCharacter Designer: Choco YDA (イーダ, Īda), High Maneuver Trike Type, Full SetCharacter Designer: Choco Schmetterling (シュメッターリング, Shumettāringu), Butterfly Type, EX SetCharacter Designer: Chibisuke Machine (ちびすけマシーン) Wave 8 Japanese Release Date: 5 April 2008 Murmeltier (ムルメルティア, Murumerutia), Panzer Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Asuka (飛鳥, Asuka), Fighter Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Zelnogrard (ゼルノグラード, Zerunogurādo), Firearms Type, EX Set+BodyCharacter Designer: Takayuki Yanase (柳瀬敬之) Wave 9 Japanese Release Date: 10 July 2008 Lançamento (ランサメント, Ransamento), Rhinoceros Beetle Type, Full SetCharacter Designer: Tanimeso (たにめそ) Espadia (エスパディア, Esupadia), Stag Beetle Type, Full SetCharacter Designer: Tanimeso (たにめそ) Wave 10 Japanese Release Date: 20 November 2008 Graffias (グラフィオス, Gurafiosu), Scorpion Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Vespelio (ウェスペリオー, Wesuperiō), Bat Type, Full
SetCharacter Designer: Ryōta Magaki (間垣亮太) Wave 1 Renewal Version Japanese Release Date: 4 December 2008 Arnval Tranche 2 (アーンヴァル トランシェ2, Ānvaru Toranshe 2), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf bis (ストラーフ bis, Sutorāfu bis), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 11 Japanese Release Date: 27 March 2010 Altlene (アルトレーネ), Valkyrie Type, Full SetCharacter Designer: Taraku Uon (羽音たらく), Armament Redesigner: Takayuki Yanase (柳瀬敬之), Concept and Original Armament Designer: Kem by Bokusin-Contest Grand Prix Altines (アルトアイネス), Valkyrie Type, Full SetLE Konamistyle Japanese and Dengeki ExclusiveCharacter Designer: Taraku Uon (羽音たらく), Armament Redesigner: Takayuki Yanase (柳瀬敬之), Concept and Original Armament Designer: Kem by Bokusin-Contest Grand Prix Wave 12 Japanese Release Date: 30 September 2010 Baby Razz (ベイビーラズ, Beibī Razu), Electric Guitar Type, Full SetCharacter Designer: Choco Sharatang (紗羅檀, Sharatan), Violin Type, Full SetCharacter Designer: Choco Wave 13 Japanese Release Date: 28 October 2010 Gabrine (ガブリーヌ, Gaburīnu), Hellhound Type, Full SetCharacter Designer: Yoshitsune Izuna (いずなよしつね) Renge (蓮華, Renge), Ninetailed Fox Type, Full SetCharacter Designer: Yoshitsune Izuna (いずなよしつね) Wave 14 Japanese Release Date: 16 December 2010 Artille (アーティル, Ātiru), Lynx Type, Full SetCharacter Designer: Kazuhiko Kakoi (かこいかずひこ) Raptias (ラプティアス, Raputiasu), Eagle Type, Full SetCharacter Designer: Kazuhiko Kakoi (かこいかずひこ) Wave 15 Japanese Release Date: 27 January 2011 Maryceles (マリーセレス, Marīseresu), Tentacles Type, Full SetCharacter Designer: Niθ Proxima (プロキシマ, Purokishima), Centaurus Type, Full SetCharacter Designer: Niθ Wave 16 Japanese Release Date: 24 February 2011 Oorbellen (オールベルン, Ōruberun), Fencer-Pearl Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Zielbellen (ジールベルン, Jīruberun), Fencer-Obsidian Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 17 Japanese Release Date: 17 March 2011 Arnval Mk.2 Tempesta (アーンヴァルMk.2 テンペスタ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Lavina (ストラーフMk.2 ラヴィーナ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 18 Japanese Release Date: 17 December 2011 Vervietta (ヴェルヴィエッタ), Vicviper Type, Full Lirbiete (リルビエート), Vicviper Type, FullCharacter Designer: Mika Akitaka (明貴美加) Wave 19 Japanese Release Date: 23 February 2012 Fubuki type 2 (フブキ弐型), Ninja Type, Full Set Mizuki Type 2 (ミズキ弐型), Ninja Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 20 Japanese Release Date: 15 March 2012 Arnval Mk.2 Tempesta Full Arms Package (アーンヴァルMk.2 テンペスタ フルアームズパッケージ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Lavina Full Arms Package (ストラーフMk.2 ラヴィーナ フルアームズパッケージ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Light Armor Wave 1 Japanese Release Date: 4 October 2008 Valona (ヴァローナ, Varōna), Succubus Type, Light Armour Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Werkstra (ウェルクストラ, Werukusutora), Commando Angel Type, Light Armour Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Light Armor Wave 2 Japanese Release Date: 30 October 2008 Bright Feather (ブライトフェザー, Buraitofezā), Nurse Type, Light Armour Full SetCharacter Designer: Mercy Rabbit (マーシーラビット) Harmony Grace (ハーモニーグレイス, Hāmonī Gureisu), Sister (nun) Type, Light Armour Full SetCharacter Designer: Mercy Rabbit (マーシーラビット) Light Armor Wave 3 Japanese Release Date: 29 February 2009 Partio (パーティオ, Pātio),
Ferret Type, Light Armor Full SetCharacter Designer: BLADE Pomock (ポモック, Pomokku), Squirrel Type, Light Armor Full SetCharacter Designer: BLADE Light Armor Wave 4 Japanese Release Date: 25 February 2010 Kohiru (こひる, Kohiru), Chopsticks Type, Light Armor Full SetCharacter Designer: Dogmask Merienda (メリエンダ, Merienda ), Spoon Type, Light Armor Full SetCharacter Designer: Dogmask Special Releases Japanese Release Date: 26 December 2008 Fubuki (フブキ, Fubuki), Ninja Type, Full SetCharacter Designer: nuno. LE Konamistyle Japanese Exclusive Mizuki (ミズキ, Mizuki), Ninja Type, Full SetCharacter Designer: nuno. LE Konamistyle Japanese Exclusive Japanese Release Date: 26 March 2009 Nagi (ナギ, Nagi), Ojousama Type, Light Armor Full SetCharacter Designer: Kenjiro Hata. Only available from a special release with the Hayate no Gotoku PSP game Nightmare Paradise Konami Style Japanese exclusive limited edition. Character Redesigner: Fumikane Shimada (島田フミカネ), Original Character Designer: Kenjirou Hata (畑健二郎) Japanese Release Date: 15 July 2010 Arnval Mk.2 (アーンヴァルMk.2), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 (ストラーフMk.2 ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Only available from a special release with the Busou Shinki Battle Masters PSP game Konami Style Japanese exclusive limited edition. Japanese Release Date: 22 September 2011 Arnval Mk.2 Full Arms Package (アーンヴァルMk.2 フルアームズパッケージ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Full Arms Package (ストラーフMk.2 フルアームズパッケージ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Swimsuit Body for Arnval MK.2 Swimsuit Body for Strarf MK.2 Only available from a special release with the Busou Shinki Battle Masters Mk.2 PSP game Konami Style Japanese exclusive limited edition Busou Shinki Variants Several limited edition versions of the Busou Shinki figures have also been released. These variants sport alternate color schemes and additional parts. Dengeki Exclusive Devil Type, Strarf Dengeki Exclusive Angel Type, Arnval Dengeki Exclusive Cat Type, Maochao Dengeki Exclusive Dog Type, Howling Wonder Festival 2008 Seiren Type, Eukrante Wonder Festival 2008 Mermaid Type, Ianeira Konami Style Exclusive Blue Santa Claus Type, Tsugaru Konami Style / Chara Hobby 2008 Prototype Squirrel Type, Pomock Konami Style / Chara Hobby 2008 Prototype Ferret Type, Partio Dengeki Hobby Magazine ed. High Speed Trike Type, ACH Stradale Dengeki Hobby Magazine ed.
High Maneuver Trike Type, YDA Stradale Konami Style Exclusive Angel Type, Arnval Tranche2 Konami Style Exclusive Devil Type, Strarf Bis Konami Style Exclusive Ninja Type, Mizuki Konami Style Exclusive Commando Angel Type, Werkstra Konami Style Exclusive Succubus Type, Valona Konami Style Exclusive Panzer Type, Murmeltier Konami Style Exclusive Fighter Type, Asuka Konami Style Exclusive Valkyrie Type, Altlene Viola Konami Style Exclusive Valkyrie Type, Altines Rosa Konami Style Exclusive Fencer-Garnet Type, Oorbellen Konami Style Exclusive Fencer-Sapphire Type, Zielbellen Konami Style Exclusive Fencer-Moonstone Type, Oorbellen Lunaria Konami Style Exclusive Fencer-Amethyst Type, Zielbellen Konami Style Exclusive Lynx Type, Artille Full-Barrel Konami Style Exclusive Eagle Type, Raptias Air-Dominance Konami Style Exclusive Battery Type, Fort Bragg Dusk Konami Style Exclusive Firearms Type, Zelnogrard Belik Konami Style Exclusive Tentacles Type, Maryceles Lemuria Konami Style Exclusive Centaurus Type, Proxima Spinel MMS Naked Body Releases Exclusively sold on the Konami Style Japan page, these are unpainted, featureless MMS Figures meant for use with EX sets or for customization. They come in a variety of colors and shades of skintone intended to match other MMS figures. The MMS Naked bodies are available, like the Busou Shinki figures themselves, in three different body archetypes: MMS 1st, MMS 3rd (small) and MMS 3rd (tall). Although similar in form and construction, not all body parts are compatible among them. MMS 1st Naked White Naked Black Naked Flesh ver. 1 Naked Flesh ver. 1 - Gym Uniform Wine Red Naked Flesh ver. 2 Naked Flesh ver. 2 - Gym Uniform Navy Blue Naked Flesh ver. 2 - School Swimsuit Navy Blue Type Naked Flesh ver. 2 - School Swimsuit White Type Naked Flesh ver. 3 MMS 3rd (small) Naked White (small) Naked Black (small) Naked Flesh ver. 2 (small) Naked Flesh ver. 2 (small) - School Swimsuit White Type Naked Flesh ver. 4 (small) Naked Flesh ver. 4 (small) - School Swimsuit Navy Blue Type Naked Flesh ver. 5 (small) MMS 3rd (tall) Naked White (tall) Naked Black (tall) Naked Flesh ver. 2 (tall) Naked Flesh ver. 2 (tall) - School Swimsuit White Type Naked Flesh ver. 4 (tall) Naked Flesh ver. 4 (tall) - School Swimsuit Navy Blue Type Naked Flesh ver. 5 (tall) 2016 Reproductions Though the original line was discontinued in 2012, purchasers of the 2015 anime Blu-Ray box set received a serial code allowing them to purchase limited run reproductions of the MMS bodies of the four main Shinki from the anime (Arnval mk. 2, Strarf mk. 2, Altines, and Altlene). These were reproductions of the MMS bodies only, and did not include the equipment or most of the accessories from the original releases, and came in entirely new packaging. The reproductions were priced at 8000 yen per MMS, or 25,000 yen for a set of all four. The reproductions were popular enough that additional batches had to be produced, alongside additional Blu-Ray box sets. Though originally announced for an April 2016 release, the reproduction set was delayed to December 2016. Megami Device Collaboration Model Kits After the 2017 revival, the series has been getting collaboration model kit releases from Kotobukiya's Megami Device line, which many former Busou Shinki key staff, such as series/MMS creator Masaki Asai, former series producer Toriwo Toriyama, and designers Fumikane Shimada and Takayuki Yanase, are involved with.
The first revival shinki, Edelweiss, was released in January 2019 as part of a tie-in with the upcoming smartphone game Busou Shinki R (then unnamed), but the game was further delayed. The Edelweiss kit thus uses the generic Busou Shinki series logo instead of the Busou Shinki R logo in its branding, while the card from the Battle Conductor arcade game which was released after the Busou Shinki R logo was revealed uses the R logo. Releases of Arnval and Strarf as Megami Device collaboration model kits were also announced in 2015, predating the revival announcement, but as of December 2020 no release date has been announced yet. The kits are not based on the old MMS design, and instead use the new Machinica standard designed by Asai. The Megami Device line uses the Japanese model kit industry standard of 3mm joints, meaning that they are by default incompatible with the original Busou Shinki line, but Kotobukiya has released joint adapters that allow one to establish compatibility between 3.0 and 3.3mm standards. It was initially announced in early 2018 that other new designs from the revival would get Megami Device model kits, but no progress was made on them, and with Asai's distancing himself from the project in 2020 it is unclear if they will be made. Toriyama is still attached to the mobile game project and its kits, however, and a prototype body for a second collaboration kit with character design by BLADE was shown at Wonder Festival Summer 2018, but as of 2024 this kit has not been released. BLADE was later commissioned for similar but different designs for Megami Device, unrelated to the Busou Shinki brand, which were announced in 2023 and had kits released in 2024. List of Megami Device Collaboration Model Kit Releases Megami Device Japanese Release Date: 25 January 2019 Edelweiss (エーデルワイス, Ēderuwaisu), Jaeger Type (猟兵型)Character Designer: Fumikane Shimada (島田フミカネ) Japanese Release Date: 25 November 2022 Arnval (アーンヴァル, Ānvaru), Angel TypeCharacter Designer: Fumikane Shimada Japanese Release Date: 24 May 2023 Strarf (ストラーフ, Sutorāfu), Devil TypeCharacter Designer: Fumikane Shimada Video games Busou Shinki Battle Rondo On 23 April 2007 Konami released Battle Rondo. Battle Rondo was a free multiplayer online raising sim set in the fictional Busou Shinki universe. Players could unlock in-game versions of the figures, including their armor and weapons, and other gear, by inputting codes that came with each Busou Shinki figure, or through micropayments. The game consisted primarily of automated one-on-one battles with NPC or player-owned Shinki, and the main objective of the game was to have Shinki participate in battles while maintaining high win ratios in order to raise their ranks. The game also had time-limited event quests with their own storylines. The game used a battle system which had Shinki fight automatically, with the player "training" the AI to fight more effectively through feedback after each match. The game would also output battle logs as text files in which the reasons for actions taken in battle would be detailed, allowing the player to give more accurate feedback to the Shinki. Shinki personalities (influencing actions they take in battle and how they respond to feedback) and stats were affected by the initial setup, in which the player selected three "CSC" core crystals. CSCs could not be changed without resetting a Shinki entirely, which would reset them to level 1 and lore-wise erases their memories. 
The game was discontinued on 31 October 2011, and the official web portal was closed down. Busou Shinki Battle Masters Busou Shinki Battle Masters was an action game developed by Konami for the PlayStation Portable and released on 15 July 2010. A sequel/updated version for the PSP, Busou Shinki Battle Masters Mk.2, was released on 22 September 2011. Both games had limited editions, sold through Konami's online store Konami Style, which included exclusive limited-release action figures. Both were conventional action games, with the player taking direct control of shinki (unlike in Battle Rondo). The in-world lore justification for this is that Battle Masters takes place in 2040, as opposed to Battle Rondo's 2036, and that taking control of shinki is made possible by new virtual reality technology. Busou Shinki Battle Communication Busou Shinki: Battle Communication was a social game for feature phones, launched on the Mobage platform on 31 October 2010. The service was discontinued on 22 May 2012. Busou Shinki Armored Princess Battle Conductor Busou Shinki: Armored Princess Battle Conductor is an arcade game developed by Konami and released on 24 December 2020. It features four-player online battle royale gameplay in which players take control of teams of three shinki and compete to collect the greatest number of gems in a match. The game makes use of a holographic display, and players save their progress through a Konami e-Amusement IC pass and by outputting shinki as physical trading cards via a Card Connect machine. The appearance of Edelweiss in the game was promoted as a crossover/collaboration with Busou Shinki R, even though Busou Shinki R has not been released yet. The game has also had collaborations with other Konami IPs such as the Bemani, Quiz Magic Academy, Tokimeki Memorial, Sky Girls, and LovePlus series. Busou Shinki R (Tentative Title) Busou Shinki R was initially teased, with no title, in December 2017 before being officially announced as a smartphone game in February 2020. No release date has been revealed yet, and the title is tentative. Bibliography Busou Shinki 2036 is a manga series by BLADE. The series began serialization in Dengeki Hobby Magazine in June 2007, with the first tankobon volume published under the Dengeki Comics label in 2008. The fifth and last volume was published in March 2013. Busou Shinki Zero A different manga series, by Yuji Ihara, that was also published under the Dengeki Comics label. Busou Shinki Always Together A novel by Hibiki Yu, published by Konami Novels. Gagaga Bunko Novel Series A series of novels by Kuga Buncho based on the franchise, published by Gagaga Bunko. Other manga and novels Busou Shinki: Forget-me-not was a manga by Wasaba that was serialized on Konami's feature phone portal Shukan Konami from 20 April 2007 to 26 December 2008, running for 64 chapters. No collected volumes were ever released. Busou Shinki Light! is a manga by BLADE that was serialized in the magazine Figure Maniacs Otome-gumi. It did not get its own release, but was included in volumes 2-4 of Busou Shinki 2036, also by BLADE. Hibusou Shinki is a webcomic by Karashiichi that ran on the official Busou Shinki website from 2008 to 2010. No collected volumes were ever released, but its first installment, later relabelled "episode 0" when released on the website, was first published in the mook Busou Shinki Master's Book.
Other books Several other books and mooks related to the franchise have also been published by Kadokawa and Konami Digital Entertainment. Radio shows Busou Shinki Radio Rondo An internet radio show hosted by Kana Asumi and Eri Kitamura to promote and discuss Battle Rondo, broadcast weekly on i-revo and Onsen.ag from 26 April 2007 to 25 October 2007 (episodes on Onsen were released one week after those on i-revo). Special additional recordings were also included on the Battle Rondo soundtrack and on the Character Song & Special Radio Rondo albums. Recordings of the radio show were compiled and released on CD in 2008. Busou Shinki Master no tame no Radio desu An internet radio show hosted by Kana Asumi and Minori Chihara to promote and discuss the TV anime series, broadcast on Onsen.ag from 24 September 2012 to 1 October 2013. Episodes were released weekly up to episode 27, and once every two weeks thereafter. A special episode was released in 2015 to coincide with the TV series Blu-ray box release, with another in 2017 for the Blu-ray box re-release. Recordings of the radio show were compiled and released on CD in four volumes from 2012 to 2014. Discography Character song albums, video game soundtracks, and radio show recordings have been released on CD. Many of the individual tracks from the music CDs (but not the radio show CDs) are also available for sale by download in MP3 format from online stores such as Amazon and iTunes. Video game soundtracks Character song albums Radio talk show recordings TV anime-related music Anime Busou Shinki has had two anime series: a 2011 OVA and a 13-episode 2012 TV series. OVA The OVA is an original video animation produced by Kinema Citrus and TNK. It was originally released as DLC for the PSP video game Battle Masters Mk 2, viewable through an in-game menu. The ten installments were later assembled into a 40-minute OVA that had a limited release on DVD and Blu-ray Disc via Konami's Konami Style online shop in Japan. TV series The TV series was broadcast in Japan in 2012. Individual DVD and Blu-ray volumes were released in 2011-2012, and a Blu-ray-only box set was released in 2015. Episode 13 of the series was not broadcast on TV and was only released on disc. The TV series was licensed for distribution in North America by Sentai Filmworks and began streaming on Anime Network in 2012. Legacy After the discontinuation of the action figure line, key staff such as series/MMS creator Masaki Asai and former series producer Toriwo Toriyama went on to work on the Megami Device line of model kits for Kotobukiya, which has a similar premise and concept and is considered by many to be a spiritual successor. Designers Fumikane Shimada and Takayuki Yanase, who had previously worked on Busou Shinki, also worked on designs for the line. The official Megami Device webcomic is also drawn by Karashiichi, who previously did the official Busou Shinki website webcomic Hibusou Shinki, and the comic features returning characters from Hibusou Shinki. As part of a tie-in with the upcoming smartphone game Busou Shinki R, Megami Device has also seen the release of one of the new shinki from the game, Edelweiss, as a collaboration model kit. Releases of Arnval and Strarf as Megami Device collaboration model kits were also announced in 2015. Pyramid Inc., which developed the Battle Masters games, went on to develop the smartphone game Alice Gear Aegis, another "mecha girl" genre action game.
Fumikane Shimada and Takayuki Yanase also worked on designs for the game, and Alice Gear Aegis has also had collaboration model kit releases from Megami Device. Pyramid staff such as president Junichi Kashiwagi are frequently present at Megami Device-related events as well, such as Wonder Festival talk shows. After Konami's Busou Shinki revival was announced in December 2017, Asai announced in early 2018 that he was officially involved with the project at Konami's request, and he worked on the Edelweiss, Arnval and Strarf model kits for Megami Device as part of this. It was also announced that more new designs from the revival would be getting releases as model kits in the future. Though the Edelweiss kit (released January 2019) was supposed to launch alongside the Busou Shinki R smartphone game, the game ended up in development hell, with its title still unannounced at the time of the kit's release. Asai reported in February 2020 that development on Busou Shinki R had recently been restarted from scratch. This, combined with Konami keeping him out of the loop on developments (he did not learn about the Battle Conductor arcade game until seeing announcements on Twitter), led him to release a statement on his personal blog saying that he no longer considered himself part of the project. No work had been done on any collaboration model kits aside from the Edelweiss, Arnval and Strarf, and so it is unclear if any other revival designs will be released. See also Frame Arms Girl Little Battlers Experience Alice Gear Aegis Arcanadea References External links (archive) Shinki-NET (archive) Battle Masters website Battle Masters Mk 2 website Battle Conductor website 2012 anime television series debuts 2000s toys 2007 video games 2010 video games 2011 video games 2020 video games Japan-exclusive video games Video games developed in Japan Action figures Konami Mecha anime and manga Raising sims Windows games Windows-only games Inactive multiplayer online games PlayStation Portable games PlayStation Portable-only games Arcade video games Konami arcade games Toy robots Kinema Citrus TNK (company) Eight Bit (studio) Dengeki Comics Gagaga Bunko Internet radio
Busou Shinki
[ "Technology" ]
7,410
[ "Multimedia", "Internet radio" ]
16,384,773
https://en.wikipedia.org/wiki/Nonequilibrium%20partition%20identity
The nonequilibrium partition identity (NPI) is a remarkably simple and elegant consequence of the fluctuation theorem, previously known as the Kawasaki identity (Carberry et al. 2004):

$$\left\langle e^{-\Omega_t} \right\rangle = 1 \quad \text{for all } t,$$

where $\Omega_t$ is the dissipation function and the angle brackets denote an ensemble average. Thus, in spite of the second law inequality $\langle \Omega_t \rangle \ge 0$, which might lead one to expect that the average would decay exponentially with time, the exponential probability ratio given by the FT, $p(\Omega_t = A)/p(\Omega_t = -A) = e^{A}$, exactly cancels the negative exponential in the average above:

$$\left\langle e^{-\Omega_t} \right\rangle = \int p(\Omega_t = A)\, e^{-A}\, \mathrm{d}A = \int p(\Omega_t = -A)\, \mathrm{d}A = 1,$$

leading to an average which is unity for all time. The first derivation of the nonequilibrium partition identity for Hamiltonian systems was by Yamada and Kawasaki in 1967. For thermostatted deterministic systems the first derivation was by Morriss and Evans in 1985. Bibliography See also Fluctuation theorem – Provides an equality that quantifies fluctuations in time averaged entropy production in a wide variety of nonequilibrium systems Crooks fluctuation theorem – Provides a fluctuation theorem between two equilibrium states; implies the Jarzynski equality External links Statistical mechanics Non-equilibrium thermodynamics Equations
Nonequilibrium partition identity
[ "Physics", "Chemistry", "Mathematics" ]
208
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Non-equilibrium thermodynamics", "Mathematical objects", "Equations", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs", "Dynamical systems" ]
16,385,268
https://en.wikipedia.org/wiki/SGR%201627%E2%88%9241
SGR 1627−41 is a soft gamma repeater (SGR) located in the constellation of Ara. It was discovered on June 15, 1998 using the Burst and Transient Source Experiment (BATSE) and was the first soft gamma repeater to be discovered since 1979. During a period of 6 weeks, the star burst approximately 100 times, and then went quiet. The measured bursts lasted an average of 100 milliseconds, but ranged from 25 ms to 1.8 seconds. SGR 1627−41 is a persistent X-ray source. It is located at a distance of 11 kpc in the radio complex CTB 33, a star-forming region that includes the supernova remnant G337.0-0.1. This object is believed to be a neutron star that undergoes random outbursts of hard and soft X-rays. These may be caused by the loss of angular momentum of a highly magnetized neutron star, or magnetar. Alternatively, it may be a quark star, although this is considered less likely. After the 1998 outburst and the 40-day afterglow, SGR 1627−41 has remained dormant, steadily cooling down from the peak reached during the event. References XMM-Newton observation of the Soft Gamma Ray Repeater SGR 1627−41 in a low luminosity state Ara (constellation) Radio-quiet neutron stars Pulsars Soft gamma repeaters Magnetars
SGR 1627−41
[ "Astronomy" ]
294
[ "Magnetars", "Magnetism in astronomy", "Constellations", "Ara (constellation)" ]
16,385,315
https://en.wikipedia.org/wiki/Harvard%20Computers
The Harvard Computers were a team of women working as skilled workers to process astronomical data at the Harvard College Observatory in Cambridge, Massachusetts, United States. The team was directed by Edward Charles Pickering (1877 to 1919) and, following his death in 1919, by Annie Jump Cannon. Other computers in the team included Williamina Fleming and Florence Cushman. Although these women started primarily as calculators, they made significant contributions to astronomy, much of which they published in research articles. History Although Pickering believed that gathering data at astronomical observatories was not the most appropriate work, it seems that several factors contributed to his decision to hire women instead of men. Among them was that men were paid much more than women, so he could employ more staff with the same budget. This was relevant in a time when the amount of astronomical data was surpassing the capacity of the observatories to process it. Although some of Pickering's female staff were astronomy graduates, their wages were similar to those of unskilled workers. They usually earned between 25 and 50 cents per hour, more than a factory worker but less than a clerical one. In describing the dedication and efficiency with which the Harvard Computers, including Florence Cushman, undertook this effort, Edward Pickering said, "a loss of one minute in the reduction of each estimate would delay the publication of the entire work by the equivalent of the time of one assistant for two years." The women were often tasked with measuring the brightness, position, and color of stars. The work included such tasks as classifying stars by comparing the photographs to known catalogs and reducing the photographs while accounting for things like atmospheric refraction in order to render the clearest possible image. Fleming herself described the work as "so nearly alike that there will be little to describe outside ordinary routine work of measurement, examination of photographs, and of work involved in the reduction of these observations". At times women offered to work at the observatory for free in order to gain experience in a field that was difficult to get into. Notable members Mary Anna Palmer Draper Mary Anna Draper was the widow of Dr. Henry Draper, an astronomer who died before completing his work on the chemical composition of stars. She was very involved in her husband's work and wanted to finish his classification of stars after he died. Mary Draper quickly realized the task facing her was far too daunting for one person. She had received correspondence from Mr. Pickering, a close friend of hers and her husband's. Pickering offered to help finish her husband's work, and encouraged her to publish his findings up to the time of his death. After some deliberation and much consideration, Draper decided in 1886 to donate money and a telescope of her husband's to the Harvard Observatory in order to photograph the spectra of stars. She had decided this would be the best way to continue her husband's work and cement his legacy in astronomy. She was very insistent on funding the memorial project with her own inheritance, as it would carry on her husband's legacy. She was a dedicated follower of the observatory and a great friend of Pickering's. In 1900, she funded an expedition to see the total solar eclipse occurring that year. Williamina Fleming Williamina Fleming had no prior relation to Harvard, as she was a Scottish immigrant working as Pickering's housemaid.
Her first assignment was to improve an existing catalog of stellar spectra, which later led to her appointment as head of the Henry Draper Catalogue project. Fleming went on to help develop a classification of stars based on their hydrogen content, as well as play a major role in discovering the strange nature of white dwarf stars. Williamina continued her career in astronomy when she was appointed Harvard's Curator of Astronomical Photographs in 1899, also known as Curator of the Photographic Plates. She remained the only woman curator until the 1950s. Her work also led to her becoming the first female American citizen to be elected to the Royal Astronomical Society in 1907. Antonia Maury Antonia Maury was the niece of Henry Draper and, on the recommendation of Mrs. Draper, was hired as a computer. She was a graduate of Vassar College, and was tasked with reclassifying some of the stars after the publication of the Henry Draper Catalog. Maury decided to go further, improving and redesigning the system of classification, but she had other obligations and left the observatory in 1892 and then again in 1894. Her work was finished with the help of Pickering and the computing staff and was published in 1897. She returned again in 1908 as an associate researcher. Anna Winlock Some of the first women who were hired to work as computers had familial connections to the Harvard Observatory's male staff. For instance, Anna Winlock, one of the first of the Harvard Computers, was the daughter of Joseph Winlock, the third director of the observatory and Pickering's immediate predecessor. Anna Winlock joined the observatory in 1875 to help support her family after her father's unexpected death. She tackled her father's unfinished data analysis, performing the arduous work of mathematically reducing meridian circle observations, which rescued a decade's worth of numbers that had been left in a useless state. Winlock also worked on a stellar cataloging section called the "Cambridge Zone". Working over twenty years on the project, her team's work on the Cambridge Zone contributed significantly to the Astronomische Gesellschaft Katalog, which contains information on more than one hundred thousand stars and is used worldwide by many observatories and their researchers. Within a year of Anna Winlock's hiring, three other women joined the staff: Selina Bond, Rhoda Sauders, and a third, who was likely a relative of an assistant astronomer. Annie Jump Cannon Pickering hired Annie Jump Cannon, a graduate of Wellesley College, to classify the southern stars. While at Wellesley, she took astronomy courses from one of Pickering's star students, Sarah Frances Whiting. She became the first female assistant to study variable stars at night. She studied the light curves of variable stars, which could help suggest the type and cause of variation. Cannon, adding to work done by fellow computer Antonia Maury, greatly simplified [Pickering and Fleming's star classification based on temperature] system, and in 1922, the International Astronomical Union adopted [Cannon's] as the official classification system for stars. ... During Pickering's 42-year tenure at the Harvard Observatory, which ended only a year before he died, in 1919, he received many awards, including the Bruce Medal, the Astronomical Society of the Pacific's highest honor. Craters on the moon and on Mars are named after him.
And Annie Jump Cannon's enduring achievement was dubbed the Harvard—not the Cannon—system of spectral classification. Cannon's Harvard Classification Scheme is the basis of today's familiar O B A F G K M system. She also categorized the variable stars into tables so they could be identified and compared more easily. These systems connect the color of stars to their temperature. Annie Jump Cannon was the first female scientist to be recognized with many awards and titles in her field of study. She was the first woman to receive an honorary doctorate from the University of Oxford and the Henry Draper Medal from the National Academy of Sciences, and the first female officer in the American Astronomical Society. Cannon went on to establish her own Annie Jump Cannon Award for women in postdoctoral work. Henrietta Leavitt Henrietta Swan Leavitt arrived at the observatory in 1893. She had experience through her college studies, traveling abroad, and teaching. In academia, Leavitt excelled in mathematics courses at Cambridge. When she began working at the observatory she was tasked with measuring star brightness through photometry. She found hundreds of new variable stars after starting to analyze the Great Nebula in Orion, and her work was expanded to study the variables of the entire sky with Annie Jump Cannon and Evelyn Leland. With skills gained in photometry, Leavitt compared stars in different exposures. Studying Cepheid variables in the Small Magellanic Cloud, she discovered that their apparent brightness was dependent on their period. Since all those stars were approximately the same distance from Earth, that meant their absolute brightness must depend on their period as well, allowing the use of Cepheid variables as a standard candle for determining cosmic distances. That, in turn, led directly to the modern understanding of the true size of the universe, and Cepheid variables are still an essential rung in the cosmic distance ladder. Pickering published her work with his name as co-author. The legacy she left allowed future scientists to make further discoveries in space. Astronomer Edwin Hubble used Leavitt's method to calculate the distance from the Earth to the Andromeda Galaxy, the nearest spiral galaxy. This led to the realization that there are even more galaxies than previously thought. Florence Cushman Florence Cushman (1860-1940) was an American astronomer at the Harvard College Observatory who worked on the Henry Draper Catalogue. Florence was born in Boston, Massachusetts in 1860 and received her early education at Charlestown High School, where she graduated in 1877. In 1888, she began work at the Harvard College Observatory as an employee of Edward Pickering. Her classifications of stellar spectra contributed to the Henry Draper Catalogue between 1918 and 1934. She stayed at the Observatory as an astronomer until 1937 and died in 1940 at the age of 80. Florence Cushman worked at the Harvard College Observatory from 1888 to 1937. Over the course of her nearly fifty-year career, she employed the objective prism method to analyze, classify, and catalog the optical spectra of hundreds of thousands of stars. In the 19th century, the photographic revolution enabled more detailed analysis of the night sky than had been possible with solely eye-based observations. In order to obtain optical spectra for measurement, male astronomers at the Harvard College Observatory exposed glass plates on which the astronomical images were captured at night.
During the daytime, female assistants like Florence analyzed the resultant spectra by reducing values, computing magnitudes, and cataloging their findings. She is credited with determining the positions and magnitudes of the stars listed in the 1918 edition of the Henry Draper Catalogue, which featured the spectra of roughly 222,000 stars. See also Evelyn Leland Cecilia Payne-Gaposchkin Muriel Mussells Seyfert References Further reading External links Women Astronomical Computers at the Harvard College Observatory Official Harvard Plate Stacks Website Remarkable Women Stories. The Harvard Astronomical Computers: Stargazers who made history American women astronomers Harvard University staff Sex segregation History of physics History of astronomy
Harvard Computers
[ "Astronomy" ]
2,139
[ "History of astronomy" ]
16,385,915
https://en.wikipedia.org/wiki/RMA%20tube%20designation
In the years 1942-1944, the Radio Manufacturers Association used a descriptive nomenclature system for industrial, transmitting, and special-purpose vacuum tubes. The numbering scheme was distinct both from the numbering schemes used for standard receiving tubes and from the existing transmitting-tube numbering systems used previously, such as the "800 series" numbers originated by RCA and adopted by many others. The system assigned numbers with the base form "1A21", and this numbering scheme is occasionally referred to by tube collectors and historians as the "1A21 system". The first digit of the type number was 1-9, providing a rough indication of the filament/heater power rating (and therefore the overall power-handling capabilities) of the tube. The assigned numbers were as follows:
1-- No filament/heater, or cold cathode device
2-- Up to 10 W
3-- 10-20 W
4-- 20-50 W
5-- 50-100 W
6-- 100-200 W
7-- 200-500 W
8-- 500 W-1 kW
9-- More than 1 kW
The second character was a letter broadly identifying the class of tube:
A-- Single element (ballast, barretter)
B-- Two-element device, such as: a diode; a transmit/receive tube (TR cell), a cold-cathode water-vapor discharge tube for use in radar systems that shorts the receiver input to protect it while the transmitter operates; an anti-transmit/receive tube (ATR cell), a cold-cathode water-vapor discharge tube for use in radar systems that decouples the transmitter from the antenna while it is not operating, to prevent it from wasting received energy; or a spark gap
C-- Triode
D-- Tetrode
E-- Pentode or beam power tetrode
F-- Hexode
G-- Heptode
H-- Octode
J-- Magnetically controlled types, usually incorporating a resonator (essentially, magnetrons)
K-- Electrostatically controlled types, including a resonator (klystrons and inductive output tubes)
L-- Vacuum capacitors
N-- Crystal rectifiers (this designation lived on as the "N" in the EIA/JEDEC EIA-370 solid-state device numbering standard, as in 2N2222)
P-- Photosensitive types (phototubes, photomultipliers, camera tubes, image converters)
Q-- Resonant vacuum cavities
R-- Ignitrons and mercury arc rectifiers
S-- Vacuum switches
T-- Storage, radial beam, and deflection control tubes (no known examples assigned)
V-- Flash tubes
W-- Travelling wave tubes
X-- X-ray tubes
Y-- Thermionic converters
The last 2 digits were serially assigned, beginning with 21 to avoid possible confusion with receiving tubes or CRT phosphor designations. Multiple-section tubes (like the 3E29 or 8D21) are assigned a letter corresponding to ONE set of electrodes. Oddities Like all tube numbering systems, there are many inconsistencies between theory and practice. For example, there is no assigned letter code for cathode-ray tubes. Some unusual types received rather mundane-sounding designations, based solely on electrode count, because there was no better place to put them. For example, the 2F21 is not an actual hexode, but a pattern-generating monoscope tube. Some very exotic types received generic designators, even when there was a more appropriate designator available. For example, the 2H21 "phasitron" phase modulator tube used in early FM broadcast transmitters was assigned an "H" (octode) designator, when it would have been a perfect candidate for the otherwise unused "T" category for deflection-controlled tubes. The first-digit filament/heater power rating confusingly gathers valves of widely differing ratings. The 2G21 is a subminiature triode-hexode, with a maximum anode (plate) current of some 0.2 milliamps and a maximum voltage of 45 volts. The 2J42 magnetron, with a power output of some 7 kilowatts, is rated for an anode current of 4.5 amps (pulse peak) at an anode voltage of 5,500 volts.
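Because both descriptive fields are simple table lookups, a designation is easy to decode mechanically. The following is a minimal illustrative sketch (not from any published tool, with the class table abridged) that parses a designation such as "2D21" using the tables above:

```python
import re

# First digit: rough filament/heater power rating.
POWER_CLASS = {
    "1": "no filament/heater, or cold cathode", "2": "up to 10 W",
    "3": "10-20 W", "4": "20-50 W", "5": "50-100 W", "6": "100-200 W",
    "7": "200-500 W", "8": "500 W-1 kW", "9": "more than 1 kW",
}

# Second character: broad class of tube (abridged from the list above).
TUBE_CLASS = {
    "A": "single element", "B": "two-element device", "C": "triode",
    "D": "tetrode", "E": "pentode or beam power tetrode", "F": "hexode",
    "G": "heptode", "H": "octode", "J": "magnetron",
    "K": "klystron or inductive output tube", "L": "vacuum capacitor",
    "N": "crystal rectifier", "P": "photosensitive type",
    "Q": "resonant vacuum cavity", "R": "ignitron or mercury arc rectifier",
    "S": "vacuum switch", "T": "storage/radial beam/deflection control",
    "V": "flash tube", "W": "travelling wave tube", "X": "X-ray tube",
    "Y": "thermionic converter",
}

def decode(designation: str) -> str:
    """Decode an RMA '1A21'-style designation into a description."""
    m = re.fullmatch(r"([1-9])([A-Y])(\d{2,})", designation.upper())
    if not m:
        raise ValueError(f"not a 1A21-style designation: {designation!r}")
    digit, letter, serial = m.groups()
    if int(serial) < 21:
        raise ValueError("serial numbers start at 21 in this scheme")
    return (f"{designation}: {TUBE_CLASS.get(letter, 'unassigned class')}, "
            f"filament/heater power {POWER_CLASS[digit]} (serial {serial})")

print(decode("2D21"))  # 2D21: tetrode, filament/heater power up to 10 W (serial 21)
print(decode("5C22"))  # 5C22 decodes as a triode; the letter encodes electrode
                       # count, not function (the 5C22 is actually a thyratron)
```

Note how the 5C22 example reproduces the "oddity" described above: the letter code reflects electrode structure only, so functionally exotic tubes can receive mundane-looking designations.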
Famous types Many of the "1A21" series are well known to collectors and restorers of WW2-vintage radio equipment. A short list of well-known or historic types numbered under this system:
1N23—Silicon point-contact diode used in early radar mixers.
1P25—Infrared image converter used in WW2 night-vision "sniperscopes".
2C39—"Oilcan" type planar triode.
2C43—"Lighthouse" type planar triode.
2D21—Miniature glass tetrode thyratron used in jukeboxes and computer equipment.
2P23—Early image orthicon TV camera tube.
3B28—Xenon half-wave rectifier—ruggedized replacement for mercury-vapor type 866.
3E29—Dual beam power tube used in radar equipment—a pulse-rated variant of the earlier 829B.
4D21—VHF beam tetrode better known by Eimac commercial number 4-125A.
5C22—Hydrogen thyratron for radar modulators.
6C21—Triode radar modulator for "hard tube" pulsers.
8D21—Internally water-cooled dual tetrode used in early VHF TV transmitters.
This numbering system was abandoned in 1944 in favor of a non-descriptive system of 4-digit numbers beginning with 5500. This new system persisted until the final days of tubes, with type numbers registered up into the 9000 series. References Sibley, Ludwell. Tube Lore: A Reference for Users and Collectors, 1st edition, 1996. See also List of vacuum tubes RETMA tube designation Mullard–Philips tube designation Russian tube designations Vacuum tubes Electronics lists
RMA tube designation
[ "Physics" ]
1,261
[ "Vacuum tubes", "Vacuum", "Matter" ]
16,388,251
https://en.wikipedia.org/wiki/Monomial%20group
In mathematics, in the area of algebra studying the character theory of finite groups, an M-group or monomial group is a finite group whose complex irreducible characters are all monomial, that is, induced from degree-1 (linear) characters of subgroups. Only finite groups are considered here. A monomial group is solvable. Every supersolvable group and every solvable A-group is a monomial group. Factor groups of monomial groups are monomial, but subgroups need not be, since every finite solvable group can be embedded in a monomial group. The symmetric group of degree four, S4, is an example of a monomial group that is neither supersolvable nor an A-group. The special linear group SL(2,3) is the smallest finite group that is not monomial: since the abelianization of this group has order three, the group has no subgroup of index two, so its irreducible characters of degree two cannot be induced from linear characters of any subgroup. Notes References Finite groups Properties of groups
Monomial group
[ "Mathematics" ]
198
[ "Mathematical structures", "Algebraic structures", "Finite groups", "Properties of groups" ]
16,388,782
https://en.wikipedia.org/wiki/Single-wavelength%20anomalous%20diffraction
Single-wavelength anomalous diffraction (SAD) is a technique used in X-ray crystallography that facilitates the determination of the structure of proteins or other biological macromolecules by allowing the solution of the phase problem. In contrast to multi-wavelength anomalous diffraction (MAD), SAD uses a single dataset at a single appropriate wavelength. Compared to MAD, SAD has weaker phasing power and requires density modification to resolve phase ambiguity. This downside is not as important as SAD's main advantage: the minimization of time spent in the beam by the crystal, thus reducing potential radiation damage to the molecule while collecting data. SAD also allows a wider choice of heavy atoms and can be conducted without a synchrotron beamline. Today, selenium-SAD is commonly used for experimental phasing due to the development of methods for selenomethionine incorporation into recombinant proteins. SAD is sometimes called "single-wavelength anomalous dispersion", but no dispersive differences are used in this technique since the data are collected at a single wavelength.
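The anomalous signal that SAD exploits comes from the imaginary component f'' of an anomalous scatterer's form factor, which breaks Friedel's law (the rule that |F(hkl)| = |F(-h,-k,-l)| when all scattering factors are real). The toy calculation below is a sketch with made-up one-dimensional coordinates and scattering factors rather than real crystallographic data; it shows how a nonzero f'' produces the measurable Bijvoet differences on which SAD phasing rests:

```python
import numpy as np

# Toy one-dimensional "crystal": fractional coordinates and form factors.
# All values are illustrative only, not real atomic data.
x = np.array([0.13, 0.37, 0.62])            # atom positions (fractional)
f0 = np.array([6.0, 8.0, 30.0])             # normal (real) scattering factors
f_anom = np.array([0.0, 0.0, -1.5 + 4.0j])  # f' + i*f'' for one heavy atom

def structure_factor(h, f):
    """F(h) = sum_j f_j * exp(2*pi*i*h*x_j)."""
    return np.sum(f * np.exp(2j * np.pi * h * x))

f = f0 + f_anom
for h in (1, 2, 3):
    F_plus, F_minus = structure_factor(h, f), structure_factor(-h, f)
    # With purely real f, F(-h) is the complex conjugate of F(h), so the
    # magnitudes are equal; the imaginary f'' term breaks that symmetry.
    print(f"h={h}: |F(+h)|={abs(F_plus):7.3f}  |F(-h)|={abs(F_minus):7.3f}  "
          f"Bijvoet diff={abs(F_plus) - abs(F_minus):+.3f}")
```

In a real SAD experiment these small intensity differences between Friedel mates, measured at a wavelength where f'' of the chosen scatterer (e.g. selenium) is large, are what allow the heavy-atom substructure, and from it the protein phases, to be estimated.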
See also Multi-wavelength anomalous dispersion (MAD) Multiple isomorphous replacement (MIR) Anomalous scattering Anomalous X-ray scattering Patterson map References Further reading W. A. Hendrickson (1985). "Analysis of Protein Structure from Diffraction Measurement at Multiple Wavelengths". Trans. ACA Vol 21. J. Karle (1980). "Some Developments in Anomalous Dispersion for the Structural Investigation of Macromolecular Systems in Biology". International Journal of Quantum Chemistry: Quantum Biology Symposium 7, 357–367. J. Karle (1989). "Linear Algebraic Analyses of Structures with One Predominant Type of Anomalous Scatterer". Acta Crystallogr. A45, 303–307. A. Pahler, J. L. Smith & W. A. Hendrickson (1990). "A Probability Representation for Phase Information from Multiwavelength Anomalous Dispersion". Acta Crystallogr. A46, 537–540. T. C. Terwilliger (1994). "MAD Phasing: Bayesian Estimates of FA". Acta Crystallogr. D50, 11–16. T. C. Terwilliger (1994). "MAD Phasing: Treatment of Dispersive Differences as Isomorphous Replacement Information". Acta Crystallogr. D50, 17–23. R. Fourme, W. Shepard, R. Kahn, G. l'Hermite & I. L. de La Sierra (1995). "The Multiwavelength Anomalous Solvent Contrast (MASC) Method in Macromolecular Crystallography". J. Synchrotron Rad. 2, 36–48. E. de la Fortelle and G. Bricogne (1997). "Maximum-Likelihood Heavy-Atom Parameter Refinement for Multiple Isomorphous Replacement and Multiwavelength Anomalous Diffraction Methods". Methods in Enzymology 276, 472–494. W. A. Hendrickson and C. M. Ogata (1997). "Phase Determination from Multiwavelength Anomalous Diffraction Measurements". Methods in Enzymology 276, 494–523. J. Bella & M. G. Rossmann (1998). "A General Phasing Algorithm for Multiple MAD and MIR Data". Acta Crystallogr. D54, 159–174. J. M. Guss, E. A. Merritt, R. P. Phizackerley, B. Hedman, M. Murata, K. O. Hodgson, and H. C. Freeman (1989). "Phase determination by multiple-wavelength X-ray diffraction: crystal structure of a basic blue copper protein from cucumbers". Science 241, 806–811. B. Vijayakumar and D. Velmurugan (2013). "Use of europium ions for SAD phasing of lysozyme at the Cu Kα wavelength". Acta Crystallogr. F69, 20–24. J. P. Rose & B.-C. Wang (2016). "SAD phasing: History, current impact and future opportunities". Archives Biochem Biophys 602, 80–94. External links MAD phasing — an in-depth tutorial with examples, illustrations, and references. Computer programs The SSRL Absorption Package — CHOOCH — Shake-and-Bake (SnB) — SHELX — Tutorials and examples Crystallography
Single-wavelength anomalous diffraction
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
977
[ "Crystallography", "Condensed matter physics", "Materials science" ]
16,390,520
https://en.wikipedia.org/wiki/Low-power%20electronics
Low-power electronics are electronics designed to consume less electrical power than usual, often at some expense. For example, notebook processors usually consume less power than their desktop counterparts, at the expense of computer performance. History Watches The earliest attempts to reduce the amount of power required by an electronic device were related to the development of the wristwatch. Electronic watches require electricity as a power source, and some mechanical movements and hybrid electromechanical movements also require electricity. Usually, the electricity is provided by a replaceable battery. The first use of electrical power in watches was as a substitute for the mainspring, to remove the need for winding. The first electrically powered watch, the Hamilton Electric 500, was released in 1957 by the Hamilton Watch Company of Lancaster, Pennsylvania. The first quartz wristwatches were manufactured in 1967, using analog hands to display the time. Watch batteries (strictly speaking cells, as a battery is composed of multiple cells) are specially designed for their purpose. They are very small and provide tiny amounts of power continuously for very long periods (several years or more). In some cases, replacing the battery requires a trip to a watch repair shop or watch dealer. Rechargeable batteries are used in some solar-powered watches. The first digital electronic watch was a Pulsar LED prototype produced in 1970. Digital LED watches were very expensive and out of reach to the common consumer until 1975, when Texas Instruments started to mass-produce LED watches inside a plastic case. Most watches with LED displays required that the user press a button to see the time displayed for a few seconds, because LEDs used so much power that they could not be kept operating continuously. Watches with LED displays were popular for a few years, but soon the LED displays were superseded by liquid crystal displays (LCDs), which used less battery power and were much more convenient in use, with the display always visible and no need to push a button before seeing the time. Only in darkness did the user have to press a button to light the display, at first with a tiny incandescent bulb and later with LEDs. Most electronic watches today use 32.768 kHz quartz oscillators. As of 2013, processors specifically designed for wristwatches are the lowest-power processors manufactured—often 4-bit, 32.768 kHz processors. Mobile computing When personal computers were first developed, power consumption was not an issue. With the development of portable computers, however, the requirement to run a computer off a battery pack necessitated the search for a compromise between computing power and power consumption. Originally most processors ran both the core and I/O circuits at 5 volts, as in the Intel 8088 used by the first Compaq Portable. Core voltages were later reduced to 3.5, 3.3, and 2.5 volts to lower power consumption. For example, the Pentium P5 core voltage decreased from 5 V in 1993 to 2.5 V in 1997. With lower voltage comes lower overall power consumption, making a system less expensive to run on any existing battery technology and able to function for longer. This is crucially important for portable or mobile systems. The emphasis on battery operation has driven many of the advances in lowering processor voltage, because this has a significant effect on battery life. The second major benefit is that with less voltage and therefore less power consumption, there will be less heat produced.
Processors that run cooler can be packed into systems more tightly and will last longer. The third major benefit is that a processor running cooler on less power can be made to run faster. Lowering the voltage has been one of the key factors in allowing the clock rate of processors to go higher and higher. Electronics Computing elements The density and speed of integrated-circuit computing elements has increased exponentially for several decades, following a trend described by Moore's Law. While it is generally accepted that this exponential improvement trend will end, it is unclear exactly how dense and fast integrated circuits will get by the time this point is reached. Working devices have been demonstrated which were fabricated with a MOSFET transistor channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that use carbon nanotubes as MOSFET gates, giving a channel length of approximately one nanometre. The density and computing power of integrated circuits are limited primarily by power-dissipation concerns. The overall power consumption of a new personal computer has been increasing at about 22% per year. This increase in consumption comes even though the energy consumed by a single CMOS logic gate in order to change its state has fallen exponentially in accordance with Moore's law, by virtue of shrinkage. An integrated-circuit chip contains many capacitive loads, formed both intentionally (as with gate-to-channel capacitance) and unintentionally (between conductors which are near each other but not electrically connected). Changing the state of the circuit causes a change in the voltage across these parasitic capacitances, which involves a change in the amount of stored energy. As the capacitive loads are charged and discharged through resistive devices, an amount of energy comparable to that stored in the capacitor, E = ½CV² (for a capacitance C charged to a voltage V), is dissipated as heat. The effect of heat dissipation on state change is to limit the amount of computation that may be performed within a given power budget. While device shrinkage can reduce some parasitic capacitances, the number of devices on an integrated circuit chip has increased more than enough to compensate for reduced capacitance in each individual device. Some circuits – dynamic logic, for example – require a minimum clock rate in order to function properly, wasting "dynamic power" even when they do not perform useful computations. Other circuits – most prominently, the RCA 1802, but also several later chips such as the WDC 65C02, the Intel 80C85, the Freescale 68HC11 and some other CMOS chips – use "fully static logic" that has no minimum clock rate, but can "stop the clock" and hold their state indefinitely. When the clock is stopped, such circuits use no dynamic power, but they still have a small, static power consumption caused by leakage current. As circuit dimensions shrink, subthreshold leakage current becomes more prominent. This leakage current results in power consumption even when no switching is taking place (static power consumption). In modern chips, this current generally accounts for half the power consumed by the IC. Reducing power loss Loss from subthreshold leakage can be reduced by raising the threshold voltage and lowering the supply voltage. Both these changes slow down the circuit significantly. To address this issue, some modern low-power circuits use dual supply voltages to improve speed on critical paths of the circuit and lower power consumption on non-critical paths.
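To see why supply-voltage choices dominate these trade-offs, consider the standard first-order model of CMOS dynamic power, P = α·C·V²·f (switching activity α, switched capacitance C, supply voltage V, clock frequency f), which follows directly from the per-switch energy E = ½CV² discussed above. The numbers in this sketch are illustrative, not measurements of any particular chip:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """First-order CMOS dynamic power: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hertz

# Illustrative values: 10% switching activity, 1 nF total effective
# switched capacitance, 1 GHz clock.
alpha, C, f = 0.10, 1e-9, 1e9

for v in (5.0, 3.3, 2.5, 1.0):
    p = dynamic_power(alpha, C, v, f)
    print(f"V = {v:.1f} V -> dynamic power = {p:5.2f} W")

# Because power scales with V^2, dropping the supply from 5.0 V to 2.5 V
# cuts dynamic power by 4x at the same clock frequency, and the energy per
# switching event (~C*V^2/2) falls by the same factor.
```

This quadratic dependence is why the historical core-voltage reductions described earlier (5 V down to 2.5 V and below) paid off so dramatically for heat and battery life, and why dual-voltage designs reserve the higher supply for the few paths that truly need the speed.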
Some circuits even use different transistors (with different threshold voltages) in different parts of the circuit, in an attempt to further reduce power consumption without significant performance loss. Another method used to reduce power consumption is power gating: the use of sleep transistors to disable entire blocks when not in use. Systems that lie dormant for long periods and "wake up" to perform a periodic task are often in isolated locations, monitoring some activity. These systems are generally battery- or solar-powered, and hence reducing power consumption is a key design issue for these systems. By shutting down a functional but leaky block until it is used, leakage current can be reduced significantly. For some embedded systems that only function for short periods at a time, this can dramatically reduce power consumption. Two other approaches also exist to lower the power overhead of state changes. One is to reduce the operating voltage of the circuit, as in a dual-voltage CPU, or to reduce the voltage change involved in a state change (changing a node's voltage by only a fraction of the supply voltage on each state change – low-voltage differential signaling, for example). This approach is limited by thermal noise within the circuit. There is a characteristic voltage (proportional to the device temperature and to the Boltzmann constant) which the state-switching voltage must exceed in order for the circuit to be resistant to noise. This is typically on the order of 50–100 mV for devices rated to 100 degrees Celsius external temperature (about 4 kT, where T is the device's internal temperature in kelvins and k is the Boltzmann constant). The second approach is to attempt to provide charge to the capacitive loads through paths that are not primarily resistive. This is the principle behind adiabatic circuits. The charge is supplied either from a variable-voltage inductive power supply or by other elements in a reversible-logic circuit. In both cases, the charge transfer must be primarily regulated by the non-resistive load. As a practical rule of thumb, this means the change rate of a signal must be slower than that dictated by the RC time constant of the circuit being driven. In other words, the price of reduced power consumption per unit computation is a reduced absolute speed of computation. In practice, although adiabatic circuits have been built, it has been difficult for them to reduce computation power substantially in practical circuits. Finally, there are several techniques for reducing the number of state changes associated with a given computation. For clocked-logic circuits, the clock-gating technique is used to avoid changing the state of functional blocks that are not required for a given operation. As a more extreme alternative, the asynchronous logic approach implements circuits in such a way that a specific externally supplied clock is not required. While both of these techniques are used to different extents in integrated circuit design, the limit of practical applicability for each appears to have been reached. Wireless communication elements There are a variety of techniques for reducing the amount of battery power required for a desired wireless communication goodput. Some wireless mesh networks use "smart" low-power broadcasting techniques that reduce the battery power required to transmit. This can be achieved by using power-aware protocols and joint power control systems.
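For the kind of battery-powered, mostly dormant systems described above, the payoff of power gating and sleep modes can be estimated with a simple two-state model: average current is the duty-cycle-weighted mix of active and sleep currents. The figures below are made up for illustration and are not specifications of any real device:

```python
def battery_life_days(capacity_mah, i_active_ma, i_sleep_ma, duty_cycle):
    """Estimate battery life from a simple two-state (active/sleep) model."""
    i_avg_ma = duty_cycle * i_active_ma + (1.0 - duty_cycle) * i_sleep_ma
    return capacity_mah / i_avg_ma / 24.0  # hours -> days

capacity = 1000.0  # mAh, roughly an AA-class cell (illustrative)
i_active = 20.0    # mA while awake, measuring, and transmitting
i_sleep = 0.005    # mA (5 uA) with power-gated blocks shut down

for duty in (1.0, 0.01, 0.001):  # always on, 1% awake, 0.1% awake
    days = battery_life_days(capacity, i_active, i_sleep, duty)
    print(f"duty cycle {duty:>6.1%}: ~{days:7.0f} days")
```

Once the duty cycle is low, the sleep current, which is set largely by leakage in the always-on domain, quickly becomes the limiting factor; this is why the leakage-reduction techniques described above matter so much for this class of device.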
Costs In 2007, about 10% of the average IT budget was spent on energy, and energy costs for IT were expected to rise to 50% by 2010. The weight and cost of power supply and cooling systems generally depend on the maximum possible power that could be used at any one time. There are two ways to prevent a system from being permanently damaged by excessive heat. Most desktop computers design power and cooling systems around the worst-case CPU power dissipation at the maximum frequency, maximum workload, and worst-case environment. To reduce weight and cost, many laptop computers choose to use a much lighter, lower-cost cooling system designed around a much lower Thermal Design Power, somewhat above the power expected at maximum frequency under a typical workload in a typical environment. Typically such systems reduce (throttle) the clock rate when the CPU die temperature gets too hot, reducing the power dissipated to a level that the cooling system can handle. Examples Transmeta Acorn RISC Machine (ARM) AMULET microprocessor Microchip nanoWatt XLP PIC microcontrollers Texas Instruments MSP430 microcontrollers Energy Micro/Silicon Labs EFM32 microcontrollers STMicroelectronics STM32 microcontrollers Atmel/Microchip SAM L microcontrollers See also Processor power dissipation Common Power Format Data organization for low power IT energy management Performance per watt Power management Green computing Dynamic frequency scaling Overclocking Underclocking Dynamic voltage scaling Operand isolation Glitch removal Autonomous peripheral operation References Further reading External links "High-level design synthesis of a low power, VLIW processor for the IS-54 VSELP Speech Encoder" by Russell Henning and Chaitali Chakrabarti (NB. Implies that, in general, if the algorithm to run is known, hardware designed to specifically run that algorithm will use less power than general-purpose hardware running that algorithm at the same speed.) CRISP: A Scalable VLIW Processor for Low Power Multimedia Systems by Francisco Barat 2005 A Loop Accelerator for Low Power Embedded VLIW Processors by Binu Mathew and Al Davis Ultra-Low Power Design by Jack Ganssle K. Roy and S. Prasad, Low-Power CMOS VLSI Circuit Design, John Wiley & Sons, Inc., 2000, 359 pages. K-S. Yeo and K. Roy, Low-Voltage Low-Power VLSI Subsystems, McGraw-Hill, 2004, 294 pages. Electric power Electronics and the environment
Low-power electronics
[ "Physics", "Engineering" ]
2,582
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
16,391,023
https://en.wikipedia.org/wiki/Vladimir%20Markovic
Vladimir Marković is a Professor of Mathematics at the University of Oxford. He was previously the John D. MacArthur Professor at the California Institute of Technology (2013–2020) and Sadleirian Professor of Pure Mathematics at the University of Cambridge (2013–2014). Education Marković was educated at the University of Belgrade, where he was awarded a Bachelor of Science degree in 1995 and a PhD in 1998. Career and research Marković previously held positions at the University of Warwick, Stony Brook University and the University of Minnesota. He is an editor of the Proceedings of the London Mathematical Society. His research interests are in low-dimensional geometry, topology and dynamics, and functional and geometric analysis. Awards and honours Marković was elected a Fellow of the Royal Society (FRS) in 2014. He was also awarded the Clay Research Award in 2012, and the Whitehead Prize and Philip Leverhulme Prize in 2004. In the fall of 2015, Marković was a member of the Institute for Advanced Study. In 2016 he received a Simons Investigator Award. References External links Caltech: Markovic Elected to Great Britain's Royal Society Living people Serbian mathematicians 20th-century American mathematicians Fellows of the Royal Society Royal Society Wolfson Research Merit Award holders Whitehead Prize winners Clay Research Award recipients 1973 births Topologists Simons Investigator University of Belgrade alumni Sadleirian Professors of Pure Mathematics California Institute of Technology faculty Stony Brook University faculty Academics of the University of Warwick University of Minnesota faculty 21st-century American mathematicians
Vladimir Markovic
[ "Mathematics" ]
298
[ "Topologists", "Topology" ]
16,391,238
https://en.wikipedia.org/wiki/Siliceous%20ooze
Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep sea sediments, and make up approximately 15% of the ocean floor. Oozes are defined as sediments which contain at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica-based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica (SiO2·nH2O), as opposed to calcareous oozes, which are made from the calcium carbonate (CaCO3) skeletons of organisms such as coccolithophores. Silica (Si) is a bioessential element and is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth and ocean fertility are all factors that affect the opal silica content in seawater and the presence of siliceous oozes. Formation Biological uptake of marine silica Siliceous marine organisms, such as diatoms and radiolarians, use silica to form skeletons through a process known as biomineralization. Diatoms and radiolarians have evolved to take up silica in the form of silicic acid, Si(OH)4. Once an organism has sequestered Si(OH)4 molecules in its cytoplasm, the molecules are transported to silica deposition vesicles where they are transformed into opal silica (B-SiO2). Diatoms and radiolarians have specialized proteins called silicon transporters that prevent mineralization during the sequestration and transportation of silicic acid within the organism. The chemical equation for the biological uptake of silicic acid is: H4SiO4(aq) ⇌ SiO2·nH2O(s) + (2−n)H2O(l) Opal silica saturation state The opal silica saturation state increases with depth in the ocean due to the dissolution of sinking opal particles produced in surface ocean waters, but still stays low enough that the reaction to form biogenic opal silica remains thermodynamically unfavorable. Despite the unfavorable conditions, organisms can use dissolved silicic acid to make opal silica shells through biologically controlled biomineralization. The amount of opal silica that makes it to the seafloor is determined by the rates of sinking and dissolution, and by the water column depth. Export of silica to the deep ocean The dissolution rate of sinking opal silica (B-SiO2) in the water column affects the formation of siliceous ooze on the ocean floor. The rate of dissolution of silica depends on the saturation state of opal silica in the water column and on the re-packaging of opal silica particles within larger particles from the surface ocean. Re-packaging is the formation (and sometimes re-formation) of solid organic matter (usually fecal pellets) around opal silica. The organic matter protects against the immediate dissolution of opal silica into silicic acid, which allows for increased sedimentation to the seafloor. The opal compensation depth, similar to the carbonate compensation depth, occurs at approximately 6000 meters. Below this depth, there is greater dissolution of opal silica into silicic acid than formation of opal silica from silicic acid. Only four percent of the opal silica produced in the surface ocean will, on average, be deposited on the seafloor, while the remaining 96% is recycled in the water column. Accumulation rates Siliceous oozes accumulate over long timescales.
In the open ocean, siliceous ooze accumulates at a rate of approximately 0.01 mol Si m−2 yr−1. The fastest accumulation rates of siliceous ooze occur in the deep waters of the Southern Ocean (0.1 mol Si m−2 yr−1), where biogenic silica production and export are greatest. The diatom and radiolarian skeletons that make up Southern Ocean oozes can take 20 to 50 years to sink to the sea floor. Siliceous particles may sink faster if they are encased in the fecal pellets of larger organisms. Once deposited, silica continues to dissolve and cycle, delaying long-term burial of particles until a depth of 10–20 cm in the sediment layer is reached. Marine chert formation When opal silica accumulates faster than it dissolves, it is buried and can provide a diagenetic environment for marine chert formation. The processes leading to chert formation have been observed in the Southern Ocean, where siliceous ooze accumulation is fastest. Chert formation, however, can take tens of millions of years. Skeleton fragments from siliceous organisms are subject to recrystallization and cementation. Chert is the main fate of buried siliceous ooze and permanently removes silica from the oceanic silica cycle. Geographic locations Siliceous oozes form in upwelling areas that provide valuable nutrients for the growth of siliceous organisms living in oceanic surface waters. A notable example is the Southern Ocean, where the consistent upwelling of Indian, Pacific, and Antarctic circumpolar deep water has resulted in a contiguous belt of siliceous ooze that stretches around the globe. A band of siliceous ooze in Pacific Ocean sediments below the North Equatorial Current is the result of enhanced equatorial upwelling. In the subpolar North Pacific, upwelling occurs along the eastern and western sides of the basin from the Alaska Current and the Oyashio Current. Siliceous ooze is present along the seafloor in these subpolar regions. Ocean basin boundary currents, such as the Humboldt Current and the Somali Current, are examples of other upwelling currents that favor the formation of siliceous ooze. Siliceous ooze is usually categorized based upon its composition. Diatomaceous oozes are predominantly formed of diatom skeletons and are typically found along continental margins in higher latitudes. Diatomaceous oozes are present in the Southern Ocean and the North Pacific Ocean. Radiolarian oozes are made mostly of radiolarian skeletons and are located mainly in tropical equatorial and subtropical regions. Examples of radiolarian ooze are the oozes of the equatorial region, the subtropical Pacific region, and the subtropical basin of the Indian Ocean. A small surface area of deep sea sediment is covered by radiolarian ooze in the equatorial East Atlantic basin. Role in the oceanic silica cycle Deep seafloor deposition in the form of ooze is the largest long-term sink of the oceanic silica cycle (6.3 ± 3.6 Tmol Si yr−1). As noted above, this ooze is diagenetically transformed into lithospheric marine chert. This sink is roughly balanced by silicate weathering and river inputs of silicic acid into the ocean. Biogenic silica production in the photic zone is estimated to be 240 ± 40 Tmol Si yr−1. Rapid dissolution in the surface ocean removes roughly 135 Tmol opal Si yr−1, converting it back to soluble silicic acid that can be used again for biomineralization. The remaining opal silica is exported to the deep ocean in sinking particles. In the deep ocean, another 26.2 Tmol Si yr−1 is dissolved before being deposited to the sediments as opal silica. At the sediment-water interface, over 90% of the silica is recycled and upwelled for use again in the photic zone. The residence time on a biological timescale is estimated to be about 400 years, with each molecule of silica recycled 25 times before sediment burial.
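These budget figures can be checked for internal consistency with a little arithmetic. The short sketch below is illustrative only, using the rounded central estimates quoted in the text (all in Tmol Si per year):

```python
# Central estimates from the text, in Tmol Si per year.
production = 240.0          # biogenic silica production in the photic zone
surface_dissolution = 135.0 # redissolved in surface waters
deep_dissolution = 26.2     # dissolved in the deep water column
burial = 6.3                # long-term sink as siliceous ooze / chert

export = production - surface_dissolution       # leaves the photic zone
reaches_seafloor = export - deep_dissolution    # settles onto the sediments
benthic_recycled = reaches_seafloor - burial    # returned at the sediment-water interface

print(f"export from the surface ocean: {export:6.1f} Tmol/yr")          # ~105
print(f"reaching the seafloor:         {reaches_seafloor:6.1f} Tmol/yr")  # ~79
print(f"benthic recycling fraction:    {benthic_recycled / reaches_seafloor:6.1%}")  # >90%
print(f"preservation (burial/production): {burial / production:6.1%}")   # ~2-3%
```

The computed benthic recycling fraction of roughly 92% matches the statement above that over 90% of the silica reaching the sediments is returned to the water column, and the overall preservation efficiency of a few percent is of the same order as the "four percent" figure quoted earlier for opal that escapes recycling.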
Siliceous oozes and carbon sequestration Diatoms are primary producers that convert carbon dioxide into organic carbon via photosynthesis and export organic carbon from the surface ocean to the deep sea via the biological pump. Diatoms can therefore be a significant sink for carbon dioxide in surface waters. Due to their relatively large size (compared to other phytoplankton), diatoms are able to take up more total carbon dioxide. Additionally, diatoms do not release carbon dioxide into the environment during the formation of their opal silicate shells. Phytoplankton that build calcium carbonate shells (i.e. coccolithophores) release carbon dioxide as a byproduct during shell formation, making them a less efficient sink for carbon dioxide. The opal silicate skeletons enhance the sinking velocity of diatomaceous particles (i.e. carbon) from the surface ocean to the seafloor. Iron fertilization experiments Atmospheric carbon dioxide levels have been increasing exponentially since the Industrial Revolution, and researchers are exploring ways to mitigate atmospheric carbon dioxide levels by increasing the uptake of carbon dioxide in the surface ocean via photosynthesis. An increase in the uptake of carbon dioxide in the surface waters may lead to more carbon sequestration in the deep sea through the biological pump. The bloom dynamics of diatoms, their ballasting by opal silica, and their various nutrient requirements have made diatoms a focus for carbon sequestration experiments. Iron fertilization projects like the SERIES iron-enrichment experiments have introduced iron into ocean basins to test whether this increases the rate of carbon dioxide uptake by diatoms and ultimately the amount of carbon sunk to the deep ocean. Iron is a limiting nutrient for diatom photosynthesis in high-nutrient, low-chlorophyll areas of the ocean, so increasing the amount of available iron can lead to a subsequent increase in photosynthesis, sometimes resulting in a diatom bloom. This increase removes more carbon dioxide from the atmosphere. Although more carbon dioxide is taken up, the carbon sequestration rate in deep sea sediments is generally low: most of the carbon dioxide taken up during photosynthesis is recycled within the surface layer several times before making it to the deep ocean to be sequestered. Paleo-oozes Before siliceous organisms During the Precambrian, oceanic silica concentrations were an order of magnitude higher than in modern oceans. Biosilicification is thought to have evolved during this time period. Siliceous oozes formed once silica-sequestering organisms such as radiolarians and diatoms began to flourish in the surface waters. Evolution of siliceous organisms Radiolaria Fossil evidence suggests that radiolarians first emerged during the late Cambrian as free-floating shallow-water organisms. They did not become prominent in the fossil record until the Ordovician. Radiolarians evolved in upwelling regions in areas of high primary productivity and are the oldest known organisms capable of shell secretion.
The remains of radiolarians are preserved in chert, a byproduct of siliceous ooze transformation. Major speciation events of radiolarians occurred during the Mesozoic. Many of those species are now extinct in the modern ocean. Scientists hypothesize that competition with diatoms for dissolved silica during the Cenozoic is the likely cause for the mass extinction of most radiolarian species. Diatoms The oldest well-preserved diatom fossils have been dated to the beginning of the Jurassic period. However, the molecular record suggests diatoms evolved at least 250 million years ago, during the Triassic. As new species of diatoms evolved and spread, oceanic silica levels began to decrease. Today, there are an estimated 100,000 species of diatoms, most of which are microscopic (2–200 μm). Some early diatoms were larger, and could be between 0.2 and 22 mm in diameter. The earliest diatoms were radial centrics, and lived in shallow water close to shore. These early diatoms were adapted to live on the benthos, as their outer shells were heavy and prevented them from free-floating. Free-floating diatoms, known as bipolar and multipolar centrics, began evolving approximately 100 million years ago, during the Cretaceous. Fossil diatoms are preserved in diatomite (also known as diatomaceous earth), which is one of the by-products of the transformation from ooze to rock formation. As diatomaceous particles began to sink to the ocean floor, carbon and silica were sequestered along continental margins. The carbon sequestered along continental margins has become the major petroleum reserves of today. Diatom evolution marks a time in Earth's geologic history of significant removal of carbon dioxide from the atmosphere while simultaneously increasing atmospheric oxygen levels. How scientists use paleo-ooze Paleoceanographers study prehistoric oozes to learn about changes in the oceans over time. The sediment distribution and deposition patterns of oozes inform scientists about prehistoric areas of the oceans that exhibited prime conditions for the growth of siliceous organisms. Scientists examine paleo-ooze by taking cores of deep sea sediments. Sediment layers in these cores reveal the deposition patterns of the ocean over time. Scientists use paleo-oozes as tools so that they can better infer the conditions of the paleo oceans. Paleo-ooze accretion rates can be used to determine deep sea circulation, tectonic activity, and climate at a specific point in time. Oozes are also useful in determining the historical abundances of siliceous organisms. Burubaital Formation The Burubaital Formation, located in the West Balkhash region of Kazakhstan, is the oldest known abyssal biogenic deposit. The Burubaital Formation is primarily composed of chert which was formed over a period of 15 million years (late Cambrian–middle Ordovician). It is likely that these deposits were formed in an upwelling region in subequatorial latitudes. The Burubaital Formation is largely composed of radiolarites, as diatoms had yet to evolve at the time of its formation. The Burubaital deposits have led researchers to believe that radiolaria played a significant role in the late Cambrian silica cycle. The late Cambrian (497–485.4 mya) marks a time of transition for marine biodiversity and is the beginning of ooze accumulation on the seafloor. Distribution shifts during the Miocene A shift in the geographical distribution of siliceous oozes occurred during the Miocene.
Sixteen million years ago there was a gradual decline in siliceous ooze deposits in the North Atlantic and a concurrent rise in siliceous ooze deposits in the North Pacific. Scientists speculate that this regime shift may have been caused by the introduction of Nordic Sea Overflow Water, which contributed to the formation of North Atlantic Deep Water (NADW). The formation of Antarctic Bottom Water (AABW) occurred at approximately the same time as the formation of NADW. The formation of NADW and AABW dramatically transformed the ocean, and resulted in a spatial population shift of siliceous organisms. Paleocene plankton blooms The Cretaceous–Tertiary boundary was a time of global mass extinction, commonly referred to as the K–T mass extinction. While most organisms were disappearing, marine siliceous organisms were thriving in the early Paleocene seas. One such example occurred in the waters near Marlborough, New Zealand. Paleo-ooze deposits indicate that there was a rapid growth of both diatoms and radiolarians at this time. Scientists believe that this period of high biosiliceous productivity is linked to global climatic changes. This boom in siliceous plankton was greatest during the first one million years of the Tertiary period and is thought to have been fueled by enhanced upwelling in response to a cooling climate and increased nutrient cycling due to a change in sea level. See also Diatomaceous earth Calcareous ooze References Sedimentary rocks Chert Oceanography Diatom biology
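The silica budget described above lends itself to a quick sanity check. A minimal sketch in Python, using only the flux values quoted in this article (the variable names are illustrative); the residuals inherit the large uncertainties on the quoted central estimates:

```python
# Rough mass balance for the oceanic silica cycle, all fluxes in Tmol Si per year,
# using the central estimates quoted above.
gross_production = 240.0     # biogenic silica produced in the photic zone
surface_dissolution = 135.0  # opal redissolved in surface waters
deep_dissolution = 26.2      # opal dissolved in the deep water column
burial = 6.3                 # long-term sink: opal buried as sediment

export = gross_production - surface_dissolution   # silica sinking out of the photic zone
deposited = export - deep_dissolution             # silica reaching the seafloor
recycled = deposited - burial                     # redissolved at the sediment-water interface

print(f"Export to deep ocean:  {export:.1f} Tmol Si/yr")
print(f"Deposited on seafloor: {deposited:.1f} Tmol Si/yr")
print(f"Recycled at interface: {recycled:.1f} Tmol Si/yr "
      f"({100 * recycled / deposited:.0f}% of deposition)")  # ~92%, matching 'over 90%'
```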
Siliceous ooze
[ "Physics", "Environmental_science" ]
3,396
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
16,392,407
https://en.wikipedia.org/wiki/Energy%20Victory
Energy Victory: Winning the War on Terror by Breaking Free of Oil is a 2007 book by Robert Zubrin. Zubrin's central argument is that the decisive front in the War on Terror is America's struggle for energy independence. He outlines the manner in which Islamic extremism has been financed by oil revenues, the technological feasibility of ethanol-fueled vehicles as well as the economic and agricultural imperatives for ethanol production, and the environmental implications of his plan. Synopsis Problem Zubrin contends that OPEC nations, particularly Saudi Arabia, have used their enormous oil wealth to fund Islamic extremism; in effect, the US is financing both sides of the War on Terror. They have been able to do this by colluding to keep oil prices high. Due to its dependence on their oil, the United States (and the rest of the world) is powerless to do anything about this. Flex-fuel mandate The key to winning the war on terror, therefore, is to create a substitute for oil. Zubrin argues that a mandate that all new cars sold in the United States be flex-fueled (FFV, for Flex-Fuel Vehicle, able to run on gasoline, ethanol or methanol, or any combination thereof) would very quickly make such vehicles the world standard, as occurred in the early 1980s with the introduction of catalytic converters. As a result, consumers would demand ethanol- and methanol-blended fuels due to their price competitiveness with gasoline, which would in turn prompt gas stations to install biofuel pumps. Under such a situation, competition would drive oil prices down. Zubrin argues that biofuels should be subsidized in order to keep their price advantage over gasoline, as this is the only way to cripple OPEC. Some have argued that a switch to electric cars would be more beneficial. While this may be a longer-term solution, a switch to biofuel can be achieved in a few years (as in the case of Brazil). Additionally, existing cars (including hybrids) can be retrofitted with flex-fuel capability for "between $100 and $500". A switch to biofuel would have the additional benefit that it is potentially a carbon-neutral fuel. Development argument Ethanol is produced primarily via the fermentation of corn or sugar cane (or indeed any other glucose-rich crop). Methanol can be produced from any plant matter. As both of these products can easily be produced in developing countries, Zubrin contends that the resultant expanding market for farm produce would be greatly beneficial for third-world farmers. There would be no need for western nations to subsidize their own farmers, as third-world produce could be absorbed into the larger market without causing a price crash that would bankrupt western farmers. Tariff elimination Anne Korin, of the Institute for the Analysis of Global Security, has developed this concept further, adding to Zubrin's mandate the necessity of eliminating ethanol and sugar import tariffs in the United States for it to succeed. Reception Gal Luft, writing for the Institute for the Analysis of Global Security, called Energy Victory "one of the best books written on our oil dependence problem". Zubrin presented the arguments from Energy Victory at a series of "go green" lectures sponsored by the Advanced Planning and Partnership Office and hosted by NASA, in January 2008.
See also Energy security Hydrogen economy List of books about energy issues Methanol economy New Manhattan Project for Energy Independence Open Fuel Standard Act of 2011 Pickens Plan Wahhabism References External links Energy Victory website MSNBC - Is alcohol the energy answer? Robert Zubrin - The Hydrogen Hoax The Institute for the Analysis of Global Security - Endorses Robert Zubrin's Flex-fuel Mandate and further develops the concept & evidence supporting it. 2007 non-fiction books 2007 in the environment American non-fiction books Energy policy Ethanol fuel Peak oil books Political plans in the United States Renewable energy in the United States Books by Robert Zubrin Environmental non-fiction books
Energy Victory
[ "Environmental_science" ]
835
[ "Environmental social science", "Energy policy" ]
16,393,338
https://en.wikipedia.org/wiki/Goodman%20relation
Within the branch of materials science known as material failure theory, the Goodman relation (also called a Goodman diagram, a Goodman-Haigh diagram, a Haigh diagram or a Haigh-Soderberg diagram) is an equation used to quantify the interaction of mean and alternating stresses on the fatigue life of a material. The equation is typically presented as a linear curve of mean stress vs. alternating stress that provides the maximum number of alternating stress cycles a material will withstand before failing from fatigue. A scatterplot of experimental data shown on an amplitude versus mean stress plot can often be approximated by a parabola known as the Gerber line, which can in turn be (conservatively) approximated by a straight line called the Goodman line. Mathematical description The relations can be represented mathematically as:

$\dfrac{\sigma_a}{\sigma_e} + \left(\dfrac{\sigma_m}{\sigma_u}\right)^2 = 1$, Gerber line (parabola)

$\dfrac{\sigma_a}{\sigma_e} + \dfrac{\sigma_m}{\sigma_u} = 1$, Goodman line

$\dfrac{\sigma_a}{\sigma_e} + \dfrac{\sigma_m}{\sigma_y} = 1$, Soderberg line

where $\sigma_a$ is the stress amplitude, $\sigma_m$ is the mean stress, $\sigma_e$ is the fatigue limit for completely reversed loading, $\sigma_u$ is the ultimate tensile strength of the material, and $\sigma_y$ is the yield strength of the material. A factor of safety $n$ may be introduced by replacing the right-hand side with $1/n$. The Gerber parabola is an indication of the region just beneath the failure points observed in experiments. The Goodman line connects $\sigma_u$ on the abscissa and $\sigma_e$ on the ordinate. The Goodman line is a safer consideration than the Gerber parabola because it lies completely inside the Gerber parabola and excludes some of the area near the failure region. The Soderberg line connects $\sigma_y$ on the abscissa and $\sigma_e$ on the ordinate, which is a more conservative and much safer consideration. The general trend given by the Goodman relation is one of decreasing fatigue life with increasing mean stress for a given level of alternating stress. The relation can be plotted to determine the safe cyclic loading of a part; if the coordinate given by the mean stress and the alternating stress lies under the curve given by the relation, then the part will survive. If the coordinate is above the curve, then the part will fail for the given stress parameters. References Bibliography Goodman, J., Mechanics Applied to Engineering, Longman, Green & Company, London, 1899. Hertzberg, Richard W., Deformation and Fracture Mechanics and Engineering Materials. John Wiley and Sons, Hoboken, NJ: 1996. Mars, W.V., Computed dependence of rubber's fatigue behavior on strain crystallization. Rubber Chemistry and Technology, 82(1), 51–61. 2009. Further reading External links Fatpack. Fatigue analysis in python with Goodman mean stress correction implementation. Materials science Fracture mechanics Rubber properties
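The three criteria translate directly into a numerical check. A minimal sketch in Python; the material properties in the example are invented for illustration and are not taken from the cited references:

```python
def fatigue_utilization(sigma_a, sigma_m, sigma_e, sigma_u, sigma_y):
    """Left-hand-side utilization of each criterion; a part is predicted
    safe when the utilization is <= 1 (all stresses in consistent units).

    sigma_a: alternating stress amplitude
    sigma_m: mean stress
    sigma_e: fatigue limit for completely reversed loading
    sigma_u: ultimate tensile strength
    sigma_y: yield strength
    """
    gerber = sigma_a / sigma_e + (sigma_m / sigma_u) ** 2
    goodman = sigma_a / sigma_e + sigma_m / sigma_u
    soderberg = sigma_a / sigma_e + sigma_m / sigma_y
    return gerber, goodman, soderberg

# Hypothetical steel: sigma_e = 250, sigma_u = 600, sigma_y = 400 (MPa)
g, gd, s = fatigue_utilization(sigma_a=100, sigma_m=150,
                               sigma_e=250, sigma_u=600, sigma_y=400)
print(f"Gerber:    {g:.3f}")   # 0.462 -- least conservative
print(f"Goodman:   {gd:.3f}")  # 0.650
print(f"Soderberg: {s:.3f}")   # 0.775 -- most conservative
```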
Goodman relation
[ "Physics", "Materials_science", "Engineering" ]
520
[ "Structural engineering", "Applied and interdisciplinary physics", "Fracture mechanics", "Materials science", "nan", "Materials degradation" ]
16,395,170
https://en.wikipedia.org/wiki/Pulhamite
Pulhamite was a patented anthropic rock material invented by James Pulham (1820–1898) of the firm James Pulham and Son of Broxbourne in Hertfordshire. It was widely used for rock gardens and grottos. Overview Pulhamite, which usually looked like gritty sandstone, was used to join natural rocks together or crafted to simulate natural stone features. It was so realistic that it fooled some geologists of the era. The recipe went to the grave with the inventor. Modern analysis of surviving original Pulhamite has shown it to be a blend of sand, Portland cement and clinker sculpted over a core of rubble and crushed bricks. It can be viewed in these places: Dane Park, Margate Neo-Norman gatehouse and folly at Benington Lordship in Hertfordshire Rockery, Burslem Park Cascade and Rock Garden, Ramsgate, Courtstairs Chine, Ramsgate, Garden Folly, Sydenham Hill Wood, Sydenham, London. Grottoes at Dewstow Gardens, South Wales Dunorlan Park, Tunbridge Wells Felixstowe Spa and Winter Garden, Suffolk Fernery and waterfall, Bromley Palace Park, Bromley Grotto, Wotton House, Surrey Water course and pump tower, The Dell, Englefield Green Henley Hall, Shropshire Lake and rockery, Milton Mount Gardens, Crawley Leonardslee, rockery in Grade I listed garden at Lower Beeding, near Horsham, West Sussex, England. Newstead Abbey fernery, Nottinghamshire Rock Cliff, Bawdsey Manor, Suffolk Water Garden, Highnam Court, Gloucester Zig-zag Path, Lower Leas Coastal Park, Folkestone Rosshall Park, Glasgow Gardens at Waddesdon Manor, Buckinghamshire Heythrop Park, Oxfordshire Fernery at Danesbury Park, Hertfordshire. Waterfall at Battersea Park, London. Madresfield Court and gardens, Worcestershire Gardens at Coombe Wood, Croydon. Colney Hall near Norwich Cliffs at North Shore, Blackpool Former Terraced Gardens, Rivington, Lancashire. Gallery See also Cast stone Folly References External links The Pulham Legacy Durability Guaranteed – Pulhamite Rockwork pdf file on the English Heritage website. The Story of Pulhamite Rockwork Pulham at Waddesdon Manor video Building stone Architectural history Rock formations Victorian architecture Gardening in England
Pulhamite
[ "Engineering" ]
476
[ "Architectural history", "Architecture" ]
16,396,377
https://en.wikipedia.org/wiki/EXPEC%20Advanced%20Research%20Center
The Exploration and Petroleum Engineering Center - Advanced Research Center (EXPEC ARC) is located in Dhahran, Saudi Arabia. It is a research center that belongs to Saudi Aramco and is responsible for upstream oil and gas technology development. The center has over 250 scientists from various disciplines, spread across six technology teams and one laboratory division which tackle various aspects of oil and gas exploration, development, and production. These teams are: Geophysics Technology, Geology Technology, Reservoir Engineering Technology, Computational Modeling Technology, Production Technology, and Drilling Technology. Saudi Aramco
EXPEC Advanced Research Center
[ "Chemistry" ]
115
[ "Petroleum", "Petroleum stubs" ]
16,396,919
https://en.wikipedia.org/wiki/Concierge%20OSGi
Concierge is an OSGi (Open Service Gateway Initiative) R3 framework implementation intended for resource-constrained devices like mobile and embedded systems. Several newer versions have since been released as an Eclipse project on the Eclipse Concierge web site; these implement release 5 (R5) of the OSGi specification. The original project has had no releases since 2009, so it can be considered abandoned and obsolete. See also OSGi Alliance Apache Felix Equinox OSGi Bibliography External links Concierge Main page The OSGi Alliance Computer standards Free software programmed in Java (programming language) Software using the BSD license
Concierge OSGi
[ "Technology" ]
118
[ "Computer standards" ]
16,398,870
https://en.wikipedia.org/wiki/Burundi%20Ministry%20of%20Energy%20and%20Mines
The Burundi Ministry of Energy and Mines, also known as the Ministry of Hydraulics, Energy and Mines, is responsible for managing energy development and distribution in Burundi. The main functions of the Ministry of Energy and Mines include: designing and implementing the national policy in energy, geology and mines; promoting geological research and mining industry activities; and developing and implementing policies related to electricity, minerals, petroleum and petroleum products. The current Cabinet Minister of Energy is Hon. Ibrahim Uwizeye. The ministry's major projects include the Jiji and Mulembwe Hydropower Project (PHJIMU), the Mpanda, Kabu 16 and Rusumo Falls hydroelectric plants, the Kagu Project, Ruzizi III, Ruvyironza, the hydroelectric plant at Kirasa-Karonge, and the Peat Power Project. Location The headquarters of the ministry are located at Avenue du 13 Octobre No. 6, Rohero zone, Kabondo district, Mukaza municipality, in Bujumbura, the capital city of the country. Scope of activities The ministry is responsible for the design and execution of the national policy in energy, geology and mines, promoting geological research and mining industry activities, developing an energy supply program with a view to ensuring sustainable access for the population to modern energy sources, promoting renewable energies through appropriate research and dissemination actions, and the planning, construction and management of hydraulic, energy and basic sanitation infrastructures. The major power projects include the Rusumo Hydroelectric Power Station. Auxiliary institutions and allied agencies REGIDESO Burundi See also Energy Regulators Association of East Africa References Government of Burundi Mining in Burundi Energy in Burundi Burundi
Burundi Ministry of Energy and Mines
[ "Engineering" ]
324
[ "Energy organizations", "Energy ministries" ]
16,399,074
https://en.wikipedia.org/wiki/Virgocentric%20flow
The Virgocentric flow (VCF) is the preferred movement of Local Group galaxies towards the Virgo cluster caused by its overwhelming gravity, which separates bound objects from the Hubble flow of cosmic expansion. The VCF can also refer to the Local Group's movement towards the Virgo Supercluster, whose center is considered synonymous with the Virgo cluster but is more tedious to ascertain due to the supercluster's much larger volume. The excess velocities of Local Group galaxies towards, and with respect to, the Virgo Cluster are 100 to 400 km/s. This excess velocity is referred to as each galaxy's peculiar velocity. See also Great Attractor Shapley Attractor Dark flow References Astrophysics Extragalactic astronomy Virgo Supercluster
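Since the peculiar velocity is simply the observed radial velocity minus the pure Hubble-flow velocity at a galaxy's distance, the excess quoted above is easy to illustrate. A minimal sketch in Python; the Hubble constant, distance, and observed velocity below are assumed illustrative values, not figures from this article:

```python
H0 = 70.0            # Hubble constant in km/s per Mpc (assumed)
distance = 16.5      # distance toward the Virgo Cluster in Mpc (assumed)
v_observed = 1300.0  # observed recession velocity of a galaxy in km/s (illustrative)

v_hubble = H0 * distance            # velocity expected from cosmic expansion alone
v_peculiar = v_observed - v_hubble  # excess motion, e.g. infall toward Virgo

print(f"Hubble-flow velocity: {v_hubble:.0f} km/s")   # 1155 km/s
print(f"Peculiar velocity:    {v_peculiar:.0f} km/s")  # 145 km/s, in the quoted 100-400 km/s range
```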
Virgocentric flow
[ "Physics", "Astronomy" ]
153
[ "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Extragalactic astronomy", "Astronomical sub-disciplines" ]
16,399,269
https://en.wikipedia.org/wiki/Etazolate
Etazolate (SQ-20,009, EHT-0202) is an anxiolytic drug which is a pyrazolopyridine derivative and has unique pharmacological properties. It acts as a positive allosteric modulator of the GABAA receptor at the barbiturate binding site, as an adenosine antagonist of the A1 and A2 subtypes, and as a phosphodiesterase inhibitor selective for the PDE4 isoform. It is currently in clinical trials for the treatment of Alzheimer's disease. See also Cartazolate ICI-190,622 Tracazolate References Anxiolytics Pyrazolopyridines Hydrazones PDE4 inhibitors Ethyl esters GABAA receptor positive allosteric modulators Adenosine receptor antagonists
Etazolate
[ "Chemistry" ]
178
[ "Hydrazones", "Functional groups" ]
16,400,133
https://en.wikipedia.org/wiki/S%20Arae
S Arae (S Ara) is an RR Lyrae-type pulsating variable star in the constellation of Ara. It has an apparent visual magnitude which varies between 9.92 and 11.24 during its 10.85-hour pulsation period, and it exhibits the Blazhko effect. In 1896 David Gill and Jacobus Kapteyn announced that the variability of the as yet unnamed star was "all but proved" by the Cape Carte du Ciel photographic plates. In 1900, Robert T. A. Innes confirmed that the star, by then named CPD-49 10361, is a variable. It was listed with its modern variable star designation, S Arae, in Annie Jump Cannon's 1907 Second Catalogue of Variable Stars. It was originally thought that S Arae was a binary whose brightness changes were caused by eclipses. In 1918, Harlow Shapley included it within the Cepheid variable star class. By 1939 it had been classified as an RR Lyrae variable. S Arae's large negative declination makes it a circumpolar star in Antarctica. Such a star can be monitored continuously for much of the southern hemisphere's winter, allowing a long period of observation without gaps due to daylight. It was the first star to be monitored that way at Dome C. RRab type stars, like S Arae, are fundamental mode pulsating stars that have asymmetric light curves which rise to maximum brightness rapidly then fade more slowly. The Blazhko effect modulation period for this star is 47.264 days (about 105 times longer than the main pulsation period), and three other periodicities have been detected in the light curve. References Ara (constellation) RR Lyrae variables A-type bright giants 088064 Arae, S Durchmusterung objects
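The quoted ratio between the Blazhko modulation period and the main pulsation period can be checked directly; a one-line verification in Python:

```python
pulsation_hours = 10.85  # main pulsation period
blazhko_days = 47.264    # Blazhko modulation period

ratio = blazhko_days * 24 / pulsation_hours
print(f"{ratio:.1f}")  # 104.5 -- 'about 105 times longer', as stated above
```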
S Arae
[ "Astronomy" ]
384
[ "Constellations", "Ara (constellation)" ]
16,400,356
https://en.wikipedia.org/wiki/Prouhet%E2%80%93Tarry%E2%80%93Escott%20problem
In mathematics, the Prouhet–Tarry–Escott problem asks for two disjoint multisets A and B of n integers each, whose first k power sum symmetric polynomials are all equal. That is, the two multisets should satisfy the equations

$\sum_{a \in A} a^i = \sum_{b \in B} b^i$

for each integer i from 1 to a given k. It has been shown that n must be strictly greater than k. Solutions with $n = k + 1$ are called ideal solutions. Ideal solutions are known for $3 \le n \le 10$ and for $n = 12$. No ideal solution is known for $n = 11$ or for $n \ge 13$. This problem was named after Eugène Prouhet, who studied it in the early 1850s, and Gaston Tarry and Edward B. Escott, who studied it in the early 1910s. The problem originates from letters of Christian Goldbach and Leonhard Euler (1750/1751). Examples Ideal solutions An ideal solution for n = 6 is given by the two sets { 0, 5, 6, 16, 17, 22 } and { 1, 2, 10, 12, 20, 21 }, because:

$0^1 + 5^1 + 6^1 + 16^1 + 17^1 + 22^1 = 1^1 + 2^1 + 10^1 + 12^1 + 20^1 + 21^1$
$0^2 + 5^2 + 6^2 + 16^2 + 17^2 + 22^2 = 1^2 + 2^2 + 10^2 + 12^2 + 20^2 + 21^2$
$0^3 + 5^3 + 6^3 + 16^3 + 17^3 + 22^3 = 1^3 + 2^3 + 10^3 + 12^3 + 20^3 + 21^3$
$0^4 + 5^4 + 6^4 + 16^4 + 17^4 + 22^4 = 1^4 + 2^4 + 10^4 + 12^4 + 20^4 + 21^4$
$0^5 + 5^5 + 6^5 + 16^5 + 17^5 + 22^5 = 1^5 + 2^5 + 10^5 + 12^5 + 20^5 + 21^5.$

For n = 12, an ideal solution is given by A = {±22, ±61, ±86, ±127, ±140, ±151} and B = {±35, ±47, ±94, ±121, ±146, ±148}. Other solutions Prouhet used the Thue–Morse sequence to construct a solution with $n = 2^k$ for any $k$. Namely, partition the numbers from 0 to $2^{k+1} - 1$ into a) the numbers each with an even number of ones in its binary expansion and b) the numbers each with an odd number of ones in its binary expansion; then the two sets of the partition give a solution to the problem. For instance, for $n = 8$ and $k = 3$, Prouhet's solution is:

$0^1 + 3^1 + 5^1 + 6^1 + 9^1 + 10^1 + 12^1 + 15^1 = 1^1 + 2^1 + 4^1 + 7^1 + 8^1 + 11^1 + 13^1 + 14^1$
$0^2 + 3^2 + 5^2 + 6^2 + 9^2 + 10^2 + 12^2 + 15^2 = 1^2 + 2^2 + 4^2 + 7^2 + 8^2 + 11^2 + 13^2 + 14^2$
$0^3 + 3^3 + 5^3 + 6^3 + 9^3 + 10^3 + 12^3 + 15^3 = 1^3 + 2^3 + 4^3 + 7^3 + 8^3 + 11^3 + 13^3 + 14^3.$

Generalizations A higher dimensional version of the Prouhet–Tarry–Escott problem has been introduced and studied by Andreas Alpers and Robert Tijdeman in 2007: Given parameters $n, k \in \mathbb{N}$, find two different multi-sets $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, $\{(x'_1, y'_1), \ldots, (x'_n, y'_n)\}$ of points from $\mathbb{Z}^2$ such that

$\sum_{i=1}^{n} x_i^{j} y_i^{l} = \sum_{i=1}^{n} (x'_i)^{j} (y'_i)^{l}$

for all integers $j, l \ge 0$ with $j + l \le k$. This problem is related to discrete tomography and also leads to special Prouhet–Tarry–Escott solutions over the Gaussian integers (though solutions to the Alpers–Tijdeman problem do not exhaust the Gaussian integer solutions to Prouhet–Tarry–Escott). An explicit solution for small parameters is given in their paper; no solutions for larger values of these parameters are known. See also Euler's sum of powers conjecture Beal's conjecture Jacobi–Madden equation Lander, Parkin, and Selfridge conjecture Taxicab number Pythagorean quadruple Sums of powers, a list of related conjectures and theorems Discrete tomography Notes References Chap. 11. External links Diophantine equations Mathematical problems
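Candidate solutions such as the n = 6 example above are easy to verify by brute force. A minimal sketch in Python:

```python
def is_pte_solution(A, B, k):
    """Check that the first k power sums of the two multisets agree."""
    return all(sum(a ** i for a in A) == sum(b ** i for b in B)
               for i in range(1, k + 1))

A = [0, 5, 6, 16, 17, 22]
B = [1, 2, 10, 12, 20, 21]
print(is_pte_solution(A, B, 5))  # True: ideal solution with n = 6, k = 5
print(is_pte_solution(A, B, 6))  # False: the sixth power sums differ, since n > k must hold
```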
Prouhet–Tarry–Escott problem
[ "Mathematics" ]
783
[ "Unsolved problems in mathematics", "Mathematical objects", "Equations", "Diophantine equations", "Mathematical problems", "Number theory" ]
16,402,429
https://en.wikipedia.org/wiki/Lateral%20surface
The lateral surface of an object is all of the sides of the object, excluding its bases (when they exist). Lateral Surface Area The lateral surface area is the area of the lateral surface. This is to be distinguished from the total surface area, which is the lateral surface area together with the areas of the base and top. For a cube the lateral surface area would be the area of the four sides. If the edge of the cube has length $a$, the area of one square face is $a^2$. Thus the lateral surface of a cube will be the area of four faces: $4a^2$. More generally, the lateral surface area of a prism is the sum of the areas of the sides of the prism. This lateral surface area can be calculated by multiplying the perimeter of the base by the height of the prism. For a right circular cylinder of radius $r$ and height $h$, the lateral area is the area of the side surface of the cylinder: $2\pi r h$. For a pyramid, the lateral surface area is the sum of the areas of all of the triangular faces but excluding the area of the base. For a cone, the lateral surface area would be $\pi r l$, where $r$ is the radius of the circle at the bottom of the cone and $l$ is the lateral height (the length of a line segment from the apex of the cone along its side to its base) of the cone (given by the Pythagorean theorem, $l = \sqrt{r^2 + h^2}$, where $h$ is the height of the cone). References Further reading Lateral surface at Mathwords Surfaces
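The formulas above translate directly into code; a minimal sketch in Python:

```python
import math

def lateral_area_cube(a):
    """Four square faces of edge length a (top and bottom excluded): 4a^2."""
    return 4 * a ** 2

def lateral_area_prism(base_perimeter, height):
    """Perimeter of the base times the height of the prism."""
    return base_perimeter * height

def lateral_area_cylinder(r, h):
    """Side surface of a right circular cylinder: 2*pi*r*h."""
    return 2 * math.pi * r * h

def lateral_area_cone(r, h):
    """pi*r*l with slant height l = sqrt(r^2 + h^2)."""
    return math.pi * r * math.hypot(r, h)

print(lateral_area_cube(2))         # 16
print(lateral_area_cylinder(2, 5))  # ~62.83
print(lateral_area_cone(3, 4))      # slant height 5 -> ~47.12
```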
Lateral surface
[ "Mathematics" ]
293
[ "Geometry", "Geometry stubs" ]
16,402,793
https://en.wikipedia.org/wiki/Idempotent%20measure
In mathematics, an idempotent measure on a metric group is a probability measure that equals its convolution with itself; in other words, an idempotent measure is an idempotent element in the topological semigroup of probability measures on the given metric group. Explicitly, given a metric group X and two probability measures μ and ν on X, the convolution μ ∗ ν of μ and ν is the measure given by

$(\mu * \nu)(A) = \int_X \nu(x^{-1} A) \,\mathrm{d}\mu(x) = \int_X \mu(A x^{-1}) \,\mathrm{d}\nu(x)$

for any Borel subset A of X. (The equality of the two integrals follows from Fubini's theorem.) With respect to the topology of weak convergence of measures, the operation of convolution makes the space of probability measures on X into a topological semigroup. Thus, μ is said to be an idempotent measure if μ ∗ μ = μ. It can be shown that the only idempotent probability measures on a complete, separable metric group are the normalized Haar measures of compact subgroups. References (See chapter 3, section 3.) Group theory Measures (measure theory) Metric geometry
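One direction of the final statement is immediate: a normalized Haar measure μ of a compact subgroup K (so μ(K) = 1) is idempotent, by the convolution formula above and the left-invariance of Haar measure. A short sketch of that calculation:

```latex
(\mu * \mu)(A) = \int_K \mu(x^{-1}A) \,\mathrm{d}\mu(x)
             = \int_K \mu(A) \,\mathrm{d}\mu(x)
             % left-invariance: \mu(x^{-1}A) = \mu(A) for every x \in K
             = \mu(A)\,\mu(K) = \mu(A).
```

The converse, that no other idempotent probability measures exist on a complete, separable metric group, is the substantive half of the result cited above.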
Idempotent measure
[ "Physics", "Mathematics" ]
225
[ "Physical quantities", "Measures (measure theory)", "Quantity", "Size", "Group theory", "Fields of abstract algebra" ]
16,403,209
https://en.wikipedia.org/wiki/Perfect%20measure
In mathematics — specifically, in measure theory — a perfect measure (or, more accurately, a perfect measure space) is one that is "well-behaved" in some sense. Intuitively, a perfect measure μ is one for which, if we consider the pushforward measure on the real line R, then every measurable set is "μ-approximately a Borel set". The notion of perfectness is closely related to tightness of measures: indeed, in metric spaces, tight measures are always perfect. Definition A measure space (X, Σ, μ) is said to be perfect if, for every Σ-measurable function f : X → R and every A ⊆ R with f−1(A) ∈ Σ, there exist Borel subsets A1 and A2 of R such that

$A_1 \subseteq A \subseteq A_2 \quad \text{and} \quad \mu\big(f^{-1}(A_1)\big) = \mu\big(f^{-1}(A_2)\big).$

Results concerning perfect measures If X is any metric space and μ is an inner regular (or tight) measure on X, then (X, BX, μ) is a perfect measure space, where BX denotes the Borel σ-algebra on X. References Measures (measure theory)
Perfect measure
[ "Physics", "Mathematics" ]
221
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
16,403,515
https://en.wikipedia.org/wiki/Genesee%20Scientific
Genesee Scientific Corporation is a global life sciences supplier. History Genesee Scientific was founded by Ken Fry in 1995 as a provider of supplies to laboratories located along Genesee Avenue in University City, San Diego. Today, Genesee Scientific serves life science laboratories around the United States and the world. Timeline 1995 - Genesee Scientific founded by Ken Fry. 2003 - Acquired United Scientific Plastics (USP), a California Bay Area distributor of laboratory plasticware. 2007 - Acquired Island Scientific, a Seattle area distributor of laboratory plasticware. 2007 - Established East Coast warehouse and offices located in Research Triangle Park, NC. 2009 - Acquired Continental Lab Products (CLP) brands. 2011 - Expansion of Olympus Plastics product line of tissue culture supplies. 2013 - Became an authorized distribution partner of Eppendorf, a Germany-based manufacturer of instruments and consumables. 2016 - Introduction of Prometheus product line, proprietary products for protein biology research. 2017 - Launched GenClone product line, a full line of cell culture media products centered around ultra-pure, high-performance fetal bovine sera. 2021 - Acquired by LLR, a private equity firm investing in technology and healthcare businesses. Innovations Genesee Scientific is the world leader in innovation for and supply to the Drosophila (fruit fly) research community. Drosophila are widely used as a model organism in the field of genetics. Genesee Scientific has been awarded three patents by the United States Patent and Trademark Office for its Drosophila vial racking system (patent numbers D673,296 S; 8,136,679 B2; and 8,430,251 B2). This Drosophila vial racking system significantly decreases time spent racking vials and is more environmentally friendly compared to traditional vial packaging configurations. Genesee Scientific has also developed the first atlas of Drosophila phenotypic markers available on mobile devices. Registered trademarks Genesee Scientific (Reg. #: 3934361) Flowbuddy (Reg. #: 5617401) Flystuff, Drosophila research supplies and equipment (Reg. #: 5617401) Flugs, Cellulose acetate closures for vials (Reg. #: 3153821) INVICTUS, Incubators (Reg. #: 5617401) INVICTUS NEXT-GEN, Incubators (Reg. #: 5867956) Nutri-fly, Media for Drosophila research (Reg. #: 5626165) Droso-Plugs, Foam closures for vials (Reg. #: 5773033) SUPERBULK, Bulk supplies offering less packaging and smaller footprint (Reg. #: 5867964) GenClone, High-performance cell culture media products (Reg. #: 5322208) Gene Choice, Competent cells for cloning (Reg. #: 4217346) NEXT-GEN, Latex and nitrile exam gloves (Reg. #: 3439167) UPrep, Spin columns for DNA and RNA purification (Reg. #: 3153821) Prometheus, Proprietary protein biology research products (Reg. #: 5322104) SECadex, Size exclusion chromatography media (Reg. #: 5322241) ProSignal, Electrophoresis, blotting, and detection reagents; X-ray film (Reg. #: 5322254) Brands Apex provides a variety of chemicals & reagents to the life science industry. Blue Devil provides film used for autoradiography, Western blotting, sequencing, chemiluminescence and gel shift analysis. Droso-Plugs Flowbuddy Flystuff provides products and services specifically for the Drosophila research community. The Flyer Gene Choice provides competent cells for cloning applications. GenClone provides cell culture media, sera (FBS), buffers, and reagents for cell and tissue culture.
GC10 GCS NEXT-GEN Nutri-fly provides nutrient balanced media formulations specifically for Drosophila melanogaster. Olympus Plastics provides plasticware for general liquid handling and cell and tissue culture applications. Poseidon provides ergonomic liquid handling equipment including precision pipettes and pipet controllers. Prometheus provides protein separation/purification resins and Western blotting reagents and consumables. SECadex TITAN provides powder-free nitrile and latex examination gloves. UPrep provides spin columns for DNA and RNA (nucleic acid) purification. Wormstuff Citations Following is a list of links to articles published in scientific journals that cite Genesee Scientific: http://www4.ncsu.edu/~jmalonso/Alonso-Stepanova_Plant_DNA_96.html http://www.jove.com/video/2641/isolation-of-drosophila-melanogaster-testes http://www4.ncsu.edu/~jmalonso/Alonso-Stepanova_Plant_DNA_1.html https://web.archive.org/web/20160303225004/http://cda.currentprotocols.com/WileyCDA/CPUnit/refId-cb0418.html http://www.jove.com/video/3786/endurance-training-protocol-and-longitudinal-performance-assays-for-drosophila-melanogaster http://www.jove.com/video/2541/a-simple-way-to-measure-ethanol-sensitivity-in-flies Notes Companies based in San Diego Companies established in 1995 Research support companies Privately held companies based in California Life sciences industry 1995 establishments in California
Genesee Scientific
[ "Biology" ]
1,217
[ "Life sciences industry" ]
16,403,526
https://en.wikipedia.org/wiki/Phylogenetic%20profiling
Phylogenetic profiling is a bioinformatics technique in which the joint presence or joint absence of two traits across large numbers of species is used to infer a meaningful biological connection, such as involvement of two different proteins in the same biological pathway. Along with examination of conserved synteny, conserved operon structure, or "Rosetta Stone" domain fusions, comparing phylogenetic profiles is a designated "post-homology" technique, in that the computation essential to this method begins after it is determined which proteins are homologous to which. A number of these techniques were developed by David Eisenberg and colleagues; phylogenetic profile comparison was introduced in 1999 by Pellegrini, et al. Method Over 2000 species of bacteria, archaea, and eukaryotes are now represented by complete DNA genome sequences. Typically, each gene in a genome encodes a protein that can be assigned to a particular protein family on the basis of homology. For a given protein family, its presence or absence in each genome (in the original, binary, formulation) is represented by either 1 (present) or 0 (absent). Consequently, the phylogenetic distribution of the protein family can be represented by a long binary number with a digit for each genome; such binary representations are easily compared with each other to search for correlated phylogenetic distributions. The large number of complete genomes makes these profiles rich in information. The advantage of using only complete genomes is that the 0 values, representing the absence of a trait, tend to be reliable. Theory Closely related species should be expected to have very similar sets of genes. However, changes accumulate between more distantly related species by processes that include horizontal gene transfer and gene loss. Individual proteins have specific molecular functions, such as carrying out a single enzymatic reaction or serving as one subunit of a larger protein complex. A biological process such as photosynthesis, methanogenesis, or histidine biosynthesis may require the concerted action of many proteins. If some protein critical to a process is lost, other proteins dedicated to that process would become useless; natural selection makes it unlikely these useless proteins will be retained over evolutionary time. Therefore, should two different protein families consistently tend to be either present or absent together, a likely hypothesis is that the two proteins cooperate in some biological process. Advances and challenges Phylogenetic profiling has led to numerous discoveries in biology, including previously unknown enzymes in metabolic pathways, transcription factors that bind to conserved regulatory sites, and explanations for roles of certain mutations in human disease. Improving the method itself is an active area of scientific research because the method itself faces several limitations. First, co-occurrence of two protein families often represents recent common ancestry of two species rather than a conserved functional relationship; disambiguating these two sources of correlation may require improved statistical methods. Second, proteins grouped as homologs may differ in function, or proteins conserved in function may fail to register as homologs; improved methods for tailoring the size of each protein family to reflect functional conservation will lead to improved results. Tools Tools include PLEX (Protein Link Explorer). 
PLEX is now defunct; another such tool is the JGI IMG (Integrated Microbial Genomes) Phylogenetic Profiler (for both single genes and gene cassettes). Notes Bioinformatics
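The binary-profile comparison described above is simple to implement. A minimal sketch in Python; the profiles here are made up for illustration, whereas real applications derive one presence/absence digit per complete genome:

```python
def profile_agreement(p, q):
    """Fraction of genomes in which two protein families agree
    (both present, '1', or both absent, '0')."""
    assert len(p) == len(q)
    return sum(a == b for a, b in zip(p, q)) / len(p)

# Hypothetical presence/absence profiles across ten complete genomes.
family_a = "1101000111"
family_b = "1101000110"  # nearly identical profile -> candidate functional link
family_c = "0010111000"  # complementary profile -> no inferred link

print(profile_agreement(family_a, family_b))  # 0.9
print(profile_agreement(family_a, family_c))  # 0.0
```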
Phylogenetic profiling
[ "Engineering", "Biology" ]
666
[ "Bioinformatics", "Biological engineering" ]
16,404,767
https://en.wikipedia.org/wiki/Israeli%20Astronomical%20Association
The Israeli Astronomical Association (IAA) is an Israeli nonprofit organization. Its purpose is to deepen and distribute the awareness of the field of astronomy among the Israeli public. History The Israeli Astronomical Association was first established as an amateur fellowship on May 28, 1951, by a group of astronomy fans who had immigrated from Germany and Czechoslovakia, among them Dr Heilbruner and Dr Zaichik. Dr Zaichik was the first chairman of the association and kept his position for many years until his retirement due to personal health matters. The decision to make the association national was made in 1953 in the chamber of David Ben-Gurion, the country's prime minister, in order to promote astronomical knowledge and related science fields in the newly formed state of Israel. The association was then allocated a territory in Givat Ram in Jerusalem by the Israel Land Administration. The same area was later designated for the development of the Hebrew University of Jerusalem. A planetarium was built for the exclusive use of the association with the financial help of the Williams family from the Palestine bank (later known as Bank Leumi, Israel National Bank) in 1956. The main goal of the association was, and still is, the distribution of astronomical knowledge among the Israeli public. This purpose is carried out by organizing conventions, courses, lectures, star parties and observations, as well as publishing the magazine Astronomy (previously All the Stars of Light and The Stars in their Month), which has remained the only magazine of its kind in Hebrew. In 1953 the association built an observatory in the suburb of Talabia in Jerusalem and started branching out all over the country: Tel Aviv, Ramat Gan, Haifa and the Galilee. These branches are no longer active and others will take their place. All the activities of its members were, and still are, carried out entirely on a voluntary basis. Due to a temporary setback in the activities of the association and the transfer of its center of gravity to Givatayim, the Hebrew University of Jerusalem took hold of the planetarium building in 1986, which by then was in the university's expanding territory. The takeover included Albert Einstein's telescope, which was given to the association in 1962 by the "Ben Shemen" school. The university turned most of the planetarium site into its office spaces, which led the association and the Williams family to file two lawsuits against the Hebrew University of Jerusalem (1574/98, 8514/90). These lawsuits sought to return the management of the planetarium to the hands of the association, as well as the use of the planetarium for astronomical purposes. However, it was decided that in spite of the forceful take-over by the university, the observatory would remain in its custody. Currently the association does not have the resources to maintain the battle, and neither the observatory nor the planetarium in Givat Ram functions in its designated purpose. In 1967, following the Six-Day War, the association inaugurated the Givatayim Observatory in the city of Giv'atayim in the center of Israel. Giv'atayim was the choice of preference due to the height of the place allocated for the observatory, well above sea level and at a distance from the sea's humidity (relative to the surroundings at the time). The observatory was established with the funds of the municipality of Giv'atayim, the association and donations from abroad raised by the observatory manager at the time, Eng. Yossi Fooks.
For 25 years the observatory was solely managed by the association; since 1994 it has been co-managed with the municipality of Giv'atayim. The chairmen of the association have sometimes been the managers of the observatory at the same time. Among them: Haim Levi, Dr. Noah Brosch (presently the director of the Wise Observatory of Tel Aviv University), Dr. Isaac Shlosman (presently Astrophysics Professor at the University of Kentucky), Ilan Manulis (presently the director of the Technoda Observatory) and Dr. Igal Patel, who has been the chairman of the association as well as manager of the observatory for over 20 years (since 1987). Main activities The association operates as a nonprofit organization with hundreds of members. The majority of the activities are focused on the observatory in Giv'atayim. Nevertheless, the association holds countrywide astronomical activities, with organized observations in the south of Israel, astronomical weekends, conventions and seminars. The journal of the association, Astronomy, is published several times a year. The association also publishes a yearly almanac with all the astronomical phenomena expected to be seen in Israeli skies. A guideline for the association is that all of its members, including the functionaries, are volunteers. The activities of the association are financed by membership fees (as of 2014, membership fees are 100 NIS per year) and income raised from the activities, and from time to time it is also supported by the Ministry of Science and other donating bodies (most recently the Pelephone company in 1999). The association holds ties with similar groups abroad and with national as well as international research institutions. Divisions Past and present, the association has operated observation divisions. The two prominent divisions are: The Meteors Division, headed by Anna Levin. This division holds meteor observations and reports to the International Meteor Organization regularly. In 2008 a seminar was held for meteor observing and new observers were trained, increasing the number of members from 3 to 10. The division also holds lectures on the topic of meteors twice a year and publishes a forecast of meteor showers in the quarterly journal of the association and on the web. The Variable Stars Division, headed by Ofer Gabzo. This division reached the peak of its activities during the 1990s, when it had 5 observing members who regularly sent thousands of observations (magnitude estimates of variable stars) per year to the American Association of Variable Star Observers (an international association despite its name). In 1992 the head of the division, Ofer Gabzo, set a world record with over 22,000 observations in one year. Nowadays, only Ofer Gabzo holds the relevant knowledge inside the association. External links Official website of the Israeli Astronomical Association (Hebrew). The Meteors Division of the Israeli Astronomical Association – the division holds regular observations, lectures, observation seminars and publishes a forecast for meteor showers. about the Israeli Astronomical Association, English page from the official site. Amateur astronomy Astronomy organizations 1951 establishments in Israel Scientific organizations established in 1951 Astronomy in Israel
Israeli Astronomical Association
[ "Astronomy" ]
1,320
[ "Astronomy organizations" ]
16,405,530
https://en.wikipedia.org/wiki/Homology%20directed%20repair
Homology-directed repair (HDR) is a mechanism in cells to repair double-strand DNA lesions. The most common form of HDR is homologous recombination. The HDR mechanism can only be used by the cell when there is a homologous piece of DNA present in the nucleus, mostly in G2 and S phase of the cell cycle. Other examples of homology-directed repair include single-strand annealing and breakage-induced replication. When the homologous DNA is absent, another process called non-homologous end joining (NHEJ) takes place instead. Cancer suppression HDR is important for suppressing the formation of cancer. HDR maintains genomic stability by repairing broken DNA strands; it is assumed to be error free because of the use of a template. When a double strand DNA lesion is repaired by NHEJ there is no validating DNA template present so it may result in a novel DNA strand formation with loss of information. A different nucleotide sequence in the DNA strand results in a different protein expressed in the cell. This protein error may cause processes in the cell to fail. For example, a receptor of the cell that can receive a signal to stop dividing may malfunction, so the cell ignores the signal and keeps dividing and can form a cancer. The importance of HDR can be seen from the fact that the mechanism is conserved throughout evolution. The HDR mechanism has also been found in more simple organisms, such as yeast. Biological pathway The pathway of HDR has not been totally elucidated yet (March 2008). However, a number of experimental results point to the validity of certain models. It is generally accepted that histone H2AX (noted as γH2AX) is phosphorylated within seconds after damage occurs. H2AX is phosphorylated throughout the area surrounding the damage, not only precisely at the break. Therefore, it has been suggested that γH2AX functions as an adhesive component for attracting proteins to the damaged location. Several research groups have suggested that the phosphorylation of H2AX is done by ATM and ATR in cooperation with MDC1. It has been suggested that before or while H2AX is involved with the repair pathway, the MRN complex (which consists of Mre11, Rad50 and NBS1) is attracted to the broken DNA ends and other MRN complexes to keep the broken ends together. This action by the MRN complex may prevent chromosomal breaks. At some later point the DNA ends are processed so that unnecessary residuals of chemical groups are removed and single strand overhangs are formed. Meanwhile, from the beginning, every piece of single stranded DNA is covered by the protein RPA (Replication Protein A). The function of RPA is likely to keep the single stranded DNA pieces stable until the complementary piece is resynthesized by a polymerase. After this, Rad51 replaces RPA and forms filaments on the DNA strand. Working together with BRCA2 (Breast Cancer Associated), Rad51 couples a complementary DNA piece which invades the broken DNA strand to form a template for the polymerase. The polymerase is held onto the DNA strand by PCNA (Proliferating Cell Nuclear Antigen). PCNA forms typical patterns in the nucleus of the cell through which the current cell cycle can be determined. The polymerase synthesizes the missing part of the broken strand. When the broken strand is rebuilt, both strands need to uncouple again. Multiple ways of "uncoupling" have been suggested, but evidence is not yet sufficient to choose between models (March 2008). After the strands are separated the process is done. 
The co-localization of Rad51 with the damage indicates that HDR has been initiated instead of NHEJ. In contrast, the presence of a Ku complex (Ku70 and Ku80) indicates that NHEJ has been initiated instead of HDR. HDR and NHEJ repair double strand breaks. Other mechanisms such as NER (Nucleotide Excision Repair), BER (Base Excision Repair) and MMR recognise lesions and replace them via single strand perturbation. Mitosis In the budding yeast Saccharomyces cerevisiae homology directed repair is primarily a response to spontaneous or induced damage that occurs during vegetative growth. (Also reviewed in Bernstein and Bernstein, pp 220–221). In order for yeast cells to undergo homology directed repair there must be present in the same nucleus a second DNA molecule containing sequence homology with the region to be repaired. In a diploid cell in G1 phase of the cell cycle, such a molecule is present in the form of the homologous chromosome. However, in the G2 stage of the cell cycle (following DNA replication), a second homologous DNA molecule is also present: the sister chromatid. Evidence indicates that, due to the special nearby relationship they share, sister chromatids are not only preferred over distant homologous chromatids as substrates for recombinational repair, but have the capacity to repair more DNA damage than do homologs. Meiosis During meiosis up to one-third of all homology directed repair events occur between sister chromatids. The remaining two-thirds, or more, of homology directed repair occurs as a result of interaction between non-sister homologous chromatids. Oocytes The fertility of females and the health of potential offspring critically depend on an adequate availability of high quality oocytes. Oocytes are largely maintained in the ovaries in a state of meiotic prophase arrest. In mammalian females the period of arrest may last for years. During this period of arrest, oocytes are subject to spontaneous DNA damage including double-strand breaks. However, the oocytes can efficiently repair DNA double-strand breaks, allowing the restoration of genetic integrity and the protection of offspring health. The process by which oocyte DNA damage can be corrected is referred to as homology directed homologous recombination repair. See also Homologous recombination References Further reading DNA repair
Homology directed repair
[ "Biology" ]
1,277
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
16,405,813
https://en.wikipedia.org/wiki/Silver%20standards
Silver standards refer to the standards of millesimal fineness for the silver alloy used in the manufacture or crafting of silver objects. This list is organized from highest to lowest millesimal fineness, or purity of the silver. Fine silver has a millesimal fineness of 999. Also called pure silver, or three nines fine, fine silver contains 99.9% silver, with the balance being trace amounts of impurities. This grade of silver is used to make bullion bars for international commodities trading and investment in silver. In the modern world, fine silver is understood to be too soft for general use. Britannia silver has a millesimal fineness of at least 958. The alloy is 95.84% pure silver and 4.16% copper or other metals. The Britannia standard was developed in Britain in 1697 to help prevent British sterling silver coins from being melted to make silver plate. It was obligatory in Britain between 1697 and 1720, when the sterling silver standard was restored. It became an optional standard thereafter. The French 1st standard has a millesimal fineness of 950. The French 1st alloy is 95% silver and 5% copper or other metals. 91 zolotnik Russian silver has a millesimal fineness of 947.9. The zolotnik (Russian золотник, from the Russian zoloto, or золото, meaning gold) was used in Russia as early as the 11th century to denote the weight of gold coins. In its earliest usage, the zolotnik was 1/96 of a pound, but it later was changed to represent 1/72 of a pound. Ninety-one (91) zolotniks have the equivalent millesimal fineness of 947.9. Thus, the alloy contains 94.79% pure silver and 5.21% copper or other metals. Sterling silver has a millesimal fineness of 925. The sterling silver alloy is 92.5% pure silver and 7.5% copper or other metals. This alloy was used by the United Kingdom from the early 12th century, and Canada, Australia and other countries associated with the British Empire (and later Commonwealth) from the 19th century up to the mid-20th century, when debasement took place; sterling silver's copper content means that it has a stronger tendency to tarnish than other alloys used in coins. Following a program of debasements in the early-to-mid 20th century, circulating Canadian coinage (with the exception of the nickel) had a millesimal fineness of 800 until 1968. The alloy used contained 80% silver and 20% copper. 88 zolotnik Russian silver has the equivalent millesimal fineness of 916.6. The alloy contains 91.66% pure silver and 8.34% copper or other metals. (The description of the zolotnik is above.) Coin silver has a millesimal fineness of 900. The term "coin silver" was derived from the fact that much of it was made from melting down silver coins. It is important here to note that there are differences between the coin silver standard and the coin silver alloy, as actually used in making silver objects. The coin silver standard in the United States was 90% silver and 10% copper, as dictated by US FTC guidelines. However, in silversmithing, coins could come from other nations besides the United States, and thus coin silver objects could vary from 750 millesimal fineness (75% silver) to 900 (90% silver). Coins were used as a source of silver in the US until 1868, shortly after the discovery of the Comstock silver lodes in Nevada, which provided a significant source of silver. Around this time the sterling standard was adopted by the American silver industry. 84 zolotnik Russian silver has the equivalent millesimal fineness of 875.
The alloy contains 87.5% pure silver and 12.5% copper or other metals. (See above for description of the zolotnik.) Scandinavian silver has a millesimal fineness of 830. The Scandinavian silver alloy contains 83% pure silver and 17% copper or other metals. German silver will be marked with a millesimal fineness of 800 or 835 (80% or 83.5% pure silver). Any items simply marked "German silver", "nickel silver" or "Alpaca" have no silver content at all, but are mere alloys of other base metals. Decoplata has the equivalent millesimal fineness of 720. The alloy contains 72% pure silver and 28% copper. It was used by a number of countries between the 19th century and the present, but it is most associated with coins made in Mexico and the Netherlands in the mid-20th century. References External links A Small Collection of Antique Silver and Objects of vertu Online Encyclopedia of Silver Marks, Hallmarks & Maker's Marks Silver Metallurgy
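The zolotnik marks above convert to millesimal fineness by simple proportion, since a mark of z zolotniks means z parts silver per 96. A quick check in Python:

```python
def zolotnik_to_fineness(z):
    """Convert a Russian zolotnik mark (parts per 96) to millesimal fineness."""
    return z / 96 * 1000

for z in (84, 88, 91):
    print(f"{z} zolotnik = {zolotnik_to_fineness(z):.1f} fine")
# 84 zolotnik = 875.0 fine
# 88 zolotnik = 916.7 fine (truncated to 916.6 above)
# 91 zolotnik = 947.9 fine
```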
Silver standards
[ "Chemistry", "Materials_science", "Engineering" ]
1,038
[ "Metallurgy", "Materials science", "nan" ]
11,109,606
https://en.wikipedia.org/wiki/Chronemics
Chronemics is an anthropological, philosophical, and linguistic subdiscipline that describes how time is perceived, coded, and communicated across a given culture. It is one of several subcategories to emerge from the study of nonverbal communication. According to the Encyclopedia of Special Education, "Chronemics includes time orientation, understanding and organisation, the use of and reaction to time pressures, the innate and learned awareness of time, by physically wearing or not wearing a watch, arriving, starting, and ending late or on time." A person's perception of time and the values they place on it play a considerable role in their communication process. The use of time can affect lifestyles, personal relationships, and work life. Across cultures, people usually have different time perceptions, and this can result in conflicts between individuals. Time perceptions include punctuality, interactions, and willingness to wait. Definition Chronemics is the study of the use of time in nonverbal communication, though it carries implications for verbal communication as well. Time perceptions include punctuality, willingness to wait, and interactions. The use of time can affect lifestyles, daily agendas, speed of speech, movements, and how long people are willing to listen. Fernando Poyatos, Professor Emeritus at the University of New Brunswick, coined the term chronemics in 1972. Thomas J. Bruneau (1940–2012), Professor Emeritus at Radford University who taught at the University of Guam in his early career and whose scholarship focused on silence, empathy, and intercultural communication, identified the parameters of this field of study in the late 1970s. Bruneau defined chronemics and specified the functions of time in human interactions as follows: Time can be used as an indicator of status. For example, in most companies the boss can interrupt progress to hold an impromptu meeting in the middle of the work day, yet the average worker would have to make an appointment to see the boss. The way in which different cultures perceive time can influence communication as well. Monochronic time A monochronic time system means that things are done one at a time and time is segmented into small precise units. Under this system, time is scheduled, arranged, and managed. The United States considers itself a monochronic society. This perception came about during the Industrial Revolution. Many Americans think of time as a precious resource not to be wasted or taken lightly. As communication scholar Edward T. Hall wrote regarding the American viewpoint of time in the business world, "the schedule is sacred." Hall says that for monochronic cultures, such as the American culture, "time is tangible" and viewed as a commodity where "time is money" or "time is wasted." John Ivers, a professor of cultural paradigms, agrees with Edward Hall by stating, "In the market sense, monochronic people consume time." The result of this perspective is that monochronic cultures place a paramount value on schedules, tasks, and "getting the job done." Monochronic time orientation is very prominent in core Germanic-speaking countries, Finland, France, Japan and the "Asian economic tigers". If, for example, a businessperson from the United States has a meeting scheduled, they may grow frustrated if they are required to wait an hour for their partner to arrive. This is an example of a monochronic-time-oriented individual running into conflict with a polychronic-time-oriented individual.
Though the United States is seen as one of the most monochronic countries, it "has subcultures that may lean more to one side or the other of the monochronic-polychronic divide" within the states themselves. Southern states can be contrasted with northern ones in this way. Ivers points this out by comparing waiters in restaurants in northern and southern states. Waiters from the north are "to the point": they will "engage in little" and there is usually "no small talk." They try to be as efficient as possible, while those in the south work towards "establishing a nice, friendly, micro-relationship" with the customer. They are still considerate of time, but it is not the most important goal in the south. The culture of African Americans might also be seen as polychronic. Polychronic time A polychronic time system means several things can be done at once. In polychronic time systems, a wider view of time is exhibited, and time is perceived in large fluid sections. Examples of polychronic cultures are Latin American, African, Arab, South Asian, Mediterranean, and Native American cultures. These cultures' view on time can be connected to "natural rhythms, the earth, and the seasons". These analogies can be understood and compared because natural events can occur spontaneously and sporadically, like polychronic-time-oriented people and polychronic-time-oriented cultures. One scenario involves Inuit workers in a factory in Alaska, where the superiors blow a whistle to signal break times. The Inuit are not fond of that method because they traditionally organize their time around the sea tides: when they occur and how long they last. In polychronic cultures, "time spent with others" is considered a "task" and of importance to one's daily regimen. Polychronic cultures are much less focused on the preciseness of accounting for time and more on tradition and relationships rather than on tasks. Polychronic societies have no problem being late for an appointment if they are deeply focused on some work or in a meeting that ran past schedule, because the concept of time is fluid and can easily expand or contract as need be. As a result, polychronic cultures have a much less formal perception of time. They are not ruled by precise calendars and schedules. Measuring polychronicity Allen C. Bluedorn, Carol Felker Kaufman, and Paul M. Lane concluded that "developing an understanding of the monochronic/polychronic continuum will not only result in a better self-management but will also allow more rewarding job performances and relationships with people from different cultures and traditions." Research also indicates that a person's polychronicity plays an important role in their productivity and individual well-being. Researchers have developed the following questionnaires to measure polychronicity: Inventory of Polychronic Values (IPV), developed by Bluedorn et al., which is a 10-item scale designed to assess "the extent to which people in a culture prefer to be engaged in two or more tasks or events simultaneously and believe their preference is the best way to do things." Polychronic Attitude Index (PAI), developed by Kaufman-Scarborough & Lindquist in 1991, which is a 4-item scale measuring individual preference for polychronicity, as expressed in the following statements: "I do not like to juggle several activities at the same time". "People should not try to do many things at once". "When I sit down at my desk, I work on one project at a time". "I am comfortable doing several things at the same time". 
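To make the scoring of such an instrument concrete, the following is a minimal sketch of how the four PAI items might be combined into a single polychronicity score. The item wording follows Kaufman-Scarborough & Lindquist as quoted above; the 5-point Likert scale, the reverse-coding of the first three items, and the averaging are illustrative assumptions, and published administrations of the scale may differ.

```python
# Illustrative scoring of the 4-item Polychronic Attitude Index (PAI).
# The Likert range and reverse-coding scheme are assumptions for illustration.

ITEMS = [
    ("I do not like to juggle several activities at the same time", True),  # reverse-coded
    ("People should not try to do many things at once", True),              # reverse-coded
    ("When I sit down at my desk, I work on one project at a time", True),  # reverse-coded
    ("I am comfortable doing several things at the same time", False),
]

def pai_score(responses, scale_max=5):
    """Average the items so that a higher score means a more polychronic
    preference. `responses` are Likert ratings (1 = strongly disagree,
    scale_max = strongly agree), in the same order as ITEMS."""
    if len(responses) != len(ITEMS):
        raise ValueError("expected one response per PAI item")
    adjusted = [(scale_max + 1 - r) if reverse else r
                for r, (_, reverse) in zip(responses, ITEMS)]
    return sum(adjusted) / len(adjusted)

print(pai_score([5, 5, 5, 1]))  # 1.0 -> strongly monochronic respondent
print(pai_score([1, 1, 1, 5]))  # 5.0 -> strongly polychronic respondent
```

Reverse-coding is the key design point: agreeing with the first three statements signals a monochronic preference, so those ratings must be flipped before averaging them with the fourth.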
Predictable patterns between cultures with differing time systems Cross-cultural perspectives on time Conflicting attitudes between the monochronic and polychronic perceptions of time can interfere with cross-cultural relations, and as a result, challenges can occur within an otherwise assimilated culture. One example in the United States is the Hawaiian culture, which employs two time systems: Haole time and Hawaiian time. According to Ashley Fulmer and Brandon Crosby, "as intercultural interactions increasingly become the norm rather than the exception, the ability of individuals, groups, and organizations to manage time effectively in cross-cultural settings is critical to the success of these interactions". Time orientations The way an individual perceives time and the role time plays in their lives is a learned perspective. As discussed by Alexander Gonzalez and Philip Zimbardo, "every child learns a time perspective that is appropriate to the values and needs of his society" (Guerrero, DeVito & Hecht, 1999, p. 227). There are four basic psychological time orientations: Past Time-line Present Future Each orientation affects the structure, content, and urgency of communication (Burgoon, 1989). People with a past orientation have a hard time developing the notion of elapsed time, and often confuse present and past happenings as all the same. People with a time-line orientation are often detail-oriented and think of everything in linear terms. These individuals also often have difficulty comprehending multiple events at the same time. Individuals with a present orientation are mostly characterized as pleasure seekers who live for the moment and have a very low risk aversion. Those individuals who operate with a future orientation are often thought of as being highly goal-oriented and focused on the broad picture. The use of time as a communicative channel can be a powerful, yet subtle, force in face-to-face interactions. Some of the more recognizable types of interaction that use time are: Regulating interaction This is shown to aid in the orderly transition of conversational turn-taking. When the speaker is opening the floor for a response, they will pause. However, when no response is desired, the speaker will talk at a faster pace with minimal pause. (Capella, 1985) Expressing intimacy As relationships become more intimate, certain changes are made to accommodate the new relationship status. Some of the changes that are made include lengthening the time spent on mutual gazes, increasing the amount of time doing tasks for or with the other person, and planning for the future by making plans to spend more time together (Patterson, 1990). Affect management The onset of powerful emotions can cause a stronger affect, ranging from joy to sorrow or even to embarrassment. Some of the behaviors associated with negative affect include decreased time of gaze and awkwardly long pauses during conversations. When this happens, it is common for individuals to try to decrease any negative affect and subsequently strengthen positive affect (Edelman & Iwawaki, 1987). Evoking emotion Time can be used to evoke emotions in an interpersonal relationship by communicating the value of the relationship. For example, when someone with whom one has a close relationship is late, one may not take it personally, especially if that is characteristic of them. 
However, when meeting with a total stranger, disrespect for the value of one's time may be taken personally and could even cause one to display negative emotions if and when the stranger does arrive for the meeting. Facilitating service and task goals Professional settings can sometimes give rise to interpersonal relations which are quite different from other "normal" interactions. For example, the societal norms that dictate minimal touch between strangers are clearly altered if one member of the dyad is a doctor and the environment is that of a hospital examination room. Time orientation and consumers Time orientation has also revealed insights into how people react to advertising. Martin, Gnoth and Strong (2009) found that future-oriented consumers react most favorably to ads that feature a product to be released in the distant future and that highlight primary product attributes. In contrast, present-oriented consumers prefer near-future ads that highlight secondary product attributes. Consumer attitudes were mediated by the perceived usefulness of the attribute information. Culture and diplomacy Cultural roots Just as monochronic and polychronic cultures have different time perspectives, understanding the time orientation of a culture is critical to becoming better able to successfully handle diplomatic situations. Americans consider themselves future-oriented. Hall indicates that for Americans "tomorrow is more important" and that they "are oriented almost entirely toward the future" (Cohen, 2004, p. 35). This future-focused orientation contributes to at least some of the concern that Americans have with "addressing immediate issues and moving on to new challenges" (Cohen, 2004, p. 35). On the other hand, many polychronic cultures have a past orientation toward time. These time perspectives are the seeds for communication clashes in diplomatic situations. Trade negotiators have observed that American negotiators are generally more anxious for agreement because "they are always in a hurry" and basically "problem solving oriented." In other words, they place a high value on resolving an issue quickly, calling to mind the American catchphrase "some solution is better than no solution" (Cohen, 2004, p. 114). Similar observations have been made of Japanese-American relations. Noting the difference in time perceptions between the two countries, former ambassador to Tokyo Mike Mansfield commented, "We're too fast, they're too slow" (Cohen, 2004, p. 118). Influence on global affairs Different perceptions of time across cultures can influence global communication. When writing about time perspective, Gonzalez and Zimbardo comment that "There is no more powerful, pervasive influence on how individuals think and cultures interact than our different perspectives on time—the way we learn how we mentally partition time into past, present and future." Depending upon where an individual is from, their perception of time might be that "the clock rules the day" or that "we'll get there when we get there." Improving prospects for success in the global community requires understanding cultural differences, traditions, and communication styles. The monochronic-oriented approach to negotiations is direct, linear, and rooted in the characteristics that illustrate low-context tendencies. A low-context culture approaches diplomacy in a lawyerly, dispassionate fashion with a clear idea of acceptable outcomes and a plan for reaching them. Draft arguments would be prepared, elaborating positions. 
A monochronic culture, more concerned with time, deadlines and schedules, tends to grow impatient and want to rush to "close the deal." More polychronic-oriented cultures come to diplomatic situations with no particular importance placed on time. Chronemics is one of the channels of nonverbal communication preferred by a high-context polychronic negotiator over verbal communication. The polychronic approach to negotiations will emphasize building trust between participants, forming coalitions and finding consensus. High-context polychronic negotiators might be charged with emotion toward a subject, thereby obscuring an otherwise obvious solution. Control of time in power relationships Time has a definite relationship to power. Though power most often refers to the ability to influence people, power is also related to dominance and status. For example, in the workplace, those in a leadership or management position treat time – and, by virtue of position, have their time treated – differently from those in a lower-status position. Anderson and Bowman have identified three specific examples of how chronemics and power converge in the workplace: waiting time, talk time, and work time. Waiting time Researchers Insel and Lindgren write that the act of making an individual of lower stature wait is a sign of dominance. They note that one who "is in the position to cause another to wait has power over him. To be kept waiting is to imply that one's time is less valuable than that of the one who imposes the wait." Talk time There is a direct correlation between the power of an individual in an organization and conversation. This includes the length of conversation, turn-taking, and who initiates and ends a conversation. Extensive research indicates that those with more power in an organization will speak more often and for a greater length of time. Meetings between superiors and subordinates provide an opportunity to illustrate this concept. A superior – regardless of whether or not they are running the actual meeting – leads discussions, asks questions, and has the ability to speak for longer periods of time without interruption. Likewise, research shows that turn-taking is also influenced by power. Social psychologist Nancy Henley notes that "Subordinates are expected to yield to superiors and there is a cultural expectation that a subordinate will not interrupt a superior". The length of a response follows the same pattern. While the superior can speak for as long as they want, the responses of the subordinate are shorter in length. Albert Mehrabian noted that deviation from this pattern led to negative perceptions of the subordinate by the superior. Beginning and ending a communication interaction in the workplace is also controlled by the higher-status individual in an organization. The time and duration of the conversation are dictated by the higher-status individual. Work time The time of high-status individuals is perceived as valuable, and they control their own time. On the other hand, a subordinate with less power has their time controlled by a higher-status individual and is in less control of their time – making them likely to report their time to a higher authority. Such practices are more associated with those in non-supervisory roles or in blue-collar rather than white-collar professions. Instead, as power and status in an organization increase, the flexibility of the work schedule also increases. 
For instance, while administrative professionals might keep a 9-to-5 work schedule, their superiors may keep less structured hours. This does not mean that the superior works less. They may work longer, but the structure of their work environment is not strictly dictated by the traditional workday. Instead, as Koehler and associates note, "individuals who spend more time, especially spare time, to meetings, to committees, and to developing contacts, are more likely to be influential decision makers". A specific example of the way power is expressed through work time is scheduling. As Yakura and others have noted in research shared by Ballard and Seibold, "scheduling reflects the extent to which the sequencing and duration of plans, activities, and events are formalized" (Ballard and Seibold, p. 6). Higher-status individuals have very precise and formal schedules – indicating that their stature requires that they have specific blocks of time for specific meetings, projects and appointments. Lower-status individuals, however, may have less formalized schedules. Finally, the schedule and appointment calendar of the higher-status individual will take precedence in determining the place, time, and importance of a specific event or appointment. Associated theories Expectancy violations theory Developed by Judee Burgoon, expectancy violations theory (EVT) sees communication as the exchange of information which is high in relational content and can be used to violate the expectations of another, which will be perceived either positively or negatively depending on the liking between the two people. When our expectations are violated, we will respond in specific ways. If an act is unexpected, is assigned a favorable interpretation, and is evaluated positively, it will produce more favorable outcomes than an expected act with the same interpretation and evaluation. Interpersonal adaptation theory The interpersonal adaptation theory (IAT), founded by Judee Burgoon, states that adaptation in interaction is responsive to the needs, expectations, and desires of communicators and affects how communicators position themselves in relation to one another and adapt to one another's communication. For example, they may match each other's behavior, synchronize the timing of behavior, or behave in dissimilar ways. It is also important to note that individuals bring to interactions certain requirements that reflect basic human needs, expectations about behavior based on social norms, and desires for interaction based on goals and personal preferences (Burgoon, Stern & Dillman, 1995). See also African time Paul Virilio Philosophy of space and time Johannes Fabian References Adler, R. B., Rosenfeld, L. B., & Towne, N. (1995). Interplay (6th ed.). Fort Worth: Harcourt Brace College. Ballard, D., & Seibold, D. (2004). Communication-related organizational structures and work group temporal differences: The effects of coordination method, technology type, and feedback cycle on members' construals and enactments of time. Communication Monographs, 71(1), 1–27. Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203–242. Buller, D. B., Burgoon, J. K., & Woodall, W. G. (1996). Nonverbal communications: The unspoken dialogue (2nd ed.). New York: McGraw-Hill. Burgoon, J. K., Stern, L. A., & Dillman, L. (1995). Interpersonal adaptation: Dyadic interaction patterns. Massachusetts: Cambridge University Press. Capella, J. N. (1985). 
Controlling the floor in conversation. In A. Siegman & S. Feldstein (Eds.), Multichannel integrations of nonverbal behavior (pp. 69–103). Hillsdale, NJ: Erlbaum. Cohen, R. (2004). Negotiating across cultures: International communication in an interdependent world (rev. ed.). Washington, DC: United States Institute of Peace. Edelman, R. J., & Iwawaki, S. (1987). Self-reported expression and the consequences of embarrassment in the United Kingdom and Japan. Psychologia, 30, 205–216. Griffin, E. (2000). A first look at communication theory (4th ed.). Boston, MA: McGraw Hill. Gonzalez, A., & Zimbardo, P. (1985). Time in perspective. Psychology Today Magazine, 20–26. Hall, E. T., & Hall, M. R. (1990). Understanding cultural differences: Germans, French, and Americans. Boston, MA: Intercultural Press. Knapp, M. L., & Hall, J. A. (1992). Nonverbal communication in human interaction (3rd ed.). New York: Holt, Rinehart and Winston, Inc. Knapp, M. L., & Miller, G. R. (1985). Handbook of interpersonal communication. Beverly Hills: Sage Publications. Koester, J., & Lustig, M. W. (2003). Intercultural competence (4th ed.). New York: Pearson Education, Inc. Patterson, M. L. (1990). Functions of non-verbal behavior in social interaction. In H. Giles & W. P. Robinson (Eds.), Handbook of language and social psychology. Chichester, G.B.: Wiley. West, R., & Turner, L. H. (2000). Introducing communication theory: Analysis and application. Mountain View, CA: Mayfield. Wood, J. T. (1997). Communication theories in action: An introduction. Belmont, CA: Wadsworth. Ivers, J. J. (2017). For Deep Thinkers Only. John J. Ivers. Further reading Bluedorn, A. C. (2002). The human organization of time: Temporal realities and experience. Stanford, CA: Stanford University Press. Cohen, R. (2004). Negotiating across cultures: International communication in an interdependent world (rev. ed.). Washington, DC: United States Institute of Peace. Griffin, E. (2000). A first look at communication theory (4th ed.). Boston, MA: McGraw Hill. Hugg, A. (2002, February 4). Universal language. Retrieved May 10, 2007. Osborne, H. (2006, January/February). In other words… actions can speak as clearly as words. Retrieved May 12, 2007, from http://www.healthliteracy.com/article.asp?PageID=3763 Wessel, R. (2003, January 9). Is there time to slow down? Retrieved May 10, 2007, from http://www.csmonitor.com/2003/0109/p13s01-sten.html A sonnet on the topic by the editor of the 11th edition (1910) of the Encyclopædia Britannica. External links Nonverbal communication Social constructionism Time
Chronemics
[ "Physics", "Mathematics" ]
4,963
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
11,110,006
https://en.wikipedia.org/wiki/Tickle%20fetishism
Tickle fetishism, also known as knismolagnia, knismophilia, or titillagnia, is a paraphilia where an individual receives sexual pleasure from tickling, being tickled, or watching someone be tickled. Individuals may prefer to be the dominant party, known as a ler (from tickler), or the submissive party, known as a lee (from ticklee), or they may enjoy both, known as a switch. Some people may prefer to be tickled in specific areas, typically in an erogenous zone or other particularly sensitive areas of the body. In BDSM Restraint, bondage, and sexual humiliation may be aspects of an erotic tickling session, though they are not necessary. A BDSM tickling session can involve submissive partners being tied or restrained in a position that exposes bare parts of the body, particularly those that are sensitive to tickling, such as the feet or armpits. The body being exposed can also be intended as a form of humiliation or exhibitionism for the lee, or as visual stimulus for the ler. Tools may be used to tickle if desired. See also Tickle torture Tickling Sensation play References Further reading Erik11. The Dom's Guide to Tickling. Aaron Brown, 2019. Moran, Michael. Erotic Tickling. Greenery Press, 2003. Courtois, Wayne. My Name is Rand. Suspect Thoughts Press, 2004. Courtois, Wayne. In the Time of Solution 9. Lethe Press, 2013. External links Glossary of clinical sexology (it – en) Sexual fetishism Tickling BDSM activities
Tickle fetishism
[ "Biology" ]
343
[ "Behavior", "Sexuality stubs", "Sexuality" ]
11,111,531
https://en.wikipedia.org/wiki/Chip%20heater
The chip heater is a single-point, tankless, domestic hot water system popular in Australia and New Zealand from the 1880s until the 1960s. Examples of this form of domestic water heater are still in use. The chip heater consisted of a cylindrical unit with a fire box and flue, through which a water pipe was run. Water was drawn from a cold water tank and circulated through the fire box. When heated, the water was drawn off to the area where it was used, typically in a bath or shower. There was often an ash box under the fire box, which allowed air under the fire, as well as various dampers in the flue. The fire box was relatively small and fed by tinder, such as newspaper, pine cones, small twigs, or wood chips. The use of the latter gave the chip heater its name. Water had to be run at a trickle in order to heat up to a desirable temperature. The rate of combustion was controlled by the flues and the ash box. With a lot of fuel and open flues the water could boil quickly, which was not a desirable result. With practice, the correct combination of fuel, flue settings, and water flow could result in enough hot water for a shower or bath in approximately 20 minutes. History The chip heater is embedded in Australian and New Zealand social history, because many people can remember using one or knew someone who had one. The precise history of the chip heater is unclear. The original idea is almost certainly derived from vertical steam boilers. Architectural historian Professor Miles Lewis notes that the "instantaneous water heaters," which were being sold by Douglas & Sons of Melbourne by 1888, were probably chip heaters. In 1892, an advertisement in Melbourne promised that Fischer's Patent Bath Heater could be heated with wood in three minutes at the cost of one farthing. Catalogues from between 1913 and 1919 of the American National Radiator Company, which marketed its products in Australia, do not show chip heaters. That suggests that the chip heater was a local innovation. Variants The chip heater was very similar to the gas and kerosene-powered "geyser" hot water heaters, popular in Australian suburban residences from the 1920s. The main difference was the fuel source. The Australian manufacturer, Metters Limited, supplied gas geysers for city clients (who had access to gas) and chip heaters for country clients. Manufacturers There were a number of manufacturers and brands. According to Professor Lewis, early 20th century brands included the "Royal", "Little Hero", "Silver Ace", "Kangaroo", "Empire" and "Little Wonder". Peter Wood recalls a "Torrens" brand being popular in Adelaide. Metters had a variety of chip heaters in its 1936 catalogue, including oil- and kerosene-powered models. Metters claimed a flow of "very hot water" per minute. References Archer, John, 1998, Your home: the inside story of the Australian house, Port Melbourne, Lothian. Metters Ltd 1936 Metters' bath heater and hot water service : sectional catalogue, Metters Ltd. Oliver, Julie. The Australian home beautiful: from Hills hoist to high rise, McMahon's Point, N.S.W. Home Beautiful, 1999. Postings on the NSW Heritage Office Heritage Advisors Discussion Group by Peter Benkendorff, David Beauchamp, Susan Duyker, Elizabeth Roberts and Peter Woods, April 2007. Posting on Engineering Heritage discussion group by Professor Miles Lewis 26 April 2007 Plumbing Residential heating appliances
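A rough energy balance shows why the trickle described above was necessary. The figures in the following sketch (bath volume, temperature rise, and the roughly 20-minute heat-up time quoted in the article) are illustrative assumptions rather than manufacturer data:

```python
# Back-of-the-envelope estimate of the heating power a chip heater's small
# firebox would need to deliver. All input figures are illustrative.

SPECIFIC_HEAT_WATER = 4186  # J/(kg*K); 1 litre of water ~ 1 kg

def heating_power_required(litres, delta_t_kelvin, minutes):
    """Average power (watts) needed to heat `litres` of water by
    `delta_t_kelvin` within `minutes`, ignoring heat losses."""
    energy_joules = litres * SPECIFIC_HEAT_WATER * delta_t_kelvin
    return energy_joules / (minutes * 60)

# Heating 60 L for a bath from 15 C to 45 C over the ~20 minutes quoted:
print(f"{heating_power_required(60, 30, 20) / 1000:.1f} kW")  # ~6.3 kW
```

Several kilowatts is a demanding output for a small tinder-fed firebox, which is consistent with the need to restrict the water to a trickle so that each parcel of water spent long enough in the heated pipe to come out hot.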
Chip heater
[ "Engineering" ]
735
[ "Construction", "Plumbing" ]
11,111,551
https://en.wikipedia.org/wiki/Division%20on%20Dynamical%20Astronomy
The Division on Dynamical Astronomy (DDA) is a branch of the American Astronomical Society that focuses on the advancement of all aspects of dynamical astronomy, including celestial mechanics, solar system dynamics, stellar dynamics, the dynamics of the interstellar medium, and galactic dynamics, as well as the coordination of such research with other branches of science. It awards the annual Brouwer Award, which was established to recognize outstanding contributions to the field of dynamical astronomy, including celestial mechanics, astrometry, geophysics, stellar systems, and galactic and extragalactic dynamics. The Division also awards the Vera Rubin Early Career Prize, for promise of continued excellence, to an astronomer no more than 10 years beyond receipt of their doctorate. See also List of astronomical societies References External links Official homepage Astronomy organizations
Division on Dynamical Astronomy
[ "Astronomy" ]
156
[ "Astronomy stubs", "Astronomy organizations", "Astronomy organization stubs" ]
11,112,511
https://en.wikipedia.org/wiki/Radio%20acoustic%20sounding%20system
A radio acoustic sounding system (RASS) is a system for measuring the atmospheric lapse rate using backscattering of radio waves from an acoustic wave front to measure the speed of sound at various heights above the ground. This is possible because the compression and rarefaction of air by an acoustic wave change the dielectric properties, producing partial reflection of the transmitted radar signal. From the speed of sound, the temperature of the air in the planetary boundary layer can be computed. The maximum altitude range of RASS systems is typically , although observations have been reported up to in moist air. Principle The principle of operation behind RASS is as follows: Bragg scattering occurs when acoustic energy (i.e., sound) is transmitted into the vertical beam of a radar such that the wavelength of the acoustic signal matches the half-wavelength of the radar. As the frequency of the acoustic signal is varied, strongly enhanced scattering of the radar signal occurs when the Bragg match takes place. When this occurs, the Doppler shift of the radar signal produced by the Bragg scattering can be determined, as well as the atmospheric vertical velocity. Thus, the speed of sound as a function of altitude can be measured, from which virtual temperature (TV) profiles can be calculated with appropriate corrections for vertical air motion. The virtual temperature of an air parcel is the temperature that dry air would have if its pressure and density were equal to those of a sample of moist air. As a rule of thumb, an atmospheric vertical velocity of can alter a TV observation by . Configurations RASS can be added to a radar wind profiler or to a sodar system. In the former case, the necessary acoustic subsystems must be added to the radar wind profiler to generate the sound signals and to perform signal processing. When RASS is added to a radar profiler, three or four vertically pointing acoustic sources (equivalent to high-quality stereo loudspeakers) are placed around the radar wind profiler's antenna, and electronic subsystems are added that include the acoustic power amplifier and the signal-generating circuit boards. The acoustic sources are used only to transmit sound into the vertical beam of the radar, and are usually encased in noise suppression enclosures to minimize nuisance effects that may bother nearby neighbors or others in the vicinity of the instrument. When RASS is added to a sodar, the necessary radar subsystems are added to transmit and receive the radar signals and to process the radar reflectivity information. Since the wind data are obtained by the sodar, the radar only needs to sample along the vertical axis. The sodar transducers are used to transmit the acoustic signals that produce the Bragg scattering of the radar signals, which allows the speed of sound to be measured by the radar. Resolution The vertical resolution of RASS data is determined by the pulse length(s) used by the radar. RASS sampling is usually performed with a pulse length. Because of atmospheric attenuation of the acoustic signals at the RASS frequencies used by boundary layer radar wind profilers, the altitude range that can be sampled is usually , depending on atmospheric conditions (e.g., high wind velocities tend to limit RASS altitude coverage to a few hundred meters because the acoustic signals are blown out of the radar beam). References Meteorological instrumentation and equipment Weather radars Atmospheric sounding
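The two relations at the heart of the principle can be sketched numerically: the Bragg matching condition (acoustic wavelength equal to half the radar wavelength) and the standard dry-air approximation for the speed of sound, c ≈ 20.047·√Tv, with c in m/s and Tv in kelvin. The 915 MHz profiler frequency and the sound speed in the sketch below are example values only:

```python
import math

SPEED_OF_LIGHT = 3.0e8  # m/s

def bragg_acoustic_frequency(radar_freq_hz, sound_speed_ms):
    """Acoustic frequency whose wavelength is half the radar wavelength,
    i.e. the frequency at which the Bragg match occurs."""
    radar_wavelength = SPEED_OF_LIGHT / radar_freq_hz
    return sound_speed_ms / (radar_wavelength / 2.0)

def virtual_temperature_kelvin(sound_speed_ms, vertical_velocity_ms=0.0):
    """Virtual temperature from the acoustically measured speed of sound,
    after removing the contribution of vertical air motion."""
    corrected = sound_speed_ms - vertical_velocity_ms
    return (corrected / 20.047) ** 2

# A 915 MHz boundary-layer profiler, with sound propagating at 340 m/s:
print(f"{bragg_acoustic_frequency(915e6, 340):.0f} Hz")       # ~2074 Hz, an audible tone
print(f"{virtual_temperature_kelvin(340):.1f} K")             # ~287.7 K
```

The vertical-velocity argument reflects the correction noted above: the apparent propagation speed of the acoustic front includes any vertical air motion, which must be subtracted before converting to virtual temperature.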
Radio acoustic sounding system
[ "Technology", "Engineering" ]
673
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
11,112,693
https://en.wikipedia.org/wiki/Virtual%20temperature
In atmospheric thermodynamics, the virtual temperature ($T_v$) of a moist air parcel is the temperature at which a theoretical dry air parcel would have a total pressure and density equal to the moist parcel of air. The virtual temperature of unsaturated moist air is always greater than the absolute air temperature; however, the existence of suspended cloud droplets reduces the virtual temperature. The virtual temperature effect is also known as the vapor buoyancy effect. It has been described as increasing Earth's thermal emission by warming the tropical atmosphere. Introduction Description In atmospheric thermodynamic processes, it is often useful to assume air parcels behave approximately adiabatically and approximately ideally. The specific gas constant for the standardized mass of one kilogram of a particular gas is variable, and described mathematically as $R_{\text{specific}} = R^*/M$, where $R^*$ is the molar gas constant and $M$ is the apparent molar mass of the gas in kilograms per mole. The apparent molar mass of a theoretical moist parcel in Earth's atmosphere can be defined in components of water vapor and dry air as $M_{\text{air}} = (e M_v + p_d M_d)/p$, with $e$ being the partial pressure of water, $p_d$ the dry air pressure, and $M_v$ and $M_d$ representing the molar masses of water vapor and dry air respectively. The total pressure $p$ is described by Dalton's law of partial pressures: $p = p_d + e$. Purpose Rather than carry out these calculations, it is convenient to scale another quantity within the ideal gas law to equate the pressure and density of a dry parcel to a moist parcel. The only variable quantity of the ideal gas law independent of density and pressure is temperature. This scaled quantity is known as virtual temperature, and it allows for the use of the dry-air equation of state for moist air. Temperature has an inverse proportionality to density. Thus, analytically, a higher vapor pressure would yield a lower density, which should yield a higher virtual temperature in turn. Derivation Consider a moist air parcel containing masses $m_d$ and $m_v$ of dry air and water vapor in a given volume $V$. The density is given by $\rho = (m_d + m_v)/V = \rho_d + \rho_v$, where $\rho_d$ and $\rho_v$ are the densities the dry air and water vapor would respectively have when occupying the volume of the air parcel. Rearranging the standard ideal gas equation with these variables gives $e = \rho_v R_v T$ and $p_d = \rho_d R_d T$. Solving for the densities in each equation and combining with the law of partial pressures yields $\rho = (p - e)/(R_d T) + e/(R_v T)$. Then, solving for $p$ and using $\epsilon = R_d/R_v = M_v/M_d$, which is approximately 0.622 in Earth's atmosphere: $p = \rho R_d T_v$, where the virtual temperature is $T_v = T/(1 - (e/p)(1 - \epsilon))$. We now have a non-linear scalar for temperature dependent purely on the unitless value $e/p$, allowing for varying amounts of water vapor in an air parcel. This virtual temperature in units of kelvin can be used seamlessly in any thermodynamic equation necessitating it. Variations Often the more easily accessible atmospheric parameter is the mixing ratio $w$. Through expansion upon the definition of vapor pressure in the law of partial pressures as presented above and the definition of mixing ratio, $w = \epsilon e/(p - e)$, which allows $e = wp/(w + \epsilon)$. Algebraic expansion of that equation, ignoring higher orders of $w$ due to its typical order in Earth's atmosphere of $10^{-3}$, and substituting $\epsilon$ with its constant value yields the linear approximation $T_v \approx T(1 + 0.608w)$, with the mixing ratio $w$ expressed in g/g. An approximate conversion using $T$ in degrees Celsius and the mixing ratio $w$ in g/kg is $T_v \approx T + w/6$. Knowing that specific humidity is given in terms of mixing ratio as $q = w/(1 + w)$, then we can write mixing ratio in terms of the specific humidity as $w = q/(1 - q)$. 
We can now write the virtual temperature in terms of specific humidity as $T_v = T(1 - q + q/\epsilon)$. Simplifying the above will reduce to $T_v = T\left(1 + \frac{1-\epsilon}{\epsilon}q\right)$, and using the value of $\epsilon \approx 0.622$, then we can write $T_v \approx T(1 + 0.608q)$. Virtual potential temperature Virtual potential temperature is similar to potential temperature in that it removes the temperature variation caused by changes in pressure. Virtual potential temperature is useful as a surrogate for density in buoyancy calculations and in turbulence transport which includes vertical air movement. Density temperature A moist air parcel may also contain liquid droplets and ice crystals in addition to water vapor. A net mixing ratio $r$ can be defined as the sum of the mixing ratios of water vapor $r_v$, liquid $r_l$, and ice $r_i$ present in the parcel. Assuming that $r_l$ and $r_i$ are typically much smaller than $r_v$, a density temperature of a parcel can be defined, representing the temperature at which a theoretical dry air parcel would have a pressure and density equal to a moist parcel of air while accounting for condensates: $T_\rho = T\,\frac{1 + r_v/\epsilon}{1 + r}$. Uses Virtual temperature is used in adjusting CAPE soundings for assessing available convective potential energy from skew-T log-P diagrams. The errors associated with ignoring the virtual temperature correction for smaller CAPE values can be quite significant. Thus, in the early stages of convective storm formation, a virtual temperature correction is significant in identifying the potential intensity in tropical cyclogenesis. Further reading References Atmospheric thermodynamics Meteorological quantities Atmospheric temperature Atmospheric pressure Humidity and hygrometry
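As a numerical check on the formulas above, the following is a minimal sketch comparing the exact vapor-pressure form with the linear mixing-ratio approximation; the input values are illustrative:

```python
EPSILON = 0.622  # ratio of the molar masses of water vapor and dry air

def virtual_temperature(temp_k, vapor_pressure_hpa, pressure_hpa):
    """Exact form derived above: Tv = T / (1 - (e/p)(1 - epsilon))."""
    return temp_k / (1.0 - (vapor_pressure_hpa / pressure_hpa) * (1.0 - EPSILON))

def virtual_temperature_approx(temp_k, w_kg_per_kg):
    """Linear approximation Tv ~= T (1 + 0.608 w), w in g/g (= kg/kg)."""
    return temp_k * (1.0 + 0.608 * w_kg_per_kg)

# Warm, humid surface air: T = 303.15 K (30 C), p = 1000 hPa, e = 30 hPa
print(f"{virtual_temperature(303.15, 30.0, 1000.0):.2f} K")   # ~306.63 K
w = EPSILON * 30.0 / (1000.0 - 30.0)                          # mixing ratio, ~0.0192 kg/kg
print(f"{virtual_temperature_approx(303.15, w):.2f} K")       # ~306.70 K
```

The two forms agree to within about 0.1 K here, illustrating why the linear approximation is adequate for most boundary-layer work where the mixing ratio is of order 10^-3 to 10^-2.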
Virtual temperature
[ "Physics", "Mathematics" ]
927
[ "Quantity", "Physical quantities", "Meteorological quantities", "Atmospheric pressure" ]
11,112,810
https://en.wikipedia.org/wiki/NGC%201260
NGC 1260 is a spiral or lenticular galaxy located 250 million light-years away from Earth in the constellation Perseus. It was discovered by astronomer Guillaume Bigourdan on 19 October 1884. NGC 1260 is a member of the Perseus Cluster and forms a tight pair with the galaxy PGC 12230. This galaxy is dominated by a population of many old stars. In 2006, it was home to the second brightest supernova observed in the observable universe, supernova SN 2006gy. At the time of its discovery, this supernova was the most energetic and brightest supernova on record. References External links Brightest object found in NGC 1260 (Space.com : 7 May 2007) http://www.solstation.com/x-objects/sn2006gy.htm Spiral galaxies Perseus (constellation) 1260 12219 02634 Astronomical objects discovered in 1884 Perseus Cluster Lenticular galaxies
NGC 1260
[ "Astronomy" ]
189
[ "Perseus (constellation)", "Constellations" ]
11,113,150
https://en.wikipedia.org/wiki/Pierre-Paul%20Grass%C3%A9
Pierre-Paul Grassé (November 27, 1895 in Périgueux (Dordogne) – July 9, 1985) was a French zoologist, writer of over 300 publications including the influential 52-volume Traité de Zoologie. He was an expert on termites who rejected Neo-Darwinism and was a proponent of Neo-Lamarckism. Biography Education Grassé began his studies in Périgueux, where his parents owned a small business. He went on to study medicine at the University of Bordeaux and studied biology in parallel, attending the lectures of the entomologist Jean de Feytaud (1881–1973). Mobilized during World War I, he was forced to interrupt his studies for four years. By the end of the war he was a military surgeon. Grassé continued his studies in Paris, focusing exclusively on science. He obtained his Licence in Biology and frequented the laboratory of biologist Étienne Rabaud (1868–1956). He abandoned his preparations for the agrégation to accept a position as professor at the École Nationale Supérieure Agronomique de Montpellier (1921), where the department of zoology was led by François Picard (1879–1939). There he frequented several phytogeographers such as Charles Flahault (1852–1935), Josias Braun-Blanquet (1884–1980), Georges Kuhnholtz-Lordat (1888–1965) and Marie Louis Emberger (1897–1969). He became the assistant of Octave Duboscq (1868–1943), who oriented the young Grassé toward the study of protozoan parasites. After the departure of Duboscq to Paris, Grassé worked for Eugène Bataillon (1864–1953) and there discovered techniques for experimental embryology. In 1926, Grassé became vice-director of the École supérieure de sériciculture. He submitted his thesis, Contribution à l'étude des flagellés parasites, in 1926, and it was published in the Archives de zoologie expérimentale et générale. Teaching and research In 1929, Grassé became professor of zoology at the Université de Clermont-Ferrand. He supervised the theses of several students on insects. He conducted his first field research trip in Africa in 1933–1934, and returned there several times (1938–1939, 1945, 1948). During these trips he studied termites, and became one of the great specialists on these insects. In 1935, he became an Assistant Professor at the Université de Paris, where he worked alongside Germaine Cousin (1896–1992), and received the Prix Gadeau de Kerville of the Société entomologique de France for his work on Orthoptera and termites. In 1939 he chaired the Société zoologique de France and in 1941 the Société entomologique de France. After having been briefly mobilized in Tours, in 1944 he succeeded Maurice Caullery as Chair in Zoology and the Evolution of Beings. Grassé was elected a member of the Académie des sciences on November 29, 1948, in the anatomy and zoology sector, and presided over the institution in 1967. In 1976 he changed sectors, moving to the newly created animal and plant biology sector. Grassé received numerous honours and titles during his career: commander of the Légion d'honneur, doctor honoris causa of the universities of Brussels, Basel, Bonn, Ghent, Madrid, Barcelona and São Paulo. He was one of the founders of the Société Française de Parasitologie in 1962. He was also a member of several academic societies, including the New York Academy of Sciences and The Royal Academies for Science and the Arts of Belgium. Publications Grassé began publishing a very large project in 1946 entitled the Traité de zoologie. The 38 volumes required almost forty years of work, uniting some of the greatest names in zoology. 
They are still essential references in the field for the groups that are treated in their pages. Ten volumes are dedicated to mammals, nine to insects. Apart from this treatise, he led two collections published by Masson: the first, entitled Grands problèmes de la biologie, has thirteen volumes and the second is entitled Précis de sciences biologiques. Alongside Andrée Tétry, he composed the two volumes dedicated to zoology in the collection Bibliothèque de la Pléiade, published by Gallimard. He also supervised the edition of the Abrégé de zoologie (two volumes, Masson). He also composed the Termitologia (1982, 1983, 1984), a work in three volumes totalling over 2400 pages, in which Grassé compiles all available knowledge concerning termites. It was by studying symbiotic flagellates in termites that he eventually came to study their hosts. In this publication, Grassé introduced the concept of stigmergy: "Stigmergy manifests itself in the termite mound by the fact that the individual labour of each construction worker stimulates and guides the work of its neighbour." He also created three scientific journals: Arvernia biologica (1932), Insectes sociaux (1953) and Biologia gabonica (1964). He contributed to several journals such as the Annales des sciences naturelles and the Bulletin biologique de la France et de la Belgique. Apart from his numerous scientific publications, he published several works popularising science, such as La Vie des animaux (Larousse, 1968). He also wrote the articles "Évolution" and "Stigmergie" for the Encyclopædia Universalis. Grassé also authored many works in which he discusses his views on evolution and metaphysics, such as Toi, ce petit Dieu (Albin Michel, 1971), L'Évolution du vivant, matériaux pour une nouvelle théorie transformiste (Albin Michel, 1973), La Défaite de l'amour ou le triomphe de Freud (Albin Michel, 1976), Biologie moléculaire, mutagenèse et évolution (Masson, 1978), and L'Homme en accusation: de la biologie à la politique (Albin Michel, 1980). Neo-Lamarckism Grassé was a supporter of the French tradition of Lamarckism. He occupied the Chair of Evolutionary Biology of the Faculty of Paris, whose two previous occupants, Alfred Giard (1846–1908) and Maurice Caullery (1868–1958), were both also supporters of Lamarckism. Only after Grassé's retirement did the chair become occupied by a partisan of Darwinism, Charles Bocquet (1918–1977). In support of Lamarck's theories he organised an international congress in Paris in 1947 under the auspices of the CNRS with the theme "paleontology and transformism". The proceedings were published in 1950 by Albin Michel. He united many of the greatest French authorities on the question, including Lucien Cuénot (1866–1951), Pierre Teilhard de Chardin (1881–1955), and Maurice Caullery. They were all opponents of certain tenets of neo-Darwinism. Other brilliant biologists present were John Burdon Sanderson Haldane (1892–1964) and George Gaylord Simpson (1902–1984). Grassé stated his support for Lamarck in other ways too, such as in an article in the Encyclopædia Universalis, and by affirming that Lamarck had been unjustifiably slandered and ought to be rehabilitated. Some authors, like Marcel Blanc, explain the strong support of Lamarckism among French biologists in terms of simple patriotism and the historical and social context: Catholic culture favoured Lamarckism whilst Protestant culture favoured Darwinism. 
Evolution of Living Organisms Grassé presents his arguments against neo-Darwinism in his work L'évolution du vivant (1973), translated into English as Evolution of Living Organisms in 1977. Against the idea that the evolution of living things is the product of their adapting to changes in their environments, he points to living fossils, meaning species which stopped evolving at some point in time and have remained relatively identical to this day regardless of great climatic or geological changes (he cites numerous examples in Les formes panchroniques et les arrêts de l'évolution, p. 133). Therefore, evolution is in his opinion a process which is not necessary; it does not occur in living beings under the constraints of external physical forces (cf. Necessity-utility is not the primus movens of biological evolution, p. 302). To explain evolution he instead thinks that one must look at the internal dynamics of living things. Biologist Theodosius Dobzhansky wrote in a review that Grassé's belief that evolution is directed by some unknown mechanism does not explain anything. He concluded that "to reject what is known, and to appeal to some wonderful future discovery which may explain it all, is contrary to sound scientific method." The sentence with which Grassé ends his book is: "It is possible that in this domain biology, impotent, yields the floor to metaphysics." Colin Patterson reviewed Evolution of Living Organisms for the New Scientist, stating that the book was a criticism of neo-Darwinism written with the opinion that paleontology is "the only true science of evolution". Patterson, a paleontologist, disputed this statement. He also noted that Grassé's own theory of neo-Lamarckism was "hard to disentangle", and that there were other places where Grassé's reasoning was difficult to follow. According to Patterson the book did not mention gene duplication, even though its role in evolution is well established. Geologist David B. Kitts negatively reviewed the book, commenting that all of "Grassé's arguments have been marshaled against Darwinian theory before and, in the opinion of most Darwinians, have been adequately countered." Grassé stated that evolution was driven by an internal factor. Regarding the identification of this factor, Kitts quotes Grassé as saying "perhaps in this area biology can go no further: the rest is metaphysics". Kitts found this statement unacceptable, commenting that "the fundamental issues raised by Grassé's theory of evolution do not even belong to biology, but to some other discipline." Selected publications 1935: Parasites et parasitisme, Armand Colin (Paris): 224 p. 1935: with Max Aron (1892–1974), Précis de biologie animale, Masson (Paris): viii + 1016 p. – second revised edition in 1939, third edition in 1947, fourth edition in 1948, fifth edition in 1957, sixth edition in 1962, eighth edition in 1966. 1963: with A. Tétry, Zoologie, two volumes, Gallimard (Paris), collection Encyclopédie de la Pléiade: xx + 1244 p. and xvi + 1040 p. 1971: Toi, ce petit dieu ! essai sur l'histoire naturelle de l'homme, Albin Michel (Paris): 288 p. 1973: L'évolution du vivant, matériaux pour une nouvelle théorie transformiste, Albin Michel (Paris): 477 p. – (a criticism of neo-Darwinism). Republished and translated into English in 1977 under the title Evolution of Living Organisms by Academic Press. 1978: Biologie moléculaire, mutagenèse et évolution, Masson (Paris): 117 p.  1980: L'Homme en accusation: de la biologie à la politique, Albin Michel (Paris): 354 p.  
1982–1986: Termitologia. Vol. I: Anatomie, Physiologie, Reproduction, 676 pp.; Vol. II: Fondation des Sociétés, Construction, 613 pp.; Vol. III: Comportement, Socialité, Écologie, Évolution, Systématique, 715 pp. Paris: Masson. References Jean Lhoste (1987). Les Entomologistes français. 1750–1950, INRA Éditions et OPIE: 351 p. [244–247] External links Stigmergy: Invisible Writing, Collective Intelligence in Social Insects in Introduction & Self-Organisation by David Gordon for the AI depot. Stigmergic Collaboration: A Theoretical Framework for Mass Collaboration by Mark Alan Elliott (2007), PhD thesis, Centre for Ideas, Victorian College of the Arts, University of Melbourne. The thesis explicitly refers to the work of Pierre-Paul Grassé to define stigmergy, chapter 3. 1895 births 1985 deaths People from Périgueux Commanders of the Legion of Honour Lamarckism Members of the French Academy of Sciences Non-Darwinian evolution University of Bordeaux alumni Academic staff of the University of Paris 20th-century French zoologists French military personnel of World War I Presidents of the Société entomologique de France
Pierre-Paul Grassé
[ "Biology" ]
2,660
[ "Non-Darwinian evolution", "Biology theories", "Obsolete biology theories", "Lamarckism" ]
11,113,244
https://en.wikipedia.org/wiki/Geneva%20Extrasolar%20Planet%20Search
The Geneva Extrasolar Planet Search is a variety of observational programs run by the Geneva Observatory at Versoix, a small town near Geneva, Switzerland. The programs are executed by M. Mayor, D. Naef, F. Pepe, D. Queloz, N. C. Santos, and S. Udry using several telescopes and instruments in the Northern and Southern Hemispheres, and have resulted in the discovery of numerous extrasolar planets, including 51 Pegasi b, the first ever confirmed exoplanet orbiting a main-sequence star. Programs originated at Geneva are generally conducted in collaboration with several other academic institutions from Belgium, Germany, Italy and the United Kingdom. These programs search for exoplanets in various locations using different instruments, including at the Haute-Provence Observatory in France and with TRAPPIST and the Euler Telescope, both located at La Silla Observatory in Chile, as well as through dedicated M dwarf programs. The most recent projects involve the HARPS spectrograph, HARPS-N on the island of La Palma, and the Next-Generation Transit Survey located at the Paranal Observatory in northern Chile. The Integral Science Data Centre is located at Ecogia, which also belongs to the town of Versoix. The centre is linked to the Geneva Observatory and deals with the processing of the data provided by the INTEGRAL satellite of the European Space Agency. On the two sites of Sauverny and Ecogia, a group of approximately 143 people is employed, including scientists, PhD candidates, students, technical staff (computer and electronics specialists, mechanics), as well as administrative staff. Extrasolar planet search surveys The ELODIE Northern Extrasolar Planet Search based at the Haute-Provence Observatory in France. The CORALIE Survey for Southern Extra-solar Planets based at the La Silla Observatory in Chile. See also Anglo-Australian Planet Search is another group searching the southern hemisphere for planets. List of extrasolar planets References Exoplanet search projects Astronomical surveys Versoix
Geneva Extrasolar Planet Search
[ "Astronomy" ]
404
[ "Exoplanet search projects", "Astronomical surveys", "Works about astronomy", "Astronomy projects", "Astronomical objects" ]
11,113,467
https://en.wikipedia.org/wiki/Butanone%20%28data%20page%29
This page provides supplementary chemical data on butanone. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions. SIRI Fisher Scientific Science Stuff Structure and properties Thermodynamic properties Vapor pressure of liquid Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. Spectral data References Chemical data pages Chemical data pages cleanup
Butanone (data page)
[ "Chemistry" ]
101
[ "Chemical data pages", "nan" ]
11,113,752
https://en.wikipedia.org/wiki/Substation%20Configuration%20Language
System Configuration description Language (SCL), formerly known as Substation Configuration description Language, is the language and representation format specified by IEC 61850 for the configuration of electrical substation devices. This includes representation of modeled data and communication services specified by the IEC 61850-7-X standard documents. The complete SCL representation and its details are specified in the IEC 61850-6 standard document. It includes data representation for substation device entities, their associated functions represented as logical nodes, and communication systems and capabilities. The complete representation of data as SCL enables the different devices of a substation to exchange SCL files and to achieve complete interoperability. Parts of SCL files An SCL file contains the following parts: Header: This part is used to identify the version and other basic details of an SCL configuration file. Substation: This is the part dealing with the different entities of a substation, including various devices, interconnections and other functionalities. The elements include power transformers, voltage levels, bays, general equipment, and conducting equipment such as breakers. The Substation part refers to the logical nodes that represent functionality related to the objects in the substation. Communication: This section deals with the different communication points (access points) for accessing the different IEDs of the complete system. This part contains the different subnetworks and access points. IED: The IED section describes the complete configuration of an Intelligent Electronic Device (IED). It contains the different access points of the specific IED, and the logical devices, logical nodes, report control blocks, etc. coming under the IED. It describes what data an IED publishes as reports and as Generic Substation Events (GSE; divided into GOOSE and GSSE) and what GOOSE/GSSE data from other IEDs an IED is configured to receive. DataTypeTemplates: This part defines the different logical devices, logical nodes, data and other details, separated into different instances. The complete data modeling according to IEC 61850-7-3 and 7-4 is represented in this part of SCL. It is again subdivided into LNodeType, DOType, DAType and EnumType. Types of SCL files Depending on the purpose of an SCL file, it is classified into the following types: IED Capability Description (ICD) file: It defines the complete capability of an IED. This file needs to be supplied by each manufacturer to make the complete system configuration. The file contains a single IED section, an optional communication section and an optional substation part which denotes the physical entities corresponding to the IED. System Specification Description (SSD) file: This file contains the complete specification of a substation automation system, including a single line diagram for the substation and its functionalities (logical nodes). It will have a Substation part, data type templates and logical node type definitions, but need not have an IED section. Substation Configuration Description (SCD) file: This is the file describing the complete substation detail. It contains substation, communication, IED and data type template sections. An SSD file and the different ICD files contribute to making an SCD file. Configured IED Description (CID) file: It is a file used for communication between an IED configuration tool and an IED. It can be considered an SCD file stripped down to what the concerned IED needs to know, and contains a mandatory communication section for the addressed IED. 
Instantiated IED Description (IID) file: It defines the configuration of one IED for a project and is used as a data exchange format from the IED configurator to the system configurator. This file contains only the data for the IED being configured: one IED section, the communication section with the IED's communication parameters, the IED's data type templates, and, optionally, a substation section with the binding of functions (LNodes) to the single line diagram. System Exchange Description (SED) file: This file is to be exchanged between system configurators of different projects. It describes the interfaces of one project to be used by another project and, at re-import, the additionally engineered interface connections between the projects. It is a subset of an SCD file, with additional engineering rights for each IED as well as the ownership (project) of SCL data. The last two file types were introduced with Edition 2. See also Common Information Model (electricity) External links SCL Files explorer/analyzer Tool LibreSCL: a Free Software implementation SCL Configuration Utility IEC 61850 STS Substation Engineering Tool SCL Training Videos Computer file formats IEC standards
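Since SCL files are XML documents, the top-level sections described above can be inspected with any XML library. The following is a minimal sketch using Python's standard library; the element and attribute names (SCL, Header, IED, Communication, SubNetwork, ConnectedAP, name, id) follow IEC 61850-6, but the namespace URI and the file name are assumptions that should be verified against the edition of the standard the files were generated with:

```python
import xml.etree.ElementTree as ET

# Commonly used SCL namespace URI (an assumption; check your files).
NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

def summarize_scd(path):
    """Print the Header id, configured IEDs, and subnetworks of an SCD file."""
    root = ET.parse(path).getroot()  # the <SCL> root element

    header = root.find("scl:Header", NS)
    if header is not None:
        print("Header id:", header.get("id"))

    # One entry per configured device in the IED section.
    for ied in root.findall("scl:IED", NS):
        print("IED:", ied.get("name"))

    # Subnetworks and their connected access points from the Communication section.
    comm = root.find("scl:Communication", NS)
    if comm is not None:
        for subnet in comm.findall("scl:SubNetwork", NS):
            aps = subnet.findall(".//scl:ConnectedAP", NS)
            print("SubNetwork:", subnet.get("name"), "-", len(aps), "connected access points")

summarize_scd("substation.scd")  # hypothetical file name
```

The same traversal works for ICD, SSD, CID, IID and SED files, since they share the SCL schema and differ mainly in which sections are present and mandatory.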
Substation Configuration Language
[ "Technology" ]
950
[ "Computer standards", "IEC standards" ]
11,115,209
https://en.wikipedia.org/wiki/Nuclear%20attribution
Nuclear attribution is the process of tracing the origin of nuclear material that has been used in a nuclear explosion. The problem is not necessarily a straightforward one, for it may be possible to obtain nuclear precursors through the black market, and therefore relatively anonymously. Attribution to the source is desirable due to the trend of nuclear proliferation: nuclear capability has expanded from only a couple of states to the point where a terrorist group could make a bomb with only partial state backing. The United States has demonstrated its interest in nuclear attribution by passing the Nuclear Forensics and Attribution Act, which calls for the development of attribution capabilities. Nuclear Forensics A primary method used in nuclear attribution is nuclear forensics. The first step in this process is field work. Radiation levels can be quickly determined in the field using dosimetry. Reports from local survivors help provide visual evidence. The early data from the field, in addition to seismic data, can be used to establish that the bomb was actually nuclear and to estimate its yield. The gathered samples are then shipped to laboratories and analyzed using isotope analysis and other methods. The isotopic signature can then be compared with known isotopic data, and the likely history of the nuclear material can be pieced together. Intelligence Another aspect of nuclear attribution is the use of law enforcement and intelligence. This involves the monitoring of lost nuclear material, as well as known or suspected enrichment facilities. The latter is often done with satellite imagery. This data can be compared with the forensic evidence to increase the probability of accuracy. Problems The goal of nuclear attribution is to pinpoint the source of nuclear material with a credible level of accuracy. This faces a number of practical difficulties. First, the laboratory work relies on comparing gathered samples to samples that have already been analyzed. Until there is significant international cooperation to develop a database, it will be difficult to specifically determine the source of the nuclear material. Analysis may consist more of ruling out places that are likely not the source than of establishing specifically where the material is from. This will reduce its usefulness to political leaders who have to make quick decisions. Collecting the samples themselves will be difficult in the event of a nuclear explosion. The local infrastructure will likely be overwhelmed and may not be able to respond quickly enough to gather fresh data. The environment will likely also be highly radioactive, especially if a dirty bomb is used. Getting trained personnel on site could be prohibitively difficult. Another problem is that the process of nuclear attribution takes time. Political pressure could dictate a response from leaders far before the months it may take to conduct a full analysis of the source material. See also Nuclear terrorism Counter-terrorism References Nuclear warfare Forensic techniques Counterterrorism
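As one concrete illustration of the isotope analysis mentioned above, the in-growth of americium-241 from the beta decay of plutonium-241 is commonly used to "age-date" plutonium, i.e., to estimate the time since the material was last chemically purified. The sketch below is a simplified model: it assumes the material was americium-free at purification and neglects the much slower decay of Am-241 itself (half-life about 432 years).

```python
import math

# Simplified plutonium age dating from the Am-241/Pu-241 atom ratio.
# Assumes pure Pu-241 (no Am-241) at the time of last purification.

PU241_HALF_LIFE_YEARS = 14.3  # approximate half-life of Pu-241

def material_age_years(am241_to_pu241_ratio):
    """Years since last purification, from the measured atom ratio
    N(Am-241)/N(Pu-241). Under the assumptions above, the ratio grows
    as exp(lambda * t) - 1, so t = ln(1 + ratio) / lambda."""
    decay_const = math.log(2) / PU241_HALF_LIFE_YEARS
    return math.log(1.0 + am241_to_pu241_ratio) / decay_const

# A sample where Am-241 atoms amount to 30% of the remaining Pu-241 atoms:
print(f"{material_age_years(0.30):.1f} years since purification")  # ~5.4 years
```

Chronometers like this one are only as good as their assumptions: incomplete separation of americium at purification biases the inferred age upward, which is one reason laboratories combine several parent-daughter pairs rather than relying on a single ratio.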
Nuclear attribution
[ "Chemistry" ]
553
[ "Radioactivity", "Nuclear warfare" ]
11,115,477
https://en.wikipedia.org/wiki/Ci%20protein
Ci protein, short for Cubitus interruptus, is a zinc finger-containing transcription factor involved in the Hedgehog signaling pathway. In the absence of a signal to the Hedgehog signaling pathway, the Ci protein is cleaved and destroyed in proteasomes. It is not, however, completely destroyed; part of the protein survives and acts as a repressor in the nucleus, keeping genes responsive to the Hedgehog signal silent. Degradation of Ci The degradation of Ci protein depends on a large multiprotein complex, which contains a serine/threonine kinase of unknown function, an anchoring protein that binds to microtubules (to keep the Ci protein out of the nucleus), and an adaptor protein. When the Hedgehog signaling pathway is turned on, Ci proteolysis is suppressed and the unprocessed Ci protein enters the nucleus, where it activates the transcription of its target genes. Although Ci undergoes complete or partial degradation in cells, the detailed molecular mechanism is poorly understood. It has been reported that an AAA ATPase Ter94 complex and K11/K48 ubiquitin chains are involved in the selection between these degradation routes. Target genes The Wingless protein in Drosophila, which is crucial to the embryogenesis of the fruit fly and acts through the Wnt signaling pathway. The Patched receptor protein of the Hedgehog signaling pathway, whose production acts as negative feedback, since the resulting increase in Patched protein on the cell surface inhibits the Hedgehog pathway. References External links Drosophila cubitus interruptus - The Interactive Fly Transcription factors Hedgehog signaling pathway
Ci protein
[ "Chemistry", "Biology" ]
335
[ "Transcription factors", "Gene expression", "Signal transduction", "Hedgehog signaling pathway", "Induced stem cells" ]
11,115,626
https://en.wikipedia.org/wiki/Astroparticle%20Physics%20%28journal%29
Astroparticle Physics is a peer-reviewed scientific journal covering experimental and theoretical research in the interacting fields of cosmic ray physics, astronomy and astrophysics, cosmology, and particle physics. It was established in 1992 and is published monthly by North-Holland, an imprint of Elsevier. According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.2. References External links Astrophysics journals Elsevier academic journals English-language journals Monthly journals Academic journals established in 1992
Astroparticle Physics (journal)
[ "Physics", "Astronomy" ]
102
[ "Astrophysics journals", "Astronomy journal stubs", "Astronomy stubs", "Astrophysics" ]
11,115,752
https://en.wikipedia.org/wiki/Leukocyte%20extravasation
In immunology, leukocyte extravasation (also commonly known as leukocyte adhesion cascade or diapedesis – the passage of cells through the intact vessel wall) is the movement of leukocytes (white blood cells) out of the circulatory system (extravasation) and towards the site of tissue damage or infection. This process forms part of the innate immune response, involving the recruitment of non-specific leukocytes. Monocytes also use this process in the absence of infection or tissue damage during their development into macrophages. Overview Leukocyte extravasation occurs mainly in post-capillary venules, where haemodynamic shear forces are minimised. This process can be understood in several steps: Chemoattraction Rolling adhesion Tight adhesion (Endothelial) Transmigration It has been demonstrated that leukocyte recruitment is halted whenever any of these steps is suppressed. White blood cells (leukocytes) perform most of their functions in tissues. Functions include phagocytosis of foreign particles, production of antibodies, secretion of inflammatory response triggers (histamine and heparin), and neutralization of histamine. In general, leukocytes are involved in the defense of an organism and protect it from disease by promoting or inhibiting inflammatory responses. Leukocytes use the blood as a transport medium to reach the tissues of the body. Here is a brief summary of each of the four steps currently thought to be involved in leukocyte extravasation: Chemoattraction Upon recognition of and activation by pathogens, resident macrophages in the affected tissue release cytokines such as IL-1, TNFα and chemokines. IL-1, TNFα and C5a cause the endothelial cells of blood vessels near the site of infection to express cellular adhesion molecules, including selectins. Circulating leukocytes are localised towards the site of injury or infection due to the presence of chemokines. Rolling adhesion Like velcro, carbohydrate ligands on the circulating leukocytes bind to selectin molecules on the inner wall of the vessel, with marginal affinity. This causes the leukocytes to slow down and begin rolling along the inner surface of the vessel wall. During this rolling motion, transitory bonds are formed and broken between selectins and their ligands. For example, the carbohydrate ligand for P-selectin, P-selectin glycoprotein ligand-1 (PSGL-1), is expressed by different types of leukocytes (white blood cells). The binding of PSGL-1 on the leukocyte to P-selectin on the endothelial cell allows for the leukocyte to roll along the endothelial surface. This interaction can be tuned by the glycosylation pattern of PSGL-1, such that certain glycovariants of PSGL-1 will have unique affinities for different selectins, allowing in some cases for cells to migrate to specific sites within the body (e.g. the skin). Tight adhesion At the same time, chemokines released by macrophages activate the rolling leukocytes and cause surface integrin molecules to switch from the default low-affinity state to a high-affinity state. This is assisted through juxtacrine activation of integrins by chemokines and soluble factors released by endothelial cells. In the activated state, integrins bind tightly to complementary receptors expressed on endothelial cells, with high affinity. This causes the immobilization of the leukocytes, which varies in vessels that contain different shear forces of the ongoing blood flow. 
Transmigration The cytoskeletons of the leukocytes are reorganized in such a way that the leukocytes are spread out over the endothelial cells. In this form, leukocytes extend pseudopodia and pass through gaps between endothelial cells. This passage of cells through the intact vessel wall is called diapedesis. These gaps can form through interactions of the leukocytes with the endothelium, but also autonomously through endothelial mechanics. Transmigration of the leukocyte occurs as PECAM proteins, found on the leukocyte and endothelial cell surfaces, interact and effectively pull the cell through the endothelium. Once through the endothelium, the leukocyte must penetrate the basement membrane. The mechanism for penetration is disputed, but may involve proteolytic digestion of the membrane, mechanical force, or both. The entire process of blood vessel escape is known as diapedesis. Once in the interstitial fluid, leukocytes migrate along a chemotactic gradient towards the site of injury or infection. Molecular biology Introduction The phases of the leukocyte extravasation depicted in the schema are: approach, capture, rolling, activation, binding, strengthening of the binding and spreading, intravascular creeping, paracellular migration or transcellular migration. Selectins Selectins are expressed shortly after cytokine activation of endothelial cells by tissue macrophages. Activated endothelial cells initially express P-selectin molecules, but within two hours after activation E-selectin expression is favoured. Endothelial selectins bind carbohydrates on leukocyte transmembrane glycoproteins, including sialyl-LewisX. P-selectins: P-selectin is expressed on activated endothelial cells and platelets. Synthesis of P-selectin can be induced by thrombin, leukotriene B4, complement fragment C5a, histamine, TNFα or LPS. These cytokines induce the externalisation of Weibel-Palade bodies in endothelial cells, presenting pre-formed P-selectins on the endothelial cell surface. P-selectins bind PSGL-1 as a ligand. E-selectins: E-selectin is expressed on activated endothelial cells. Synthesis of E-selectin follows shortly after P-selectin synthesis, induced by cytokines such as IL-1 and TNFα. E-selectins bind PSGL-1 and ESL-1. L-selectins: L-selectins are constitutively expressed on some leukocytes, and are known to bind GlyCAM-1, MadCAM-1 and CD34 as ligands. Suppressed expression of some selectins results in a slower immune response. If L-selectin is not produced, the immune response may be ten times slower, as P-selectins (which can also be produced by leukocytes) bind to each other. P-selectins can bind each other with high affinity, but occur less frequently because the receptor-site density is lower than with the smaller E-selectin molecules. This increases the initial leukocyte rolling speed, prolonging the slow rolling phase. Integrins Integrins involved in cellular adhesion are primarily expressed on leukocytes. β2 integrins on rolling leukocytes bind endothelial cellular adhesion molecules, arresting cell movement. LFA-1 is found on circulating leukocytes, and binds ICAM-1 and ICAM-2 on endothelial cells Mac-1 is found on circulating leukocytes, and binds ICAM-1 on endothelial cells VLA-4 is found on leukocytes and endothelial cells, and facilitates chemotaxis; it also binds VCAM-1 Cellular activation via extracellular chemokines causes pre-formed β2 integrins to be released from cellular stores. Integrin molecules migrate to the cell surface and congregate in high-avidity patches. 
Intracellular integrin domains associate with the leukocyte cytoskeleton, mediated by cytosolic factors such as talin, α-actinin and vinculin. This association causes a conformational shift in the integrin's tertiary structure, allowing ligand access to the binding site. Divalent cations (e.g. Mg2+) are also required for integrin-ligand binding. Integrin ligands ICAM-1 and VCAM-1 are activated by inflammatory cytokines, while ICAM-2 is constitutively expressed by some endothelial cells but downregulated by inflammatory cytokines. ICAM-1 and ICAM-2 share two homologous N-terminal domains; both can bind LFA-1. During chemotaxis, cell movement is facilitated by the binding of β1 integrins to components of the extracellular matrix: VLA-3, VLA-4 and VLA-5 to fibronectin, and VLA-2 and VLA-3 to collagen and other extracellular matrix components. Cytokines Extravasation is regulated by the background cytokine environment produced by the inflammatory response, and is independent of specific cellular antigens. Cytokines released in the initial immune response induce vasodilation and lower the electrical charge along the vessel's surface. Blood flow is slowed, facilitating intermolecular binding. IL-1 activates resident lymphocytes and vascular endothelia TNFα increases vascular permeability and activates vascular endothelia CXCL8 (IL-8) forms a chemotactic gradient that directs leukocytes towards site of tissue injury/infection (CCL2 has a similar function to CXCL8, inducing monocyte extravasation and development into macrophages); also activates leukocyte integrins Recent advances In 1976, SEM images showed that there were homing receptors on microvilli-like tips on leukocytes that would allow white blood cells to exit the blood vessel and enter tissue. Since the 1990s, the identity of the ligands involved in leukocyte extravasation has been studied heavily. This topic could finally be studied thoroughly under physiological shear stress conditions using a typical flow chamber. Since the first experiments, a strange phenomenon was observed: binding interactions between the white blood cells and the vessel walls were seen to become stronger under higher force. Selectins (E-selectin, L-selectin, and P-selectin) were found to be involved in this phenomenon. The shear threshold requirement seems counterintuitive, because increasing shear elevates the force applied to adhesive bonds, and it would seem that this should increase the dislodging ability. Nevertheless, cells roll more slowly and more regularly until an optimal shear is reached where rolling velocity is minimal. This paradoxical phenomenon has not been satisfactorily explained despite widespread interest. One initially dismissed hypothesis that has been gaining interest is the catch bond hypothesis, in which the increased force on the cell slows off-rates, lengthening the bond lifetimes and stabilizing the rolling step of leukocyte extravasation. Flow-enhanced cell adhesion is still an unexplained phenomenon that could result from a transport-dependent increase in on-rates or a force-dependent decrease in off-rates of adhesive bonds. L-selectin requires a particular minimum of shear to sustain leukocyte rolling on P-selectin glycoprotein ligand-1 (PSGL-1) and other vascular ligands. It has been hypothesized that low forces decrease L-selectin–PSGL-1 off-rates (catch bonds), whereas higher forces increase off-rates (slip bonds).
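The catch-bond behaviour described above is often summarized with a two-pathway model for the force-dependent off-rate. The following formula is a standard phenomenological sketch; the parameter names are generic and are not taken from the studies discussed here.

```latex
% Two-pathway catch-slip model (generic parameters k_c^0, x_c, k_s^0, x_s):
k_{\mathrm{off}}(F) \;=\; k_c^{0}\, e^{-F x_c / k_B T} \;+\; k_s^{0}\, e^{\,F x_s / k_B T}
```

The first (catch) term decreases with the applied force F while the second (slip) term grows, so the bond lifetime 1/k_off is maximal at an intermediate force, matching the observation that rolling velocity is minimal at an optimal shear.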
Experiments have found that a force-dependent decrease in off-rates dictated flow-enhanced rolling of L-selectin–bearing microspheres or neutrophils on PSGL-1. Catch bonds enable increasing force to convert short bond lifetimes into long bond lifetimes, which decreases rolling velocities and increases the regularity of rolling steps as shear rises from the threshold to an optimal value. As shear increases further, transitions to slip bonds shorten bond lifetimes, increasing rolling velocities and decreasing rolling regularity. It is hypothesized that force-dependent alterations of bond lifetimes govern L-selectin–dependent cell adhesion below and above the shear optimum. These findings establish a biological function for catch bonds as a mechanism for flow-enhanced cell adhesion. While leukocytes seem to undergo catch bond behavior with increasing flow leading to the tethering and rolling steps in leukocyte extravasation, firm adhesion is achieved through another mechanism, integrin activation. Another biological example of a catch bond mechanism is seen in bacteria that cling tightly to urinary tract walls, by means of the adhesive tips of their fimbriae, in response to the high fluid velocities and large shear forces exerted on the cells. Schematic mechanisms of how increased shear force is proposed to cause stronger binding interactions between bacteria and target cells show that the catch bond acts much like a Chinese finger trap. For a catch bond, the force on the cell pulls the adhesive tip of a fimbria to close tighter on its target cell: the greater the force, the stronger the bond between the fimbria and the cell receptor on the surface of the target cell. For a cryptic bond, the force causes the fimbria to swivel toward the target cell, exposing more binding sites able to attach to the target cell ligands, mainly sugar molecules. This creates a stronger bonding interaction between the bacteria and the target cell. Advent of microfluidic devices Parallel plate flow chambers are among the most popular flow chambers used to study the leukocyte-endothelial interaction in vitro. They have been used for investigation since the late 1980s. Although flow chambers have been an important tool for studying leukocyte rolling, there are several limitations when it comes to studying physiological in vivo conditions, as they lack correspondence with in vivo geometry, including scale/aspect ratio (microvasculature vs large vessel models) and flow conditions (e.g. converging vs diverging flows at bifurcations), and they require large reagent volumes (~ml) due to their large size (height > 250 μm and width > 1 mm). With the advent of microfluidic-based devices, these limitations have been overcome. A new in vitro model, called the SynVivo Synthetic microvascular network (SMN), was produced by the CFD Research Corporation (CFDRC) and developed using the polydimethylsiloxane (PDMS)-based soft-lithography process. The SMN can recreate the complex in vivo vasculature, including geometrical features, flow conditions, and reagent volumes, thereby providing a biologically realistic environment for studying cellular behavior during extravasation, as well as for drug delivery and drug discovery. Leukocyte adhesion deficiency Leukocyte adhesion deficiency (LAD) is a genetic disease associated with a defect in the leukocyte extravasation process, caused by a defective integrin β2 chain (found in LFA-1 and Mac-1). This impairs the ability of the leukocytes to stop and undergo diapedesis.
People with LAD suffer from recurrent bacterial infections and impaired wound healing. Neutrophilia is a hallmark of LAD. Neutrophil dysfunction In widespread diseases such as sepsis, leukocyte extravasation enters an uncontrolled stage, in which neutrophils begin destroying host tissues at unprecedented rates, claiming the lives of about 200,000 people in the United States alone. Neutrophil dysfunction is usually preceded by an infection of some sort, which triggers pathogen-associated molecular pattern (PAMP) signaling. As leukocyte extravasation intensifies, more tissues are damaged by neutrophils, which release oxygen radicals and proteases. Recent studies with the SynVivo Synthetic microvascular network (SMN) have made it possible to study anti-inflammatory therapeutics for treating pathologies caused by neutrophil dysfunction. The SMN enables thorough analysis of each stage of leukocyte extravasation, thereby providing a methodology to quantify the effect of a drug in impeding leukocyte extravasation. Some of the recent findings demonstrate the effect of hydrodynamics on neutrophil-endothelial interactions. In other words, adhesion of neutrophils is heavily impacted by shear forces as well as molecular interactions. Moreover, as the shear rate decreases (e.g., in post-capillary venules), immobilization of the leukocytes becomes easier and thus more prevalent. The opposite is also true: in vessels where shear forces are high, immobilization of the leukocytes is more difficult. This has significant implications in various diseases, where disruptions in blood flow gravely impact the immune response by impeding or expediting the immobilization of the leukocytes. Having this knowledge allows for better studies of the effect of drugs on leukocyte extravasation. Footnotes References Hematology Immune system
Leukocyte extravasation
[ "Biology" ]
3,610
[ "Immune system", "Organ systems" ]
11,116,471
https://en.wikipedia.org/wiki/Rolls-Royce%20Pennine
The Rolls-Royce Pennine was a British 46-litre air-cooled sleeve valve engine with 24 cylinders arranged in an X formation. It was an enlarged version of the 22-litre Exe; a prototype engine was built and tested, but never flew. The project was terminated in 1945, being superseded by the jet engine. A 100-litre 5,000 hp X32 (twin-X16) version of the Exe/Pennine, originally known as the Exe 100, was to have become the Rolls-Royce Snowdon. Rolls-Royce air-cooled engines, intended for commercial transport aeroplane use, were named after British mountains, e.g. the Pennines and Snowdon. Specifications (Pennine) See also References Notes Bibliography Gunston, Bill. World Encyclopedia of Aero Engines. Cambridge, England: Patrick Stephens Limited, 1989. Rubbra, A.A. Rolls-Royce Piston Aero Engines - a designer remembers. Historical Series no 16. Rolls-Royce Heritage Trust, 1990. Pennine 1940s aircraft piston engines Sleeve valve engines X engines
Rolls-Royce Pennine
[ "Technology" ]
218
[ "Sleeve valve engines", "Engines" ]
11,118,768
https://en.wikipedia.org/wiki/Parabolic%20induction
In mathematics, parabolic induction is a method of constructing representations of a reductive group from representations of its parabolic subgroups. If G is a reductive algebraic group and $P = MAN$ is the Langlands decomposition of a parabolic subgroup P, then parabolic induction consists of taking a representation of $MA$, extending it to P by letting N act trivially, and inducing the result from P to G. There are some generalizations of parabolic induction using cohomology, such as cohomological parabolic induction and Deligne–Lusztig theory. Philosophy of cusp forms The philosophy of cusp forms was a slogan of Harish-Chandra, expressing his idea of a kind of reverse engineering of automorphic form theory, from the point of view of representation theory. The discrete group Γ fundamental to the classical theory disappears, superficially. What remains is the basic idea that representations in general are to be constructed by parabolic induction of cuspidal representations. A similar philosophy was enunciated by Israel Gelfand, and the philosophy is a precursor of the Langlands program. A consequence for thinking about representation theory is that cuspidal representations are the fundamental class of objects, from which other representations may be constructed by procedures of induction. According to Nolan Wallach Put in the simplest terms the "philosophy of cusp forms" says that for each Γ-conjugacy classes of Q-rational parabolic subgroups one should construct automorphic functions (from objects from spaces of lower dimensions) whose constant terms are zero for other conjugacy classes and the constant terms for [an] element of the given class give all constant terms for this parabolic subgroup. This is almost possible and leads to a description of all automorphic forms in terms of these constructs and cusp forms. The construction that does this is the Eisenstein series. Notes References A. W. Knapp, Representation Theory of Semisimple Groups: An Overview Based on Examples, Princeton Landmarks in Mathematics, Princeton University Press, 2001. . Representation theory
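Concretely, in the real case the construction is usually written as normalized induction. The following is a sketch in the standard notation, with $\delta_P$ the modular function of P; the normalization by $\delta_P^{1/2}$ ensures that unitary data $(\sigma, \nu)$ induce to unitary representations of G.

```latex
% Normalized parabolic induction for a real reductive group, P = MAN
% (standard notation; delta_P is the modular function of P):
\operatorname{Ind}_P^G(\sigma \otimes \nu)
  \;=\; \bigl\{\, f\colon G \to V_\sigma \;\bigm|\;
        f(man\,x) \;=\; \delta_P(ma)^{1/2}\,\nu(a)\,\sigma(m)\,f(x) \,\bigr\},
\qquad (g \cdot f)(x) \;=\; f(xg).
```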
Parabolic induction
[ "Mathematics" ]
416
[ "Representation theory", "Fields of abstract algebra" ]
11,118,957
https://en.wikipedia.org/wiki/Complementary%20series%20representation
In mathematics, complementary series representations of reductive real or p-adic Lie groups are certain irreducible unitary representations that are not tempered and do not appear in the decomposition of the regular representation into irreducible representations. They are rather mysterious: they do not turn up very often, and seem to exist by accident. They were sometimes overlooked, in fact, in some earlier claims to have classified the irreducible unitary representations of certain groups. Several conjectures in mathematics, such as the Selberg conjecture, are equivalent to saying that certain representations are not complementary. For examples see the representation theory of SL2(R). Elias M. Stein (1972) constructed some families of them for higher rank groups using analytic continuation, sometimes called the Stein complementary series. References Representation theory of groups
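For concreteness, here is a sketch of the standard realization in the simplest case, $G = \mathrm{SL}(2,\mathbb{R})$: the complementary series with parameter $0 < s < 1$ acts on functions on the real line, made unitary by the inner product below, which is positive-definite precisely for $0 < s < 1$; outside this range no unitary structure exists, reflecting how narrow the window for complementary series is.

```latex
% Complementary series of SL(2,R), parameter 0 < s < 1 (standard realization):
\langle f, g \rangle_s \;=\; \iint_{\mathbb{R}^2}
    \frac{f(x)\,\overline{g(y)}}{|x-y|^{1-s}}\; dx\, dy .
```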
Complementary series representation
[ "Mathematics" ]
167
[ "Algebra stubs", "Algebra" ]
11,118,994
https://en.wikipedia.org/wiki/Piecewise%20linear%20continuation
Simplicial continuation, or piecewise linear continuation (Allgower and Georg), is a one-parameter continuation method which is well suited to small to medium embedding spaces. The algorithm has been generalized to compute higher-dimensional manifolds by (Allgower and Gnutzman) and (Allgower and Schmidt). The algorithm for drawing contours is a simplicial continuation algorithm, and since it is easy to visualize, it serves as a good introduction to the algorithm. Contour plotting The contour plotting problem is to find the zeros (contours) of a smooth scalar-valued function $f(x,y)$ on a square region of the plane. The square is divided into small triangles, usually by introducing points at the corners of a regular square mesh $(x_i, y_j)$, making a table of the values of $f$ at each corner $(x_i, y_j)$, and then dividing each square into two triangles. The value of $f$ at the corners of a triangle defines a unique piecewise linear interpolant to $f$ over each triangle. One way of writing this interpolant on the triangle with corners $(x_0,y_0)$, $(x_1,y_1)$, $(x_2,y_2)$ is as the set of equations $x = x_0 + (x_1-x_0)s + (x_2-x_0)t$, $y = y_0 + (y_1-y_0)s + (y_2-y_0)t$, $0 \le s$, $0 \le t$, $s+t \le 1$, $f = f_0 + (f_1-f_0)s + (f_2-f_0)t$. The first two equations can be solved for $(s,t)$ (this maps the original triangle to a right unit triangle), then the remaining equation gives the interpolated value of $f$. Over the whole mesh of triangles, this piecewise linear interpolant is continuous. The contour of the interpolant on an individual triangle is a line segment (it is an interval on the intersection of two planes). The equation for the line can be found; however, the points where the line crosses the edges of the triangle are the endpoints of the line segment. The contour of the linear interpolant over a triangle The contour of the piecewise linear interpolant is a set of curves made up of these line segments. Any point on the edge connecting $(x_0,y_0)$ and $(x_1,y_1)$ can be written as $(x,y) = (1-t)(x_0,y_0) + t(x_1,y_1)$ with $t$ in $[0,1]$, and the linear interpolant over the edge is $f = (1-t)f_0 + t f_1$. So setting $f = 0$ gives $t = f_0/(f_0-f_1)$, and the corresponding point on the edge is an endpoint of the contour segment. Since this only depends on values on the edge, every triangle which shares this edge will produce the same point, so the contour will be continuous. Each triangle can be tested independently, and if all are checked the entire set of contour curves can be found. Piecewise linear continuation Piecewise linear continuation is similar to contour plotting (Dobkin, Levy, Thurston and Wilks), but in higher dimensions. The algorithm is based on the following results: Lemma 1 An (n-1)-dimensional simplex has n vertices, and the function F assigns an n-vector to each. The simplex is convex, and any point within the simplex is a convex combination of the vertices. That is: if x is in the interior of an (n-1)-dimensional simplex with n vertices $v_1, \dots, v_n$, then there are positive scalars $\lambda_1, \dots, \lambda_n$ summing to one such that $x = \lambda_1 v_1 + \cdots + \lambda_n v_n$. If the vertices of the simplex are linearly independent the non-negative scalars $\lambda_i$ are unique for each point x, and are called the barycentric coordinates of x. They determine the value of the unique interpolant by the formula: $F(x) = \lambda_1 F(v_1) + \cdots + \lambda_n F(v_n)$. Lemma 2 There are basically two tests. The one which was first used labels the vertices of the simplex with a vector of signs (+/-) of the coordinates of the vertex. For example the vertex (.5,-.2,1.) would be labelled (+,-,+). A simplex is called completely labelled if there is a vertex whose label begins with a string of "+" signs of length 0,1,2,3,4,...n. A completely labelled simplex contains a neighborhood of the origin. This may be surprising, but what underlies this result is that for each coordinate of a completely labelled simplex there is a vector with "+" and another with a "-".
Put another way, the smallest cube with edges parallel to the coordinate axes which covers the simplex has pairs of faces on opposite sides of 0 (i.e. a "+" and a "-" for each coordinate). The second approach is called vector labelling. It is based on the barycentric coordinates of the vertices of the simplex. The first step is to find the barycentric coordinates of the origin; the test that the simplex contains the origin is then simply that all the barycentric coordinates are positive and the sum is less than 1. Lemma 3 References Numerical analysis
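The per-triangle contour computation described in the contour plotting section above is small enough to sketch in code. This is an illustrative implementation (the function name and sample values are invented): for each edge with a sign change, it computes the crossing parameter t = f0/(f0 - f1) and the corresponding endpoint, so adjacent triangles sharing an edge produce the same point.

```python
# Sketch: contour segment of the linear interpolant on one triangle.
def contour_segment(corners, values):
    """corners: three (x, y) pairs; values: f at each corner."""
    points = []
    for i in range(3):
        j = (i + 1) % 3
        f0, f1 = values[i], values[j]
        if (f0 < 0) != (f1 < 0):          # sign change: the contour crosses this edge
            t = f0 / (f0 - f1)            # zero of the linear interpolant on the edge
            x0, y0 = corners[i]
            x1, y1 = corners[j]
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    # A linear interpolant's zero set meets a (generic) triangle in 0 or 2 edge points.
    return points if len(points) == 2 else None

# Example on the unit right triangle with corner values -1, 0.5, 0.5:
print(contour_segment([(0, 0), (1, 0), (0, 1)], [-1.0, 0.5, 0.5]))
```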
Piecewise linear continuation
[ "Mathematics" ]
910
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
11,119,390
https://en.wikipedia.org/wiki/Ford%20Sync
Ford Sync (stylized Ford SYNC) is a factory-installed, integrated in-vehicle communications and entertainment system that allows users to make hands-free telephone calls, control music and perform other functions with the use of voice commands. The system consists of applications and user interfaces developed by Ford and other third-party developers. The first two generations (Ford Sync and MyFord Touch) run on the Windows Embedded Automotive operating system designed by Microsoft, while the third and fourth generations (Sync 3 and Sync 4/4a) run on the QNX operating system from BlackBerry Limited. Future versions will run on the Android operating system from Google. Ford first announced the release of SYNC in January 2007 at the North American International Auto Show in Detroit. SYNC was released into the retail market in 2007 when Ford installed the technology in twelve Ford group vehicles (2008 model year) in North America. Overview Ford president and CEO Alan Mulally and Microsoft chairman Bill Gates announced the SYNC partnership between Ford and Microsoft at the annual North American International Auto Show in January 2007. The Ford SYNC technology was promoted as a new product that provided drivers with the ability to operate Bluetooth-enabled mobile phones and digital media players in their vehicles using voice commands, the vehicle's steering wheel, and radio controls. Later, new technology was added to SYNC in which text messages received by the driver are "vocalized" by a digitized female voice device named "Samantha". SYNC's text message function also has the ability to interpret approximately one hundred shorthand messages, such as "LOL", and will read "swear words", but does not decipher acronyms that have been considered by the designers to be "obscene". In 2007, as a standalone option, the suggested retail price for SYNC was US$395. Compatibility Certain voice commands, such as "Turn-by-turn directions", "Vehicle Health Report", "Weather" and climate control commands, are not available in some countries such as Canada due to compatibility issues. For example, many commands are not available because there is no French equivalent for a command in English. Ford Canada expects to address these issues in upcoming versions of the software, but there does not appear to be a firm release date. Mobile-integration SYNC has various mobile-integration capabilities, including "Push to Talk" on the steering wheel, wireless transfer of contacts between a mobile phone and the on-board phone book, as well as various advanced calling features, such as caller ID, call waiting, conference calling, a caller log, a list of contacts, a signal strength icon, and a phone battery charge icon. Personal ring tones can also be assigned to identify specific callers. Audible SMS messages SYNC can convert a user's SMS messages to audio and read them out loud to the user through the vehicle's speaker system. This feature is carrier dependent as well as dependent on the device of the user. The feature is supported by several phone operating systems, including the iPhone, most Android models, and Windows Mobile. This feature is also dependent on the phone supporting the Bluetooth Message Access Profile. Entertainment Digital music player support SYNC can connect to popular digital music players via Bluetooth or a USB connection. Users can browse through music collections by genre, album, artist, and song title using voice commands.
With certain devices, SYNC is also capable of playing protected content (for example Zune Pass downloads), provided that usage rights on the device are current. Applications 911 Assist / Emergency Assistance The 911 Assist application places a direct call to a local 911 emergency operator in the event of a serious accident with an airbag deployment. Before initiating the emergency 911 call, SYNC will provide a 10-second window to allow the driver or passenger to decide whether to cancel the call. If not manually cancelled within the 10-second window, SYNC will place the emergency call. A pre-recorded message will play when the call is answered, and occupants in the vehicle will then be able to communicate directly with the 911 operator. In Europe, this feature is called Emergency Assistance. It will call 112 in over 40 countries, though it does not work in Albania, Belarus, Bosnia & Herzegovina, North Macedonia, Moldova, the Netherlands, Russia and Ukraine, and in Belgium there is a chance that the 112 emergency center cannot process the GPS coordinates, since the system is not compatible with the European eCall standard. AppLink AppLink allows iPhone and Android-based cellular devices to run approved applications using the car's buttons or voice commands. The first set of announced applications for the U.S. included Pandora Radio, Stitcher Radio, iHeartRadio, OpenBeak, NPR News, Slacker Radio, TuneIn Radio and Ford SYNC Destinations. Rhapsody announced AppLink capability of its Android-based mobile app in January 2013. Spotify was made available to iPhone users in March 2013 and later to Android users too, but was discontinued in January 2018. Applications for the U.K. market (as of September 2019) are Glympse (real-time location sharing), Waze (navigation), Sygic (navigation), Radioplayer, EventSeeker, CitySeeker, HearMeOut, AccuWeather and Acast. An application for the Spanish market (as of June 2021) is the Ayuntamiento de Alcobendas City App, which sends curated notifications from the city authorities to nearby drivers about street conditions and driver-safety issues such as accidents, street closures, diversions, social traffic events and more. Traffic, Directions and Information Traffic, Directions and Information is an application that provides the user with traffic alerts, turn-by-turn directions and information about topics such as weather, sports, news and 411 business search. Ford announced on May 27, 2009, that the Traffic, Directions and Information application would be free for three years to the original owner of 2010 model year SYNC-equipped vehicles. The information for traffic alerts and turn-by-turn directions is provided by INRIX and Telenav. Vehicle Health Reports Ford has discontinued support for the Vehicle Health Report. According to the published service bulletin, "Ford has made the necessary decision to discontinue the Vehicle Health Report service for SYNC GEN1 & GEN2 available on 2008-2016 vehicles. Consistent with the Terms & Conditions in the user agreement and starting August 1st, 2018, this change will result in the following: Vehicle Health Information will no longer be sent through the mobile phone associated with the registered e-mail account Vehicle Health Reports will no longer be sent via e-mail Vehicle Health Reports will no longer be available on the Ford/Lincoln Owner websites" Ford Work Solutions Ford Work Solutions is a collection of technologies that debuted in April 2009.
Ford Work Solutions is marketed toward professionals who buy the Ford F150, F-Series Super Duty, E-Series van and Transit Connect. Magneti Marelli developed the in-dash computer system that is unique to trucks equipped with Ford Work Solutions. The applications included in the Ford Work Solution are Crew Chief, Garmin Nav, Mobile Office and Tool Link. Crew Chief The Crew Chief application provides real-time vehicle location and maintenance tracking. Crew Chief can monitor numerous vehicle diagnostic functions including tire pressure, water in fuel, airbag faults and the check engine light. Users can also create alerts to monitor things such as excessive speeding. Garmin Navigation The Garmin Navigation application provides capabilities including destination routing and locating points of interest. LogMeIn The LogMeIn application allows users to remotely access an office computer using a data connection provided by Sprint. The user can open applications on the remote computer, make updates and print documents using a Ford-certified, Bluetooth-enabled keyboard and printer. Tool Link Tool Link is an application that enables a user to take physical inventory of objects present in the truck bed using radio-frequency identification (RFID) tags. A user attaches RFID tags to an object, allowing the SYNC system to detect the object's presence or absence and noting the object's status on the in-dash computer display. Users can create "job lists" of objects to verify that tools needed for a certain job are present in the truck before heading to a job site. At the end of the job, the system can inventory items in the truck to ensure that no tools are left on the job site. Ford developed the Tool Link application with power tool manufacturer DeWalt along with ThingMagic. Agreement with Microsoft Ford had exclusive use of the Microsoft Auto embedded operating system that powered the early versions of SYNC until the exclusivity agreement expired in November 2008. The Ford-developed user interface elements and Ford-developed applications remain exclusive to Ford group vehicles and are not available to other manufacturers using Windows Embedded Automotive for the basis of their in-vehicle infotainment systems. SYNC versions The original SYNC system (before the introduction of MyFord Touch) is now known as "SYNC Gen1", while the new MyFord Touch and MyLincoln Touch systems are known as "Gen2". SYNC Gen1, Sept. 2007-Nov. 2012 SYNC v1, which debuted September 2007, offered the ability to play certain entertainment media, the ability to connect to certain mobile phones and digital audio players and to utilize SMS. In January 2008, SYNC v2 was released, which enabled two new Ford developed applications: 911 Assist and Vehicle Health Report. SYNC v3, released in April 2009, enabled the Traffic, Directions and Information application. Later that month, Ford Work Solutions, a collection of five applications marketed towards professionals who buy Ford trucks, was added. The applications included in the Ford Work Solution were Crew Chief, Garmin Nav, LogMeIn and Tool Link. SYNC v4 and v5 were released in January 2010 and January 2011, respectively, and enabled the Ford-developed MyFord Touch application for certain 2011 model year vehicles as well as SYNC AppLink capabilities for certain 2011 model year vehicles. The latest version of SYNC was released in November 2012 by Ford and is only applicable to certain vehicles and configurations. 
Ford has extended the warranty for Sync on several 2011 to 2014 models to five years as a customer satisfaction matter. (Field Service Action Number: 12M02) MyFord Touch Sync 3 On December 11, 2014, Ford announced Sync 3, which replaced MyFord Touch, has simpler features and is powered by QNX software from BlackBerry Limited instead of Microsoft. The Sync 3 name is used for both Ford and Lincoln models, though Lincolns have a different theme. Over half of Ford's North American vehicles were planned to have Sync 3 by the end of 2015, with global expansion to follow; vehicles not equipped with Sync 3 were equipped with the original Ford Sync. Ford cited issues with Microsoft's complex software dragging down its scores with Consumer Reports and other consumer magazines as a reason it switched to the BlackBerry QNX operating system. Sync 4 and 4a On October 30, 2019, Ford announced Sync 4 and 4a, the next version of their infotainment platform. The new platform was initially announced with the unveiling of the Ford Mustang Mach-E on November 18, 2019. The first production vehicle and further details of Sync 4 were released at the unveiling of the 2021 model year Ford Mustang Mach-E. Sync 4 comes with an 8 or 12 inch horizontally oriented main display, while Sync 4a comes with a 12 or 15.5 inch vertically oriented main display. Ford Power-Up software updates deliver continuous vehicle enhancements via over-the-air updates. System hardware The SYNC v1 computer, which Ford calls the Accessory Protocol Interface Module (APIM), is housed separately from the head unit, called the Audio Control Module (ACM), and interfaces with all vehicle audio sources as well as the high-speed and medium-speed vehicle CAN-buses. The first generation of Ford's SYNC computer was designed in cooperation with Continental AG and is built around a 400 MHz Freescale i.MX31L processor with an ARM11 CPU core, uses 256 MB of 133 MHz Mobile DDR SDRAM from Micron and 2 GB of Samsung NAND flash memory, runs the Windows Embedded Automotive operating system, and uses speech technology by Nuance Communications. Utilizing the USB port, SYNC's Microsoft Windows Auto-based operating system can be updated to work with new personal electronic devices. A Cambridge Silicon Radio (CSR) BlueCore4 chip provides Bluetooth connectivity with compatible phones and devices. SYNC's major circuit board chips cost roughly US$27.80, which allows Ford to profitably sell the system at a much lower price than competitive offerings. SYNC 3 hardware is a TI OMAP5432 CPU, using ARM Cortex-A15 cores. SYNC 4 hardware is an NXP i.MX 8 Series, using ARM Cortex-A53 cores. Research In 2011, Shutko and Tijerina reviewed large naturalistic studies on cars (Dingus and Klauer, 2008; Klauer et al., 2006; Young and Schreiner, 2009), heavy goods vehicles (Olsen et al., 2008) and commercial vehicles and buses (Hickman et al., 2010) in field operational tests (Sayer et al., 2005, 2007). They concluded that: Driver inattention is a contributory factor in most collisions/near-misses; Visual inattention – looking away from the road – is the single most significant type of inattention; Distraction from listening to or talking on a handheld/hands-free device is less of a contributory factor in collisions/near-misses than is generally thought; indeed, these distractions may enhance safety in some circumstances. Awards and recognition Popular Mechanics ranked SYNC number four on its list of the "Top 10 Most Brilliant Gadgets of 2007".
Popular Science magazine awarded SYNC a "Best of What's New Award" for 2008 in November 2007. RESCU Both Ford Motor Company and General Motors announced then-advanced services in 1996 for their top-of-the-line automobiles that provided GPS-assisted vehicle security and wireless communication. Ford delivered its RESCU (Remote Emergency Satellite Cellular Unit) service on the 1996 Lincoln Continental before GM delivered its OnStar, "a similar system", on some model-year 1997 vehicles. In less than five years, a book said that "potential competitors for OnStar are lagging" and that "Ford's RESCU has fizzled." A 2018 look-back at 1997 described Ford's RESCU as "long gone" and added that "Ford now has SYNC, which is a much more robust and flexible system." See also References Bibliography Automotive technology tradenames Computer-related introductions in 2007 Ford Motor Company Human–computer interaction In-car entertainment Vehicle telematics Windows Embedded Automotive devices
Ford Sync
[ "Engineering" ]
3,002
[ "Human–computer interaction", "Human–machine interaction" ]
11,120,026
https://en.wikipedia.org/wiki/Sum-free%20set
In additive combinatorics and number theory, a subset A of an abelian group G is said to be sum-free if the sumset A + A is disjoint from A. In other words, A is sum-free if the equation $a + b = c$ has no solution with $a, b, c \in A$. For example, the set of odd numbers is a sum-free subset of the integers, and the set {N + 1, ..., 2N} forms a large sum-free subset of the set {1, ..., 2N}. Fermat's Last Theorem is the statement that, for a given integer n > 2, the set of all nonzero nth powers of the integers is a sum-free set. Some basic questions that have been asked about sum-free sets are: How many sum-free subsets of {1, ..., N} are there, for an integer N? Ben Green has shown that the answer is $O(2^{N/2})$, as predicted by the Cameron–Erdős conjecture. How many sum-free sets does an abelian group G contain? What is the size of the largest sum-free set that an abelian group G contains? A sum-free set is said to be maximal if it is not a proper subset of another sum-free set. Let $f(n)$ be defined as the largest number $k$ such that any set of $n$ nonzero integers has a sum-free subset of size $k$. The function $f$ is subadditive, and by the Fekete subadditivity lemma, $\lim_{n\to\infty} f(n)/n$ exists. Erdős proved that $f(n) \ge n/3$, and conjectured that equality holds in the limit. This was proved by Eberhard, Green, and Manners. See also Erdős–Szemerédi theorem Sum-free sequence References Sumsets Additive combinatorics
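Erdős's lower bound mentioned above has a short probabilistic proof; the following is a sketch in the standard formulation (the choice of prime p = 3k + 2 is the usual one).

```latex
% Sketch: every set A = {a_1, ..., a_n} of nonzero integers has a
% sum-free subset of size greater than n/3.
\text{Choose a prime } p = 3k+2 > 2\max_i |a_i|, \qquad
T = \{k+1, \dots, 2k+1\} \subset \mathbb{Z}/p\mathbb{Z},
\text{ which is sum-free mod } p.
\text{For } t \text{ uniform on } \{1, \dots, p-1\}:\quad
\mathbb{E}\,\bigl|\{\, i : t a_i \bmod p \in T \,\}\bigr|
   \;=\; n \cdot \tfrac{k+1}{3k+1} \;>\; \tfrac{n}{3},
\text{so some } t \text{ yields a sum-free subset }
A_t = \{\, a_i : t a_i \bmod p \in T \,\}, \quad |A_t| > n/3 .
```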
Sum-free set
[ "Mathematics" ]
375
[ "Additive combinatorics", "Sumsets", "Combinatorics" ]
11,120,053
https://en.wikipedia.org/wiki/Weyl%E2%80%93Brauer%20matrices
In mathematics, particularly in the theory of spinors, the Weyl–Brauer matrices are an explicit realization of a Clifford algebra as a matrix algebra of $2^{\lfloor n/2 \rfloor} \times 2^{\lfloor n/2 \rfloor}$ matrices. They generalize the Pauli matrices to $n$ dimensions, and are a specific construction of higher-dimensional gamma matrices. They are named for Richard Brauer and Hermann Weyl, and were one of the earliest systematic constructions of spinors from a representation theoretic standpoint. The matrices are formed by taking tensor products of the Pauli matrices, and the space of spinors in $n$ dimensions may then be realized as the column vectors of size $2^{\lfloor n/2 \rfloor}$ on which the Weyl–Brauer matrices act. Construction Suppose that V = Rn is a Euclidean space of dimension n. There is a sharp contrast in the construction of the Weyl–Brauer matrices depending on whether the dimension n is even or odd. Let $n = 2k$ (or $2k+1$) and suppose that the Euclidean quadratic form on V is given by $p_1^2 + q_1^2 + \cdots + p_k^2 + q_k^2$ (together with an extra square in the odd case), where (pi, qi) are the standard coordinates on Rn. Define matrices $\mathbf{1}$, $\mathbf{1}'$, $P$, and $Q$ by $\mathbf{1} = \begin{pmatrix}1&0\\0&1\end{pmatrix}$, $\mathbf{1}' = \begin{pmatrix}1&0\\0&-1\end{pmatrix}$, $P = \begin{pmatrix}0&1\\1&0\end{pmatrix}$, $Q = \begin{pmatrix}0&-i\\i&0\end{pmatrix}$. In even or in odd dimensionality, this quantization procedure amounts to replacing the ordinary p, q coordinates with non-commutative coordinates constructed from P, Q in a suitable fashion. Even case In the case when n = 2k is even, let $P_i = \mathbf{1}' \otimes \cdots \otimes \mathbf{1}' \otimes P \otimes \mathbf{1} \otimes \cdots \otimes \mathbf{1}$ and $Q_i = \mathbf{1}' \otimes \cdots \otimes \mathbf{1}' \otimes Q \otimes \mathbf{1} \otimes \cdots \otimes \mathbf{1}$ for i = 1,2,...,k (where the P or Q is considered to occupy the i-th position, with i-1 factors of $\mathbf{1}'$ before it and k-i factors of $\mathbf{1}$ after it). The operation $\otimes$ is the tensor product of matrices. It is no longer important to distinguish between the Ps and Qs, so we shall simply refer to them all with the symbol P, and regard the index on Pi as ranging from i = 1 to i = 2k. For instance, the following properties hold: $P_i^2 = 1$, and $P_i P_j = -P_j P_i$ for all unequal pairs i and j. (Clifford relations.) Thus the algebra generated by the Pi is the Clifford algebra of euclidean n-space. Let A denote the algebra generated by these matrices. By counting dimensions, A is a complete $2^k \times 2^k$ matrix algebra over the complex numbers. As a matrix algebra, therefore, it acts on $2^k$-dimensional column vectors (with complex entries). These column vectors are the spinors. We now turn to the action of the orthogonal group on the spinors. Consider the application of an orthogonal transformation R to the coordinates, which in turn acts upon the Pi via $P_i \mapsto R(P_i) = \sum_j R_{ij} P_j$. Since the Pi generate A, the action of this transformation extends to all of A and produces an automorphism of A. From elementary linear algebra, any such automorphism must be given by a change of basis. Hence there is a matrix S, depending on R, such that $R(P_i) = S(R)\, P_i\, S(R)^{-1}$ (1). In particular, S(R) will act on column vectors (spinors). By decomposing rotations into products of reflections, one can write down a formula for S(R) in much the same way as in the case of three dimensions. There is more than one matrix S(R) which produces the action in (1). The ambiguity defines S(R) up to a nonvanishing scalar factor c. Since S(R) and cS(R) define the same transformation (1), the action of the orthogonal group on spinors is not single-valued, but instead descends to an action on the projective space associated to the space of spinors. This multiple-valued action can be sharpened by normalizing the constant c in such a way that $(\det S(R))^2 = 1$. In order to do this, however, it is necessary to discuss how the space of spinors (column vectors) may be identified with its dual (row vectors). In order to identify spinors with their duals, one may take the matrix $C = P \otimes Q \otimes P \otimes Q \otimes \cdots$ (k alternating factors). Then conjugation by C converts a Pi matrix to its transpose: ${}^t P_i = C P_i C^{-1}$. This relation is preserved under the action of a rotation, whence $C\, S(R)\, C^{-1} = \alpha\, {}^t S(R)^{-1}$ for some scalar α.
The scalar factor α can be made to equal one by rescaling S(R). Under these circumstances, $(\det S(R))^2 = 1$, as required. In physics, the matrix C is conventionally interpreted as charge conjugation. Weyl spinors Let U be the element of the algebra A defined by $U = \mathbf{1}' \otimes \cdots \otimes \mathbf{1}'$ (k factors). Then U is preserved under rotations, so in particular its eigenspace decomposition (which necessarily corresponds to the eigenvalues +1 and -1, occurring in equal numbers) is also stabilized by rotations. As a consequence, each spinor admits a decomposition into eigenvectors under U: ξ = ξ+ + ξ−, into a right-handed Weyl spinor ξ+ and a left-handed Weyl spinor ξ−. Because rotations preserve the eigenspaces of U, the rotations themselves act diagonally as matrices S(R)+, S(R)− via (S(R)ξ)+ = S+(R) ξ+, and (S(R)ξ)− = S−(R) ξ−. This decomposition is not, however, stable under improper rotations (e.g., reflections in a hyperplane). A reflection in a hyperplane has the effect of interchanging the two eigenspaces. Thus there are two irreducible spin representations in even dimensions given by the left-handed and right-handed Weyl spinors, each of which has dimension $2^{k-1}$. However, there is only one irreducible pin representation (see below) owing to the non-invariance of the above eigenspace decomposition under improper rotations, and that has dimension $2^k$. Odd case In the quantization for an odd number 2k+1 of dimensions, the matrices Pi may be introduced as above for i = 1,2,...,2k, and the following matrix may be adjoined to the system: $P_{2k+1} = \mathbf{1}' \otimes \cdots \otimes \mathbf{1}'$ (k factors), so that the Clifford relations still hold. This adjunction has no effect on the algebra A of matrices generated by the Pi, since in either case A is still a complete matrix algebra of the same dimension. Thus A, which is a complete $2^k \times 2^k$ matrix algebra, is not the Clifford algebra, which is an algebra of dimension $2 \times 2^k \times 2^k$. Rather A is the quotient of the Clifford algebra by a certain ideal. Nevertheless, one can show that if R is a proper rotation (an orthogonal transformation of determinant one), then the rotation among the coordinates is again an automorphism of A, and so induces a change of basis exactly as in the even-dimensional case. The projective representation S(R) may again be normalized so that $(\det S(R))^2 = 1$. It may further be extended to general orthogonal transformations by setting S(R) = -S(-R) in case det R = -1 (i.e., if R is a reversal). In the case of odd dimensions it is not possible to split a spinor into a pair of Weyl spinors, and spinors form an irreducible representation of the spin group. As in the even case, it is possible to identify spinors with their duals, but with one caveat. The identification of the space of spinors with its dual space is invariant under proper rotations, and so the two spaces are spinorially equivalent. However, if improper rotations are also taken into consideration, then the spin space and its dual are not isomorphic. Thus, while there is only one spin representation in odd dimensions, there are a pair of inequivalent pin representations. This fact is not evident from Weyl's quantization approach, however, and is more easily seen by considering the representations of the full Clifford algebra. See also Higher-dimensional gamma matrices Clifford algebra Notes Spinors Matrices Clifford algebras
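The tensor-product construction described above is easy to check numerically. The following is a short illustrative sketch (not part of the article's references): it builds the 2k generators for even dimension n = 2k following the stated pattern and verifies the Clifford relations.

```python
# Sketch: Weyl-Brauer generators as Kronecker products of 2x2 blocks,
# with a numerical check of the Clifford relations.
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
I2p = np.diag([1.0, -1.0])                       # the matrix written 1' above
P = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli sigma_x
Q = np.array([[0, -1j], [1j, 0]], dtype=complex) # Pauli sigma_y

def kron_chain(mats):
    return reduce(np.kron, mats)

def generators(k):
    gens = []
    for i in range(k):
        prefix, suffix = [I2p] * i, [I2] * (k - i - 1)
        gens.append(kron_chain(prefix + [P] + suffix))   # P_i
        gens.append(kron_chain(prefix + [Q] + suffix))   # Q_i
    return gens

k = 3
gens = generators(k)                # 2k generators, each 2^k x 2^k
dim = 2 ** k
assert all(np.allclose(g @ g, np.eye(dim)) for g in gens)   # each squares to 1
assert all(np.allclose(a @ b + b @ a, 0)                    # distinct ones anticommute
           for a, b in combinations(gens, 2))
print("Clifford relations hold for n =", 2 * k, "in dimension", dim)
```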
Weyl–Brauer matrices
[ "Mathematics" ]
1,652
[ "Matrices (mathematics)", "Mathematical objects" ]
11,120,939
https://en.wikipedia.org/wiki/Concomitant%20drug
Concomitant drugs are two or more drugs used or given at or almost at the same time (one after the other, on the same day, etc.). The term has two contextual uses: as used in medicine or as used in drug abuse. Concomitant drugs in medicine This designation is used when medicinal drugs are given either at the same time or almost at the same time. This is often the case in medicine. Chemotherapy for cancer is an example. The standard of care (sometimes also called the "gold standard") for the adjuvant treatment of stage III colon cancer is the FOLFOX chemotherapy protocol in Europe, Japan, Canada, and Australia, and the FLOX chemotherapy protocol in the USA. These two chemotherapy protocols are very similar in principle. Both consist of three medicinal drugs: a) Leucovorin (= folinic acid = calcium folinate), b) 5-Fluorouracil (= 5-FU), and c) Oxaliplatin. Since these three medicinal drugs are "concomitant" to each other, such a constellation is called "concomitant drugs". Contrast imaging in medicine is another example. These are imaging procedures in medicine that are performed after giving the patient an iodinated contrast medium (e.g. different types of contrast X-rays, CTs, MRIs). It is well known that such iodinated contrast media can lead to acute allergies in some patients. They may also lead to kidney damage. If the patient is receiving a "concomitant" medicinal drug (prescribed to the patient by another physician), and the radiologist performing the imaging procedure is unaware of this, potentially harmful side-effects can occur and increase the risk of contrast medium-induced nephropathy (i.e. increase the risk of damage to the kidneys). In general, radiologists carefully ask their patients about other medicinal drugs they are "concomitantly" taking before the imaging procedure. Often, they monitor the kidney function and the hydration status of their patients during the imaging procedure, especially whenever a concomitant drug (that is harmful to the kidney) is being used. Concomitant drugs in drug abuse If a drug abuser ingests or misuses two or more drugs, either at the same time or almost at the same time, this is also called "concomitant drugs". Whether concomitant drug abuse leads to an increased number of deaths was scientifically analysed in Sheffield, UK. The researchers wanted to find out whether concomitant drug abuse (i.e. an opiate plus another drug of misuse) leads to an increased number of acute accidental opiate-related deaths. The authors showed that, at least in the Sheffield area, intravenous (IV) administration of an opiate is the most consistent factor associated with drug abuse deaths. The co-administration of a concomitant drug of misuse appeared to be a feature rather than a risk factor per se in such deaths. References Pharmacy
Concomitant drug
[ "Chemistry" ]
644
[ "Pharmacology", "Pharmacy" ]
11,121,146
https://en.wikipedia.org/wiki/Jet%20noise
In aeroacoustics, jet noise is the noise generated by high-velocity jets and by the turbulent eddies produced by shearing flow. Such noise is broadband and extends well beyond the range of human hearing (100 kHz and higher). Jet noise is also responsible for some of the loudest sounds ever produced by mankind. Sources of jet noise The primary sources of jet noise for a high-speed air jet (meaning when the exhaust velocity exceeds about 100 m/s; 360 km/h; 225 mph) are "jet mixing noise" and, for supersonic flow, shock-associated noise. Acoustic sources within the "jet pipe" also contribute to the noise, mainly at lower speeds; these include combustion noise and sounds produced by interactions of a turbulent stream with fans, compressors, and turbine systems. The jet mixing sound is created by the turbulent mixing of a jet with the ambient fluid, in most cases air. The mixing initially occurs in an annular shear layer, which grows with distance from the nozzle. The mixing region generally fills the entire jet at four or five diameters from the nozzle. The high-frequency components of the sound originate mainly close to the nozzle, where the dimensions of the turbulent eddies are small. Further down the jet, where the eddy size is similar to the jet diameter, is where the lower-frequency sound originates. In supersonic or choked jets there are cells through which the flow continuously expands and contracts. Several of these "shock cells" can be seen extending up to ten jet diameters from the nozzle, and they are responsible for two additional components of jet noise: screech tones and broadband shock-associated noise. Screech is produced by a feedback mechanism in which a disturbance convecting in the shear layer generates sound as it traverses the standing system of shock waves in the jet. Even though screech is a side effect of the jet's flight, it can be suppressed by an appropriate nozzle design. See also Lighthill's eighth power law QTOL Stealth aircraft References Works cited Khavaran, Abbas. (2012). Acoustic Investigation of Jet Mixing Noise in Dual Stream Nozzles. Cleveland, OH: National Aeronautics and Space Administration, Glenn Research Center. Aircraft noise Fluid dynamics
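For subsonic jet mixing noise, the dependence of radiated power on exhaust velocity is usually summarized by Lighthill's eighth power law (linked in the See also list above). In conventional notation, with ρ0 and c0 the ambient density and speed of sound, U the jet exit velocity and D the nozzle diameter:

```latex
% Lighthill scaling for the acoustic power of subsonic jet mixing noise:
W \;\propto\; \frac{\rho_0\, U^{8}\, D^{2}}{c_0^{5}}
```

This steep scaling is why even modest reductions in exhaust velocity, as in high-bypass turbofan engines, yield large reductions in jet mixing noise.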
Jet noise
[ "Chemistry", "Engineering" ]
497
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
11,121,351
https://en.wikipedia.org/wiki/HD%2017092
HD 17092 is a star in the constellation of Perseus. It has an orange hue but is visible only with binoculars or better equipment, having an apparent visual magnitude of 7.73. The star lies approximately 750 light years from the Sun based on parallax, and it is drifting further away with a radial velocity of +5.5 km/s. This object is an aging giant star with a stellar classification of K0III, which means it has exhausted the supply of hydrogen at its core, then cooled and expanded off the main sequence. It is roughly six billion years old, has 1.2 times the mass of the Sun, and has expanded to 12 times the Sun's radius. The star is radiating 57 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,630 K. Planetary system On 6 May 2007, a planet, HD 17092 b, was discovered with the Hobby-Eberly Telescope by Niedzielski, who used the wobble (radial velocity) method. This planet is a massive gas giant and orbits at 1.29 astronomical units from the star with a period of about 360 days. References K-type giants Planetary systems with one confirmed planet Perseus (constellation) Durchmusterung objects 017092
HD 17092
[ "Astronomy" ]
262
[ "Perseus (constellation)", "Constellations" ]
2,275,333
https://en.wikipedia.org/wiki/Poly%28hydridocarbyne%29
Poly(hydridocarbyne) (PHC) is one of a class of carbon-based random network polymers primarily composed of tetrahedrally hybridized carbon atoms, each bearing one hydride substituent, with the generic formula [HC]n. PHC is made from bromoform, a liquid halocarbon that is commercially manufactured from methane. At room temperature, poly(hydridocarbyne) is a dark brown powder. It dissolves easily in a number of solvents (tetrahydrofuran, ether, toluene, etc.), forming a clear, non-viscous colloidal suspension that may then be deposited as a film or coating on various substrates. Thermolysis in argon at atmospheric pressure and temperatures of 110 °C to 1,000 °C decomposes poly(hydridocarbyne) into hexagonal diamond (lonsdaleite). More recently, poly(hydridocarbyne) has been synthesized by a much simpler method using electrolysis of chloroform (May 2008) and hexachloroethane (June 2009). The novelty of PHC (and of its related polymer poly(methylsilyne)) is that the polymer may be readily fabricated into various forms (e.g. films, fibers, plates) and then thermolyzed into a final hexagonal diamond ceramic. See also Carbyne, for name origin References (US patent application) External links Facile Synthesis of Poly(hydridocarbyne): A Precursor to Diamond and Diamond-like Ceramics Liftport Staff Blog discussion Organic polymers
Poly(hydridocarbyne)
[ "Chemistry" ]
347
[ "Polymer stubs", "Organic polymers", "Organic compounds", "Organic chemistry stubs" ]
2,275,394
https://en.wikipedia.org/wiki/Endostatin
Endostatin is a naturally occurring, 20-kDa C-terminal fragment derived from type XVIII collagen. It is reported to serve as an anti-angiogenic agent, similar to angiostatin and thrombospondin. Endostatin is a broad-spectrum angiogenesis inhibitor and may interfere with the pro-angiogenic action of growth factors such as basic fibroblast growth factor (bFGF/FGF-2) and vascular endothelial growth factor (VEGF). Background Endostatin is an endogenous inhibitor of angiogenesis. It was first found in 1997, secreted in the media of non-metastasizing mouse cells from a hemangioendothelioma cell line, and was subsequently found in humans, e.g. in platelets. It is produced by proteolytic cleavage of collagen XVIII, a member of the multiplexin family that is characterized by interruptions in the triple helix creating multiple domains, by proteases such as cathepsins. Collagen is a component of epithelial and endothelial basement membranes. Endostatin, as a fragment of collagen XVIII, demonstrates a role of the ECM in the suppression of neoangiogenesis. Pro-angiogenic and anti-angiogenic factors can also be created by proteolysis during coagulation cascades. Endogenous inhibitors of angiogenesis are present in both normal tissue and cancerous tissue. Overall, endostatin downregulates many signaling cascades, such as ephrin, TNF-α, and NFκB signaling, as well as coagulation and adhesion cascades. Other collagen-derived antiangiogenic factors include arresten, canstatin, tumstatin, the α6 collagen type IV antiangiogenic fragment, and restin. Structure Human monomeric endostatin is a globular protein containing two disulfide bonds: Cys162−302 and Cys264−294. It folds tightly, has a zinc-binding domain at the N-terminus of the protein, and has a high affinity for heparin through an 11-arginine basic patch. Endostatin also binds all heparan sulfate proteoglycans with low affinity. Oligomeric endostatin (trimer or dimer) binds mainly with laminin of the basal lamina. Biological activity In vitro studies have shown that endostatin blocks the proliferation and organization of endothelial cells into new blood vessels. In animal studies, endostatin inhibited angiogenesis and the growth of both primary tumors and secondary metastases. Mechanism of action Endostatin suppresses angiogenesis through many pathways, affecting both cell viability and movement. Endostatin represses cell cycle control and anti-apoptosis genes in proliferating endothelial cells, resulting in cell death. Endostatin blocks pro-angiogenic gene expression controlled by c-Jun N-terminal kinase (JNK) by interfering with TNF-α activation of JNK. It reduces the growth of new cells by inhibiting cyclin D1; as a result, cells arrest during G1 phase and enter apoptosis. Alteration of FGF signal transduction by endostatin inhibits the migration of endothelial cells through disruption of cell-matrix adhesions, cell-cell adhesions, and cytoskeletal reorganization. By binding integrin α5β1 on endothelial cells, it inhibits the signaling pathways of the Ras and Raf kinases and decreases ERK-1 and p38 activity. Endostatin binding and clustering of integrins causes co-localization with caveolin-1 and activates non-receptor tyrosine kinases of the Src family involved in the regulation of cell proliferation, differentiation, and mobility. Other receptor interactions include the VEGF-R2/KDR/Flk-1 receptor on human umbilical vein endothelial cells. Endostatin may also inhibit the activity of certain metalloproteinases. Several studies have focused on the downstream effects of endostatin reception.
These studies have estimated that endostatin may significantly affect 12% of the genes used by human endothelial cells. Although endostatin signaling may affect this vast number of genes, the downstream effects appear surprisingly limited. Endostatin reception seems to affect only angiogenesis that arises from pathogenic sources, such as tumors. Processes associated with angiogenesis, such as wound healing and reproduction, are seemingly not affected by endostatin. This selectivity is possible because pathogenically derived angiogenesis usually involves signaling through integrins, which are directly affected by endostatin. Cancer Although the process by which endostatin works is not fully understood, it involves metalloproteases and endopeptidases that digest components of the extracellular matrix. Several similar endogenous angiogenic factors are produced from matrix components in this fashion. For example, perlecan degradation can yield endorepellin, which functions as an anti-angiogenic factor. Collectively, these products are thought to balance regulation between pro-angiogenic and anti-angiogenic factors outside epithelial and endothelial layers. Among anti-angiogenesis inhibitors, endostatin has a wide range of anti-cancer targets, increasing its significance, since synthetic inhibitors usually have single targets and struggle with toxicity. Endostatin has several characteristics that may be advantageous to cancer therapy. First of all, endogenous endostatin has been described as "the least toxic anti-cancer drug in mice". Furthermore, neither resistance nor toxicity to endostatin has been reported in humans. Also, endostatin has been estimated to affect 12% of the human genome. This reveals a broad spectrum of activity focused on preventing angiogenesis. This is very different from single-molecule therapies, and may change how cancer therapies are designed: drugs may be designed to target a wide range of genes instead of one particular protein. However, endostatin does not affect all tumors. For example, cancers that have extreme pro-angiogenic activity through VEGF may overcome the anti-angiogenic effects of endostatin. Possible cancer treatment Endostatin is currently being studied as part of cancer research. Prior results indicated that endostatin can be beneficial in combination with other medicines, but endostatin alone gave no significant improvements in tumor/disease progression. Phase I In a Phase I clinical trial of Endostatin, of the 19 patients treated, 12 were switched out of the trial by their physicians due to continued progression of their disease. Two patients continued to be treated, and the remaining patients withdrew on their own. The trial, designed primarily to demonstrate safety, indeed showed that the drug was safe and well tolerated (at the dosages used). Phase II In a Phase II clinical trial of Endostatin, 42 patients with pancreatic endocrine tumors or carcinoid tumors were treated. Of the 40 patients who could be evaluated for a radiologic response, none experienced a partial response to therapy, as defined by World Health Organization criteria. The conclusion from the trial was that "treatment with Endostatin did not result in significant tumor regression in patients with advanced neuroendocrine tumors." Phase III A Phase III clinical trial was carried out on 493 patients with histologically or cytologically confirmed stage IIIB and IV non-small-cell lung cancer (NSCLC) and a life expectancy >3 months.
Patients were treated with Endostar (rh-endostatin, YH-16), a recombinant endostatin product, in combination with vinorelbine and cisplatin (a standard chemotherapeutic regimen). The addition of Endostar to the standard chemotherapeutic regimen in these advanced NSCLC patients resulted in significant and clinically meaningful improvement in response rate, median time to progression, and clinical benefit rate compared with the chemotherapeutic regimen alone. Clinical significance Endostatin may also be useful as a therapeutic for inflammatory diseases such as rheumatoid arthritis, Crohn's disease, diabetic retinopathy, psoriasis, and endometriosis, by reducing the infiltration of inflammatory cells that accompanies invading angiogenesis. Down syndrome patients seem to be protected from diabetic retinopathy due to the additional copy of chromosome 21 and the resulting elevated expression of endostatin. References External links PBS' Nova program explores Endostatin in 2001 Human proteins Collagens Angiogenesis inhibitors
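The Phase II result above (no partial responses among 40 evaluable patients) can be put in statistical context with the "rule of three": when zero events are observed in n trials, the one-sided 95% confidence upper bound on the true event rate is approximately 3/n. A minimal sketch, offered only as an illustration of that standard rule applied to the trial's numbers, with the exact binomial bound alongside:

```python
# "Rule of three": if 0 events are observed in n independent trials,
# the one-sided 95% confidence upper bound on the event probability
# is roughly 3/n.  Illustrated with the 0-of-40 response figure from
# the Phase II endostatin trial described above.
n = 40

rule_of_three = 3 / n                   # quick approximation
exact = 1 - 0.05 ** (1 / n)             # exact binomial upper bound

print(f"approx upper bound: {rule_of_three:.1%}")  # 7.5%
print(f"exact upper bound:  {exact:.1%}")          # ~7.2%
```

In other words, even a "zero response" trial of this size only rules out true response rates above roughly 7%.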
Endostatin
[ "Biology" ]
1,781
[ "Angiogenesis", "Angiogenesis inhibitors" ]
2,275,696
https://en.wikipedia.org/wiki/Saiga%20antelope
The saiga antelope (Saiga tatarica), or saiga, is a species of antelope which during antiquity inhabited a vast area of the Eurasian steppe, spanning the foothills of the Carpathian Mountains in the northwest and the Caucasus in the southwest into Mongolia in the northeast and Dzungaria in the southeast. During the Pleistocene, it ranged across the mammoth steppe from the British Isles to Beringia. Today, the dominant subspecies (S. t. tatarica) occurs only in Kalmykia and Astrakhan Oblast of Russia and in the Ural, Ustyurt and Betpak-Dala regions of Kazakhstan. A portion of the Ustyurt population migrates south to Uzbekistan and occasionally to Turkmenistan in winter. It is regionally extinct in Romania, Ukraine, Moldova, China and southwestern Mongolia. The Mongolian subspecies (S. t. mongolica) occurs only in western Mongolia. Taxonomy and phylogeny The scientific name Capra tatarica was coined by Carl Linnaeus in 1766 in the 12th edition of Systema Naturae. The species was later reclassified as Saiga tatarica and is the sole living member of the genus Saiga. Two subspecies are recognised: S. t. tatarica (Linnaeus, 1766): also known as the Russian saiga, today it is found only in Central Asia. S. t. mongolica Bannikov, 1946: also known as the Mongolian saiga, it is sometimes treated as an independent species, or as a subspecies of the Pleistocene Saiga borealis; it is confined to Mongolia. In 1945, American paleontologist George Gaylord Simpson classified both the saiga and the Tibetan antelope in the tribe Saigini under the same subfamily, Caprinae. Subsequent authors were not certain about the relationship between the two, until phylogenetic studies in the 1990s revealed that, though morphologically similar, the Tibetan antelope is closer to the Caprinae while the saiga is closer to the Antilopinae. In a revision of the phylogeny of the tribe Antilopini on the basis of nuclear and mitochondrial data in 2013, Eva Verena Bärmann (of the University of Cambridge) and colleagues showed that the saiga is sister to the clade formed by the springbok (Antidorcas marsupialis) and the gerenuk (Litocranius walleri). The study noted that the saiga and the springbok could be considerably different from the rest of the antilopines; a 2007 phylogenetic study suggested that the two form a clade sister to the gerenuk. Evolution Fossils of saiga, concentrated mainly in central and northern Eurasia, date to as early as the late Pleistocene (nearly 0.1 Mya). Several species of extinct Saiga from the Pleistocene of Eurasia and Alaska have been named, including S. borealis, S. prisca, S. binagadensis and S. ricei, although more recent studies suggest that these prehistoric representatives were merely geographical variants of the extant species, which was formerly much more widespread. Fossils excavated from the Buran Kaya III site (Crimea) date back to the transition from the Pleistocene to the Holocene. The morphology of the saiga does not seem to have changed significantly since prehistoric times. Before the Holocene, the saiga ranged across the mammoth steppe from as far west as modern-day England and France to as far east as northern Siberia, Alaska, and probably Canada. The antelope gradually entered the Urals, though it did not colonise southern Europe. A 2010 study revealed that a steep decline has occurred in the genetic variability of the saiga since the late Pleistocene–Holocene, probably due to a population bottleneck. Characteristics The saiga stands at the shoulder, and weighs .
The head-and-body length is typically between . A prominent feature of the saiga is the pair of closely spaced, bloated nostrils directed downward. Other facial features include the dark markings on the cheeks and the nose, and the long ears. The coat shows seasonal changes. In summer, the coat appears yellow to red, fading toward the flanks. The Mongolian saiga can develop a sandy colour. The coat develops a pale, greyish-brown colour in winter, with a hint of brown on the belly and the neck. The ventral parts are generally white. The hairs, which measure long in summer, can grow as long as in winter, forming a long mane on the neck. Two distinct moults can be observed in a year, one in spring from April to May and another in autumn from late September or early October to late November or early December. The tail measures . Only males possess horns. These horns, thick and slightly translucent, are wax-coloured and show 12 to 20 pronounced rings. With a base diameter of , the horns of the Russian saiga measure in length; the horns of the Mongolian saiga, however, reach a maximum length of . Ecology and behaviour Saigas form very large herds that graze in semideserts, steppes, grasslands, and possibly open woodlands, eating several species of plants, including some that are poisonous to other animals. They can cover long distances and swim across rivers, but they avoid steep or rugged areas. The mating season starts in November, when stags fight for the acceptance of females. The winner leads a herd of five to ten females (occasionally up to 50). In springtime, mothers come together en masse to give birth. Two-thirds of births are twins; the remaining third are single calves. Saigas, like the Mongolian gazelles, are known for their extensive migrations across the steppes, which allow them to escape natural calamities. Saigas are highly vulnerable to wolves. Juveniles are targeted by foxes, steppe eagles, golden eagles, and ravens. Distribution and habitat In the mid-2010s, the populations declined enormously, by as much as 95% in 15 years. This led the saiga to be classified as critically endangered on the IUCN Red List. In more recent years, the saiga has experienced a massive recovery. As of April 2022, an estimated 1.38 million saiga survive in Kazakhstan, per an aerial count. As of December 2023, the global saiga antelope population is estimated to number 922,600–988,500 mature individuals. In May 2010, an estimated 12,000 of the 26,000 saiga in the Ural region of Kazakhstan were found dead. Although the deaths were ascribed to pasteurellosis, an infectious disease that strikes the lungs and intestines, the underlying trigger remains to be identified. In May 2015, what may have been the same disease broke out in three northern regions of the country. As of 28 May 2015, more than 120,000 saigas had been confirmed dead in the Betpak-Dala population in central Kazakhstan, representing more than a third of the global population. By April 2016, the saigas appeared to be making a comeback, with an increase in population from 31,000 to 36,000 in the Betpak-Dala area. In April 2021, a survey in Kazakhstan found that the saiga population had risen from an estimated 334,000 to 842,000. The population increase was partially attributed to the government crackdown on poaching and the establishment of conservation areas.
UK charity RSPB reported in 2022 that, partly due to its conservation efforts, as well as the designation of the Bokey Orda-Ashiozek protected area by the Kazakhstan government, the population had risen to a peak of 1.32 million. Former range The saiga was not present in Europe during the Eemian. During the last glacial period, it ranged from the British Isles through Central Asia and the Bering Strait into Alaska and Canada's Yukon and Northwest Territories. By the classical age, they were apparently considered a characteristic animal of Scythia, judging from the historian Strabo's description of an animal called the kolos that was "between the deer and ram in size" and was wrongly believed to drink through its nose. Considerable evidence shows the importance of the antelope to Andronovo culture settlements. Illustrations of saiga antelopes can be found among cave paintings dated to the seventh to fifth centuries BC. Moreover, saiga bones have been found among the remains of other wild animals near human settlements. Fragmentary records show an abundance of saigas on the territory of modern Kazakhstan in the 14th–16th centuries. The migratory routes ranged throughout the country's area; the region between the Volga and Ural Rivers was especially heavily populated. The population remained high until the second half of the 19th century, when excessive horn export began. The high price of and demand for horns drove intensive hunting. The number of animals decreased in all regions and the migratory routes shifted southward. Populations in Ukraine were driven to extirpation in the 18th century. After a rapid decline, saigas were nearly completely exterminated in the 1920s, but they were able to recover. By 1950, two million of them were found in the steppes of the USSR. Their population fell drastically following the collapse of the USSR due to uncontrolled hunting and demand for horns in Chinese medicine. At one point, some conservation groups, such as the World Wildlife Fund, even encouraged the hunting of this species, as its horn was presented as an alternative to that of a rhinoceros. Mongolian saiga The Mongolian saiga (S. t. mongolica) is found in a small area in western Mongolia around the Sharga and Mankhan Nature Reserves. Threats The horn of the saiga antelope is used in traditional Chinese medicine and can sell for as much as US$150. Demand for the horns drives poaching and smuggling, which has wiped out the population in China, where the saiga antelope is a class I protected species. In June 2014, Chinese customs at the Kazakh border uncovered 66 cases containing 2,351 saiga antelope horns, estimated to be worth over ¥70.5 million (US$11 million). In June 2015, E. J. Milner-Gulland (chair of the Saiga Conservation Alliance) said: "Antipoaching needs to be a top priority for the Russian and Kazakh governments." Hunting Saigas have been a target of hunting since prehistoric times, when hunting was an essential means of acquiring food. Saigas' horns, meat, and skin have commercial value and are exported from Kazakhstan. Saiga horn, known as , is one of the main ingredients in traditional Chinese medicine, used as an extract or powder additive to elixirs, ointments, and drinks. Saiga horn's value is equal to that of rhinoceros horn, whose trade was banned in 1993. It is thought to be a cheaper substitute for rare rhino horn in most TCM recipes.
In the period from 1955 to 1989, over 87 thousand tonnes of meat were collected in Kazakhstan by killing more than five million saiga. In 2011, Kazakhstan reaffirmed a ban on hunting saiga and extended this ban until 2021. Saiga meat is compared to lamb and considered nutritious and delicious; numerous recipes for cooking the antelope's meat can be found. Both meat and byproducts are sold in the country and outside of it. About 45–80 dm² of skin can be harvested from one individual, depending on its age and sex. Physical barriers Agricultural advancement and human settlements have been shrinking the saiga's habitat since the 20th century. Settlements have limited the saigas' access to water resources and to their winter and summer habitats. The ever-changing face of the steppe requires saigas to search for new routes to their habitual lands. Currently, saiga migratory routes pass through five countries and across various human-made constructions, such as railways, trenches, mining sites, and pipelines, which limit the movement of the antelopes. Cases of saiga herds being trapped within fenced areas and starving to death have been reported. Climatic variability Saigas are dependent on weather and, owing to their migratory nature, are affected by climate fluctuations to a great extent. Harsh winters with strong winds or high snow coverage prevent them from feeding on the underlying grass. Population size usually decreases dramatically after severe cold months. Recent trends in climate change have increased the aridity of the steppe region, leading an estimated 14% or more of available pastureland to be considered degraded and useless. Concurrently, small steppe rivers dry up faster, limiting water resources to large lakes and rivers, which are usually populated by human settlements; high temperatures in the steppe region also lead to springtime floods, in which saiga calves can drown. Mass epizootic mortality 1980 to 2015 events For ungulates, mass mortalities are not uncommon. In the 1980s, several saiga die-offs occurred, and between 2010 and 2014, one occurred every year. The deaths could be linked to calving aggregations, when the animals are most vulnerable. More recent research involving a mass die-off in 2015 indicates that warmer weather and attendant humidity led bacteria common in saiga antelopes to move into the bloodstream and cause hemorrhagic septicemia. 2015–2016 epizootic In May 2015, uncommonly large numbers of saigas began to die from a mysterious epizootic illness suspected to be pasteurellosis. Herd fatality was 100% once a herd was infected, and an estimated 40% of the species' total population died. More than 120,000 carcasses had been found by late May 2015, while the estimated total population was only 250,000. Biologist Murat Nurushev suggested that the cause might be acute ruminal tympany, whose symptoms (bloating, mouth foaming, and diarrhea) had been observed in dead saiga antelopes. According to Nurushev, this disease occurred as a result of foraging on a large amount of easily fermenting plants (alfalfa, clover, sainfoins, and mixed wet, green grass). In May 2015, the United Nations agency involved in saiga conservation efforts issued a statement that the mass die-off had ended. By June 2015, no definitive cause for the epizootic had been found. At a scientific meeting in November 2015 in Tashkent, Uzbekistan, Dr. Richard A. Kock (of the Royal Veterinary College in London) reported that he and his colleagues had narrowed down the possible culprits.
Climate change and stormy spring weather, they said, may have transformed harmless bacteria carried by the saigas into lethal pathogens. Pasteurella multocida, a bacterium, was determined to be the cause of death. The bacterium occurs in the antelopes and is normally harmless; the reason for the change in its behavior is unknown. Scientists and researchers now believe that unusually warm and wet environmental conditions caused the bacterium to enter the bloodstream and become septic; hemorrhagic septicemia is the likely cause of the most recent deaths. The change in the bacteria may be attributed to "the response of opportunistic microbes to changing environmental conditions". The Betpak-Dala saiga population in central Kazakhstan, which saw the most deaths, increased from 31,000 after the epidemic to 36,000 by April 2016. In late 2016, a large loss of population occurred in Mongolia. The etiology was confirmed to be goat plague in early 2017. Conservation Under the auspices of the Convention on the Conservation of Migratory Species of Wild Animals, the Saiga Antelope Memorandum of Understanding was concluded and came into effect on 24 September 2006. In captivity Currently, only the Almaty Zoo and Askania-Nova keep saigas. References Further reading External links CMS Saiga Memorandum of Understanding Ultimate Ungulate WWF species profile: Saiga antelope EDGE species Desert fauna Mammals of Central Asia Mammals of Europe Mammals of Mongolia Mammals of Russia Taxa named by Carl Linnaeus
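The recovery figures quoted earlier imply a very rapid exponential rebound. As an illustration only: the article gives the April 2021 endpoint (842,000) against an earlier estimate of 334,000, but not the interval between them, so the two-year span assumed below is a guess, and the resulting rate should be read as order-of-magnitude.

```python
import math

# Implied exponential growth rate for the Kazakh saiga rebound.
# ASSUMPTION: the 334,000 -> 842,000 increase took about two years;
# the article gives only the April 2021 endpoint.
n0, n1, years = 334_000, 842_000, 2.0

r = math.log(n1 / n0) / years          # continuous growth rate per year
doubling_time = math.log(2) / r

print(f"growth rate: {r:.2f}/yr (~{math.expm1(r):.0%} per year)")
print(f"doubling time: {doubling_time:.1f} years")
```

Under that assumption the population was roughly doubling every year and a half, which is consistent with the species' unusually high twinning rate noted above.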
Saiga antelope
[ "Biology" ]
3,309
[ "EDGE species", "Biodiversity" ]
2,275,708
https://en.wikipedia.org/wiki/Authoritarian%20personality
The authoritarian personality is a personality type characterized by a disposition to treat authority figures with unquestioning obedience and respect. Conceptually, the term authoritarian personality originated in the writings of Erich Fromm, and usually is applied to people who exhibit a strict and oppressive personality towards their subordinates. Regardless of whether authoritarianism is more of a personality, attitude, ideology or disposition, scholars find it has significant influence on public opinion and political behavior. Historical origins In his 1941 book Escape from Freedom (published in Britain as The Fear of Freedom), a psychological exploration of modern politics, Erich Fromm described authoritarianism as a defence mechanism. In The Authoritarian Personality (1950), Theodor W. Adorno, Else Frenkel-Brunswik, Daniel Levinson, and Nevitt Sanford proposed a personality type that involved the "potentially fascistic individual". The historical background that influenced the theoretical development of the authoritarian personality included the rise of fascism in the 1930s, World War II (1939–1945), and the Holocaust, which indicated that the fascistic individual was psychologically susceptible to the ideology of antisemitism and to the emotional appeal of anti-democratic politics. Known as the Berkeley studies, the research of Adorno and Frenkel-Brunswik, and of Levinson and Sanford, concentrated upon prejudice, which they studied within psychoanalytic and psychosocial frameworks of Freudian and Frommian theories. The book was described as a landmark work in social science that generated significant criticism of certain methods and results, but also confirmation of many of the findings in independent studies. Its publication was followed by an extensive debate on the merits of the work, and many themes of this debate persist in authoritarianism research today. The authoritarian person also presents a cynical and disdainful view of humanity, and a need to wield power and be tough, which arise from the anxieties produced by the perceived lapses of people who do not abide by the conventions and social norms of society (destructiveness and cynicism); a general tendency to focus upon people who violate the value system, and to act oppressively against them (authoritarian aggression); anti-intellectualism, a general opposition to the subjective and imaginative tendencies of the mind (anti-intraception); and an exaggerated concern with sexual promiscuity, especially concerning women. The F-scale fell into disrepute as unreliable after about 10 years. Other criticisms of the sociologic theory presented in The Authoritarian Personality are the validity of the psychoanalytic interpretation of personality and the bias that authoritarianism exists only in the right wing of the political spectrum. In human psychological development, the formation of the authoritarian personality occurs within the first years of a child's life, strongly influenced and shaped by the parents' personalities and the organizational structure of the child's family; thus, parent-child relations that are "hierarchical, authoritarian, [and] exploitative" can result in a child developing an authoritarian personality. Authoritarian-personality characteristics are fostered by parents who have a psychological need for domination, and who harshly threaten their child to compel obedience to conventional behaviors.
Moreover, such domineering parents are also preoccupied with social status, a concern they communicate by having the child follow rigid, external rules. In consequence of such domination, the child suffers emotionally from the suppression of his or her feelings of aggression and resentment towards the domineering parents, whom the child reverently idealizes but does not criticize. Such personalities may also be related to studies of personality and political views in preschool children, reported in 2006, which concluded that some children described as being "somewhat dominating" were later found, as adults, to be "relatively liberal", and those described as "relatively over-controlled" were later found, as adults, to be "relatively conservative". In the words of the researchers: "Preschool children who 20 years later were relatively liberal were characterized as: developing close relationships, self-reliant, energetic, somewhat dominating, relatively under-controlled, and resilient. Preschool children subsequently relatively conservative at age 23 were described as: feeling easily victimized, easily offended, indecisive, fearful, rigid, inhibited, and relatively over-controlled and vulnerable." Perceived threat Hetherington and Weiler argue that perceived threat is a crucial variable in activating an authoritarian disposition. They suggest this helps to explain higher rates of authoritarianism in developing countries, after social or economic crises, and after security crises like September 11th. The authors believe that some people tend to be more pessimistic, experience higher levels of stress, and rely more on instinct than cognition for decision-making, and that even those who under normal circumstances are more optimistic and at ease will respond in a more authoritarian way when feeling threatened. Links to gender inequality According to a study by Brandt and Henry, there is a direct correlation between rates of gender inequality and the levels of authoritarian ideas in male and female populations. It was found that in countries with less gender equality, where individualism was encouraged and men occupied the dominant societal roles, women were more likely to support traits such as obedience, which would allow them to survive in an authoritarian environment, and less likely to encourage ideas such as independence and imagination. In countries with higher levels of gender equality, men held less authoritarian views. It is theorized that this occurs due to the stigma attached to individuals who question the cultural norms set by the dominant individuals and establishments in an authoritarian society, a stigma that serves to prevent the psychological stress caused by the active ostracizing of the stigmatized individuals. Modern models C. G. Sibley and J. Duckitt reported that more recent research has produced two more effective scales of measurement for predicting prejudice and other characteristics associated with authoritarian personalities. The first is the right-wing authoritarianism (RWA) scale and the second is the social dominance orientation (SDO) scale. Bob Altemeyer used the right-wing authoritarianism (RWA) scale to identify, measure, and quantify the personality traits of authoritarian people.
The political personality type identified with the RWA scale indicates the existence of three psychological tendencies and attitudinal clusters characteristic of the authoritarian personality: (i) submission to legitimate authorities; (ii) aggression towards minority groups whom authorities identify as targets for sanctioned political violence; and (iii) adherence to the cultural values and political beliefs endorsed by the authorities. The research indicates a negative correlation (r = −0.57) between right-wing authoritarianism and the personality trait of "openness to experience" from the Five Factor Model of the human personality, as measured with the NEO-PI-R Openness scale. The research of Jost, Glaser, Arie W. Kruglanski, and Sulloway (2003) indicates that authoritarianism and right-wing authoritarianism are ideological constructs for social cognition, by which political conservatives view people as the Other who is not the Self. The authoritarian personality and the conservative personality share two core traits: (i) resistance to change (social, political, economic), and (ii) justification for social inequality among the members of society. Conservatives have a psychological need to manage existential uncertainty and threats with situational motives (striving for dominance in social hierarchies) and with dispositional motives (self-esteem and the management of fear). The research on ideology, politics, and racist prejudice by John Duckitt and Chris Sibley identified two types of authoritarian worldview: (i) that the social world is dangerous, which leads to right-wing authoritarianism; and (ii) that the world is a ruthlessly competitive jungle, which leads to social dominance orientation. In a meta-analysis of the research, Sibley and Duckitt explained that the social-dominance orientation scale helps to measure the generalization of prejudice and other authoritarian attitudes that can exist within social groups. Although both the right-wing authoritarianism scale and the social-dominance orientation scale can accurately measure authoritarian personalities, the scales usually are not correlated. Hetherington and Weiler describe the authoritarian personality as one that has a greater need for order, less willingness to tolerate ambiguity, and a tendency to rely on established authorities to provide that order. They acknowledge that while everyone seeks to bring some semblance of order to their world, non-authoritarian personalities are more likely to use concepts like fairness and equality, instead of the time-honored texts, conventions or leaders that are more common touchstones among authoritarian personalities. They also note that almost everyone becomes more authoritarian when they feel threat, anxiety or fatigue, as the emotional, reactive parts of the brain crowd out cognitive abilities. They also assert that scholars do not know whether to consider authoritarianism a personality trait, an attitude or an ideology. Prevalence Western countries In 2021, Morning Consult (an American data intelligence company) published the results of a survey measuring the levels of authoritarianism in adults in America and seven other Western countries.
The study used Bob Altemeyer's right-wing authoritarianism scale, but omitted the following two statements from Altemeyer's scale: (1) "The established authorities generally turn out to be right about things, while the radicals and protestors are usually just 'loud mouths' showing off their ignorance"; and (2) "Women should have to promise to obey their husbands when they get married." Morning Consult's scale thus had just 20 items, with a score range of 20 to 180 points. Morning Consult found that 25.6% of American adults qualify as "high RWA" (scoring between 111 and 180 points), while 13.4% of American adults qualify as "low RWA" (scoring 20 to 63 points). United States In a 2009 book, Marc J. Hetherington and Jonathan D. Weiler identified evangelical Protestants as the most authoritarian of voting blocs in the United States. Furthermore, the former Confederate states (i.e. "the South") showed higher levels of authoritarianism than the rest of the country. Rural populations tend to be more authoritarian than urban ones. People who preferred simple problems to complex ones, had less formal education, or scored lower in political knowledge also tended to score higher in authoritarianism. The authoritarianism levels of these demographics were assessed with four items that appeared in the 2004 American National Election Studies survey: Please tell me which one you think is more important for a child to have: INDEPENDENCE or RESPECT FOR ELDERS Please tell me which one you think is more important for a child to have: CURIOSITY or GOOD MANNERS Please tell me which one you think is more important for a child to have: OBEDIENCE or SELF-RELIANCE Please tell me which one you think is more important for a child to have: BEING CONSIDERATE or WELL BEHAVED These questions were designed to force a choice, not unlike in politics, where voters must choose between competing values. Some respondents chose both responses to some of the questions, and the four questions are averaged together, with a score of 1 meaning the respondent gave the more authoritarian response to all four questions and 0 the less authoritarian response. Half of the Americans surveyed scored 0.75 or higher, indicating that the median respondent leaned toward the authoritarian side in 2004.
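The scoring arithmetic behind the Morning Consult figures above is simple bookkeeping: 20 Likert-type items, each contributing 1 to 9 points, summed to a 20–180 range and then bucketed. The sketch below illustrates that bookkeeping; it is not Altemeyer's published scoring protocol, and the reverse-keyed item indices used in the example are invented for illustration (RWA scales do mix pro- and con-trait items, which must be flipped before summing).

```python
# Hypothetical scoring sketch for a 20-item RWA-style scale.
# Each item is answered on a 1-9 scale; reverse-keyed items are
# flipped (10 - x) before summing, so totals run from 20 to 180.
# The cutoffs match the Morning Consult buckets described above.

def rwa_score(responses, reverse_keyed=frozenset()):
    assert len(responses) == 20 and all(1 <= x <= 9 for x in responses)
    return sum(10 - x if i in reverse_keyed else x
               for i, x in enumerate(responses))

def rwa_bucket(score):
    if score >= 111:
        return "high RWA"
    if score <= 63:
        return "low RWA"
    return "middle"

# Example: a respondent answering "7" to every item, with items
# 3 and 17 treated as reverse-keyed (an invented keying).
answers = [7] * 20
s = rwa_score(answers, reverse_keyed={3, 17})
print(s, rwa_bucket(s))   # 132 -> "high RWA"
```

The same pattern, averaging forced-choice items to a 0–1 score, underlies the four-item ANES measure described in the United States subsection.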
Authoritarian personality
[ "Biology" ]
2,319
[ "Behavior", "Abuse", "Anti-social behaviour", "Harassment and bullying", "Aggression", "Human behavior" ]
2,275,886
https://en.wikipedia.org/wiki/Wolf%E2%80%93Lundmark%E2%80%93Melotte
The Wolf–Lundmark–Melotte Galaxy (WLM) is a barred irregular galaxy discovered in 1909 by Max Wolf, located on the outer edges of the Local Group. The discovery of the nature of the galaxy is credited to Knut Lundmark and Philibert Jacques Melotte in 1926. It is located in the constellation of Cetus. Properties Wolf–Lundmark–Melotte is a rotating disk that is seen edge-on. It is relatively isolated from the rest of the Local Group and does not show much evidence of interaction. However, the rotation curve of Wolf–Lundmark–Melotte is asymmetrical, in that the receding and approaching sides of the galaxy rotate differently. Although isolated, Wolf–Lundmark–Melotte shows evidence of ram pressure stripping. It is far outside the virial radius of the Milky Way, so it is possible that Wolf–Lundmark–Melotte is currently passing through some relatively dense medium. Star formation In 1994, A. E. Dolphin used the Hubble Space Telescope to create a color–magnitude diagram for WLM. It showed that around half of all the star formation in this galaxy occurred during a starburst that started ~13 Gyr ago. During the starburst, the metallicity of WLM rose from [Fe/H] ~ −2.2 to [Fe/H] ~ −1.3. There being no horizontal-branch population, Dolphin concluded that no more than per Myr of star formation occurred in the period from 12 to 15 Gyr ago. From 2.5 to 9 Gyr ago, the mean rate of star formation was 100 to per Myr. Being at the edge of the Local Group has also protected WLM from interactions and mergers with other galaxies, giving it a "pristine" stellar population and state that make it particularly useful for comparative studies. WLM is currently forming stars, as evidenced by clumps of newly formed stars visible in ultraviolet light. These clumps are about 20 to 100 light-years (7 to 30 parsecs) in size. The youngest clumps are found in the southern half of the galaxy, which has more star formation. Observations by the James Webb Space Telescope The James Webb Space Telescope (JWST) has provided an unprecedentedly detailed view of the dwarf galaxy Wolf–Lundmark–Melotte (WLM), demonstrating its capability to resolve individual stars within the galaxy. This high-resolution imagery, part of the Webb Early Release Science (ERS) program 1334 led by Kristen McQuinn of Rutgers University, reveals the structure and composition of WLM with remarkable clarity compared to previous observations by the Spitzer Space Telescope. WLM is a relatively isolated dwarf galaxy located about 3 million light-years from Earth, notable for its chemically unenriched gas, similar to that of early-universe galaxies. This makes it an excellent subject for studying star formation and evolution in environments that resemble the early stages of galactic development. Globular cluster WLM has one known globular cluster (WLM-1) that Hodge et al. (1999) determined as having an absolute magnitude of −8.8 and a metallicity of −1.5, with an age of ~15 billion years. This cluster has a luminosity slightly above the average for all globulars. The seeming lack of faint low-mass globular clusters cannot be explained by the weak tidal forces of the WLM system. References in popular culture In E. E. Smith's Lensman novels, the "Second Galaxy" is identified as "Lundmark's Nebula".
However, some believe the "Second Galaxy" may not be the Wolf–Lundmark–Melotte galaxy, since the first chapter of the first novel in the series (Triplanetary) and the series-establishing material appearing at the beginning of subsequent novels states that the "Second Galaxy" and the "First Galaxy" (the Milky Way) collided and passed through each other "edge-on" during the "planet-forming era"—implying that the "Lundmark's Nebula" of the series must necessarily be obscured from view by the Milky Way; however, according to others, it could have passed through at an angle and thus be identified with the galaxy described in this article; some have stated that this is the galaxy that E.E. Smith was thinking of when he wrote the series. However, the distance to Lundmark's nebula is defined quite precisely in Gray Lensman as approximately 24 million parsecs, much larger than the distance to Wolf–Lundmark–Melotte (approximately 930,000 parsecs). Additionally, in Second Stage Lensmen multiple references are made to the spiral arms of Lundmark's Nebula. Wolf–Lundmark–Melotte does not possess such structures. At the time of writing of these books, the name of Lundmark was associated with such classifications and Smith may have elected to use this as a "believable" name for an entirely fictional galaxy. At the time the Lensman series was written, most astronomers favored the tidal theory of Solar System formation, which required that planets be formed by the close approach of another star. In order to produce the massive numbers of planets necessary to evolve into galactic civilizations in both the Milky Way and Lundmark's Nebula, as portrayed in the Lensman series, E.E. Smith thought it would have been necessary for another galaxy to have passed through the Milky Way to produce the large number of close encounters necessary to form so many planets. The Doctor Who novel Synthespians™ by Craig Hinton refers to the New Earth Republic of the 101st Century and beyond, which spearheads a programme of colonisation, sending sleeper ships to the Wolf-Lundmark-Melotte galaxy and Andromeda. References External links Irregular galaxies Low surface brightness galaxies Local Group Cetus 000143 Astronomical objects discovered in 1909 444
Wolf–Lundmark–Melotte
[ "Astronomy" ]
1,221
[ "Cetus", "Constellations" ]
2,276,235
https://en.wikipedia.org/wiki/Gevil
Gevil or gewil is a type of parchment made from full-grain animal hide that has been prepared as a writing material in Jewish scribal documents, in particular a Sefer Torah (Torah scroll). Etymology Related to גויל, gewil, a rolling (i.e. unhewn) stone, from "to roll" (Jastrow). Definition and production Gevil is a form of skin prepared for safrut (halakhic writing) that is made of tanned, whole hide. The precise requirements for processing gevil are laid down by the Talmud, the Geonim and the Rishonim. Rabbi Ḥiyya bar Ami said in the name of Ulla: There are three [untanned] hide [stages before it is tanned into gevil]: matza, ḥifa, and diftera. According to Jewish law, the preparation of gevil follows a procedure of salting, flouring and tanning with afatzim (lit. "tannin"), the latter derived from gallnuts or similar substances containing tannic acid. Maimonides required rubbing down the raw hide with flour (presumably barley flour), whereas Simeon Kayyara, in his Halachot Gedolot, required that flour be placed in a tub of water into which the raw hide was inserted and left for a few days; the action of the flour-based liquor served to soften the hide. These requirements were reconfirmed as a Law given to Moses at Sinai by Maimonides in his Mishneh Torah. Gallnuts are rich in tannic acid and are the product of a tree's reaction to an invasive parasitic wasp's egg. The pure black tint of the ink used in writing Torah scrolls results from the reaction between the tannic acid and iron sulfate (a powder used to make the ink). The three types of tanned skin There are three forms of tanned skin known to Jewish law. The other two forms (klaf and dukhsustus) result from splitting the hide into two layers. The rabbinic scholars are divided upon which is the inner and which is the outer of the two halves. Maimonides is of the opinion that was the inner layer and that was the outer layer. The Shulchan Aruch rules in the reverse, that was the outer layer and that was the inner layer. The opinion of the Shulchan Aruch is the accepted ruling in all Jewish communities. Recently, a small group has advocated a return to using the full hide, known as gevil, for Sifrei Torah, as it avoids this issue; this solution does not work for tefillin, however, which must be written on klaf and are not kosher if written on gevil. Maimonides' rules for use According to most views of Jewish law, a Sefer Torah (Torah scroll) should be written on gevil parchment, as was done by Moses for the original Torah scroll he transcribed. Further, a reading of the earliest extant manuscripts of the Mishneh Torah indicates that gevil was halakha derived from Moses and thus required for Torah scrolls. Maimonides wrote that it is a law given to Moses at Sinai that a Torah scroll must be written on either gevil or klaf (in Maimonides' interpretation, contrary to that of the Shulchan Aruch, the half-skin from the hair side) in order to be valid, and that it is preferable that they be written on gevil. To this end, hides procured from sheep, goats and calves were mostly used. The hide of a fully grown cow, being so thick that it requires shaving down to half its thickness on its fleshy side before it can be used (in order to remove the epidermis from the hide and make it thinner), was less common. Maimonides made further prescriptions for the use of each of the three types of processed skin.
Torah scrolls must be written on g'vil only on the side on which the hair had grown, and never on duchsustos (understood as the half-skin from the flesh side). Phylacteries, if written on k'laf, must be written on the flesh side. A mezuzah, when written on duchsustos, must be written on the hair side. It is unacceptable to write on k'laf on the hair side, or on the split skin (either g'vil or duchsustos) on the flesh side. Today's practice According to the Talmud, Moses used gevil for the Torah scroll he placed into the Ark of the Covenant. Elsewhere in the Talmud, there is testimony that Torah scrolls were written on gevil. Today, a handful of Jewish scribes and artisans continue to make scroll material in this way. The majority of Torah scrolls, however, are written on klaf, in the belief that the Talmud merely recommends (as opposed to requires) gevil, the recommendation relating to the optimal beautification of the scrolls rather than to an essential halachic requirement. Given the uncertainty about which layer of the hide is in fact the klaf, there is a growing movement insisting on a return to gevil in Torah scrolls in order to avoid all doubts. Most of the Dead Sea Scrolls (written around 200 BCE), found in and around the caves of Qumran near the Dead Sea, are written on gevil. Properly, klaf should be used for tefillin and duchsustus for mezuzot. However, this rule is only a preference, not an obligation; klaf is used for mezuzot today, though a minority seeks a return to the older practice. See also Ktav Stam References External links The Gevil Institute: Machon Gevil The only online organization dedicated to the preservation of gevil. https://web.archive.org/web/20080410134250/http://www.ccdesigninc.com/MishmeresStam/Leaflet.pdf Hides (skin) Book design Jewish law and rituals Writing media Leather in Judaism Torah Hebrew words and phrases in Jewish law
Gevil
[ "Engineering" ]
1,280
[ "Book design", "Design" ]
2,276,238
https://en.wikipedia.org/wiki/Nightmares%20%281983%20film%29
Nightmares is a 1983 American horror anthology film directed by Joseph Sargent and starring Emilio Estevez, Lance Henriksen, Cristina Raines, Veronica Cartwright, and Richard Masur. The film is made up of four short films based on urban legends; the first concerns a woman who encounters a killer in the backseat of her car; the second concerns a video game-addicted teenager who is consumed by his game; the third focuses on a fallen priest who is stalked by a pickup truck from hell; and the last follows a suburban family battling a giant rat in their home. Nightmares was originally filmed as a two-hour pilot of a proposed television series to be broadcast by the NBC network during the 1983–1984 TV season. Plot Terror in Topanga During a routine traffic stop, a highway patrolman is viciously stabbed multiple times by an unseen assailant, though he survives and is taken to the hospital. The perpetrator is identified by various TV and radio reporters as William Henry Glazier, a serial killer who escaped a mental institution and is currently terrorizing the Topanga area. Meanwhile, Lisa, a housewife and chain smoker, puts her children to bed as a bulletin warning about Glazier appears on her television. Lisa discovers that she is out of cigarettes, prompting her to rush to the store to buy some more. Her husband Phillip forbids her from leaving the house at such a late hour with a killer on the loose, and advises her to kick her habit instead. Despite this, she writes Phillip a note, then sneaks to her car and drives to the store. During her drive, Lisa listens to a radio bulletin warning residents about Glazier, before she is startled by a hitchhiker. Lisa reaches the store and buys groceries and cigarettes. She warns the cashier about Glazier, but he claims to be prepared by revealing a pistol. On the drive home, Lisa discovers that she is nearly out of gas, and with all the local gas stations already closed for the night, she stops at an out-of-the-way station. The attendant who approaches her happens to perfectly match Glazier's physical appearance. Lisa also grows increasingly alarmed as the attendant seems to be studying her car and herself intently. Suddenly, the attendant lunges at the car with a gas nozzle, breaking the window. He drags Lisa out of the car, then draws a pistol and shoots the actual Glazier, who had been hiding in Lisa's back seat the entire time. The attendant calms Lisa and offers to call the police. The police drive the frightened Lisa back home. Phillip asks if Lisa got her cigarettes, and Lisa responds by showing the pack and throwing it away in a trash can, indicating that the experience apparently scared her into quitting. The Bishop of Battle J.J. Cooney is an immensely talented video game player who, accompanied by his friend Zock Maxwell, heads into an inner-city arcade to challenge a gang of Hispanic players to a few rounds of Pleiades, offering the winner a dollar per game with a five-game minimum. After a few games, one of the gang members recognizes J.J. and tells the others that they are getting hustled, prompting J.J. and Zock to flee. J.J. and Zock then head to the arcade at their local shopping mall, where J.J. is hoping to use the money he got from hustling to try to beat The Bishop of Battle, a notoriously difficult video game where players fight off enemies and escape from a 3-D maze that features thirteen different levels.
Zock mentions how no one they know has ever made it to the thirteenth level, to the point that he and many others believe it is just a myth. J.J., however, is thoroughly convinced that the thirteenth level is real, as he heard about a player in New Jersey who reached it twice. After an argument about J.J.'s obsession with video games, particularly The Bishop of Battle, J.J. gives Zock his cut of the profits as Zock leaves. J.J. spends the next several hours repeatedly trying and failing to make it to the thirteenth level, but he only manages to make it to level 12. Determined not to give up, even after closing time, J.J. tries to play one more game, only for the owner of the arcade to throw him out. At J.J.'s apartment, his parents also voice their concern about his obsession with gaming, primarily about how it is affecting his performance in school, leading his father to ground him until his grades improve. That night, J.J. sneaks out when his parents are asleep and breaks into the arcade to attempt to finish the game. J.J.'s parents are awoken by a call from Zock, who had a nightmare about J.J. and is worried about whether he made it home, leading them to discover J.J. is gone. Back at the arcade, J.J. finally manages to complete level 12. Suddenly, the arcade cabinet's screen begins flickering and the cabinet begins shaking violently until it collapses. The Bishop of Battle's voice rings out, commending J.J. for his skills and welcoming him to level 13, before the cabinet releases a wave of energy. Once the wave passes, the game's 3-D enemies fly out of the cabinet's wreckage and into the real world. The enemies fire lasers at J.J. that manage to do serious damage to the surrounding arcade machines, but J.J. manages to defend himself with the gun from the game's controls, which now fires real laser blasts. He flees to the parking lot, but drops the gun in the process. The Bishop of Battle eventually appears, drawing closer and closer to a terrified J.J. The next morning, Zock and J.J.'s parents head to the arcade. They discover the damage the arcade sustained during the previous night, as well as the Bishop of Battle cabinet, which has been mysteriously reconstructed. Zock hears J.J.'s voice emanating from the cabinet, reciting the Bishop of Battle's lines. Zock and J.J.'s parents then discover J.J. on the screen, watching as he turns into the sprite of the game's player character. The Benediction Catholic priest Frank MacLeod is tending a field near the small parish where he serves. A doe tentatively approaches him, but it is quickly bitten and killed by a rattlesnake. MacLeod attempts to kill the snake, but instead manages only to throw it away, watching as it disappears into thin air before discovering that it managed to bite his hand. This is revealed to be a nightmare as he wakes up in bed, screaming and clutching his hand. Later that day, Frank officiates the funeral of a young boy, but is unable to provide the mourners with words of comfort. Visiting his bishop, Frank explains how he witnessed the boy's death first-hand, and how the experience has given him a crisis of faith. Ignoring the advice of a fellow priest, Frank resigns and leaves the rectory with some holy water, intending to search for a new purpose in life. Shortly after he leaves, he encounters a black Chevrolet C-20 Fleetside with tinted windows on the road and signals for it to pass, but it pulls out at the same time he does, nearly causing an accident.
A while later, Frank experiences a flashback to the death of the young boy mentioned earlier; the child had been pointlessly and critically injured during a robbery of the local store, and while the parents wanted him to administer last rites, Frank wanted to call an ambulance in an attempt to save the child's life. The same truck from earlier appears out of nowhere behind Frank and rams into his car, detaching his rear bumper and forcing him off the road. Frank then has a flashback to his talk with the Bishop, where he reveals that he has been plagued with visions of anarchy, his lost faith convincing him that there is no God who would allow such suffering. As Frank attempts to fix the bumper, the truck appears again, nearly running him over. Frank attempts to escape, but the truck catches up with him. Frank desperately asks the unseen driver what it wants before once again being forced off the road. Frank gets back on the road again, keeping an eye out for the truck. He soon hears an ominous rumbling sound and discovers a large bulge appearing in the ground. The truck explodes out of the ground and once again turns to Frank, prompting him to drive away. It is then revealed that the truck is driven by Satan himself, who remains unseen. The Devil destroys Frank's car in a collision that does no damage to his truck. Injured from the crash and left with nowhere to run, Frank climbs out of his ruined car as Satan's truck goes in for the kill. In desperation, Frank tosses the container of holy water he had been carrying at the truck, vaporizing it, before he falls unconscious. Emergency responders arrive at the scene, but they do not find evidence that the truck was ever there. Frank has one final flashback of a talk with his bishop, who mentions that only a very select few have been given signs that higher powers exist. He requests that the paramedics take him to the hospital located in his parish, having regained his faith from the experience. Night of the Rat On a stormy night, housewife Claire Houston hears something scurrying in the attic of her house. While she believes it is rats, her husband Steven believes it is just the wind. He advises her to go to sleep. The next morning, Steven discovers that Claire has been browsing the phone book to look for an exterminator, as she believes that there is an infestation. Having plans for a swimming pool to be put in, Steven does not want to spend any extra money and simply suggests that Claire set up a few mousetraps. After Steven leaves for work, Claire hears noises coming from the cabinets in her kitchen. When she goes to investigate, she watches as drinking glasses and cans of food are knocked off the shelves. Later that night, Steven sets up additional mousetraps in the attic. A rat is soon caught in one of the traps, and Steven throws the dead rat in the garbage. Meanwhile, the family's cat Rosie investigates the house's crawlspace, where she is mauled to death by an unseen creature. The next day, Claire's daughter Brooke discovers that Rosie is missing and becomes worried. At the same time, the kitchen sink is revealed to be clogged with a large amount of grey fur. Claire enters the crawlspace to look for Rosie, but finds the cat's corpse and begins hearing ominous noises. She glimpses the silhouette of a large creature with glowing red eyes peering out at her in the darkness, prompting her to escape. Later that day, Brooke discovers that her room and her toys have been torn to shreds. 
Entering the room, Claire discovers that the only toy left untouched is a stuffed rat, just as the lights begin flickering on and off. Eventually, Claire calls an exterminator, Mel Keefer, who determines that the creature, which he identifies as a rat, has gnawed through the pipes and reached the power cables inside, causing the flickering lights. Keefer also discovers a large, saliva-covered hole behind a cabinet in the kitchen, just as Steven comes home. Unhappy that Claire has hired Keefer, Steven asks him to leave. That night, Brooke sleeps in the guest room, wishing for Rosie to come back. Claire then receives a phone call from Keefer, who has made a breakthrough: an old book he owns describes a creature known as "The Devil Rodent". According to legend, the Devil Rodent is a huge, malevolent rat of great strength and intelligence that terrorized 17th-century Europe. When Keefer adds that the Devil Rodent cannot be destroyed, Steven grabs the phone and tells him not to call again. Suddenly, the family hears the piano downstairs playing jumbled notes and discovers that its keys have been gnawed on. As Brooke comes downstairs, a china cabinet nearly falls on her, but Steven pulls her to safety. Discovering more saliva-covered holes in the wall and hearing the radio suddenly turn on and off, Steven loads a shotgun and goes in search of the creature as Claire and Brooke hide upstairs. The power turns on and off repeatedly as Steven searches the kitchen. Brooke hears the creature in the ceiling, prompting Steven to go up to the attic. The door to the guest room suddenly slams shut as Brooke begins screaming. Kicking the door open, Steven and Claire come face to face with the Devil Rodent itself. The giant rat demonstrates psychokinetic abilities, moving furniture, opening and closing doors and windows, and wrecking the room while emitting a loud wail. The Devil Rodent communicates telepathically with Brooke, who tells her parents that the creature is a mother looking for her baby. Steven rushes into the kitchen, roots through the garbage can, and pulls out the dead rat he originally threw away. He places it in a shoebox and puts the box near the window. As the Devil Rodent moves towards the box and reclaims her baby, Steven points his gun at her but is unable to shoot. The giant rat unleashes one last roar and disappears out the window. The frightened family reunites, shedding tears of relief as Brooke wonders where the Devil Rodent will go next.
Cast
Terror in Topanga
Cristina Raines as Lisa
Anthony James as The Store Clerk
William Sanderson as The Gas Station Attendant
Lee Ving as William Henry Glazier
Clare Torao as Mori, The Newswoman (credited as Clare Nono)
The Bishop of Battle
Emilio Estevez as J.J. Cooney
Louis Giambalvo as Jerry Cooney
Mariclare Costello as Adele Cooney
Moon Unit Zappa as Pamela
Billy Jayne as Zock Maxwell
James Tolkan as Voice of the Bishop of Battle
The Benediction
Lance Henriksen as MacLeod
Tony Plana as Father Luis Del Amo
Timothy Scott as Sheriff
Robin Gammell as Bishop
Rose Mary Campos as Mother
Night of the Rat
Richard Masur as Steven Houston
Veronica Cartwright as Claire Houston
Bridgette Andersen as Brooke Houston
Albert Hague as Mel Keefer
Production
Nightmares was initially filmed in late 1982 as a two-hour pilot for a proposed series to be aired by NBC during its 1983–84 television season, but Universal Pictures executives decided to release it as a theatrical film instead. It has long been believed that the four segments of the film were originally conceived and shot for ABC's thriller anthology series Darkroom but were deemed too intense for television.
Reception
The film was not well received on release. On Rotten Tomatoes it has an approval rating of 29% based on seven reviews, with an average rating of 5.5/10. In her review for the New York Times, Janet Maslin wrote, "Nothing spoils a horror story faster than a stupid victim. And Nightmares, an anthology of four supposedly scary episodes, has plenty of those." Time Out praised The Bishop of Battle, but stated, "In general, though, the scripting is unimaginative, derivative, and desperately predictable as the film limps through its jokily cautionary tales."
Home media
Universal Pictures released the film on Betamax in 1983 and on VHS in the 1980s. It was later released on VHS and DVD by Anchor Bay Entertainment in 1999 in "Full Frame (1.33:1) Presentation" and has since gone out of print. On December 22, 2015, Scream Factory released Nightmares on Blu-ray.
See also
Body Bags, a 1993 horror anthology that was likewise produced for television and had major filmmakers attached (John Carpenter and Tobe Hooper)
Creepshow, a series of anthology horror films helmed by Stephen King and George A. Romero
References
External links
Nightmares at Box Office Mojo
Project to make actual The Bishop of Battle video game
1983 films 1983 horror films American horror anthology films Films about computing Films about video games Films based on urban legends Films directed by Joseph Sargent Films scored by Craig Safan Films with screenplays by Christopher Crowe (screenwriter) 1980s monster movies American monster movies Universal Pictures films 1980s English-language films 1980s American films Television pilots not picked up as a series 1983 science fiction films English-language science fiction horror films
Nightmares (1983 film)
[ "Technology" ]
3,407
[ "Works about computing", "Films about computing" ]
2,276,394
https://en.wikipedia.org/wiki/Phenmetrazine
Phenmetrazine, sold under the brand name Preludin among others, is a stimulant drug first synthesized in 1952 and originally used as an appetite suppressant, but withdrawn from the market in the 1980s due to widespread misuse. It was initially replaced by its analogue phendimetrazine (brand name Prelu-2), which functions as a prodrug to phenmetrazine, but that drug too is now rarely prescribed because of concerns about misuse and addiction. Chemically, phenmetrazine is a substituted amphetamine containing a morpholine ring; equivalently, it can be described as a substituted phenylmorpholine.
Medical uses
Phenmetrazine has been used as an appetite suppressant for purposes of weight loss. It was used therapeutically for this indication at a dosage of 25 mg two or three times per day (50–75 mg/day total) in adults. Phenmetrazine has been found to produce weight loss similar to that of dextroamphetamine in people with obesity. In addition to its appetite suppressant effects, phenmetrazine produces psychostimulant and sympathomimetic effects. Phenmetrazine has been shown to produce subjective psychostimulant effects very similar to those of amphetamine and methamphetamine in clinical studies. However, although able to produce comparable effects, phenmetrazine has only about one-fifth to one-third the potency of dextroamphetamine by weight.
Pharmacology
Pharmacodynamics
Phenmetrazine acts as a norepinephrine and dopamine releasing agent (NDRA), with values for induction of norepinephrine and dopamine release of 29–50 nM and 70–131 nM, respectively. It has very weak activity as a releaser of serotonin, with an EC50 value of 7,765 to >10,000 nM. The drug is several times less potent than dextroamphetamine and dextromethamphetamine as an NDRA in vitro, consistent with the higher doses required clinically. In contrast to many other monoamine releasing agents (MRAs), phenmetrazine is inactive in terms of vesicular monoamine transporter 2 (VMAT2) actions. A few other MRAs have also been found to be inactive at VMAT2, such as phentermine and benzylpiperazine (BZP). These findings indicate that VMAT2 activity is not essential for robust MRA actions. Phenmetrazine does not appear to have been assessed at the trace amine-associated receptor 1 (TAAR1). Phenmetrazine has been found to dose-dependently elevate brain dopamine levels in rodents in vivo. A 10 mg/kg i.v. dose of phenmetrazine increased nucleus accumbens dopamine levels by around 1,400% in rats. For comparison, dextroamphetamine at 3 mg/kg i.p. increased striatal dopamine levels by about 5,000% in rats. On the other hand, the maximal increases in brain dopamine levels with phenmetrazine are similar to those with the proposed dopamine transporter (DAT) "inverse agonists" methylphenidate and cocaine (e.g., ~1,500%). Dopamine-releasing drugs that lack VMAT2 activity are theorized to produce much smaller maximal impacts on dopamine levels under experimental conditions than those, like amphetamine, which also act on VMAT2. However, the pharmacological significance of these VMAT2 interactions in humans is unclear. In trials performed on rats, it has been found that after subcutaneous administration of phenmetrazine, both optical isomers are equally effective in reducing food intake, but in oral administration the levo isomer is more effective. In terms of central stimulation, however, the dextro isomer is about four times as effective by both routes of administration.
Pharmacokinetics
After an oral dose, about 70% of the drug is excreted from the body within 24 hours.
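As a rough, illustrative reading of that excretion figure (not from the source): if one assumes single-compartment, first-order kinetics and that urinary recovery tracks overall elimination, the 70%-in-24-hours value implies an apparent elimination half-life of roughly 14 hours. A minimal sketch of the arithmetic, under those assumptions:

```python
import math

def first_order_rate(fraction_eliminated: float, hours: float) -> float:
    """Apparent first-order elimination rate constant (per hour),
    from the fraction of a dose eliminated after a given time."""
    return -math.log(1.0 - fraction_eliminated) / hours

# Assumed input from the text: ~70% of an oral dose excreted within 24 h.
k = first_order_rate(0.70, 24.0)   # ~0.050 per hour
half_life = math.log(2.0) / k      # ~13.8 h under these assumptions
print(f"k = {k:.3f} /h, apparent half-life = {half_life:.1f} h")
```

The source reports no half-life, so this number is only as good as the first-order assumption.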
About 19% of the amount excreted is unmetabolised drug, and the rest consists of various metabolites. The salt used in immediate-release formulations is phenmetrazine hydrochloride (Preludin). Sustained-release formulations were available as resin-bound, rather than soluble, salts. Both dosage forms have similar bioavailability and time to peak onset; however, sustained-release formulations offer improved pharmacokinetics, releasing the active ingredient steadily and producing a lower peak concentration in blood plasma.
Chemistry
Phenmetrazine, also known as (2RS,3RS)-2-phenyl-3-methylmorpholine or as (2RS,3RS)-3-methyl-2-phenyltetrahydro-2H-1,4-oxazine, is a substituted phenylmorpholine. It is the (2RS,3RS)- or (±)-trans-isomer of 2-phenyl-3-methylmorpholine. Phenmetrazine's chemical structure incorporates the backbone of amphetamine, the prototypical psychostimulant which, like phenmetrazine, is a releasing agent of dopamine and norepinephrine. The molecule also loosely resembles ethcathinone, the active metabolite of the popular anorectic amfepramone (diethylpropion). Unlike phenmetrazine, ethcathinone (and therefore amfepramone as well) is mostly selective as a norepinephrine releasing agent. A variety of phenmetrazine analogues and derivatives have been encountered as designer drugs. In addition, the activities of various phenmetrazine analogues and derivatives as monoamine releasing agents (MRAs) have been described.
Synthesis
Phenmetrazine can be synthesized in three steps from 2-bromopropiophenone and ethanolamine. The intermediate alcohol 3-methyl-2-phenylmorpholin-2-ol (1) is converted to a fumarate salt (2) with fumaric acid, then reduced with sodium borohydride to give phenmetrazine free base (3). The free base can be converted to the fumarate salt (4) by reaction with fumaric acid.
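For readers who want to inspect the structure programmatically, the systematic name above can be encoded as a SMILES string and checked with a cheminformatics toolkit. This is an illustrative sketch only: the SMILES below was written by hand from the name 2-phenyl-3-methylmorpholine, with stereochemistry omitted, and should be verified against an authoritative database before use.

```python
# Requires RDKit (pip install rdkit). The SMILES is a hand-written
# assumption encoding 2-phenyl-3-methylmorpholine, stereocenters unmarked.
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

phenmetrazine = Chem.MolFromSmiles("CC1NCCOC1c1ccccc1")

print(CalcMolFormula(phenmetrazine))                     # C11H15NO
print(f"{Descriptors.MolWt(phenmetrazine):.2f} g/mol")   # ~177.25
```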
History
Phenmetrazine was first patented in Germany in 1952 by Boehringer Ingelheim, with some pharmacological data published in 1954. It was the result of a search by Thomä and Wick for an anorectic drug without the side effects of amphetamine. Phenmetrazine was introduced into clinical use in Europe in 1954.
Society and culture
Names
Phenmetrazine is the generic and nonproprietary name of the drug. It is also known by the brand name Preludin.
Availability
In 2004, phenmetrazine remained marketed only in Israel.
Legal status
Phenmetrazine is a Schedule II controlled substance in the United States.
Recreational use
Phenmetrazine has been used recreationally in many countries, including Sweden. When stimulant use first became prevalent in Sweden in the 1950s, phenmetrazine was preferred to amphetamine and methamphetamine by users. In the autobiographical novel Rush by Kim Wozencraft, intravenous phenmetrazine is described as the most euphoric and pro-sexual of the stimulants the author used. Phenmetrazine was classified as a narcotic in Sweden in 1959 and was taken completely off the market in 1965. Illegal demand was formerly met by smuggling from Germany, and later from Spain and Italy. At first, Preludin tablets were smuggled, but soon the smugglers started bringing in raw phenmetrazine powder. Eventually amphetamine became the dominant stimulant of abuse because of its greater availability. Phenmetrazine was taken by the Beatles early in their career; Paul McCartney was one known user. McCartney's introduction to drugs started in Hamburg, Germany. The Beatles had to play for hours, and they were often given the drug (referred to as "prellies") by the maid who cleaned their lodgings, by German customers, or by Astrid Kirchherr (whose mother bought them). McCartney would usually take one, but John Lennon would often take four or five. Hunter Davies asserted, in his 1968 biography of the band, that their use of such stimulants then was in response to their need to stay awake and keep working, rather than a simple desire for kicks. Jack Ruby said he was on phenmetrazine at the time he killed Lee Harvey Oswald. Preludin was also used recreationally in the US throughout the 1960s and 1970s. The tablets could be crushed in water, heated, and injected. The street name for the drug in Washington, D.C. was "Bam". Phenmetrazine continues to be used and abused around the world, in countries including South Korea.
References
Anorectics Beta-Hydroxyamphetamines Euphoriants Norepinephrine-dopamine releasing agents Phenylmorpholines Stimulants Withdrawn drugs
Phenmetrazine
[ "Chemistry" ]
2,000
[ "Drug safety", "Withdrawn drugs" ]
2,276,409
https://en.wikipedia.org/wiki/Steel%20frame
Steel frame is a building technique with a "skeleton frame" of vertical steel columns and horizontal I-beams, constructed in a rectangular grid to support the floors, roof and walls of a building, which are all attached to the frame. The development of this technique made the construction of the skyscraper possible. Steel frame displaced its predecessor, the iron frame, in the early 20th century.
Concept
The rolled steel "profile" or cross section of steel columns takes the shape of the letter "I". The two wide flanges of a column are thicker and wider than the flanges on a beam, to better withstand compressive stress in the structure. Square and round tubular sections of steel can also be used, often filled with concrete. Steel beams are connected to the columns with bolts and threaded fasteners; historically, rivets were used. The central "web" of the steel I-beam is often wider than a column web to resist the higher bending moments that occur in beams. Wide sheets of steel deck can be used to cover the top of the steel frame as a "form" or corrugated mold, below a thick layer of concrete and steel reinforcing bars. Another popular alternative is a floor of precast concrete flooring units with some form of concrete topping. Often in office buildings, the final floor surface is provided by some form of raised flooring system, with the void between the walking surface and the structural floor being used for cables and air handling ducts. The frame needs to be protected from fire because steel softens at high temperature, which can cause the building to partially collapse. In the case of the columns, this is usually done by encasing them in some form of fire-resistant structure such as masonry, concrete or plasterboard. The beams may be encased in concrete or plasterboard, sprayed with a coating to insulate them from the heat of the fire, or protected by a fire-resistant ceiling construction. Asbestos was a popular material for fireproofing steel structures up until the early 1970s, before the health risks of asbestos fibres were fully understood. The exterior "skin" of the building is anchored to the frame using a variety of construction techniques and following a huge variety of architectural styles. Bricks, stone, reinforced concrete, architectural glass, sheet metal and even plain paint have been used to cover the frame and protect the steel from the weather.
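To make the role of the flanges and web concrete, the sketch below computes the second moment of area of an idealized doubly symmetric I-section and the resulting extreme-fibre bending stress under a given moment. All dimensions and the applied moment are made-up illustrative values, not from the source; real design works from standard section tables and code checks.

```python
def i_section_inertia(b: float, h: float, tw: float, tf: float) -> float:
    """Second moment of area (mm^4) of a doubly symmetric I-section,
    computed as the enclosing rectangle minus the voids beside the web."""
    return (b * h**3 - (b - tw) * (h - 2 * tf)**3) / 12.0

# Hypothetical section: 200 mm wide, 400 mm deep, 10 mm web, 15 mm flanges.
b, h, tw, tf = 200.0, 400.0, 10.0, 15.0
I = i_section_inertia(b, h, tw, tf)      # ~2.65e8 mm^4

M = 150e6                                # assumed moment: 150 kN*m in N*mm
sigma = M * (h / 2) / I                  # bending stress sigma = M*c/I, in MPa
print(f"I = {I:.3e} mm^4, sigma = {sigma:.0f} MPa")
```

Because the cubed depth term dominates, most of the bending resistance comes from material placed far from the neutral axis, which is why beams concentrate steel in deep webs and wide flanges.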
Cold-formed steel frames
Cold-formed steel frames are also known as lightweight steel framing (LSF). Thin sheets of galvanized steel can be cold-formed into steel studs for use as a structural or non-structural building material for both external and partition walls in residential, commercial and industrial construction projects. The dimension of the room is established with a horizontal track that is anchored to the floor and ceiling to outline each room. The vertical studs are arranged in the tracks, usually evenly spaced, and fastened at the top and bottom. The typical profiles used in residential construction are the C-shaped stud and the U-shaped track, along with a variety of other profiles. Framing members are generally produced in thicknesses of 12 to 25 gauge. Heavy gauges, such as 12 and 14 gauge, are commonly used when axial loads (parallel to the length of the member) are high, such as in load-bearing construction. Medium-heavy gauges, such as 16 and 18 gauge, are commonly used when there are no axial loads but heavy lateral loads (perpendicular to the member), such as for exterior wall studs that must resist hurricane-force wind loads along coasts. Light gauges, such as 25 gauge, are commonly used where there are no axial loads and only very light lateral loads, such as in interior construction where the members serve as framing for demising walls between rooms. The wall finish is anchored to the two flange sides of the stud; flange thickness and web width vary with the profile. Rectangular sections are removed from the web to provide access for electrical wiring. Steel mills produce galvanized sheet steel, the base material for the manufacture of cold-formed steel profiles; the zinc coating protects the sheet against oxidation and corrosion. The sheet steel is then roll-formed into the final profiles used for framing. Steel framing provides excellent design flexibility due to the high strength-to-weight ratio of steel, which allows it to span long distances and to resist wind and earthquake loads. Steel-framed walls can be designed to offer excellent thermal and acoustic properties. One specific consideration when building with cold-formed steel is that thermal bridging can occur across the wall system, between the outside environment and the interior conditioned space. Thermal bridging can be mitigated by installing a layer of externally fixed insulation along the steel framing, typically referred to as a "thermal break". The spacing between studs is typically 16 inches on center for home exterior and interior walls, depending on designed loading requirements. In office suites, a different on-center spacing is used for all walls except for elevator and staircase wells.
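On-center layout lends itself to a quick count: the number of studs in a straight run is the run length divided by the spacing, rounded up, plus the end stud. The sketch below is a simplification that ignores openings, corners, and code-required doubling; the 16-inch spacing comes from the text, while the wall length is a made-up example.

```python
import math

def stud_count(wall_length_in: float, spacing_in: float = 16.0) -> int:
    """Studs needed for a straight wall run laid out on center,
    including a stud at each end; ignores openings and corners."""
    return math.ceil(wall_length_in / spacing_in) + 1

# Hypothetical 20 ft (240 in) wall at 16 in on center -> 16 studs.
print(stud_count(240.0))
```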
Hot-formed steel frames
Hot-formed frames, also known as hot-rolled steel frames, are engineered from steel that undergoes a manufacturing process known as hot rolling. During this procedure, steel members are heated to temperatures above the steel's recrystallization temperature (about 1,700 °F). This refines the grain structure of the steel and aligns its crystalline lattice. The heated steel is then passed through precision rollers to achieve the desired frame profiles. The distinctive feature of hot-formed frames is their substantial member thickness and larger dimensions, making them more robust than their cold-formed counterparts. This inherent strength makes them particularly well suited to larger structures, as they show minimal deformation when subjected to substantial loads. Although hot-rolled steel members often have a higher initial cost per component than cold-formed steel, their cost-efficiency becomes evident in larger structures, because hot-rolled frames require fewer components to span equivalent distances.
History
The use of steel instead of iron for structural purposes was initially slow. The first iron-framed building, Ditherington Flax Mill, had been built in 1797, but it was not until the development of the Bessemer process in 1855 that steel production was made efficient enough for steel to become a widely used material. Cheap steels with high tensile and compressive strengths and good ductility were available from about 1870, but wrought and cast iron continued to satisfy most of the demand for iron-based building products, due mainly to problems in producing steel from phosphorus-rich ores. These problems, caused principally by the presence of phosphorus, were solved by Sidney Gilchrist Thomas in 1879. It was not until 1880 that an era of construction based on reliable mild steel began; by that date the quality of the steels being produced had become reasonably consistent. The Home Insurance Building, completed in 1885, was the first to use skeleton frame construction, completely removing the load-bearing function of its masonry cladding. In this case, however, the iron columns are merely embedded in the walls, and their load-carrying capacity appears secondary to that of the masonry, particularly for wind loads. In the United States, the first steel-framed building was the Rand McNally Building in Chicago, erected in 1890. The Royal Insurance Building in Liverpool, designed by James Francis Doyle in 1895 (erected 1896–1903), was the first building in the United Kingdom to use a steel frame.
See also
Buckling-restrained braced frame (BRBF)
Curtain wall (architecture)
Prefabricated buildings
Steel building
Structural steel
Structural robustness
Tension fabric structure
References
Sources
External links
Historical Development of Iron and Steel in Buildings
"Its Here – All Steel Buildings." Popular Science Monthly, November 1928, p. 33.
Steel Framing Industry Association web site
Steel Framing Alliance web site
British Constructional Steelwork Association / SCI information
Construction Structural steel
Steel frame
[ "Engineering" ]
1,606
[ "Construction", "Structural steel", "Structural engineering" ]
2,276,960
https://en.wikipedia.org/wiki/Tucana%20Dwarf
The Tucana Dwarf Galaxy is a dwarf galaxy in the constellation Tucana. It was discovered in 1990 by R.J. Lavery of Mount Stromlo Observatory. It is composed of very old stars and is very isolated from other galaxies. Its location on the opposite side of the Milky Way from other Local Group galaxies makes it an important object for study.
Properties
The Tucana Dwarf is a dwarf spheroidal galaxy of type dE5. It contains only old stars, formed in a single era of star formation around the time the Milky Way's globular clusters formed. Unlike many other isolated dwarf galaxies, it is not experiencing any current star formation. The Tucana Dwarf contains very little neutral hydrogen gas. It has a metallicity of [Fe/H] ≈ −1.8, meaning its stars hold roughly 10^−1.8, or about 1.6%, of the solar iron abundance. There is no significant spread in metallicity throughout the galaxy, and there does not seem to be any substructure in its stellar distribution.
Location
The Tucana Dwarf is located in the constellation Tucana, on the opposite side of the Milky Way galaxy from most of the other Local Group galaxies. It is therefore important for understanding the kinematics and formation history of the Local Group, as well as the role of environment in determining how dwarf galaxies evolve. It is isolated from other galaxies and lies near the edge of the Local Group, far from the Local Group's barycentre; it is the second most remote of all member galaxies after the Sagittarius Dwarf Irregular Galaxy. The Tucana Dwarf is one of only two dwarf spheroidal galaxies in the Local Group not located near the Milky Way or the Andromeda Galaxy. It is thought to have made a close approach to the Andromeda Galaxy about 11 billion years ago, which ejected it far away to its current position; such galaxies are called "backsplash galaxies".
References
External links
The Tucana Dwarf Galaxy: HST/WFPC2 Imaging of this Isolated Local Group Dwarf Spheroidal (AAS)
Dwarf galaxies Dwarf elliptical galaxies Local Group Tucana 69519 Astronomical objects discovered in 1990
Tucana Dwarf
[ "Astronomy" ]
428
[ "Tucana", "Constellations" ]